Author: Shu-Gen Wang, Yeh-Liang Hsu (2004-09-23); recommended: Yeh-Liang Hsu (2005-01-18).
Note: This paper is published in Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Vol. 219, pp. 177-181.

One-pass milling machining parameter optimization to achieve mirror surface roughness

Abstract

In this paper, the possibility of using only general machining centers in a one-pass milling process to finish an aluminum plate to mirror surface roughness is studied. In particular, how to find the optimal setting of the machining parameters is presented. In this optimization problem, evaluating whether the surface roughness meets the average roughness criterion of a mirror surface requires real cutting experiments, so it is desirable to find the optimal machining parameters using as few experiments as possible. The “Sequential Neural Network Approximation Method (the SNA method)” was used to find the optimal machining parameters, including the spindle speed, feed rate, depth of cut, and the number of inserted blades in the cutter, that maximize the metal removal rate while the surface roughness meets this criterion.

Keywords: mirror surface roughness, machining parameter optimization, sequential neural network approximation method.

1.     Introduction

Mirror surfaces based on a metal matrix, intended for applications such as reflectors and optical parts, are expected to become practical in mass production. The cost of producing mirror-quality metal surfaces using ultra-precision machines or multi-pass machining is still high. For cheap and fast mass production, on the other hand, the possibility of using only general machining centers in a one-pass milling process to finish an aluminum plate with the desired mirror surface roughness is studied. In particular, how to find the optimal setting of machining parameters for one-pass milling of metal mirror surfaces is presented in this paper.

The effect of various machining parameters (such as spindle speed, feed rate, depth of cut, and different types of cutters) on surface roughness has been well studied [1-4], but few researchers have paid special attention to mirror surface machining with a simple one-pass milling procedure. This paper describes the process of optimizing the machining parameters of a one-pass milling process on a general machining center for mass production of 6061-T6 aluminum plates. The purpose is to maximize the metal removal rate while the finished surface meets the desired average roughness of a mirror surface. The spindle speed, feed rate, depth of cut, and the number of inserted blades in the cutter are the design variables to be decided.

The spindle speed of the CNC machining center used in this research ranges from 40 to 7100 rpm (cutting speed 20 m/min to 3560 m/min), with a feed increment of 0.001 mm. The cutting tool was a MAPAL face mill, 160 mm in diameter, with 1 to 10 diamond face-milling blades inserted. A non-contact, light cut-off type Taylor-Hobson microscope with a diamond pick-up probe of 0.02 μm resolution was used to measure the surface roughness of the cut. The workpiece was an aluminum alloy (AL6061-T6) plate, 55 mm in width.

The metal removal rate Q (in cm3/min) can be calculated as:

        Q = (w × dc × vf) / 1000                                                            (1)

where w is the width of cut in mm, dc is the depth of cut in mm, and vf is the feed speed in mm/min; the factor of 1000 converts mm3/min into cm3/min. Moreover,

        vf = rf × z × vs / 1000                                                             (2)

where z is the number of blades or cutter teeth, rf is the feed per cutter tooth in μm/tooth, and vs is the spindle speed in rpm.

In this case, w = 55 mm, and the objective is to maximize the metal removal rate:

        Max.  Q = 55 × dc × vf / 1000                                                       (3)

In the meantime, the surface roughness measured with the Taylor-Hobson microscope has to meet the criterion of a mirror surface. In general, for a mirror surface, the centerline average roughness Ra defined in Equation (4) has to be lower than 0.05 μm:

        Ra = (1/n) Σ |yi| ≤ 0.05 μm                                                         (4)

where yi is the measured roughness height of each individual division from the average centerline, and n is the total number of divisions.
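
For illustration, the following minimal sketch (Python; it uses the symbols and unit conversions introduced above and is not taken from the paper) evaluates Q for one set of machining parameters:

    # Metal removal rate for one-pass face milling, per Equations (1)-(3).
    # Assumes rf is given in micrometres per tooth and the width of cut w = 55 mm.

    def metal_removal_rate(vs_rpm, rf_um_per_tooth, dc_mm, z, w_mm=55.0):
        """Return Q in cm3/min."""
        vf_mm_per_min = rf_um_per_tooth * z * vs_rpm / 1000.0   # Equation (2)
        q_mm3_per_min = w_mm * dc_mm * vf_mm_per_min            # Equation (1)
        return q_mm3_per_min / 1000.0                           # mm3/min -> cm3/min

    # Training point No. 3 of Table 2: (2000 rpm, 4.5 um/tooth, 0.085 mm, 10 teeth)
    print(metal_removal_rate(2000, 4.5, 0.085, 10))             # 0.42075 cm3/min

The corresponding Ra, however, can only be obtained by actually performing the cut and measuring the surface.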

This is a typical engineering optimization problem that cannot be solved by directly applying existing numerical optimization algorithms. The mirror surface constraint is a so-called “implicit constraint” [5]: it cannot be expressed as an analytical function of the design variables. Many factors affect the surface roughness of a work piece, and it is hard to establish a theoretical model of the surface roughness from cutting theory. Evaluating whether the surface roughness meets the average roughness criterion of a mirror surface therefore requires real cutting experiments, and it is desirable to find the optimal machining parameters using as few experiments as possible.

There has been considerable interest in non-linear discrete optimization, and several review articles on algorithms for nonlinear optimization problems with mixed discrete variables have been published [6, 7]. Among these methods, the branch and bound method, simulated annealing, and genetic algorithms are suitable for problems with non-differentiable functions, but they require many function evaluations, which may make them unsuitable for engineering optimization problems with implicit constraints.

One important category of numerical optimization algorithms is the sequential approximation methods. Their basic idea is to use a “simple” sub-problem to approximate the hard, exact problem, where a “simple” sub-problem is one that can be readily solved by existing numerical algorithms; linear programming sub-problems, for example, are widely used. The solution point of the simple sub-problem is then used to form a better approximation of the hard, exact problem for the next iteration. In this iterative manner, the solution point of the simple approximate problem is expected to approach the optimum point of the hard, exact problem. One major disadvantage of the existing sequential approximation methods is that they are usually derivative-based, requiring at least the first derivatives of the constraints with respect to the design variables.

2.     The Sequential Neural Network Approximation Method

In this paper, the “Sequential Neural Network Approximation Method (the SNA method)” [8, 9] is used to solve the problem. In this method, a back-propagation neural network is first trained to simulate the feasible domain formed by the implicit constraints, using a few representative training data points. The “exact optimization model” of Equations (3) and (4) can then be approximated as:

        min.  f(x)

        s.t.    NN(x)=1                                                                                         (5)

where f(x)=-Q. The binary constraint NN(x)=1 approximates the feasible domain of the implicit surface roughness constraint, Equation (4). If NN(x)=1, the design point x (a set of values of the machining parameters) is feasible; if NN(x)=0, the design point x is infeasible, that is, the surface roughness obtained with that set of machining parameter values cannot meet the average roughness criterion of a mirror surface.
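
To make the role of the binary constraint concrete, the sketch below (illustrative only: nn_is_feasible stands in for the trained network, and the brute-force enumeration merely replaces the specially designed search algorithm used in the SNA method) evaluates the approximate sub-problem of Equation (5) over the discrete design grid of Table 1:

    # Approximate sub-problem of Equation (5): maximize Q over the discrete
    # design grid subject to NN(x) = 1. The network is passed in as a 0/1
    # predicate; no cutting experiment is involved at this stage.
    from itertools import product

    VS = [2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000, 6500, 7000]  # rpm
    RF = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]                       # um/tooth
    DC = [0.025, 0.030, 0.035, 0.040, 0.045, 0.050, 0.055,
          0.060, 0.065, 0.070, 0.075, 0.080, 0.085]                          # mm
    Z  = [2, 4, 6, 8, 10]                                                    # teeth

    def metal_removal_rate(vs, rf, dc, z, w=55.0):
        return w * dc * (rf * z * vs / 1000.0) / 1000.0                      # cm3/min

    def solve_subproblem(nn_is_feasible):
        """Return the design point with the largest Q among those NN labels feasible."""
        feasible = [x for x in product(VS, RF, DC, Z) if nn_is_feasible(x)]
        return max(feasible, key=lambda x: metal_removal_rate(*x), default=None)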

Table 1 shows the possible discrete values of the design variables. The set of initial training points should reasonably represent the whole design domain. Various types of matrices are commonly used for planning experiments that study several input variables; orthogonal arrays are highly popular in industrial applications because they cover the experimental region in a geometrically balanced way with just a few representative experiments. An orthogonal array is therefore also adopted in this research to form the set of initial training data. Each training data point consists of two pieces of information: a design point, and whether this design point is feasible or infeasible. Table 2 shows the set of initial training points generated with the L9(3^4) orthogonal array. Note that 4 of the initial training points (Nos. 2, 3, 4, and 7) are feasible, and that training point No. 3 (2000, 4.5, 0.085, 10) has the maximum metal removal rate.
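
As an illustration (not the authors' code), the standard L9(3^4) array together with the three levels per factor used in Table 2 reproduces the nine initial design points:

    # Standard L9(3^4) orthogonal array (four factors, three levels each) and
    # the level values used in Table 2; this reproduces the nine initial points.
    L9 = [
        (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
        (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
        (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
    ]
    LEVELS = {
        "vs": [2000, 4500, 7000],     # spindle speed, rpm
        "rf": [0.5, 2.5, 4.5],        # feed per tooth, um/tooth
        "dc": [0.025, 0.055, 0.085],  # depth of cut, mm
        "z":  [2, 6, 10],             # number of inserted blades
    }

    initial_points = [
        (LEVELS["vs"][a - 1], LEVELS["rf"][b - 1], LEVELS["dc"][c - 1], LEVELS["z"][d - 1])
        for a, b, c, d in L9
    ]
    # initial_points[2] == (2000, 4.5, 0.085, 10), the best feasible starting point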

Table 1. Possible discrete values of machining parameters for the mirror surface

Level            1      2      3      4      5      6      7      8      9      10     11     12     13
vs (10^3 rpm)    2      2.5    3      3.5    4      4.5    5      5.5    6      6.5    7
rf (μm/tooth)    0.5    1.0    1.5    2.0    2.5    3.0    3.5    4.0    4.5
dc (mm)          0.025  0.030  0.035  0.040  0.045  0.050  0.055  0.060  0.065  0.070  0.075  0.080  0.085
z (teeth)        2      4      6      8      10

Table 2. The set of initial training data using the L9 orthogonal array

no.   L9 orthogonal array   vs (rpm)   rf (μm/tooth)   dc (mm)   z (teeth)   Q (cm3/min)   Ra (μm)   Feasible
1     1 1 1 1               2000       0.5             0.025     2           0.00275       0.08627   N
2     1 2 2 2               2000       2.5             0.055     6           0.09075       0.04587   Y
3     1 3 3 3               2000       4.5             0.085     10          0.42075       0.04786   Y
4     2 1 2 3               4500       0.5             0.055     10          0.06806       0.02997   Y
5     2 2 3 1               4500       2.5             0.085     2           0.10519       0.05580   N
6     2 3 1 2               4500       4.5             0.025     6           0.16706       0.05088   N
7     3 1 3 2               7000       0.5             0.085     6           0.09818       0.04744   Y
8     3 2 1 3               7000       2.5             0.025     10          0.24063       0.06227   N
9     3 3 2 1               7000       4.5             0.055     2           0.19058       0.10546   N

Table 3. Iteration history

Iteration      vs (rpm)   rf (μm/tooth)   dc (mm)   z (teeth)   Q (cm3/min)   Ra (μm)   Feasible
Start point    2000       4.5             0.085     10          0.42075       0.04786   Y
1              3500       4.5             0.085     10          0.73631       0.06459   N
2              3500       3.5             0.085     10          0.75269       0.05625   N
3              2500       4.5             0.085     10          0.52594       0.05230   N
4              2500       4.5             0.080     10          0.49500       0.04878   Y
5              3000       4.5             0.080     10          0.59400       0.05391   N
6              2500       4.5             0.080     10          0.49500       0.04878   Y
7 (Restart)    7000       4.5             0.085     10          1.47263       0.14341   N
8              7000       4.5             0.080     10          1.38600       0.13582   N
9              7000       4.0             0.080     10          1.23200       0.12439   N
10             7000       3.0             0.080     10          0.92400       0.10147   N
11             5000       2.0             0.065     10          0.35750       0.04401   Y
12             6000       4.5             0.065     10          0.96525       0.09150   N
13             5500       3.0             0.075     10          0.68063       0.06804   N
14             5000       4.5             0.065     10          0.80438       0.07147   N
15             4500       3.0             0.080     10          0.59400       0.05843   N
16             7000       3.5             0.055     10          0.74113       0.08766   N
17             7000       2.5             0.085     10          0.81813       0.09555   N
18             7000       2.5             0.075     10          0.72188       0.08494   N
19             7000       2.0             0.085     10          0.65450       0.08354   N
20             7000       2.0             0.080     10          0.61600       0.07850   N
21             7000       2.5             0.065     10          0.62563       0.07636   N
22             6000       2.5             0.065     10          0.53625       0.06068   N
23             5500       2.5             0.070     10          0.52938       0.05722   N
24             5000       2.5             0.065     10          0.44688       0.04953   Y
25             5000       2.5             0.080     10          0.55000       0.05773   N
26             5000       2.5             0.075     10          0.51563       0.05449   N
27             5000       2.5             0.065     10          0.446875      0.04953   Y
28 (Restart)   2500       4.5             0.080     10          0.49500       0.04878   Y
29             2500       4.5             0.080     10          0.49500       0.04878   Y
30 (Restart)   2500       4.5             0.080     10          0.49500       0.04878   Y
Opt            2500       4.5             0.080     10          0.49500       0.04878   Y

The size of the input layer of the three-layer network depends on the number of variables and the number of discrete values of each variable. Figure 1 shows the representation of training point No. 2 of Table 2 [(2000, 2.5, 0.055, 6), feasible]. In this example, a total of 38 neurons are used in the input layer. Each input neuron, drawn as a circle or a cross in Figure 1, has the value 1 or 0, respectively, to indicate which discrete value in the sequence of each variable is selected. There is a single neuron in the output layer, which represents whether the design point is feasible (value 1) or infeasible (value 0).
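
A minimal sketch of this binary encoding (assuming the discrete levels of Table 1; 11 + 9 + 13 + 5 = 38 input neurons):

    # One-hot style encoding of a design point into 38 binary inputs: for each
    # variable, the neuron matching the chosen discrete level is 1, all others 0.
    VS = [2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000, 6500, 7000]
    RF = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
    DC = [0.025, 0.030, 0.035, 0.040, 0.045, 0.050, 0.055,
          0.060, 0.065, 0.070, 0.075, 0.080, 0.085]
    Z  = [2, 4, 6, 8, 10]

    def encode(point):
        vs, rf, dc, z = point
        bits = []
        for value, levels in ((vs, VS), (rf, RF), (dc, DC), (z, Z)):
            bits += [1 if value == level else 0 for level in levels]
        return bits                      # length 38, exactly four 1s

    x = encode((2000, 2.5, 0.055, 6))    # training point No. 2 of Table 2
    assert len(x) == 38 and sum(x) == 4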

There are 12 neurons in the hidden layer in this example. The transfer functions used in the hidden and output layers of the network are both log-sigmoid functions, so the neuron in the output layer has a range of [0, 1]. After training is completed, a threshold value of 0.25 is applied to the output layer when simulating the boundary of the feasible domain. In other words, given a discrete design point in the search domain, the network always outputs 0 (if the output neuron's value is less than the threshold) or 1 (otherwise) to indicate whether this discrete design point is feasible or not.

The computational effort required for the neural network training is critical: if it is larger than the cost of evaluating the implicit constraints, the SNA method loses its advantage. Here all the training data are represented in a clear 0-1 pattern, which makes the training process relatively fast. A quasi-Newton algorithm is used for the training. In our numerical experience, the error goal of 1e-6 is usually met within 2000 epochs, even for cases with many training points.
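
The following sketch (illustrative, not the authors' implementation) puts these pieces together: a 38-12-1 network with log-sigmoid hidden and output layers, trained with a quasi-Newton routine (SciPy's BFGS is used here) on the 0/1 feasibility targets, and thresholded at 0.25 when used as the binary constraint NN(x):

    # 38-12-1 feasibility network with log-sigmoid layers, quasi-Newton training.
    import numpy as np
    from scipy.optimize import minimize

    N_IN, N_HID = 38, 12

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def forward(w, X):
        """w: flat parameter vector; X: (n_samples, 38) array of 0/1 inputs."""
        i = 0
        W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
        b1 = w[i:i + N_HID]; i += N_HID
        W2 = w[i:i + N_HID].reshape(N_HID, 1); i += N_HID
        b2 = w[i:i + 1]
        h = sigmoid(X @ W1 + b1)            # log-sigmoid hidden layer
        return sigmoid(h @ W2 + b2)[:, 0]   # log-sigmoid output in [0, 1]

    def train(X, y, seed=0):
        """Fit the network to 0/1 feasibility labels with a quasi-Newton method."""
        rng = np.random.default_rng(seed)
        w0 = 0.1 * rng.standard_normal(N_IN * N_HID + N_HID + N_HID + 1)
        loss = lambda w: np.mean((forward(w, X) - y) ** 2)
        res = minimize(loss, w0, method="BFGS", options={"maxiter": 2000, "gtol": 1e-8})
        return res.x

    def nn_feasible(w, x_encoded, threshold=0.25):
        """Binary NN(x): 1 if the output neuron reaches the threshold, else 0."""
        return int(forward(w, np.atleast_2d(x_encoded).astype(float))[0] >= threshold)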

A search algorithm then searches for the “optimal point” in the feasible domain simulated by the neural network, starting from the best feasible design point in the initial training set, (2000, 4.5, 0.085, 10) with Q = 0.421 cm3/min. This search algorithm is specially designed for the SNA method and is described in detail in [8]. When the “optimal point” is found, a cutting experiment is performed to evaluate whether this point is feasible, that is, whether the surface roughness meets the average roughness criterion of a mirror surface. The new training data point is then added to the training set, and the neural network is trained again so that it better approximates the boundary of the feasible domain of the exact optimization model. This process continues in an iterative manner until the same design point is obtained repeatedly and no new training point is generated.
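
Condensing the loop just described into a sketch (it builds on the illustrative encode, train, nn_feasible, solve_subproblem, and metal_removal_rate helpers sketched earlier; cutting_experiment is only a placeholder, since in the paper feasibility is decided by machining a plate and checking the measured Ra against the 0.05 μm limit):

    # One SNA run: train the network, search the simulated feasible domain,
    # verify the candidate with a real cutting experiment, add it to the
    # training set, and repeat until no new training point is generated.
    import numpy as np

    def cutting_experiment(point):
        """Placeholder: perform the cut and return True if measured Ra <= 0.05 um."""
        raise NotImplementedError

    def sna_optimize(initial_points, initial_labels, max_iter=30):
        points = list(initial_points)           # design points tried so far
        labels = list(initial_labels)           # 1 = feasible, 0 = infeasible
        best = max((p for p, ok in zip(points, labels) if ok),
                   key=lambda p: metal_removal_rate(*p), default=None)
        for _ in range(max_iter):
            X = np.array([encode(p) for p in points], dtype=float)
            w = train(X, np.array(labels, dtype=float))
            cand = solve_subproblem(lambda p: nn_feasible(w, encode(p)))
            if cand is None or cand in points:  # same point obtained again: stop
                break
            ok = cutting_experiment(cand)       # one real experiment per iteration
            points.append(cand)
            labels.append(1 if ok else 0)
            if ok and (best is None or
                       metal_removal_rate(*cand) > metal_removal_rate(*best)):
                best = cand
        return best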

3.     Conclusion

Table 3 and Figure 2 show the iteration history of this problem. The SNA method terminates after 6 iterations. The final optimal machining parameters are: spindle speed vs = 2500 rpm, feed per cutter tooth rf = 4.5 μm/tooth, depth of cut dc = 0.080 mm, and number of blades or cutter teeth z = 10. The maximum metal removal rate is 0.495 cm3/min, a 17.6% increase over the best initial training point (0.42075 cm3/min). Using this set of machining parameters in the cutting experiment, Ra = 0.04878 μm, which meets the average roughness criterion of a mirror surface. Note that only 15 design points out of the 11×9×13×5 = 6435 possible combinations (0.23%) were evaluated by real cutting experiments.

To improve the chance of reaching a global optimum, the search process was restarted from the 3 other feasible points in the set of initial training data in Table 2. As shown in Table 3 and Figure 2, after a total of 30 iterations the same optimal machining parameters were obtained. Note that the final optimal design point is close to the starting point obtained from the L9 orthogonal array commonly used for planning experiments, while the metal removal rate Q is increased by 17.6%.

4.     References

[1]   Takeuchi, Y., Kawakita, S., Sawada, K., and Sata, T., “Ultra precision milling (sculptured surface generation),” Nippon Kikai Gakkai Ronbunshu, C Hen (Transactions of the Japan Society of Mechanical Engineers, Part C), Vol. 59, No. 566, 1993, pp. 3193-3198.

[2]   Fuh, K. H., and Wu, C. F., “Proposed statistical model for surface quality prediction in end milling of Al alloys,” International Journal of Machine Tools and Manufacture, Vol. 35, No. 8, 1995, pp. 1187-1200.

[3]   Nieminen, I., Paro, J., and Kauppinen, V., “High-speed milling of advanced materials,” Journal of Materials Processing Technology, Vol. 56, No. 1-4, 1996, pp. 24-36.

[4]   Kim, J. D., and Kang, Y. H., “High-speed machining of aluminum using diamond end-mills,” International Journal of Machine Tools and Manufacture, Vol. 37, No. 8, 1997, pp. 1155-1165.

[5]   Hsu, Y. L., Sheppard, S. D., and Wilde, D. J., “Explicit approximation method for design optimization problems with implicit constraints,” Engineering Optimization, Vol. 27, No. 1, 1996, pp. 21-42.

[6]   Arora, J. S., and Huang, M. W., “Methods for optimization of nonlinear problems with discrete variables: a review,” Structural Optimization, Vol. 8, 1994, pp. 69-85.

[7]   Thanedar, P. B., and Vanderplaats, G. N., “Survey of discrete variable optimization for structural design,” Journal of Structural Engineering, Vol. 121, No. 2, 1995, pp. 301-305.

[8]   Hsu, Y. L., Dong, Y. H., and Hsu, M. S., “A sequential approximation method using neural networks for nonlinear discrete variable optimization with implicit constraints,” JSME International Journal, Series C, Vol. 44, No. 1, 2001, pp. 103-112.

[9]   Hsu, Y. L., Wang, S. G., and Yu, C. C., “A sequential approximation method using neural networks for engineering design optimization problems,” Engineering Optimization, Vol. 35, No. 5, 2003, pp. 489-511.