A Study of 18-point Mirror Cell Optimization Using Varying Forces

Jeff Anderson-Lee
jonah@eecs.berkeley.edu
January 2003

Abstract

This report documents a case study of using GuiPlop to design an 18-point varying-angle cell with variable forces. A succession of cell models is subjected to Monte Carlo testing to simulate implementation errors. The variable-force designs are ultimately found to be unhelpful, with increased sensitivity to implementation errors canceling out any purported gain in RMS error.

The Cells

This study began as an exercise to determine the feasibility of using an 18-point cell design with a thin mirror. The case in question was to design a cell for a thin 16-inch by 18mm (3/4 inch) plate glass mirror with an f/5 curve, which I am currently in the process of making. The base design uses 18 points arranged in two rings: the inner ring having 6 points and the outer ring having two sets of six points. The angles of the points on the outer ring are allowed to vary, as are both radii, with Plop optimizing their positioning. The force on the inner set of points is varied with respect to the force on the outer set of points.

Figure 1 (parts.gif)

I've selected a target maximum RMS error for this design of 4.25e-6 (roughly 1/120th wavelength, or 1/60th wave-front) and a target maximum peak-to-valley (p-v) error of four times that, or 1.70e-5. For the implementation, I wanted to stay under three times the RMS target (i.e. 1/20th wave-front RMS, or 1.28e-5).
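The wavelength fractions quoted throughout this report can be reproduced from the raw RMS figures. This is a minimal sketch which assumes, as the figures imply but the report never states, that the RMS values are surface errors in millimeters and that the reference wavelength is about 500nm; the wave-front error of a mirror is taken as twice the surface error.

```python
# Convert a surface RMS error (in mm, as Plop appears to report it)
# into fractions of a wavelength. The 500nm reference wavelength is
# an assumption; the report never states the wavelength it uses.

WAVELENGTH_MM = 500e-6  # 500 nm expressed in millimeters (assumed)

def wave_fraction(rms_mm: float) -> float:
    """Surface error as a fraction 1/N of a wavelength; returns N."""
    return WAVELENGTH_MM / rms_mm

def wavefront_fraction(rms_mm: float) -> float:
    """Wave-front error is twice the surface error for a mirror."""
    return WAVELENGTH_MM / (2 * rms_mm)

# The design target of 4.25e-6 comes out near the "1/120th wavelength
# or 1/60th wave-front" quoted above, and three times the target lands
# near 1/20th wave-front.
print(round(wave_fraction(4.25e-6)))        # ~118
print(round(wavefront_fraction(4.25e-6)))   # ~59
print(round(wavefront_fraction(1.275e-5)))  # ~20
```

Under these assumptions the other conversions in the text (e.g. 2.65e-6 as roughly 1/94th wave-front) check out as well.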

Plop was first run multiple times over the basic cell design, scanning a range of values for the relative force on the inner points. For each value I recorded the RMS error with the refocus error calculation (hereinafter referred to as refocusing) both turned on and off, as well as the peak-to-valley error with refocusing off.

Figure 2 (error.gif)

As you can see in Figure 2, with refocusing on, Plop would try to lower the force on the inner ring to below 0.7 relative to the outer ring. Without refocusing enabled, Plop will optimize to around 0.85 for the inner force value, although all values in the range 0.80 through 0.87 produce nearly equally low RMS values. Interestingly enough, with a relative force close to 1.0, the max RMS values with refocusing on and off are nearly equal.

The peak-valley error varies somewhat widely rather than in a simple curve. In private correspondence, David Lewis explained this as follows:

"If you imagine the various dips and bumps on the surface of the glass, they will change magnitude as you move the points around. Since P-V is the max of the peaks less the min of the valleys, it is effectively the max of a bunch of various functions and hence very irregular. That's why it is inadvisable to optimize for it, since an optimizer like I use needs a smooth function."

Using this graph, I initially selected a force ratio of 0.8 to try out, based on several factors. First, it is in the minimum range of RMS values for the no-refocusing solution. Second, it is far enough from 1.0 that we might see some effects due to varying the forces. Furthermore, I like round numbers, and 0.8 is four-fifths, which seems like a nice ratio to work with. In addition, I decided to try the value 1.0, since it appears to be where the two RMS curves meet and might have some interesting properties; it also makes a good basis for comparison. Next, I let Plop pick what it felt were optimum values for the relative force, with refocusing on and off. As a final straw-man, I threw in a case where the angle was fixed at a symmetrical 15 degrees and the relative force was 1.

The next step was to generate optimized cell designs for each of the five selected cases. I used a step size of 0.01 for optimizing the radii and 1.0 for optimizing the angle. For the variable force cases I also used a step size of 0.01 on the force. Table 1 summarizes the fixed and variable parameters as chosen by Plop for the five test cell designs along with the RMS error reported for the design.

Table 1: Parameters of the Cells Under Comparison

Cell          r_inner    r_outer    alpha     f          RMS error
Straw-man     0.388063   0.842440   15        1          2.66794e-06
Even force    0.388119   0.842417   15.1972   1          2.65176e-06
No refocus    0.366095   0.828283   15.1604   0.853779   2.36323e-06
0.8 force     0.356587   0.822910   15.1395   0.8        2.36820e-06
Refocused     0.324899   0.753038   15.1037   0.555943   1.58877e-06

Monte Carlo Testing

The next step was to run a Monte Carlo analysis to see how sensitive each design was reported to be. The idea is to use Monte Carlo testing to simulate the sorts of errors in measurement, fabrication, and placement of the mirror in the cell that might occur in actual fabrication and use, and to see their effect on the predicted average and maximum RMS error of a cell implemented from the design. Before doing so, it was necessary to decide on the range of Monte Carlo variation for each quantity. For the radius variation I selected 0.01 (1%, or about 2mm). For the angle variation I used 0.7 degrees, which is about a 2mm displacement at the outer ring. For the force variation I selected 0.01 (1%), assuming what I thought to be a carefully made cell.
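The Monte Carlo procedure described above can be sketched as follows. This is a toy illustration only: the error function below is a hypothetical quadratic surrogate, whereas in the actual study each perturbed cell was evaluated by Plop's finite-element analysis. The tolerances match the values chosen above (0.01 on radii, 0.7 degrees on angle, 0.01 on force).

```python
import random
import statistics

def toy_rms_error(r_inner, r_outer, angle, force):
    # Hypothetical surrogate: a quadratic penalty around the optimum.
    # (Plop's real evaluation is a plate-bending FEA, not this.)
    return 2.65e-6 * (1
                      + 40 * (r_inner - 0.388) ** 2
                      + 40 * (r_outer - 0.842) ** 2
                      + 0.001 * (angle - 15.2) ** 2
                      + 2 * (force - 1.0) ** 2)

def monte_carlo(runs=1000, dr=0.01, da=0.7, df=0.01, seed=1):
    random.seed(seed)
    errors = []
    for _ in range(runs):
        # Perturb every design parameter uniformly within its tolerance
        # and evaluate the resulting cell.
        e = toy_rms_error(0.388 + random.uniform(-dr, dr),
                          0.842 + random.uniform(-dr, dr),
                          15.2 + random.uniform(-da, da),
                          1.0 + random.uniform(-df, df))
        errors.append(e)
    return statistics.mean(errors), max(errors)

avg, worst = monte_carlo()
design = 2.65e-6
# The average/design and maximum/design ratios are the
# "performance loss" figures reported in the tables below.
print(avg / design, worst / design)
```

The point of the sketch is the structure of the test, not the numbers: perturb all parameters at once, evaluate, and summarize by average and worst-case error relative to the unperturbed design.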

My first attempt at evaluation was simply to use Monte Carlo analysis on all parameters at the same time using the cell definitions as-is. Also, I decided to run the tests with no refocusing enabled, since the majority of cells were designed this way. The straw-man Monte Carlo test was run with variable forces and angles enabled so that the Monte Carlo tests could work on varying these parameters; otherwise it would have had a much better (albeit overly optimistic) evaluation result, since no simulation of manufacturing errors in the angle and force dimensions could be made. The results are summarized in Table 2. All decimal places reported by Plop are shown, although it is likely that with only 1000 runs, only the first two places are truly significant.

Table 2: 1000 run Monte Carlo with no refocusing

Cell          Design RMS     Average RMS    Maximum RMS    Avg/Design   Max/Design
Straw-man     2.66794e-06    0.93301e-05    2.28982e-05    3.50         8.58
Even force    2.65176e-06    0.93257e-05    2.28940e-05    3.52         8.63
No refocus    2.36323e-06    0.94962e-05    2.33617e-05    4.02         9.89
0.8 force     2.36820e-06    3.70128e-05    6.07752e-05    15.63        25.66
Refocused     1.58877e-06    7.05692e-05    9.48971e-05    44.42        59.73

This was particularly harsh on the cell designed with refocusing in mind, and it shows. None of the cells showed up especially well in this test: a 2.3e-5 maximum error represents about a 1/11th wave-front cell, whereas the designs were initially 1/94th wave-front or better. The Plop-designed no-refocus cell came out better than the cells using more widely varying force, and on par with the cells without variable force; its implementation performance loss (as measured by average error versus design error) was higher, though.

Note that the fixed angle straw-man case came out neck-and-neck with the even force case. Considering that there is less than a half of a millimeter of difference between the point placement for these two designs, it is reassuring to see that they are reported to perform very similarly.

The next step was to take the same designs and run the tests again, this time with refocusing enabled. The results are summarized in Table 3.

Table 3: 1000 run Monte Carlo with refocusing

Cell            Design RMS     Average RMS    Maximum RMS    Avg/Design   Max/Design
Straw-man       2.66794e-06    2.95923e-06    3.78868e-06    1.11         1.42
Even force      2.65176e-06    2.94796e-06    3.72807e-06    1.11         1.41
No refocus      2.36323e-06    2.66938e-06    3.53303e-06    1.13         1.50
0.8 rel. force  2.36820e-06    2.37268e-06    3.30626e-06    1.00         1.40
Refocused       1.58877e-06    1.97016e-06    3.10062e-06    1.24         1.95

All of the cells fared far better, indicating that the Monte Carlo errors were largely “systemic” in nature and could be compensated for via refocusing. This time, the cell originally designed to be refocused came out best for average error (1.97e-6) and maximum error (3.10e-6), but still worst for performance loss in both the average (1.24) and maximum (1.95) cases. The cell designed with a 0.8 force had the least performance loss in both the average (1.00!) and maximum (1.40) cases. Once again, the straw-man design matched the even force case very closely, which is good, considering the minimal difference between the two designs. With refocusing allowed, all of the designs came in within our design limits by these tests.

The main downside of this method of Monte Carlo testing is that it represents a systemic change of the design parameters (e.g. all parts equally oversized or undersized) and does not really represent the sorts of measurement and placement errors one might actually see in making multiple parts by hand or in arranging the parts in the right positions relative to the mirror. To try to test for this sort of error, a new set of cell designs was constructed based on the parameters from the five original designs.

Monte Carlo Testing on Alternative Cell Specifications

My first attempt was to position all eighteen points independently, but Plop did not seem to know how to deal with such a design and its apparent lack of symmetry. The next, more successful approach was to group the points into three sets of six points, mirrored three ways around the circle (two points on the inner ring and four on the outer ring). More parameters were added to give each set its own independent radius and angle, so that they could be varied independently by the Monte Carlo tests. This design was acceptable to Plop.

For the new designs, I decided to be more measured in my choice of Monte Carlo variation values. Once again I chose to use a 0.01 radial variance, or roughly 2mm. For the outer angle I used 0.68 to 0.76 degrees and for the inner angle 1.48 to 1.76 depending on the design, which is the same 2mm variance in the perpendicular direction at each radius. For the force value I elected to use 0.05, which represents about a 1mm error in placement of the balance point on the triangles. (That’s right; a one-millimeter tolerance can cause a change of up to 5% in relative force!)
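The angular variance values can be reproduced from the geometry: a 2mm perpendicular displacement at radius r subtends an angle of 2/r radians. A quick sketch, assuming the 16-inch mirror gives a 203.2mm radius and using the fractional radii from Table 1 (the report's quoted ranges of 0.68 to 0.76 and 1.48 to 1.76 degrees are slightly higher, so it may have used slightly different radii or rounding):

```python
import math

MIRROR_RADIUS_MM = 203.2  # half of the 16-inch (406.4mm) diameter

def angle_for_displacement(frac_radius, displacement_mm=2.0):
    """Degrees subtended by a perpendicular displacement at a
    fractional radius of the mirror."""
    r_mm = frac_radius * MIRROR_RADIUS_MM
    return math.degrees(displacement_mm / r_mm)

# Outer radii from Table 1 span roughly 0.753 to 0.842,
# inner radii roughly 0.325 to 0.388.
print(angle_for_displacement(0.842440))  # ~0.67 degrees
print(angle_for_displacement(0.753038))  # ~0.75 degrees
print(angle_for_displacement(0.388119))  # ~1.45 degrees
print(angle_for_displacement(0.324899))  # ~1.74 degrees
```

The same relation explains why a tighter design (smaller radii) gets a proportionally larger angular tolerance for the same 2mm of displacement.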

With those changes made, I proceeded with the next set of Monte Carlo tests, this time using refocusing enabled at first. The results are summarized in Table 4.

Table 4: 1000 run Monte Carlo on altered models with refocusing

Cell          Design RMS     Average RMS    Maximum RMS    Avg/Design   Max/Design
Straw-man     2.66794e-06    4.81680e-06    1.12031e-05    1.81         4.20
Even force    2.65176e-06    4.81091e-06    1.11994e-05    1.81         4.22
No refocus    2.36323e-06    4.71431e-06    1.13730e-05    1.99         4.81
0.8 force     2.36820e-06    4.74990e-06    1.14705e-05    2.01         4.84
Refocused     1.58877e-06    4.27382e-06    1.09342e-05    2.69         6.88

This time the results clustered more closely: the average and maximum errors were much more similar across all cases. The straw-man and even force designs had the least “loss of performance” through error, but the refocused design was still slightly ahead in absolute performance, although it had lost most of its initial design-spec advantage over the others. Of the non-refocused designs, the Plop-designed no-refocus cell is marginally ahead of the others on average error, although it is hard to say whether such a slight difference is truly significant. Furthermore, the straw-man and even force cases are neck-and-neck, as we would expect, which is a good sign.

On another note, with refocusing allowed, all of the cells appear to meet our implementation requirements of 1/20th wave-front RMS.

Next I ran the Monte Carlo test again on the new designs, this time without refocusing enabled. Table 5 shows these results.

Table 5: 1000 run Monte Carlo on altered models with no refocusing

Cell          Design RMS     Average RMS    Maximum RMS    Avg/Design   Max/Design
Straw-man     2.66794e-06    0.83871e-05    2.69461e-05    3.14         10.10
Even force    2.65176e-06    0.83828e-05    2.69408e-05    3.16         10.16
No refocus    2.36323e-06    0.85541e-05    2.79294e-05    3.62         11.82
0.8 force     2.36820e-06    0.86963e-05    2.83596e-05    3.67         11.98
Refocused     1.58877e-06    7.12914e-05    9.95401e-05    44.9         62.7

Once again we see that cells designed with refocusing in mind do not fare well when compared against those designed without refocusing when measuring the results without refocusing. The other cells all performed very similarly on maximum error, with the evenly weighted cell now marginally ahead on average error. Again, we can see that there is little difference between the straw-man and even force cases as we would expect.

Looking at the error values, a 2.7e-5 maximum error represents a 1/9th wave-front cell implementation, while an average error of 8.7e-6 represents a 1/29th wave-front cell implementation. So although Plop indicates that these cells could be usable, since we can use refocusing in practice, the worst-case cells do not live up to our implementation standards, even though the average case does.

Further Refinements to the Model

To deepen the study further, I refined the model to better simulate the construction errors. First, I tightened the point placement to plus or minus 1mm in both the radial and angular dimensions (assuming careful construction) to see if that would help. Second, I modeled the force errors so as to more closely simulate the effects of misplacement of the balance points on the triangles and bars, as well as skew error in the construction of the triangles. Furthermore, I extended the precision of the error variances and computed them separately for each case as appropriate, so that each model would better reflect the errors of the corresponding case. More details of the refined model can be found in Appendix A.

Sets of 1000 Monte Carlo trials were once again run on the new models with refocusing both on and off. The results are summarized in Tables 6 and 7.

With the more refined models, we see that using variable force no longer appears to be a win at all. Now, the models are ranked fairly consistently, based on the amount of relative force, with the least variation in force leading to the better performance. While a small variation in force (less than 15% for the no refocus model) does not seem to be significantly harmful to expected performance in terms of average and maximum error, it also does not significantly help. It does however affect the implementation performance loss as measured by the average/design and maximum/design ratios. Thus any performance gain obtained through use of variable forces is likely to be lost back (and possibly more) through increased sensitivity to implementation errors.

Table 6: 1000 run Monte Carlo on refined models with refocusing

Cell          Design RMS     Average RMS    Maximum RMS    Avg/Design   Max/Design
Straw-man     2.66794e-06    4.83647e-06    1.05349e-05    1.81         3.95
Even force    2.65176e-06    4.82471e-06    1.05031e-05    1.82         3.96
No refocus    2.36323e-06    4.86651e-06    1.09094e-05    2.06         4.62
0.8 force     2.36820e-06    5.28880e-06    1.20170e-05    2.23         5.07
Refocused     1.58877e-06    5.44944e-06    1.33951e-05    3.43         8.43

Table 7: 1000 run Monte Carlo on refined models with no refocusing

Cell          Design RMS     Average RMS    Maximum RMS    Avg/Design   Max/Design
Straw-man     2.66794e-06    0.77624e-05    2.18859e-05    2.91         8.20
Even force    2.65176e-06    0.77315e-05    2.17995e-05    2.92         8.22
No refocus    2.36323e-06    0.77456e-05    2.16218e-05    3.28         9.15
0.8 force     2.36820e-06    0.84654e-05    2.32442e-05    3.57         9.82
Refocused     1.58877e-06    7.12123e-05    9.19821e-05    44.82        57.90

This time we see that all of the cells except the cell designed with refocusing in mind have acceptable average implementation specifications, but that only by using refocusing does the maximum error case become within tolerance.

Separating the Factors

Next, I wanted to discover where the greatest amount of error was coming from. To do this, I re-ran the tests on one of the designs, allowing only one of the dimensions to vary at a time: first only the inner radii, then the outer radii, and so on. I chose the even force cell for this case. I also opted to run these tests without refocusing, since that seemed to show up the errors most clearly.

In order to help show differences between positioning and balance errors, it was necessary to make some slight changes to the model. For positioning error runs, we assumed that both of the outer radii for each triangle were the same, while for balancing runs, we assumed that the average of the two outer radii was the desired value. The reason for this is that the difference in the outer radii constitutes a skew that affects the balance and hence the forces on each point. The affected rows are marked by an asterisk (*) in the table below.

The results are shown in Table 8. For the sake of easier reading, I have once again normalized all of the maximum error values to the same power of 10.

Table 8: 1000 run Monte Carlo on even force design with no refocusing

Varied factor       Design RMS     Average RMS    Maximum RMS    Avg/Design   Max/Design
Inner radii         2.65176e-06    2.97511e-06    0.41518e-05    1.12         1.57
Outer radii*        2.65176e-06    4.05693e-06    0.78693e-05    1.53         2.97
All radii           2.65176e-06    4.18564e-06    0.88848e-05    1.58         3.35
Inner angles        2.65176e-06    2.66498e-06    0.27429e-05    1.00         1.03
Outer angles        2.65176e-06    2.88045e-06    0.35848e-05    1.09         1.35
All angles          2.65176e-06    2.88681e-06    0.37751e-05    1.09         1.42
Inner position      2.65176e-06    2.99012e-06    0.40853e-05    1.13         1.54
Outer position*     2.65176e-06    4.21869e-06    0.79833e-05    1.59         3.01
All positioning*    2.65176e-06    4.34275e-06    0.96779e-05    1.64         3.65
Bar balance         2.65176e-06    3.01420e-06    0.36780e-05    1.14         1.39
Triangle radial     2.65176e-06    5.43723e-06    1.23166e-05    2.05         4.64
Triangle axial      2.65176e-06    2.65888e-06    0.26733e-05    1.00         1.01
Triangle skew*      2.65176e-06    5.30498e-06    1.28022e-05    2.00         4.83
Triangle balance*   2.65176e-06    5.46266e-06    1.20824e-05    2.06         4.56
All balance*        2.65176e-06    7.35695e-06    2.06601e-05    2.77         7.79
All variables       2.65176e-06    7.73150e-06    2.17995e-05    2.92         8.22

To compare the effects of the various factors, let us look at the ratio of difference between the design RMS error and the average RMS error of the Monte Carlo runs in Table 8. From this we see that the variation in balance (2.77) contributes most to the overall performance change (2.92). Positioning errors contribute far less (1.64), and most of that is due simply to positioning errors in the outer radii (1.53), even after we factor out the skew component. Of the balance errors, the radial (2.05) and skew (2.00) components contribute almost all of the error, while the bar balance (1.14) contributes far less, and the axial balance error (1.00) practically nothing at all.
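The Average/Design ratios quoted here can be checked directly from the reported error values in Table 8. A quick verification sketch:

```python
# Reproduce the Average/Design performance-loss ratios from Table 8.
design = 2.65176e-06  # design RMS error of the even force cell

# Average RMS errors from Table 8 for selected rows.
averages = {
    "All balance":     7.35695e-06,
    "All variables":   7.73150e-06,
    "All positioning": 4.34275e-06,
    "Outer radii":     4.05693e-06,
    "Triangle radial": 5.43723e-06,
    "Triangle skew":   5.30498e-06,
    "Bar balance":     3.01420e-06,
    "Triangle axial":  2.65888e-06,
}

for name, avg in averages.items():
    print(f"{name:16s} {avg / design:.2f}")
```

Running this reproduces the ratios cited in the paragraph above (2.77, 2.92, 1.64, 1.53, 2.05, 2.00, 1.14, and 1.00 respectively).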

Thus it appears that accurate construction of the triangles is essential to good performance, especially with regard to locating the support and balance points. After that, it is most important to properly locate the outer two support points of each triangle at the proper radius.

The average cumulative change in performance for this error model was a factor of 2.92, and the maximum error for the worst case found was a factor of 8.22. Keep in mind that all of these effects are caused by multiple positioning variations of 1 to 2 millimeters or less! What this seems to indicate is that accurate positioning and/or over-engineering are key to building a successful cell. Otherwise, your 1/120th wavelength (1/60th wave-front) cell design can become a 1/15th wavelength (1/7th wave-front) implementation, or worse! Not only that, but measuring the mirror for design and centering the mirror on the cell afterwards may also need to be done to sub-millimeter precision if you choose a design that is close to the design limit.

Fortunately refocusing helps us out somewhat, in that even cells not designed with refocusing in mind have both their average and maximal error factors reduced by refocusing after the fact. Unfortunately this is seemingly not quite as helpful for the “optimized” designs that use more widely varying forces.

From Table 6 we can see that the design with the most widely varying forces went from a design RMS error of 1.59e-6 (1/157th wave-front) to an average simulated implementation RMS error of 5.45e-6 (1/46th wave-front): a factor of 3.43. On the other hand, the “less optimized” even force design went from a design RMS error of 2.65e-6 (1/94th wave-front) to an average simulated implementation RMS error of 4.82e-6 (1/52nd wave-front): a factor of only 1.82. While the optimization seemed to add a lot of “wiggle-room” for implementation errors, the actual implementation used up all of it, and then some! Thus, if you are planning to optimize a design using variable forces to get it under design tolerance, consider using a larger cell instead, as all of that gain may be lost in the implementation, leaving you right back where you started.

Limitations of the Study

This has been a single case study where the optimization was using both varying forces and varying angles. It would be of interest to do a similar study of other designs using varying forces and/or varying angles to see if the results extend to other cases. Also, it may be useful to separate out the use of varying forces from varying angles to make certain that there is no cross-dependency. While the straw-man case was initially planned to help compare static-angle with varying-angle designs, it turned out that there was little difference between it and the constant-force with varying-angles case, leaving no room for comparison.

Likewise, the analysis was based on a model that introduced systemic variations in the cells based on six groupings of three points rather than variations of individual points. This may affect the results in some unforeseen ways.

The positioning error of each group of points was considered independently, rather than recognizing that there are two sources of error: construction of the parts, and placement and orientation of the parts relative to each other and the mirror. In the final refinement, however, we did at least begin to sort out the contributions of the various balance error factors.

Conclusions

The initial results seem to indicate that using varying forces may not give as much benefit as it first appears, because the designs become more sensitive to changes in the position and loading of the support points. The resulting sensitivity appears to claim back most of the gains in the presence of relatively small fabrication errors.

Likewise, it seems apparent that small errors in fabrication of parts on the order of one or two millimeters can have dramatic effects on the performance of a cell. The effects were considerably minimized when all of the parts had equal errors leading to systemic changes in the cell. This suggests that in making the parts, a “jig” should be used, so that all parts are as symmetrical and as close to identical as possible.

This study also brings into question the efficacy of pushing the envelope by trying to optimize a cell that is close to its performance limit. Instead, it may be safer to shift gears and try another design with more points of support. In this instance, the gains made by using relative force and refocusing to optimize the cell were seemingly all lost in increased performance loss due to simulated construction errors.

Also, given that designs can have average performance losses of a factor of two to four in performance and worst case performance losses of four to ten or more due to relatively small tolerances of one or two millimeters, it may be worth rethinking using a 1/120th wavelength and 1/60th wave-front criterion for cell design, at least for the average builder.

While by no means giving a definitive answer to the issue of varying-force designs, I do believe that we have at least raised some interesting points and shown that the issue is worthy of more inquiry.

Ultimately it makes me wonder if John Dobson doesn’t have a good idea: use thick glass and lean it on some carpet or Astroturf with a sling.

Just kidding.

I think.

Appendix A.

A Refined Model for Monte Carlo Testing 18-Point Cell with Variable Angles

The cell is made of six triangles, each with one point toward the middle and the other two on the same radius further out. The triangles are joined together in pairs. We model one of these pairs with six separately placed points so that we can use Monte Carlo testing to simulate the sorts of errors that might occur in construction.

The two triangles are labeled 'a' and 'b', and the points are '1' (on the inner radius), '2' and '3' (on the outer radius). That makes a total of six points: 1a, 2a, 3a, and 1b, 2b, 3b.

Each point is positioned by specifying its radius (rXX) and angle (aXX). There are six pairs of coordinates: (r1a, a1a) through (r3b, a3b). Each of these coordinates is allowed to vary independently by an amount equivalent to plus or minus 1mm. That leads to a total variation of plus or minus 2mm on the length of the sides of the triangles and plus or minus 1mm on the placement. In reality the effects might be reversed, with the dimensions being more accurate than the placement, but the model was getting complex enough as it was without adding in that factor.

In addition, variables are used to simulate the sorts of force errors that might occur due to shifts of plus or minus 1mm in the placement of the balance points on the bars and triangles. 'eba' simulates the bar balance position error effect on triangle 'a', with 'ebb' being its mirror effect on triangle 'b'. Similarly, 'eha' and 'ehb' simulate the effects of a radial (height) error in the balance point placement on the triangles, and 'ewa' and 'ewb' simulate the effects of an angular (width) placement error. Finally, the relative radial errors of points 2 and 3 for each triangle are used to compute a skew error factor for each triangle ('ska' and 'skb'), which in turn determines a net effect for each point.

Some of the error factors are simply summed, due to the complex nature of doing mathematical expressions within Plop, but the net error from this is minimal since (1+a)*(1+b) is approximately equal to (1+a+b) for small a and b. The error factors were similarly computed separately in a spreadsheet to help simplify the math.
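The approximation relied on above is easy to verify for error terms of the size used in this model (a few percent at most):

```python
# Verify that (1+a)*(1+b) is approximately 1+a+b for small a and b.
# The difference is exactly the cross term a*b: with 5% errors it is
# at most 0.25%, negligible next to the errors being modeled.

a, b = 0.05, 0.05
exact = (1 + a) * (1 + b)
approx = 1 + a + b
print(exact - approx)  # the cross term a*b, about 0.0025
```

So summing the error factors instead of multiplying them introduces far less error than the 1mm placement tolerances themselves.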

The net effect is that the Monte Carlo testing will approximate the effects of construction errors of up to 2mm in triangle sizing and 1mm of net triangle and balance point placement in each direction. The downside is that we are actually testing the results of three sets of equally misconstructed triangle pairs as opposed to six independently constructed triangles, but at least it gives us a slightly more realistic effect than simply varying the parameters of the original model.