Dr Rupert Rawnsley. March 2005.

The Mandelbrot set is explored using both single-point and interval arithmetic. The application of interval arithmetic to the Mandelbrot set is discussed, and suitable implementations proposed. Results are presented for the single-point and interval versions of the Mandelbrot set, and differences between the two are discussed. These differences are clearly shown in the following figures.

The program that produced these results is available to download.

I am very grateful to Luiz de Figueiredo, John Hedley, Steve Margetts, Simon Chapman, and Alan Leisk for reading and contributing to the draft of this paper. I am most indebted to Bill Walster, who has been extremely generous with his time and suggestions. So generous in fact that I have opted to do them justice by exploring them in a follow-up paper. I have gratefully adopted Bill's recommendations for changes in the interval arithmetic background section.

The motivation behind this work began (in 1998 I think) when Dr Bill Walster came to Cardiff University to give an invited lecture about interval arithmetic. He was an entertaining and informative speaker, and I went away excited by the subject, but even more excited about the dormant abilities of the Floating Point Unit (FPU) inside every desktop computer.

One critical area identified by Dr Walster was chaotic systems, where iterative
evaluation is inherent, and the accumulation of errors inevitable.
There was only one "chaotic" system I knew anything about:
the Mandelbrot set. It had the added advantages of being easy to write
(I taught my calculator to draw one), easy to remember
(z_{n+1} = z_{n}^{2} + c),
and easy to visualize (so it's obvious when you've got it upside down).
I coaxed the FPU of my SPARC Workstation to perform
the modest rounding required for interval arithmetic and produced a
Mandelbrot that was not dissimilar to the funny looking ones above. I was really
expecting a normal Mandelbrot with slightly (possibly imperceptibly) fuzzy edges,
but the result was markedly different.
I decided to repeat the exercise
at a later date and write it up for some more insightful
researcher to explain.
Two more implementations and several years later I have got around to the write up.

The inputs to computer algorithms often represent measurements of physical quantities in the real world. The measurement process is inexact, which means these inputs actually represent a range of possible values. Errors in the inputs to an algorithm will propagate and emerge as implicit errors in the output. Furthermore, real numbers cannot always be adequately represented inside a computer because each number is only allocated a limited number of bits (typically 32 or 64). There is often a small difference between the intended number and the number actually described by the pattern of bits in the computer. These differences, which are sometimes called rounding noise, can propagate and grow during long calculations.
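Rounding noise is easy to demonstrate: neither 0.1 nor 0.2 has an exact binary representation, so even a single addition picks up a small error.

```python
# Neither 0.1 nor 0.2 is exactly representable in binary floating point,
# so the computed sum differs slightly from the (equally unrepresentable) 0.3.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004
```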

Standard algorithms (as exemplified by Knuth) often come with bounds on the expected error that may be combined with the assumed tolerance in the input variables. Unfortunately the algorithms used in production code are rarely standard or well studied. Algorithms may start by-the-book, but are often "hacked" for performance, efficiency, or parallelization, in which case tolerance guarantees may not be honoured. More insidiously, these tolerance figures do not include the effect of rounding noise. It can also be difficult to trace the path taken by data through a real program, which may be iterative, recursive, non-deterministic, distributed, may rely on third-party components with unknown error margins, or any combination of these obfuscating factors.

Interval arithmetic is a numerical analysis method that avoids the precision problems associated with single-point arithmetic. An interval comprises two single-point numbers that denote the minimum (infimum) and maximum (supremum) values for the quantity represented. The principle is simple: both numbers travel as a pair through the algorithm and the result will "bound" the real answer.
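As a concrete sketch, interval addition and multiplication can be written in a few lines. The function names are illustrative, and exact rationals are used so the directed rounding a real implementation must perform can be ignored here.

```python
from fractions import Fraction

# Minimal illustrative interval operations on (inf, sup) pairs.
# A real library rounds inf down and sup up after every operation;
# exact rationals let this sketch skip directed rounding.

def iv_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    # The product's bounds are the min and max over all corner products.
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

x = (Fraction(1, 3), Fraction(1, 2))   # a quantity known to lie in [1/3, 1/2]
y = (Fraction(-2), Fraction(1))
print(iv_mul(x, y))                    # [-1, 1/2] bounds every possible x*y
```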

In practice, the introduction of intervals can complicate the implementation significantly, especially if one requires that the resulting intervals are "usefully narrow". To this end, interval algorithm developers place more emphasis on preserving a tight interval at each calculation step than on performing the calculation as fast as possible. You can either have the quick answer or the "right" answer.

Beyond the mere "accountancy" of computer science, intervals are also useful for mathematically inclined grocers who are really serious about packing oranges.

In introducing this subject I am happy to pass the buck to more competent writers. Complex numbers lie at the heart of the Mandelbrot set, and the complex plane is the canvas on which it is drawn. There are plenty of references for complex numbers, but I liked the geometric emphasis of this one. The traditional algorithmic approach to the Mandelbrot set is described here. This website is particularly good at describing the difference between the Mandelbrot and Julia sets, but also has a neat little Mandelbrot explorer and a tool that shows the Mandelbrot orbits.

Pertinent to this endeavour is the way the Mandelbrot is rendered. To determine the colour of a pixel, a point in its approximate center is tested. This sampling process is the crux of the difference between the single-point and interval approaches.

Two different methods for rendering Mandelbrots using intervals are presented. In the first, the sample point in the pixel centre (discussed above) is replaced with an interval value with the bounds of the interval co-incident at the pixel centre. The second method uses the approximate pixel edges to give upper and lower bounds for the seed interval. In both cases the usual iterative process is applied and the result is used to colour the query pixel.

Both parts of the complex number (the real and the imaginary components) are replaced with intervals. Therefore an interval on the complex plane can be visualized as a rectangle, where the left and right edges are the infimum and supremum of the real component and the top and bottom edges are determined by the infimum and supremum of the imaginary component. One iteration of the Mandelbrot equation can therefore be visualized as the transformation of one rectangle into another, where the new rectangle will contain the result of applying Mandelbrot's formula to any single point in the original rectangle. This must follow from the principles of interval arithmetic, but a complete mathematical derivation is beyond the scope of this paper.
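The rectangle-to-rectangle transformation can be sketched as follows, with illustrative names and without the outward rounding a production implementation would need:

```python
def iv_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iv_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def iv_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def mandel_step(z, c):
    """One iteration z -> z^2 + c on a complex interval (re, im),
    where re and im are (inf, sup) pairs.

    Expanding (re + im*i)^2 gives re^2 - im^2 + 2*re*im*i.
    Note iv_mul(re, re) treats its two factors as independent, so the
    square is wider than necessary when the interval straddles zero
    (the classic interval dependency problem).
    """
    re, im = z
    new_re = iv_sub(iv_mul(re, re), iv_mul(im, im))
    new_im = iv_mul((2, 2), iv_mul(re, im))
    return (iv_add(new_re, c[0]), iv_add(new_im, c[1]))
```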

In the single-point Mandelbrot calculation, points are considered outside the set if their orbit (the path they take as the iterations are applied) goes off to infinity. Luckily, any orbit that leaves a circle with a magnitude of two is guaranteed to escape (avoiding the inconvenience of an infinite number of iterations), and this threshold can be used as a termination criterion. For the interval Mandelbrot, termination occurs when any corner of the interval rectangle leaves the magnitude-two circle, as at least one point in the rectangle is not a member of the set. Of course, this "escaped" rectangle may also contain points that are in the set, but this could be equally true in the single-point case, where the accumulated tolerance around an iterated point would mean there was untraceable ambiguity about set membership.
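The corner-based termination test described above is straightforward. A sketch, reusing the (inf, sup) pair representation for each component:

```python
def any_corner_escaped(z):
    """True if at least one corner of the interval rectangle lies
    outside the magnitude-two circle."""
    re, im = z
    return any(r * r + i * i > 4.0 for r in re for i in im)

print(any_corner_escaped(((1.5, 2.1), (0.0, 0.5))))    # True: corner 2.1+0.5i
print(any_corner_escaped(((-0.5, 0.5), (-0.5, 0.5))))  # False: all corners inside
```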

The complex intervals are seeded with the same pixel centres as are used in the single-point version. These points are rarely in the exact centre of the pixel (mathematically speaking) as it is unlikely to be a representable number. However, for the purposes of comparison, this is the smallest step that can be taken into the paradigm of intervals.

Although the intervals begin degenerate (i.e. the bounds are co-incident), they will probably widen with each iteration of the Mandelbrot equation.

An alternative seeding is to set the interval bounds to be coincident with the pixel boundaries. The same problems of representability make an exact match unlikely, but any slight perturbation will also apply to the adjacent pixel. One consequence of this tiling is that no point in the complex-plane escapes investigation.
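Both seedings can be sketched for a hypothetical w x h pixel grid mapped onto the complex rectangle with corners (x0, y0) and (x1, y1); the names and signatures are illustrative, not those of the accompanying tool:

```python
def seed_centre(px, py, x0, y0, x1, y1, w, h):
    """Degenerate interval at the (approximate) centre of pixel (px, py)."""
    cx = x0 + (px + 0.5) * (x1 - x0) / w
    cy = y0 + (py + 0.5) * (y1 - y0) / h
    return ((cx, cx), (cy, cy))

def seed_boundary(px, py, x0, y0, x1, y1, w, h):
    """Interval spanning the (approximate) pixel edges.
    Adjacent pixels share an edge, so the seeds tile the plane."""
    re = (x0 + px * (x1 - x0) / w, x0 + (px + 1) * (x1 - x0) / w)
    im = (y0 + py * (y1 - y0) / h, y0 + (py + 1) * (y1 - y0) / h)
    return (re, im)
```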

One aspect of the interval approach that is interesting to explore is recursion. It is possible that complex interval rectangles larger than a pixel are contained in the Mandelbrot set, therefore the following approach should be equivalent to the pixel boundary seeding described above:

- The entire boundary is tested first (despite it being unlikely to succeed), and if it does turn out to be in the set then we can paint the whole canvas black (denoting set membership),
- Otherwise the area is subdivided into quarters and the process is repeated for each quarter.
- This continues recursively until the remaining subdivided portions drop to the size of one pixel; if they still aren't contained, then they are coloured according to the number of iterations it took to leave (as with traditional Mandelbrot algorithms).
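The recursion above can be sketched as follows; `contained`, `paint_black`, and `colour_pixel` are placeholders for the interval iteration and the usual escape-time colouring, not functions from the accompanying tool:

```python
MIN_SIZE = 1  # stop subdividing at the size of one pixel

def subdivide(region):
    """Split a rectangle ((x0, y0), (x1, y1)) into four quarters."""
    (x0, y0), (x1, y1) = region
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    return [((x0, y0), (mx, my)), ((mx, y0), (x1, my)),
            ((x0, my), (mx, y1)), ((mx, my), (x1, y1))]

def render(region, size, contained, paint_black, colour_pixel):
    if contained(region):
        paint_black(region)        # whole region is in the set
    elif size <= MIN_SIZE:
        colour_pixel(region)       # colour by escape iteration count
    else:
        for quarter in subdivide(region):
            render(quarter, size / 2, contained, paint_black, colour_pixel)
```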

To illustrate this effect, contained subdivisions larger than 1 pixel are marked with a grey rectangle in the following figures:

This approach gives a speed-up where there are large regions of contained points, especially as the individual pixels would each have needed to be tested for the maximum number of iterations. The worst-case scenario is that none of the points are within the set, and all evaluations above the size of one pixel are a waste of time. Even then, this will actually only be about a third more effort (1/4 + 1/16 + 1/64 + ... = 1/3).
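The worst-case overhead can be checked numerically: for N pixels the coarser levels contribute roughly N/4 + N/16 + N/64 + ... extra tests, a geometric series that converges to N/3.

```python
# Partial sums of 1/4 + 1/16 + 1/64 + ... rapidly approach 1/3.
extra = sum(0.25 ** k for k in range(1, 30))
print(extra)   # 0.333...: roughly a third more tests in the worst case
```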

Another interesting feature is that (with the grey rectangles removed) the result with recursion and without is the same, which is a useful sanity check.

A presentation called "Images of Julia sets that you can trust" by João Batista Oliveira and Luiz Henrique de Figueiredo was recently brought to my attention (by Dr Walster). It is similar to the work presented here, but based on the Julia set. The question it poses is identical: are fractal images accurate?

The analysis they provide is fascinating, and they also use a recursive subdivision algorithm. They make two novel and crucial observations:

- As an interval orbits the complex plane it may fall inside an interval it visited in a previous iteration. This "closure" implies that the interval will repeat the same path (or an even tighter path) in the future, and hence the seed interval may be regarded as **definitely** contained.
- If an interval's orbit takes it completely outside of the magnitude-two circle (not just one corner point, as is used for the termination criterion in this paper) then all points contained in the seed rectangle are **definitely not** contained.

They paint the sets of pixels that satisfy condition 1 black and those that satisfy condition 2 white. They allow for a third category of pixels that do not satisfy either condition, but cannot be subdivided any further, which they paint grey.

This approach does more than just accelerate the rendering process. It gives mathematical guarantees about the Julia sets that are not possible with single-point arithmetic. Points are either definitely in, definitely out, or clearly labeled as ambiguous. This confidence is echoed by the provocative title and conclusions of the presentation.
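Their closure test can be sketched as follows; `rect_inside` and the orbit loop are illustrative, and `step` stands in for whatever interval iteration is in use:

```python
def rect_inside(a, b):
    """True if interval rectangle a lies entirely within rectangle b."""
    (a_re, a_im), (b_re, b_im) = a, b
    return (b_re[0] <= a_re[0] and a_re[1] <= b_re[1] and
            b_im[0] <= a_im[0] and a_im[1] <= b_im[1])

def definitely_contained(seed, step, max_iters):
    """Closure test: if the orbit falls inside a previously visited
    rectangle it must repeat (or tighten) its path forever, so the
    seed rectangle is definitely in the set."""
    history = []
    z = seed
    for _ in range(max_iters):
        z = step(z)
        if any(rect_inside(z, prev) for prev in history):
            return True
        history.append(z)
    return False   # inconclusive within max_iters
```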

Results are presented for the three different methods: single-point,
intervals seeded with pixel centres, and intervals
seeded with pixel boundaries.
A 10000 iteration limit was used for all the runs,
as no visible difference was made by higher numbers of iterations.
The results are presented here and labelled with the defining boundaries
in the form *lower left*, *upper right*.
The XML-file links can be loaded into the exploration tool that accompanies
this paper.
There is no particular order to the results.

- XML: -2 - 2i, -2 + 2i
- XML: -0.685 - 0.75i, -0.345 - 0.41i
- XML: -0.4827 - 0.5483i, -0.4555 - 0.5211i
- XML: -0.03 + 0.44i, 0.27 + 0.74i
- XML: -1.85 - 0.12i, -1.61 + 0.12i
- XML: -1.7816 - 0.0076i, -1.7664 + 0.0076i
- XML: -0.77 + 0.084i, -0.714 + 0.14i
- XML: -0.749 + 0.105i, -0.735 + 0.119i
- XML: -0.74347 + 0.11221i, -0.742 + 0.11368i
- XML: -0.74220825 + 0.11232515i, -0.74208085 + 0.1125255i

Clearly using intervals instead of single-point numbers to draw Mandelbrots makes a big difference to the result. Even the small step of replacing the pixel centered samples with intervals has a profound impact on the picture. Both interval approaches are typical of assumptions that one would make when migrating any single-point algorithm to the interval paradigm.

Features such as near right-angle corners and apparently straight diagonal lines seen in the pixel centered version are unusual (for the Mandelbrot set), and perhaps are symptomatic of the calculation process in some way rather than a feature of the fractal's geometry.

I am struggling to say anything specific about the Mandelbrot results, or to explain particular differences between the single-point and interval approaches. However, the original purpose of this investigation was to see if there would be any differences, and whether chaotic systems should be simulated with great care. This has been clearly demonstrated. I will leave you with C.J's advice to Reggie Perrin:

I don't say to you, "Reggie, pull your socks up".
I say to you that overall sales in April across the spectrum were down 0.1%.
I leave you to draw your own conclusions. And pull your socks up.

I am interested in adding the following features to the research tool:

- Add Oliveira and de Figueiredo's extensions to the recursive algorithm. This approach would elevate the resulting bitmap from a cautionary tale for computer scientists to a definitive statement about the Mandelbrot set.
- Anti-aliasing. A previous version of the tool had this feature and it greatly enhanced the visual aesthetics, but also made small features much more obvious. It can also be done using recursive subdivision.
- Animation. I wonder if any of the features are scale dependent? Do some of the ambiguous features close to the set boundary "boil" away as you zoom in? From the static pictures it is not obvious.
- Arbitrary precision interval libraries. Fractint has a really interesting feature called deep zooming that uses arbitrary precision single-point arithmetic. It would be worthwhile using one of the interval arbitrary-precision libraries (such as MPFI) to look at the effect precision has on the resulting images. Would unlimited precision give unlimited resolution?

A program was written (in C#.NET) to investigate the Mandelbrot set. The source is also available for verification, extension, and criticism. The interval arithmetic library and colouring utilities are from Keima.

The tool is self-explanatory, if a little idiosyncratic.

The current position can be loaded and saved (including a bitmap) in XML format.

Pixels are coloured white when an adjacent pixel's complex co-ordinates are the same. This gives an indication of when the hard limits of machine representability are reached.
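This limit can be seen directly: at extreme zoom the pixel pitch falls below the spacing of double-precision numbers (the coordinate and pitch values here are hypothetical):

```python
import math

x = -0.7425                  # a coordinate near the set boundary
pitch = 1e-17                # hypothetical pixel pitch at extreme zoom
print(x + pitch == x)        # True: the step vanishes entirely
print(math.ulp(x))           # spacing of doubles near x, about 1.1e-16
```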

If you find any serious bugs (that cast doubt on the results) then please let me know. If you find minor bugs that just annoy you, fix them yourself - why do you think I'm giving away the source code ;o)