Fractals, part II
DIMENSION
There are many technically precise definitions for dimension that can be
applied to any set. However, there's a simple way to look at dimension
if you have an object that is self-similar. This is called similarity
dimension.
- Line segments are 1-dimensional.
Take a line segment. Now magnify it by a factor of 2. It now
takes 2 of your original object to make the scaled object.
    2^1 = 2
If you magnify by a factor of 3, it takes 3 of the original to make
the scaled object.
    3^1 = 3
    (magnification factor)^dimension = size change
- Squares are 2-dimensional.
Take a square. Now magnify it by a factor of 2. It now takes 4 of the
original to make the scaled object.
    2^2 = 4
If you magnify it by a factor of 3, it takes 9 of the original to
make the scaled object.
    3^2 = 9
- What is the dimension of a cube?
When we magnify it by a factor of 2, it takes 8 of the original to
make the scaled object.
    2^d = 8
    ln(2^d) = ln 8
    d ln 2 = ln 8
    d = ln 8 / ln 2 = 3
- The Cantor set.
The Cantor set is more than a point (which is 0 dimensions) and less than a
line segment (which is 1 dimension), so we expect its similarity dimension
to be between 0 and 1. If we magnify it by a factor of 3, then we get
TWO exact copies of the original!
    3^d = 2
    ln(3^d) = ln 2
    d ln 3 = ln 2
    d = ln 2 / ln 3
    d ≈ .6309
- Koch's curve
We guess that the answer will be between 1 and 2.
When we scale Koch's curve by a factor of 3, it takes 4 exact copies
of the original to create the larger version.
    3^d = 4
    ln(3^d) = ln 4
    d ln 3 = ln 4
    d = ln 4 / ln 3
    d ≈ 1.262
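The computations above all follow the same pattern: solve
(magnification factor)^d = number of copies for d by taking logarithms of
both sides. As a quick illustration, here is a short Python sketch (the
function name similarity_dimension is just a label chosen here) that
reproduces these numbers:

    from math import log

    def similarity_dimension(magnification, copies):
        """Solve magnification**d == copies for d by taking logarithms."""
        return log(copies) / log(magnification)

    print(similarity_dimension(2, 8))   # cube: d = 3
    print(similarity_dimension(3, 2))   # Cantor set: d ≈ .6309
    print(similarity_dimension(3, 4))   # Koch's curve: d ≈ 1.262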
ITERATED FUNCTION SYSTEMS
One type of fractal is the kind created by an iterated function system (IFS).
You'll write a program soon to produce images of these fractals.
An iterated function system consists of a set of transformations, each of
which may involve scaling, shearing, rotation, mirror images, and translation.
To end up with a fractal image, the transformations should be contractive,
meaning each one shrinks the image. Now start with a black-and-white image
(any non-blank image will do; the choice turns out not to affect the final
fractal). This starting image is the 0th iteration. Transform it according
to each of the transformations in the system, and put the results together
to create your first iteration. If two of the transformed images overlap,
a spot in the combined image is black if it is black in any of the
contributing copies (this is a mathematical "OR"). Repeat this process
indefinitely; the limit of this process is the fractal. Each time you
iterate the transformations, your image moves closer to the final fractal.
For this reason, people sometimes refer to the final fractal image as an
attractor.
Notice also that if you run the final fractal image itself through the
transformations, your resulting image is that same fractal.
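Here is one possible way to organize this iteration in Python - just a
rough sketch, not the program you'll be assigned. The black-and-white
image is represented as a set of black points, each transformation as a
function sending a point to a point, and the "OR" of overlapping images
as a set union (the name iterate_ifs is only illustrative):

    def iterate_ifs(transforms, points, rounds):
        """Apply every transformation to every black point, OR (union) the
        results, and repeat for the given number of rounds."""
        for _ in range(rounds):
            points = {t(p) for t in transforms for p in points}
        return points

Each round replaces the image with the union of its transformed copies,
which is exactly the step described above, so the point set moves closer
and closer to the attractor.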
Example of an IFS
This example has three transformations. Each one shrinks the unit square
by a factor of 2 in each direction and places the result in one quadrant:
the first in the lower-left quadrant, the second in the upper-left quadrant,
and the third in the lower-right quadrant. Now repeat this mapping on the
resulting image, ad infinitum. The result is a fractal that looks like a
Sierpinski gasket leaning to the left!
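Written as formulas (the notation for these maps is developed in the next
section), the three transformations send a point (x, y) to

    (x/2, y/2)             for the lower-left quadrant,
    (x/2, y/2 + 1/2)       for the upper-left quadrant,
    (x/2 + 1/2, y/2)       for the lower-right quadrant.

Plugging these into the sketch above (for example as three small Python
functions) and iterating a handful of times already shows the gasket
taking shape.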
AFFINE TRANSFORMATIONS
We'd like to write a computer program to implement this method of
producing fractals. In order to do this, we first need to understand how to
describe these affine transformations mathematically.
An affine transformation involves a linear transformation followed by
a translation (a shift). Linear transformations can be described by matrices,
and are studied in-depth in a linear algebra course. To translate an
image with an affine transformation, we can imagine translating it
one point at a time. All we need are formulas that tell us where the
point (x, y) moves to. It turns out that any affine
transformation can be described by the following matrix formula:
    | x' |   | a  b | | x |   | e |
    | y' | = | c  d | | y | + | f |
where (x', y') is the image of the point (x, y).
The above matrix formula can also be written as two individual equations:
x' = ax + by + e
y' = cx + dy + f
So it takes 6 numbers to describe each of the transformations in the iterated
function system. If you're given the transformations, you now know enough to
create the final images (the fractals). However, if you want to define
the affine transformations yourself, you need to understand the 6 constants
a little better.
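As a quick check of these equations, here is a tiny Python helper, purely
illustrative (the name apply_affine and the 6-tuple layout are choices made
here, not requirements), that applies a transformation stored as the tuple
(a, b, c, d, e, f) to a point:

    def apply_affine(t, p):
        """Compute x' = a*x + b*y + e and y' = c*x + d*y + f."""
        a, b, c, d, e, f = t
        x, y = p
        return (a * x + b * y + e, c * x + d * y + f)

    # The identity transformation (1, 0, 0, 1, 0, 0) leaves points alone:
    assert apply_affine((1, 0, 0, 1, 0, 0), (2.0, 3.0)) == (2.0, 3.0)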
First let's examine the constants a, b, c and d. The easiest way to view
these is to imagine how they affect the unit square, which has the segment
from (0, 0) to (1, 0) and the segment from (0, 0) to (0, 1) as two of its
sides. The lower horizontal side of this square (the vector (1, 0)) gets
mapped to the segment from (0, 0) to (a, c) (the vector (a, c)). The left
vertical side (the vector (0, 1)) gets mapped to the segment from (0, 0)
to (b, d) (the vector (b, d)). In other words, the first column of the
matrix is where (1, 0) lands, and the second column is where (0, 1) lands.
This creates a parallelogram with one corner at the origin. The square is
transformed to that parallelogram, so the constants a, b, c and d may
describe rotations, shearing, scaling, or mirror images. The remaining
ingredient is the translation, which is handled by the constants e and f:
the parallelogram is shifted to the right by e and upwards by f.
What we've seen is that to describe an affine transformation with the
6 numbers a, b, c, d, e, and f, we need to figure out where we want the
lower horizontal and left vertical sides of the square to land to form a
parallelogram (giving us the constants a, b, c and d), and then we need
to decide how to translate this parallelogram (giving us the constants
e and f).
Here are some examples:
1. Rotate the image 90 degrees clockwise, in place. This is the same as
a 90 degree clockwise rotation around the origin, followed by a shift one
unit up. (The parallelogram defined by the constants a, b, c and d always
keeps the lower left corner at the origin, so those 4 constants can only
describe a rotation around the origin; the shift must come from e and f.)
The lower horizontal side moves to the vector (0, -1), and the left
vertical side moves to the vector (1, 0). We now take this parallelogram
and shift it up by 1 unit, i.e. add the vector (0, 1). This gives us the
final transformation:
    | x' |   |  0  1 | | x |   | 0 |
    | y' | = | -1  0 | | y | + | 1 |
In general, a clockwise rotation of θ degrees about the origin
is represented by the matrix
    |  cos θ   sin θ |
    | -sin θ   cos θ |
2. Reflect the image across a vertical line, in place (a mirror image).
This is the same as a reflection over the y-axis, followed by a translation
to the right by 1 unit. The lower horizontal side transforms to the vector
(-1, 0) and the left vertical side stays put at the vector (0, 1). The
translation means adding the vector (1, 0). Here's the transformation:
    | x' |   | -1  0 | | x |   | 1 |
    | y' | = |  0  1 | | y | + | 0 |
3. Scale the image down by a factor of 2 in each direction, and place it
in the upper right quadrant. The vector (1, 0) maps to (.5, 0) and
(0, 1) maps to (0, .5). Then the image is shifted by (.5, .5):
    | x' |   | .5   0 | | x |   | .5 |
    | y' | = |  0  .5 | | y | + | .5 |
The three transformations of the leaning Sierpinski gasket IFS look just
like this one; only the shift differs. Their shifts are (0, 0) for the
lower-left copy, (0, .5) for the upper-left copy, and (.5, 0) for the
lower-right copy.
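Putting the pieces together, here is a short end-to-end Python sketch -
again only an illustration, not the assigned program - that iterates the
three leaning-gasket transformations, written as (a, b, c, d, e, f) tuples,
on a small set of starting points. The apply_affine helper from earlier is
repeated so the snippet stands on its own:

    def apply_affine(t, p):
        a, b, c, d, e, f = t
        x, y = p
        return (a * x + b * y + e, c * x + d * y + f)

    TRANSFORMS = [
        (.5, 0, 0, .5,  0,  0),   # shrink into the lower-left quadrant
        (.5, 0, 0, .5,  0, .5),   # shrink into the upper-left quadrant
        (.5, 0, 0, .5, .5,  0),   # shrink into the lower-right quadrant
    ]

    # Start from any non-blank image: here, the corners of the unit square.
    points = {(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)}
    for _ in range(6):
        points = {apply_affine(t, p) for t in TRANSFORMS for p in points}

    # Plotting these points (for example with matplotlib) shows the
    # left-leaning Sierpinski gasket emerging.
    print(len(points), "points after 6 iterations")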