2D Rotation

I'm hoping with this post to show one way that complex numbers come about and how they are a natural way of representing rotations in 2D. To do this I will introduce a small amount of Clifford algebra and some infinite series expansions. A benefit of understanding rotations in this way will be that it generalizes to arbitrary dimensions, though you probably only care about the two and three dimensional cases.

A friend of mine recently asked me about 2D rotations. He had started with a 2x2 rotation matrix:

\[ \left( \begin{array}{cr} cos(\theta) & -sin(\theta)\\ sin(\theta) & cos(\theta)\\ \end{array} \right) \left( \begin{array}{c} x\\ y\\ \end{array} \right) = \left( \begin{array}{c} cos(\theta) * x - sin(\theta) * y\\ sin(\theta) * x + cos(\theta) * y\\ \end{array} \right) \]

After playing with it for a while he concluded that the second row of the matrix was redundant and you could just store the first row, or really any row or column. This reminded him of something I had said a while back about using complex numbers to represent rotations. This prompted me to write a first version of this post as an email in response. Then he wrote again saying that it seemed like the two component thing was the result of \(e^{i\theta}\). So I have added a section on the exponential map and Taylor series.

Complex Numbers

First let's start with a bit of a review of complex numbers. You've probably been introduced to them at some point as a number with two components, a real portion and an imaginary portion, where the imaginary portion is signified by having a lower case \(i\) associated with it.

\[(a + b i)\]

And they were probably being used to fill in a gap when someone tried to take the square root of a negative number. Probably it was right around when you learned the quadratic formula. Some people are introduced to complex numbers when they start playing with fractals; the Mandelbrot set, Julia sets, and others are defined in the complex plane. The complex plane is what you get if you treat the real and imaginary parts of a complex number as the \(x\) and \(y\) components of a 2D vector. A lot of neat results come from that mapping. Later I'll show a different way to associate the components of a complex number with the \(x\) and \(y\) components of a 2D vector, but first, let's remember what happens when you multiply two complex numbers together.

\[(a_1 + b_1 i) * (a_2 + b_2 i)\]

You probably remember FOIL (First Outer Inner Last), which tells you how to go about doing this multiplication. First multiply the a's together, then the outer elements \((a_1 \text{ and } b_2 i)\), then the inner elements \((b_1 i \text{ and } a_2)\), and finally the last elements, the b's.

\[(a_1 * a_2) + (a_1 * b_2 i) + (b_1 i * a_2) + (b_1 i * b_2 i)\]

With complex numbers we're free to move the i's around inside each product, so we'll move them to the end of each one, and outside of the parentheses.

\[(a_1 * a_2) + (a_1 * b_2) i + (b_1 * a_2) i + (b_1 * b_2) i * i\]

The i * i that we get from the multiplication of the two imaginary components \((b_1 \text{ and } b_2)\) of the two complex numbers could also be written \(i^2\). The whole reason for inventing \(i\) was to provide an answer to the question of what \(x\) is in the following.

\[x = \sqrt{-1}\]

So, \(i^2 = -1\), and thus the result of multiplying two imaginary components of a complex number together is a real number, just one that has the opposite sign than it would otherwise. So, taking this into account and merging the two terms in the middle that each have a single \(i\) in them we can simplify the result of multiplying two complex numbers down to the following.

\[(a_1 * a_2) + (a_1 * b_2 + b_1 * a_2) i - (b_1 * b_2)\]

Finally, moving the imaginary portion to the end we get:

\[(a_1 * a_2 - b_1 * b_2) + (a_1 * b_2 + b_1 * a_2) i\]

Which is another complex number \((a_3 + b_3 i)\) where \(a_3\) and \(b_3\) are defined as:

\[\begin{align*} a_3 &= a_1 * a_2 - b_1 * b_2 \\ b_3 &= a_1 * b_2 + b_1 * a_2 \end{align*}\]
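To make this concrete, here is the product rule above as a small JavaScript sketch (the function name and object layout are just for illustration):

```javascript
// Multiply two complex numbers (a1 + b1 i) and (a2 + b2 i) using the
// simplified product derived above.  The result is an object holding the
// real part a and the imaginary part b.
function complex_multiply(a1, b1, a2, b2)
{
    return { a: a1 * a2 - b1 * b2,
             b: a1 * b2 + b1 * a2 };
}
```

For example, multiplying \((1 + 2i)\) by \((3 + 4i)\) gives \((-5 + 10i)\).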

With that basic review of complex numbers, let's move on to talk about the Clifford algebra.

Clifford Algebra

Above we said that you could make a 2D plane by associating the real and imaginary components of a complex number with the \(x\) and \(y\) elements of a 2D vector. But we didn't really talk about what that meant or how you could distinguish an \(x\) component from a \(y\) component. There has been an enormous amount written about this subject, and I highly recommend Geometrical Vectors by Gabriel Weinreich. But for now, I'm going to introduce two new things, sort of like \(i\) from the complex numbers, called \(\mathbf{e_1}\) and \(\mathbf{e_2}\), and we will use them to distinguish between the two components of a 2D vector. So our vectors will look like this: \((x \mathbf{e_1} + y \mathbf{e_2})\). These things, which we'll call bases, have a few properties: a base times itself equals one, and swapping the position of two different bases in a term negates the term.

\[ \mathbf{e_i e_i} = 1\\ \mathbf{e_i e_j} = - \mathbf{e_j e_i} \quad (i \neq j) \]

These vectors behave just like you would expect a vector to behave. You can add and subtract them, and you can multiply them by scalars:

\[\begin{align*} (x_1 \mathbf{e_1} + y_1 \mathbf{e_2}) + (x_2 \mathbf{e_1} + y_2 \mathbf{e_2}) &= ((x_1 + x_2) \mathbf{e_1} + (y_1 + y_2) \mathbf{e_2}) \\ c * (x_1 \mathbf{e_1} + y_1 \mathbf{e_2}) &= ((c * x_1) \mathbf{e_1} + (c * y_1) \mathbf{e_2}) \end{align*}\]

But they also have another interesting trick, if we multiply two of these vectors together we get an interesting new object.

\[(x_1 \mathbf{e_1} + y_1 \mathbf{e_2}) * (x_2 \mathbf{e_1} + y_2 \mathbf{e_2})\]

using FOIL you get

\[(x_1 * x_2) \mathbf{e_1 e_1} + (x_1 * y_2) \mathbf{e_1 e_2} + (y_1 * x_2) \mathbf{e_2 e_1} + (y_1 * y_2) \mathbf{e_2 e_2}\]

Since \(\mathbf{e_1 e_1} = 1\) and \(\mathbf{e_2 e_2} = 1\) the first and last terms turn into scalars and you get

\[(x_1 * x_2) + (x_1 * y_2) \mathbf{e_1 e_2} + (y_1 * x_2) \mathbf{e_2 e_1} + (y_1 * y_2)\]

Now, the middle two terms can be combined by swapping the \(\mathbf{e_2 e_1}\) in the second term to be \(\mathbf{e_1 e_2}\) and negating the whole term. So the result is

\[(x_1 * x_2) + (x_1 * y_2 - y_1 * x_2) \mathbf{e_1 e_2} + (y_1 * y_2)\]

Finally, grouping the first and last term together you get

\[(x_1 * x_2 + y_1 * y_2) + (x_1 * y_2 - y_1 * x_2) \mathbf{e_1 e_2}\]

We see that the first term is just the dot product of the two vectors, and the second term is just the 2D cross product 1 times something weird, this \(\mathbf{e_1 e_2}\) term. So we've got some new odd object here: a scalar plus a scalar times a strange pair of bases. If we replace the dot and cross product values with the letters \(a\) and \(b\) we have \(a + b \mathbf{e_1 e_2}\).

As you may know, the dot product of two vectors is equal to the cosine of the angle between them times the length of each vector. Similarly, the length of the cross product of two vectors is equal to the sine of the angle between them times the length of each vector. These sines and cosines allow us to get back to rotation.
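In code, the geometric product of two 2D vectors reduces to computing those two quantities (a sketch; the names are just for illustration):

```javascript
// Geometric product of the 2D vectors (x1 e1 + y1 e2) and (x2 e1 + y2 e2).
// The scalar part is the dot product and the e1 e2 part is the 2D cross
// product, exactly as derived above.
function vector_product(x1, y1, x2, y2)
{
    return { scalar: x1 * x2 + y1 * y2,
             e1e2:   x1 * y2 - y1 * x2 };
}
```

Multiplying the unit x axis by the unit y axis gives a scalar part of 0 and an \(\mathbf{e_1 e_2}\) part of 1, matching the cosine and sine of the 90 degree angle between them.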

2D Rotation Revisited

Now what happens if we multiply one of our original vectors times one of these?

\[(x \mathbf{e_1} + y \mathbf{e_2}) * (a + b \mathbf{e_1 e_2})\]

We get:

\[(a * x) \mathbf{e_1} + (b * x) \mathbf{e_1 e_1 e_2} + (a * y) \mathbf{e_2} + (b * y) \mathbf{e_2 e_1 e_2}\]

Which, after using \(\mathbf{e_1 e_1 e_2} = \mathbf{e_2}\) and \(\mathbf{e_2 e_1 e_2} = -\mathbf{e_2 e_2 e_1} = -\mathbf{e_1}\), simplifies to:

\[(a * x - b * y) \mathbf{e_1} + (b * x + a * y) \mathbf{e_2}\]

Which looks a lot like the application of a rotation matrix. 2 Especially when you remember that if the two vectors we started with are unit length then a and b are equal to the cos and sin of the angle between them. In which case we have:

\[a = cos(\theta)\\ b = sin(\theta)\]

and after substitution:

\[ \begin{align*} &(cos(\theta) * x - sin(\theta) * y) \mathbf{e_1} +\\ &(sin(\theta) * x + cos(\theta) * y) \mathbf{e_2} \end{align*} \]
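So rotating a 2D vector takes just four multiplies and two adds. As a sketch, with names chosen only for illustration:

```javascript
// Rotate the vector (x e1 + y e2) by the complex number (a + b e1 e2),
// where a = cos(theta) and b = sin(theta) for a rotation by theta radians.
function rotate(x, y, a, b)
{
    return { x: a * x - b * y,
             y: b * x + a * y };
}

// Build the (a + b e1 e2) pair for a given angle in radians.
function rotation(theta)
{
    return { a: Math.cos(theta), b: Math.sin(theta) };
}
```

Rotating the vector (1, 0) by \(\pi / 2\) radians gives (0, 1), up to floating point error.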

Complex Numbers Revisited

You can probably see where this is going. Let's try multiplying two of these 'scalar plus scalar times base pairs' together.

\[ (a_1 + b_1 \mathbf{e_1 e_2}) * (a_2 + b_2 \mathbf{e_1 e_2}) \]

Again, using FOIL we get:

\[ (a_1 * a_2) + (a_1 * b_2) \mathbf{e_1 e_2} + b_1 \mathbf{e_1 e_2} * a_2 + b_1 \mathbf{e_1 e_2} * b_2 \mathbf{e_1 e_2} \]

It is fine to move the scalars around, so let's clean this up:

\[ (a_1 * a_2) + (a_1 * b_2) \mathbf{e_1 e_2} + (a_2 * b_1) \mathbf{e_1 e_2} + (b_1 * b_2) \mathbf{e_1 e_2 e_1 e_2} \]

We can combine the two \(\mathbf{e_1 e_2}\) terms:

\[ (a_1 * a_2) + (a_1 * b_2 + a_2 * b_1) \mathbf{e_1 e_2} + (b_1 * b_2) \mathbf{e_1 e_2 e_1 e_2} \]

Next we deal with the \(\mathbf{e_1 e_2 e_1 e_2}\) term. If we swap the middle \(\mathbf{e_2 e_1}\) into \(\mathbf{e_1 e_2}\) and negate the whole thing we get \(-\mathbf{e_1 e_1 e_2 e_2}\). And since a base times itself is \(1\) we have \(-1 * 1\) or \(-1\). So, the last term can be replaced with \(-(b_1 * b_2)\), causing the entire thing to turn into:

\[ \begin{align*} &(a_1 * a_2 - b_1 * b_2) +\\ &(a_1 * b_2 + a_2 * b_1) \mathbf{e_1 e_2} \end{align*} \]

And poof, we have another one of these scalar plus scalar times \(\mathbf{e_1 e_2}\) things. The fact that \(\mathbf{e_1 e_2 e_1 e_2} = -1\) should have given it away: \(\mathbf{e_1 e_2}\) could also be called \(i\) and these are complex numbers (or at least they behave like them, which in some sense means that they are them).

If complex numbers really do represent rotations in 2D then we would expect that there would be a way to combine two rotations that are represented by complex numbers, resulting in a single complex number that represents the combined rotation. And you are probably not surprised to learn that there is, and it is just complex multiplication (which we worked out above). To show this let's assume that \((a_1 + b_1 \mathbf{e_1 e_2})\) represents a rotation by \(\theta\) radians and \((a_2 + b_2 \mathbf{e_1 e_2})\) represents a rotation by \(\phi\) radians. So:

\[ a_1 = cos(\theta)\\ b_1 = sin(\theta)\\ a_2 = cos(\phi)\\ b_2 = sin(\phi) \]

If we rewrite the result of multiplying two complex numbers together and substitute these cosines and sines in we get:

\[ \begin{align*} &(cos(\theta) * cos(\phi) - sin(\theta) * sin(\phi)) +\\ &(cos(\theta) * sin(\phi) + cos(\phi) * sin(\theta)) \mathbf{e_1 e_2} \end{align*} \]

Now we have to remember (or re-derive, but this post is getting too long as it is) the sin and cos of the sum of angles formulas. They are, probably not surprisingly:

\[ \begin{align*} cos(\theta + \phi) &= cos(\theta) * cos(\phi) - sin(\theta) * sin(\phi)\\ sin(\theta + \phi) &= cos(\theta) * sin(\phi) + cos(\phi) * sin(\theta)\\ \end{align*} \]

And thus, we can rewrite the product of two complex numbers that represent rotations as:

\[cos(\theta + \phi) + sin(\theta + \phi) \mathbf{e_1 e_2}\]

And indeed, the product of two complex numbers that represent rotations, is itself a rotation, one that is the sum of the two independent rotations.
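That result is easy to check numerically: composing two rotations with complex multiplication should match building a single rotation from the summed angles. A sketch, with illustrative names:

```javascript
// The (a + b e1 e2) pair for a rotation by theta radians.
function rotation(theta)
{
    return { a: Math.cos(theta), b: Math.sin(theta) };
}

// Combine two rotations using the complex product derived above.
function compose(r1, r2)
{
    return { a: r1.a * r2.a - r1.b * r2.b,
             b: r1.a * r2.b + r1.b * r2.a };
}
```

Here compose(rotation(0.3), rotation(0.5)) agrees with rotation(0.8) to within floating point error.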

Exponential Map

The exponential map is a special function that is its own derivative. It is fascinating, and I recommend reading more about it. Here I'm just going to quickly show one fantastic result. I'll need to use the words Taylor series and infinity, but I'm not going to justify them much. The exponential map can be thought of as an infinite sum of terms, or as the exponentiation \(e^x\).

\[exp(x) = e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}\]

This function is valid for all complex numbers, but we will limit the input to just the imaginary numbers (the imaginary portion of a complex number). To make the notation a little simpler I'll use \(i\) instead of \(\mathbf{e_1 e_2}\). I'll show that by doing this we effectively map an infinite line to the circle of radius one centered at the origin of the complex plane. Points on the line can be thought of as representing angles in radians. And every time we move \(2 \pi\) along the line we make one full loop around the circle. So, there are many points on the line that map to the same point on the circle. These all represent the same rotation.

\[exp(x i) = e^{x i} = \sum_{k=0}^{\infty} \frac{(x i)^k}{k!}\]

First, let's look at the \((x i)^k\) term. The first few terms in the infinite sum and the generalization for an arbitrary value of \(k\) are:

\[ \begin{align*} (x i)^0 &= x^0 = 1\\ (x i)^1 &= x^1 i = x i\\ (x i)^2 &= x^2 i i = -x^2\\ (x i)^3 &= x^3 i i i = -x^3 i\\ &\vdots\\ (x i)^k &= x^k i^k = \begin{cases} (-1)^{\frac{k}{2}} x^k & k\;\text{is even} \\[1ex] (-1)^{\frac{k-1}{2}} x^k i & k\;\text{is odd} \end{cases}\\ \end{align*} \]

If we group all of the even and all of the odd terms together we get:

\[ e^{x i} = \sum_{k=0,2,4...}^{\infty} \frac{(-1)^{\frac{k}{2}} x^k}{k!} + \sum_{k=1,3,5...}^{\infty} \frac{(-1)^{\frac{k-1}{2}} x^k}{k!}i \]

We can turn the sums back into simple sums from zero to infinity by transforming \(k\) in each of them. In the first one we can replace \(k\) with \(2k\), and in the second we can replace \(k\) with \(2k+1\). The result is:

\[ e^{x i} = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!} + \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!}i \]

It turns out that the first sum is actually the Taylor series expansion for \(cos(x)\) and the second sum is the Taylor series expansion for \(sin(x)\). So this can be rewritten as:

\[ e^{x i} = cos(x) + sin(x) i \]

This beautiful result is known as Euler's formula.
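You can watch the series converge numerically. The sketch below sums the first n terms of the series for \(e^{x i}\), keeping the real and imaginary parts separate just like the grouping above (the names are illustrative):

```javascript
// Sum the first n terms of the Taylor series for exp(x i).  The power of i
// cycles through 1, i, -1, -i, which routes each term to the real or
// imaginary part with the appropriate sign.
function exp_imaginary(x, n)
{
    var real = 0.0;
    var imaginary = 0.0;
    var term = 1.0;     // Holds x^k / k!, updated incrementally.

    for (var k = 0; k < n; k++)
    {
        switch (k % 4)
        {
            case 0: real      += term; break;   // i^k = 1
            case 1: imaginary += term; break;   // i^k = i
            case 2: real      -= term; break;   // i^k = -1
            case 3: imaginary -= term; break;   // i^k = -i
        }

        term *= x / (k + 1);
    }

    return { real: real, imaginary: imaginary };
}
```

With twenty terms, exp_imaginary(1.0, 20) matches Math.cos(1.0) and Math.sin(1.0) to better than ten decimal places.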


  1. In 2D the cross product as we normally think of it doesn't exist, but the value above can be thought of as the Z component of a 3D cross product of two vectors that lie in the XY plane. Since the two vectors have zero Z components, the X and Y components of their cross product are zero, so the overall cross product length is just the length of the Z portion.

  2. There is a bit more subtlety to this: the actual group action we care about is ATA* 3 where A is a complex number, T is a vector, and A* is the conjugate of the complex number. That action works even if the complex number isn't unit length, and it generalizes up to higher dimensions (TA, what we did here, doesn't).

  3. The construction of a complex number as a multiplication of two vectors using the odd base combination rules gives a cool way to ask for a rotation between two vectors. If you use the full ATA* group operation the rotation is twice the angle between the two vectors though. This turns out to be important in higher dimensions.


Mandelbrot

Mandelbrot is a JavaScript implementation of a simple Mandelbrot set calculator.

//
// Set the pixel at location <x,y> in the image data to <r,g,b,a>.  The r, g,
// and b elements are the red, green, and blue color components.  The a element
// is the alpha, or transparency value.  The r, g, b, and a values can range
// from 0 to 255.
//
function set_pixel(image_data, x, y, r, g, b, a)
{
    // Compute the index into the pixel data.  Each pixel is represented by four
    // sequential values in the image_data.data array.
    var index = (x + y * image_data.width) * 4;

    image_data.data[index + 0] = r;
    image_data.data[index + 1] = g;
    image_data.data[index + 2] = b;
    image_data.data[index + 3] = a;
}

//
// Compute the number of iterations required for a single orbit of the
// mandelbrot equation to escape from a circle of radius 2 centered at the
// origin of the complex plane.
//
function compute_escape_time(c_r, c_i, maximum_iterations)
{
    var z_r = 0.0;
    var z_i = 0.0;

    for (var i = 0; i < maximum_iterations; i++)
    {
	//
	// Compute a single iteration of Z^2 + C.  Complex numbers are of the
	// form "real + imaginary * i", so to square them you end up computing:
	//
	// (real + imaginary * i) * (real + imaginary * i)
	// =
	// real * real + imaginary * imaginary * i^2 + 2 * real * imaginary * i
	// =
	// real * real - imaginary * imaginary + 2 * real * imaginary * i
	//
	// So the new real value is (real * real - imaginary * imaginary)
	// And the new imaginary value is (2 * real * imaginary)
	//
	// The reason the first + turns into a - is because i^2 is defined to be
	// equal to -1, that's what makes the numbers imaginary.
	//
	var temp = (z_r * z_r) - (z_i * z_i) + c_r;
	z_i = (2.0 * z_r * z_i) + c_i;
	z_r = temp;

	// Check to see if the magnitude of the complex number is larger than
	// 2.  We check whether the squared magnitude is greater than 4 here to
	// avoid taking a square root, because square roots are slow.
	if (((z_r * z_r) + (z_i * z_i)) > 4.0)
	    return i;
    }

    // Return the maximum iterations value if we never escaped.
    return maximum_iterations;
}

//
// Draw a Mandelbrot set filling in a canvas specified by element_id.
//
function draw_mandelbrot(element_id)
{
    // Lookup the canvas and get a 2D rendering context from it.
    var element = document.getElementById(element_id);
    var context = element.getContext("2d");

    //
    // Construct an image data object to hold the pixels before they are
    // drawn to the screen.
    //
    var image_data = context.createImageData(element.width, element.height);

    //
    // Iterate over every pixel in the canvas and compute the escape time for
    // the corresponding point in the complex plane.
    //
    for (var y = 0; y < image_data.height; y++)
    {
	for (var x = 0; x < image_data.width; x++)
	{
	    //
	    // This scales the <x, y> values into a smaller range of the
	    // complex plane.  But it doesn't take into account the size of the
	    // canvas element.  So if the canvas element changes size, this will
	    // need to change as well.  This can be made automatic, but it would
	    // obscure the meaning for this example.  It would be a good
	    // experiment to change these values and try and make the result
	    // general.
	    //
	    var iterations = compute_escape_time((x - 350.0) / 200.0,
	                                         (y - 250.0) / 200.0,
	                                         100);

	    //
	    // This is a simple use of the iteration count.  We are only looking
	    // at the bottom bit of the resulting escape time iterations.  This
	    // results in the zebra striping look.
	    //
	    var r, g, b;
	    if (iterations & 1)
		r = g = b = 255;
	    else
		r = g = b = 0;

	    //
	    // Write the computed color value to the image_data at the current
	    // location.
	    //
	    set_pixel(image_data, x, y, r, g, b, 255);
	}
    }

    // Finally, copy the resulting image to the canvas for display.
    context.putImageData(image_data, 0, 0);
}



Git Filter Branch

This HOWTO describes the steps required to extract a portion of a large git repository into its own separate repository.

I have found this useful numerous times because when I first started to use git I created a single repository for all of my personal projects. Since then I've come to appreciate having many smaller repositories. So whenever I want to open up some new chunk of code I use these steps to extract the history for just that chunk into a new repository and then publish that.

Setup

I always start by doing a fresh clone of the repository that I want to extract from, so that I know I'm not messing with my primary development environment. This has the added advantage that as long as you don't clone with --recursive, you won't have any submodules checked out. If you have submodules then filter-branch can get into trouble.

git clone git://source.socialhacker.com/... temp_clone

Then just to make sure I'm not going to do something stupid, I remove the remote from the newly cloned repository.

cd temp_clone
git remote rm origin

Now we're ready to run git filter-branch. In particular, I run the subdirectory filter which replays all of your commits, only keeping changes to a given subdirectory. You need to add the --prune-empty option to cause filter-branch to not include empty commits.

git filter-branch \
    --prune-empty \
    --subdirectory-filter <directory> \
    -- \
    --all

Now, at this point I like to fix up the history a little. In particular I fix up the commit messages with the name and email address I've decided on. For a while I hadn't set these correctly and my early commits have something pretty useless. This will only work if you're the only committer to your repository. If you have multiple people committing, then this will wipe out their author names and email addresses from their commits.

git filter-branch \
    -f \
    --env-filter "export GIT_AUTHOR_NAME='Anton Staaf'; export GIT_AUTHOR_EMAIL='anton@socialhacker.com';" \
    HEAD

git filter-branch \
    -f \
    --env-filter "export GIT_COMMITTER_NAME='Anton Staaf'; export GIT_COMMITTER_EMAIL='anton@socialhacker.com';" \
    HEAD

The final step before you can push your new repository is to remove all of the old information about the commits that you no longer want to be visible. The first line below clears out the reflog, so that it doesn't maintain references to the old state of the repository. The second line does a garbage collection run on the repository. This will remove any objects that are no longer referenced.

git reflog expire --expire=now --all
git gc --aggressive --prune=now

At this point you can add a new remote and push your repository.


Diet Planner

The Diet Planner is a simple JavaScript application that can calculate the grams of fat, protein, and carbohydrates needed to gain or lose weight given various body metrics.


Swddude

swddude is very young pre-alpha software. Caveat downloader.

swddude is a collection of simple tools for programming and using ARM Cortex microcontrollers, such as the Cortex-M0 and M3, using the SWD protocol.

Why?

Larger ARM microcontrollers have JTAG interfaces. OpenOCD and friends already do a great job flashing these micros. But for smaller parts, such as the LPC11xx/LPC13xx series, ARM has defined a new low-pin-count debug interface called SWD. OpenOCD doesn’t yet support SWD. In fact, when we started writing swddude, no software on Mac or Linux could do what we wanted!

It’s a shame not to be able to use these powerful little microcontrollers, so we wrote swddude to scratch this itch.

What Can It Do?

Currently:

  • swddude itself can flash code onto the NXP LPC11xx (Cortex-M0 based) and LPC13xx (Cortex-M3 based).
  • swdprobe can interrogate a SWD-compatible chip and dump information about what it finds. This is useful when adding support for new chips to swddude.
  • swdhost provides semihosting I/O for an attached microcontroller. With semihosting, embedded software can send printf-style messages to a host computer through the debug connection – no UART required.
  • swddump extracts the contents of Flash from a supported microcontroller.

We’re working to extend the tools to support more microcontroller varieties. Specifically, we’re focusing on microcontrollers without JTAG ports – devices that can’t be easily added to OpenOCD (yet).

How Do I Use It?

You’ll need a supported programmer and, of course, a supported microcontroller with a SWD interface. Currently swddude supports using the Bus Blaster (v2.5 programmed with the KT-link compatible CPLD configuration) or any FTDI development board with an FT232H or FT2232H chip that has been wired to the SWD lines of your microcontroller.

Wire up your micro using the configuration described in swd_mpsse.h.

Install libusb 1.0 and the libusb-compat package. The version supplied by your package manager (apt, Homebrew, etc.) should be fine.

Build a recent version of libftdi. As of this writing, you must build libftdi from HEAD – the released version (0.20) still uses the legacy libusb 0.1 APIs and is incompatible with swddude.

After checking out swddude, build it like so:

$ cd swddude/source
$ make swddude release

This will deposit a swddude binary in swddude/source/release.

To program your microcontroller, you’ll need to have your desired firmware in binary format – not ELF, and not Intel hex. Assuming it’s in a file called firmware.bin, you run:

$ swddude -flash firmware.bin -fix_lpc_checksum

This will default to the um232h programmer (FTDI’s FT232H development board) configuration. If you are using a Bus Blaster, you should add -programmer bus_blaster to the command line above.

That last option, -fix_lpc_checksum, adds the vector table checksum expected by the NXP LPC series. Without it, your firmware won’t run! If some other tool has already written the correct checksum into your firmware, you can omit that option.

Status and Known Issues

These boards are known to work:

  • LPCxpresso LPC1114.
  • LPCxpresso LPC11C24.
  • LPCxpresso LPC1343.

Note that the LPCxpresso boards will only work if you disable the proprietary LPC-Link programming device on the board. On newer boards, you can do this by clearing solder jumpers between the two sections of the board; older boards make you physically cut traces in the same position.

Known issues:

  • Error reporting is not great. Most failures just print a stack trace, which isn’t helpful if you’re not familiar with the source code. In general, it’s worth retrying at least once – sometimes the SWD communications just need to be reset.
  • swddude makes no attempt at identifying the chip it’s programming. If you try programming an unsupported chip, it may do very bad things – there is no safety net.
  • On the LPC1343 specifically, the SWD interfaces sometimes gets “stuck” and requires a power-cycle. This will show up as failures very early during communication (often referencing the IDCODE register).

Brief Tour of the Source

The source code contains the following top-level directories:

  • build: Anton Staaf’s build system.
  • libs: Anton Staaf’s support libraries, several of which we use. (We currently include more than we strictly need here.)
  • source: swddude and friends.

The source tree uses git’s submodule feature aggressively. If you check out swddude using a simple git clone, you’ll be left with empty directories that won’t build. You can fix this by running

git submodule update --recursive

Alternatively, you can make this happen automatically when you clone, like so:

git clone --recursive ${git_url}