

SWA: Straight Outta Redmond

Back in .Net 3.0, Microsoft slipped a little library into the Framework that was missed by most people. That library, System.Windows.Automation, was intended to allow direct programmatic access to MS UIA from .Net. UIA, or UIAutomation, is Microsoft’s replacement for MSAA (Microsoft Active Accessibility), and is designed to expose window controls to accessibility devices, like screen readers for the blind. However, since it exposes all manner of window controls and operations through a direct programming interface, UIA is one of the most useful tools for UI testers who are trying to write automation for Windows applications.1 In other words, if you want to write a program to drive the UI of another program for your automated tests, then System.Windows.Automation is where to begin.

Where to begin with SWA itself is a bit of a mystery, though. The documentation was sparse and confusing when I first started playing around with it, so most of what I know was a result of tinkering until it worked or searching the Internet until I found a similarly confused person who had already solved the same problem and posted the solution. That’s why I’m writing this tutorial. I found that SWA had a learning cliff to overcome, so I hope to spare you some of the same trouble by explaining what I had to discover the hard way.

First, though, let’s take a trip through a highly opinionated aside about the general design of SWA. .Net 1.0 was beautiful and clean and easy. Everything made sense. .Net 1.1 cleaned more things up and made it even better. Then .Net 2.0 came out and the awesome was truly solidified by the introduction of generics and anonymous delegates. After that, everything fell apart. .Net 3.0 and 3.5 saw the introduction of bizarre things like WCF and Linq and semi-awesome, yet complicated things like WPF. It was like all the people who had guided the .Net Framework up through version 2 and shaped it with the mantra “Make it easy, make it clean” had been thrown out in a coup and replaced by an evil cabal of leftover COM programmers who wanted to restore the glory of MFC and ATL.

System.Windows.Automation seems to have been designed by one of these groups. At its core, it seems that the people who wrote it had never heard of things like interfaces or the MAUI libraries.2 When you work in SWA, you get generic AutomationElement objects, but they’re not the control type you want, and you can’t cast them to the control type you want. There’s no Button class or TextBox class that you can get. Instead, you have to ask the element for the control pattern you’re interested in, and only then will you get an object that you can use. When I was first working with SWA, this approach made absolutely no sense to me. Why can’t I get an AutomationElement that I can cast to ButtonElement or IButtonElement and use directly? Why do I have to ask for these control patterns and get back some strange type? Then, about a year ago, I discovered what the model was. At that time, I was developing a toolbar for Internet Explorer, which requires extensive use of COM. This was my first exposure to the special brand of hell that is COM programming, as I mercifully had spent the late ’90s in the sheltered arena of school, and by the time I joined the real world, everyone was using .Net. When I saw QueryInterface in COM and what it was doing, it struck me that it was exactly the same thing that I’d had to do with AutomationElement.GetCurrentPattern().

The people who designed System.Windows.Automation had brought QueryInterface into the world of .Net. There is a special place in Hell for doing things like that.

Anyway, the utility of the library is enough to overcome any stupid choices in its design. So, let’s get going!

First, you’ll want to get the UISpy tool. You may already have it buried in your Visual Studio installation, but if not, head over to the MSDN and try to track it down. It’s usually part of the Windows SDK or .Net SDK, except when Microsoft apparently forgets to include it. I got mine from the Developer Tools in the Vista SDK Update, but you might want to see if there’s a better place by the time you read this.

UISpy is a lot like Spy++, which has been around since at least the VS6 days. You can walk the window control tree and find window handles and window classes and other things like that, just like Spy++, but it’s been extended with support for UIA. Once you get it, take it for a spin around your window hierarchy.3 I’d suggest turning on the “Hover” tracking mode, which lets you select a window to inspect by holding the CTRL key and hovering over a window. It’ll sometimes take a while to get to the control you’re selecting, but that’s what a full tree walk will do to you.


This screenshot shows the basic window of UISpy. On the left is the control hierarchy. On the right are the properties of the currently selected window. You’ll become very familiar with some of these properties and you’ll decide that other properties are completely useless. The determination of which fields are useful or useless is left as an exercise to the reader.


Here’s an example of what UISpy will tell you about the window used by the Windows Calculator. On the left side, you can see that it has a bunch of child controls. They’re marked as check boxes, radio buttons, buttons, even an edit box. If the window were bigger, you’d also see that it has a title bar and menu bar. You can get information on all of these objects and interact with most of these objects. Pretty much anything you see here is something you can use SWA to control. On the right side are the properties for the selected object. Things like AutomationId, Name, and ClassName are generally good identifiers, while fields like ProcessId and RuntimeId may change from run to run.

At the bottom of the property list are the Control Patterns supported by this element. Control Patterns are how SWA interacts with controls. For instance, in this screenshot, it shows that the main calculator window supports the Transform pattern and the Window pattern. The Transform pattern means that you may be able to perform move, rotate, and resize actions on this object. In this case, the calculator reports that you can move it, but that you can’t resize or rotate it. If you right click on the element in the tree on the left side and select “Control Patterns” from the menu, you’ll get a dialog where you can trigger some of the methods on a supported control pattern. When you get to writing your automation program, you’ll ask for one of these Control Patterns and be able to use it to drive the control. There are other ControlPatterns, like “Invoke” for buttons and “Text” or “Value” for things like text boxes. You’ll probably find that you only use a small handful of these patterns regularly.

If you looked at that last screenshot of UISpy, you probably noticed that odd rectangle floating over the screen. If you’re playing the home game along with me, then you’ve probably had your own rectangle floating about. It’s the UISpy highlight window, showing the outline of the last window you selected.4 It’ll go away if you close UISpy, but I’ve found that you’ll tune it out. Sometimes I’ve had it linger around for several hours after I get the information I’ve needed, until someone comes by and asks me what that strange red thing on my screen is. If you move the mouse over the edge of the box, you’ll get a tooltip with a little bit of information on the selected window.

Anyway, we’ve been playing around in the ever-important UISpy, but we haven’t gotten around to actually using SWA yet. Given that’s what this article is supposed to be about, let’s get to it.

My example code can be pulled from SVN here: https://mathpirate.net/svn/Projects/SWAExample/

I’m creating a Console App, but there’s no reason you can’t use SWA in a test DLL or a Windows app or whatever.5 It’s just a .Net library.

To begin, add references to UIAutomationClient, UIAutomationClientsideProviders and UIAutomationTypes.6


After that, add a using for System.Windows.Automation.
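In other words, the top of your file will end up looking something like this (System.Diagnostics is there for the Process.Start call coming up in a moment):

```csharp
using System;
using System.Diagnostics;         //For Process.Start, used to launch the app under test.
using System.Windows.Automation;  //The SWA types: AutomationElement, patterns, conditions.
```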

Now you’re ready to get rolling. The main class you’ll be using is the AutomationElement class. The entire control hierarchy is made up of AutomationElements. Think of them as an analog to the Control class in Windows Forms. It’s important to note that you’ll never create an AutomationElement yourself. AutomationElement does not have any constructors on it that you’re allowed to use. Instead, you’ll use methods on AutomationElements to give you other AutomationElements. The AutomationElement class has a number of static methods on it. Type “AutomationElement.” to bring up your good friend Intellisense to tell you what to do.

The first thing you’ll notice is that the AutomationElement has a hell of a lot of static fields on it. They’re pretty much all Dependency Property garbage. If you don’t know what Dependency Properties are, think of them as an enum value that will let you pass the name of a property to access on an object. (And if you do know what they are, then you know that the explanation I gave is horribly oversimplified and pretty much wrong and misleading. SHHH! Don’t tell anyone!) You can ignore them for now, but they’ll come back to haunt us in a bit. Right now, there are only four things you’ll care about on Automation Element:

  • RootElement: A reference to the root automation element, otherwise known as the main desktop window. Everything else is a child of this element. If you have to search for an element, you probably want to use this element as your base, at least until you find a better parent element to operate from.
  • FromHandle(IntPtr hwnd): If you have the window handle to the window or control you want to work with, use this method to grab an AutomationElement. It’ll be faster than searching for it and it will also give you exactly what you were looking for. I almost always start here rather than starting with a search, because you really don’t want to walk the entire control tree looking for something if you don’t have to.
  • FocusedElement: If the element you’re interested in has focus, use this and go straight there. No searching and no window handles necessary.
  • FromPoint(Point pt): Need the control at 132, 526? Use this. I’m not sure if this will do a tree walk, so use at your own risk.
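Side by side, those four entry points look like this. (This is just a sketch; someWindowHandle stands in for a handle you’ve acquired elsewhere, and the coordinates are made up.)

```csharp
//RootElement and FocusedElement are static properties; the other two are static methods.
AutomationElement desktopElement = AutomationElement.RootElement;
AutomationElement focusedElement = AutomationElement.FocusedElement;
AutomationElement windowElement = AutomationElement.FromHandle(someWindowHandle);
AutomationElement elementAtPoint = AutomationElement.FromPoint(new System.Windows.Point(132, 526));
```

Note that FromPoint takes a System.Windows.Point (from WindowsBase), not a System.Drawing.Point.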

To begin my example, I’m going to launch an instance of the Windows Calculator application, then use FromHandle to grab the Calculator window and print out some information on it. (BTW, I’m running XP, so if you’re playing along at home, the calculator may be different in your operating system.)

//Launches the Windows Calculator and gets the Main Window's Handle.
Process calculatorProcess = Process.Start("calc.exe");
IntPtr calculatorWindowHandle = calculatorProcess.MainWindowHandle;

//Here I use a window handle to get an AutomationElement for a specific window.
AutomationElement calculatorElement = AutomationElement.FromHandle(calculatorWindowHandle);

if (calculatorElement == null)
    throw new Exception("Uh-oh, couldn't find the calculator...");

//Walks some of the more interesting properties on the AutomationElement.
Console.WriteLine("AutomationId: {0}", calculatorElement.Current.AutomationId);
Console.WriteLine("Name: {0}", calculatorElement.Current.Name);
Console.WriteLine("ClassName: {0}", calculatorElement.Current.ClassName);
Console.WriteLine("ControlType: {0}", calculatorElement.Current.ControlType.ProgrammaticName);
Console.WriteLine("IsEnabled: {0}", calculatorElement.Current.IsEnabled);
Console.WriteLine("IsOffscreen: {0}", calculatorElement.Current.IsOffscreen);
Console.WriteLine("ProcessId: {0}", calculatorElement.Current.ProcessId);

//Commented out because it requires another library reference. However, it's useful to see that this exists.
//Console.WriteLine("BoundingRectangle: {0}", calculatorElement.Current.BoundingRectangle);

Console.WriteLine("Supported Patterns:");
foreach (AutomationPattern supportedPattern in calculatorElement.GetSupportedPatterns())
    Console.WriteLine("\t{0}", supportedPattern.ProgrammaticName);

(Apologies for the horizontal scrollies…)

The example above will output something like this, although your specific values may vary.

Name: Calculator
ClassName: SciCalc
ControlType: ControlType.Window
IsEnabled: True
IsOffscreen: False
ProcessId: 3660
Supported Patterns:

As you may have noticed, the information that was just printed out here is the same that was in UISpy, although the output in my example has been edited for time. Of course, if you ran this sample, you probably also noticed that the Calculator remains open after the app exits. That’s not very polite. Let’s clear that up now.

One of the patterns listed as supported by the main window is, surprisingly enough, the WindowPattern. If you looked at what methods are on the WindowPattern back when you were playing around in UISpy, you may have noticed that there’s a method called Close. Something tells me that method will be useful for our current situation. I think I’m going to give it a spin.

(By the way, for my sanity and to make the examples more compact, I’m going to be moving parts of the sample code into helper functions as I go. For instance, all of those WriteLine statements have been put into a method called “PrintElementInfo”. So, if you see an odd function in the samples, that’s probably all it is. I’m not going to intentionally leave out important code that you’ll need to make things work.)

In order to get a control pattern off an AutomationElement object, you have to call QueryInt- er, I mean, you have to call GetCurrentPattern on the object. The GetCurrentPattern method takes an AutomationPattern object. AutomationPattern has a static LookupById() method on it, which is completely worthless to you, and, like everything else we’ve seen, has no public constructor. So, WTF, where are you supposed to get the pattern from? In a complete failure to make the code usable from Intellisense alone, you have to use a static member off of the type of the pattern you want to retrieve. You want to use a text box? Use TextPattern.Pattern. Need to play with a dropdown combo box? SelectionPattern.Pattern. We want the WindowPattern7, so we’re going to call GetCurrentPattern(WindowPattern.Pattern). Of course, GetCurrentPattern returns the pattern object as an ever-helpful IUnkno- I mean, object type, so you have to cast it.

Once you have the WindowPattern object, a quick examination of its members shows that it has a Close() method. Calling it should close the calculator and clean up after our program.

Here’s what those lines look like in code. Add them to the end of the sample and watch the window disappear!

//Get the WindowPattern from the window and use it to close the calculator app.
WindowPattern calculatorWindowPattern = (WindowPattern)calculatorElement.GetCurrentPattern(WindowPattern.Pattern);
calculatorWindowPattern.Close();

So, there you go! That’s all you need to know about System.Windows.Automation! You can find a window and close it, therefore, you can do anything! Have at it!

Or… Not… Let’s continue, shall we?

Since this is a calculator, let’s calculate something. Something too complex to do by hand, something we need the full power of a modern multi-core computer to figure out. Something like “What do you get if you multiply six by nine”, perhaps? To begin, you’ll need to list out the steps that you take when you manually perform this action.

  1. Open Calculator. (Hey! We did that already. We’re awesome.)
  2. Type “6”.
  3. Press the Multiplication button.
  4. Type “9”.
  5. Press the Equals button.
  6. Read the result and know the answer.

So, the first thing we need to do is type “6” into the calculator text box, which means we first need to find the Calculator’s text box. Let’s bring up our friend UISpy to find out how to reference that box.


So, we’ve got a class name of “Edit”, a control type “ControlType.Edit” and an AutomationID of “403”. That should be enough to find the control we’re looking for, so let’s get to the code and grab it.

(BTW, obviously, this code will have to go before the Close method call we added. You can’t use a calculator that isn’t running anymore…)

An AutomationElement object has two methods that let you search the control hierarchy for the elements you’re interested in: FindAll and FindFirst. Since we’re only expecting a single text box, we’ll be using FindFirst. Intellisense will show you that FindFirst has two parameters: FindFirst(TreeScope scope, Condition condition);

TreeScope is an enum, so it’s very Intellisensible and clear. You’re likely to use the values “Children” and “Descendants” the most. Children limits the search to the immediate children of the element you’re searching on, while Descendants are the children and the children of children and so on, all the way to the bottom. I prefer to use Descendants by default, unless I know that I want something else. It should be noted that the Parent and Ancestor scopes are listed as “Not Supported”, so don’t expect to be able to use them. Anyway, we’ll use TreeScope.Descendants here.

Condition, on the other hand, offers no Intellisense help for you at all. That’s because Condition is an abstract base class of many conditions. There’s PropertyCondition, which will match based on a property value, and And/Or/Not conditions, which can be used to group multiple conditions logically. Off of the Condition class are static True and False conditions. And, if you need your own sort of crazy condition, I think you can derive from Condition and make one yourself, although I would question your MentalCondition if you were to do that without good reason. PropertyCondition is the only stock condition that you’ll find yourself using, and it’s also the only one that requires any kind of in-depth explanation.

Warning! We are about to be haunted by Dependency Properties!

PropertyCondition allows you to specify the value you want a property to match for your control tree search. PropertyCondition actually has a constructor, to which you pass an AutomationProperty and a value to match. The AutomationProperty parameter is where Dependency Properties come in. You have to pass in one of the static values from the AutomationElement that I told you to ignore earlier. If you look, you’ll find that there’s one of these static values for each of the properties on an AutomationElement instance. So, if you want to find an AutomationElement that has an AutomationId of 403 (Which, coincidentally, is what we want to find), then you’ll use AutomationElement.AutomationIdProperty in your PropertyCondition. Like so:

PropertyCondition editBoxAutomationIDProperty = new PropertyCondition(AutomationElement.AutomationIdProperty, "403");

(Note that the value “403” is passed as a string. That’s because AutomationId is a string, and the types need to match. You’ll have to make sure that you’re passing the same type as the property yourself, otherwise you’ll get an exception at runtime.)
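Since conditions compose, you can also combine PropertyConditions with the And/Or/Not conditions mentioned earlier. Here’s a hedged sketch with illustrative values (we won’t need anything this elaborate for the calculator):

```csharp
//Match either a button named "*" or any edit control.
Condition searchCondition = new OrCondition(
    new AndCondition(
        new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Button),
        new PropertyCondition(AutomationElement.NameProperty, "*")),
    new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Edit));
```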

Then you pass that condition to your FindFirst call and presto, you get the element you’re looking for. (Or null or possibly an exception or maybe some other element that happens to match what you asked for or that isn’t what you wanted…) I’m going to do that now, but first, we need something to call FindFirst on. Doing a tree search can be very slow, on the order of 10+ seconds per call in some cases, so you want to limit the scope of the search. If you don’t have any element to go on, then you can use the static AutomationElement.RootElement property that I mentioned earlier, and that will look through EVERYTHING. However, we already have the main calculator window, so let’s just assume that anything in the window, including the edit box, will be a descendant of that window and use that as the starting point of our search. That gives us this:

PropertyCondition editBoxAutomationIDProperty = new PropertyCondition(AutomationElement.AutomationIdProperty, "403");
AutomationElement editBoxElement = calculatorElement.FindFirst(TreeScope.Descendants, editBoxAutomationIDProperty);

Printing out the element that’s returned gives you something like this:

AutomationId: 403
ClassName: Edit
ControlType: ControlType.Edit
IsEnabled: True
IsOffscreen: False
ProcessId: 1884
Supported Patterns:

Looks like the one we want, so let’s use it. We want to set the value of the control, so let’s grab the ValuePattern for the text box and use the SetValue method to set the value of the element to “6”, our first number.

ValuePattern editBoxValuePattern = (ValuePattern)editBoxElement.GetCurrentPattern(ValuePattern.Pattern);
editBoxValuePattern.SetValue("6");

Now you hit run and…

System.InvalidOperationException was unhandled
   Message="Exception of type 'System.InvalidOperationException' was thrown."
        at System.Windows.Automation.ValuePattern.SetValue(String value)
        at SWACalculatorExample.Program.Main(String[] args) in E:\svn\Projects\SWAExample\SWACalculatorExample\Program.cs:line 28
        at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()


You followed the directions, you got an instance of the correct pattern, it should have worked, but didn’t. So what happened? Well, if you took the time to investigate the edit box in UISpy, you would have noticed this little tidbit down in the information about the Value pattern:

IsReadOnly: "True"

Lovely, so the edit box is read-only, which means we can’t assign the value in that way. Now, for a confession: I noticed that the box was read-only early on, but I still dragged you down this dead end for three reasons:

  1. I wanted to teach you that life sucks sometimes.
  2. I wanted to teach you how to use ValuePattern to set the value of a text box that’s not read-only.
  3. I wanted to illustrate one of the most important skills for an automation developer to have: The ability to come up with a Plan B workaround when the automation tool fails you. Because it will fail you. Frequently. And at the most annoying times.

The most common Plan B is to use System.Windows.Forms.SendKeys to send a series of keystrokes to do what you want. (If you’re using the VS Testing stuff, there’s also a Keyboard class that exposes essentially the same thing.) It’s less reliable than the SWA methods, so use those when you can, but when you can’t, SendKeys might just do the trick. It has a wide grammar for sending complex keystrokes and it’s well worth browsing the documentation to see what it can do, but for now, we just need to type the number “6”. Focus the edit box first, using the SetFocus() method on the edit box AutomationElement. That will make the keystrokes go where we want, then use SendKeys.SendWait(“6”) to simulate the keypress. (SendKeys also requires a reference to System.Windows.Forms, so don’t forget to set that up if you don’t have one already.)

//Since the direct ValuePattern method didn't work, this is Plan B, using SendKeys.
editBoxElement.SetFocus();
System.Windows.Forms.SendKeys.SendWait("6");

If you run it now, well, you’ll see the calculator open and close really fast, but trust me, if you watch really really carefully (or comment out the Close line), you’ll see that the edit box will have the number 6 that we just “typed”.

Now that that’s out of the way, it’s time to hit the multiplication button. You grab the button the same way you grabbed the edit box: Look in UISpy for something uniquely identifying the control, then run a tree search and get what you’re looking for. In this case, the multiply button has a meaningful name that we can use: “*”. Again, it will be a PropertyCondition, but this time, we’ll have to use the AutomationElement.NameProperty as the property to match against. Once you have the button, you’ll want to click on it. The button click action is handled by an InvokePattern, so ask the button for its InvokePattern and call Invoke on it to click the button.

//Grab the multiplication button by name, then click on it.
AutomationElement multiplyButtonElement = calculatorElement.FindFirst(TreeScope.Descendants, new PropertyCondition(AutomationElement.NameProperty, "*"));
InvokePattern multiplyButtonInvokePattern = (InvokePattern)multiplyButtonElement.GetCurrentPattern(InvokePattern.Pattern);
multiplyButtonInvokePattern.Invoke();

Now we need to enter the “9” and press the “=” button to get our answer. I’ll leave that up to you, since that’s pretty much a copy and paste of the last two things I just got done doing. You may even want to take this opportunity to refactor. You’ll find that using SWA will result in tons of areas in your code where one logical action takes three or four long lines of code, and you’ll end up with those lines copied all over the place. For instance, by parameterizing the last block of code, you can make a function called “InvokeChildButtonByName” and turn those three lines of ugly into one line that makes sense. I would strongly recommend moving as much of your SWA related code into various helper functions or classes because, quite simply, SWA code is ugly and will distract from what you’re actually trying to do.
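For what it’s worth, here’s one possible shape for that InvokeChildButtonByName helper. This is my sketch, not anything canonical, so adjust the error handling to taste:

```csharp
//Finds a descendant button by its Name property and clicks it via InvokePattern.
static void InvokeChildButtonByName(AutomationElement parent, string buttonName)
{
    AutomationElement buttonElement = parent.FindFirst(
        TreeScope.Descendants,
        new PropertyCondition(AutomationElement.NameProperty, buttonName));

    if (buttonElement == null)
        throw new Exception("Couldn't find a button named: " + buttonName);

    InvokePattern invokePattern = (InvokePattern)buttonElement.GetCurrentPattern(InvokePattern.Pattern);
    invokePattern.Invoke();
}
```

With that in place, the rest of the calculation collapses to calls like InvokeChildButtonByName(calculatorElement, "9") and InvokeChildButtonByName(calculatorElement, "=") — assuming the digit keys are exposed as buttons with those names, which you should verify in UISpy.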

At this point, we’re finished with the first 5 steps in performing the calculation, and are left only with the sixth and final step: Reading the result. If you recall, we’ve already seen how to pull the ValuePattern from the edit box. We couldn’t use it at the time, but that’s because it was read-only. Now that we only want to read from it, its read-onlyness shouldn’t be a problem. Once you have the pattern, in order to get the value, you’ll have to use the .Value property off the instance. However, .Value isn’t on the pattern itself. To get to it, you either have to go through the .Current or .Cached properties first. I’m not entirely sure what all of the features and limitations and differences are between .Current and .Cached, but I know that .Current usually works and that .Cached usually doesn’t, therefore I’d strongly recommend using .Current.

ValuePattern editBoxValuePattern = (ValuePattern)editBoxElement.GetCurrentPattern(ValuePattern.Pattern);
string editBoxResult = editBoxValuePattern.Current.Value;

At this point, the string editBoxResult will have the answer to the question.

Six by nine: 54.

That’s it. That’s all there is.

If you want the full source code to the example, go here.

At this point, you should know about as much about SWA as I learned in three days of fighting with the technology and documentation. Points to remember:

  • Use UISpy to inspect windows and controls to find some uniquely identifying set of information.
  • Use FromHandle, FindFirst, or FindAll to grab elements that you’re interested in using.
  • Use GetCurrentPattern to ask for an object that you can use to interact with the control in a specific way. ValuePattern and InvokePattern are two of the most common ones you’re likely to use, but become familiar with the others for when you find a different control.
  • If all else fails, use a dirty hack workaround.8

Of course, this posting only talks about how to use SWA to write automation. It completely omits the other half of the equation, which is to write a UI application that can play nicely with SWA, which will lead to happier testers. The example of using Calculator was simple. In a lot of applications, you’ll run into controls without automatable IDs, leaving you to do nasty things like grab all of the Edit controls in a window and selecting the third one from the list and hoping that it’s actually the edit box that you want. You’ll find buttons that don’t actually click, forcing you to perform crazy workarounds. It doesn’t have to be like that. When writing an application, you can implement some support for SWA, making it easier to reference the elements in your tests later. Perhaps I’ll cover that in a later post.
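To make that concrete, here’s a hedged sketch of the “grab all the Edit controls and hope” workaround described above. (windowElement stands in for whatever parent element you’ve already found, and the index is pure guesswork by definition.)

```csharp
//Grab every Edit control under the window and cross your fingers
//that the third one is the box you actually wanted.
AutomationElementCollection editElements = windowElement.FindAll(
    TreeScope.Descendants,
    new PropertyCondition(AutomationElement.ClassNameProperty, "Edit"));

AutomationElement hopefullyTheRightEdit = editElements[2];
```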

  1. It’s also supported by Silverlight, so all you Internet testers don’t have to feel left out. []
  2. MAUI was an internal library that was widely used for UI Automation by teams within Microsoft back when I did my time as a contractor in 2004. It was a .Net wrapper which had classes for every type of UI control. It was simple to understand and use, largely because if you wanted to interact with a button or a text box, there was a Button class and a TextBox class that you could program against. It was almost a mirror to Windows Forms. []
  3. You’ll probably find that it doesn’t work for web pages, because they’re custom drawn panels and not real Windows controls. It won’t even work for Firefox at all, because that browser is laid out entirely in XUL. I’ll probably talk about browser automation in a later article. []
  4. Which is usually the main editor window in Visual Studio, because every time you do a CTRL-C, CTRL-V, you’ll be hitting the CTRL hotkey, which makes UISpy select the window the mouse is over. []
  5. There is, however, a restriction on the environment where you can use SWA. Since it interacts with window controls and UI things, you must have a UI present for it to interact with. That means if you try to use SWA in a headless session, like the one your automated tests running on your build server will be executed in, you’re screwed. If you can, you may need to have your automated tests run on a server that’s always logged into a window session and unlocked. If not, you can do what I did, which was write a program to launch a virtual machine on an instance of Virtual Server, then remotely execute your tests within that virtual machine. Although complicated, you may actually find it easier to implement that solution than it is to get the corporate computing security policy altered so that it doesn’t get in the way of doing your job. []
  6. And why these all aren’t under one unified System.Windows.Automation library, I don’t know. Or at least named System.Windows.Automation.Client, .ClientsideProviders, and .Types. More evidence that the SWA team probably hadn’t used .Net before writing this library and that all the people keeping things sane had fled the scene. []
  7. Not to be confused with the Willow Pattern []
  8. And when that fails, then you can use fire. []

September 27, 2009

Electric Curiosities: The Lost Art of Cartridge Design

These days, with the exception of the Nintendo DS, video games come on boring shiny discs that look pretty much exactly the same as every other game for every other competitor’s system.  You can’t tell the difference between the games for two consoles by feel alone.

It was not always like this.  Deep in the mists of time, video games all came in chunks of plastic called “Cartridges”.  By look and feel, you could distinguish one system’s games from another’s.  Sometimes cartridges for a system were plain and rectangular, sometimes they were embellished with features like handles, and sometimes they were yellow…  Bright yellow.  This post explores the cartridges for a number of different systems, some of which may be familiar to you, and some of which will hopefully strike you as freakish and bizarre.

First off, the full gallery, for size comparison.  For fun, I’d recommend trying to see how many of these cartridges you can identify without zooming in all the way and reading their logos.


If you don’t recognize at least three, you’re probably not going to find the rest of this post very interesting.  And if you can name all of them, then feel free to start naming cartridge-based systems that aren’t represented here.  There are still a few systems that I haven’t raided eBay for yet…

Anyway, without further babbling, the carts:

Atari 400


(Pictured: Centipede)

The Atari 400 computer had a small (by Atari standards) cartridge with a plain text brown label.  The cart contacts were protected by a sliding door, which was partially exposed, unlike the doors on 2600 and 5200 carts.  The Atari 800 had two cartridge slots (because one is never enough), but the slots were not equivalent, forcing  Atari 400/800 games to have a marking on the top of the cart telling you to insert it into the left or the right cartridge port.  However, since people didn’t buy the Atari 800, not many cartridges were made for the exclusive right port, leaving it lonely and depressed.1

Atari 2600


(Pictured: Combat)

According to what trusted sellers on eBay have told me, this is Combat for the Atari 2600, which is apparently EXTREMELY RARE (LQQK!).  I had never heard of this game or this system before, so I don’t have much information on it to share.

Atari 5200


(Pictured: Robotron 2084)

After their apparently disastrous failure with the Atari 2600, Atari bounced back and produced the Atari 5200, which, as the name implies, was twice as good as the 2600.  5200 carts were the largest Atari carts, with the height of a 2600 cartridge, but the width of a SNES cartridge.  Typical 5200 carts had silver labels, with an image on the label and blue Atari branding.  Unlike the multiple rebrands of Atari 2600 cartridges, the 5200 did not survive long enough to change this basic design.

Atari 7800


(Pictured: Tower Toppler)2

During the rise of the NES, Atari released the 7800 ProSystem.  Learning from some of the mistakes they made with the 5200, the 7800 had 2600 compatibility, which led to the 7800 using a size and shape identical to the 2600 for its cartridges.  7800 carts mostly had a silver border and plain text end label, with game art in the middle of the main label.

Atari Jaguar


(Pictured: Iron Soldier)

The last gasp of the once powerful Atari, the Jaguar came out right at the transition between 2D and 3D and would have had more success had most of its games not completely sucked.  (I’m looking at you, Checkered Flag.  And Club Drive.)  The cartridges used the extremely popular 2.75 x 4 form factor (Similar to the size used by the Genesis, SMS, Famicom, N64, TRS-80, TI-99, and Tomy Tutor), but for some inexplicable reason, had a tube shaped handle on the top.

Atari Lynx


(Pictured: Scrapyard Dog)

Lynx games are some of the flattest in this set, about two credit cards thick.  The contacts are exposed directly beneath the label.  Most Lynx games had a curved lip at the top edge, allowing you to pull the game out of the system after it’s inserted.  The Lynx is one of the few systems in this set where you can’t see what game is in the system after you’ve put the cart in, as the label faces the back of the system, instead of outward like on the Game Boy.

Atari XE


(Pictured: Battlezone)

The Atari XE Game System was Atari’s competitor to the Atari 7800.  The two systems fought each other valiantly and both were slain in the process.  The XE was compatible with Atari 400/800 series games, so XE carts were the same size and shape.  The color was changed because by the late 80’s, people had realized that the early 80’s were ugly.  The label was updated to include game art.  And finally, the bizarre half-exposed dust door on the 400’s carts was removed for the XE.

Bally Videocade


(Pictured: The Incredible Wizard)

Bally Astrocade/Videocade/Professional Arcade3 had cartridges that wanted to be cassette tapes.  While they didn’t have winding spools or magnetic tape, they did have what appears to be write-protect tabs that were punched out.  These holes were used to hold the cartridge in place after you inserted it, because Astrovision Videocade4 cartridges possessed limited intelligence and were known to try to escape when you turned the system on.



ColecoVision


(Pictured: Donkey Kong)

The ColecoVision knew a good thing when it saw it.  Why try to come up with your own cartridge design when you can steal the one that Atari was using?  Coleco carts are the same size and shape as Atari 2600 cartridges, but had controller overlay slots in the back and the label was reversed so that you’d see it the right way up when it was in the system.  Oh, and there were little ridgey things at the top, and the case was slightly beveled to prevent you from trying to jam a Coleco cart in an Atari or an Atari cart into a ColecoVision.  This cartridge camouflage is especially useful now, as less savvy eBay sellers can’t tell the difference between an Atari 2600 cartridge and a generally more valuable ColecoVision cartridge, allowing someone who can tell the difference to take advantage of them.  Unfortunately, the label design for a Coleco game kinda sucks, giving the ColecoVision name more prominence than the cartridge title, and limiting the custom artwork to the title itself.5

Emerson Arcadia 2001


(Pictured: Tanks A Lot)

The Emerson Arcadia 2001 wasn’t satisfied with the 3×4 cartridge that the Atari and Coleco had.  Oh no, they had to push it to the max.  As a result, the Arcadia carts are 6 FULL INCHES of rainbow-titled watercolor AWESOME.  And why stop there?  Why not put a label on the BACK side of the cartridge, too?

Fairchild Channel F


(Pictured: Videocart 1: Tic-Tac-Toe/Shooting Gallery/Doodle/QuadraDoodle)

They’re yellow.  They’re big.  And they’ve got giant psychedelic numbers on them.

Nintendo Game Boy


(Pictured: Kirby’s Dream Land)

Game Boy cartridges are about the smallest a cartridge can get without feeling too small.  It’s large enough that you can’t eat it and won’t permanently lose it in the seat cushions, but small enough to take large quantities with you wherever you go.  It’s got a big spot for label art that isn’t invaded by branding, since the branding is built into the cart itself.  These carts must have evolved from stray Bally carts, as they also have a notch for a locking device to prevent their escape.  It’s also one of the few carts that tells you how to put it in the system, with a large arrow in the plastic and the words “THIS SIDE OUT” on the label.

Game Boy Advance


(Pictured: Rayman 3)

The GBA cartridge was about half the height of a Game Boy cart, leaving less room for artwork on the label.  The system branding is still on the plastic and the insertion arrow is still there, but the label no longer tells you which direction faces out, leading to mass customer confusion.

Game Boy Color


(Pictured: Blaster Master Enemy Below and The Legend of Zelda Oracle of Seasons)

The Game Boy Color cheats in its quest to gain attention by having two types of cartridge that I have to comment on.  There’s the Black Mutant Hybrid cartridges, which are identical to standard Game Boy carts, only black.  These mutated black cartridges could be played on an ordinary Game Boy, but also contained Game Boy Colorized versions of the games.  Then there’s the similarly sized clear carts, which are strictly for the Game Boy Color.  The system branding region that had been indented on regular Game Boy carts was inverted into a bubble on the clear carts.  Additionally, by the release of the GBC, the clear carts had been sufficiently tamed and no longer attempted to flee when they were played, so there was no need for a locking mechanism on the system, which means that they do not have a notch in one of the corners.



Mattel Intellivision


(Pictured: Snafu)

Intellivision cartridges were smaller than Atari cartridges, likely because prolonged use of the Inty’s control pad caused severe hand cramps, preventing people from opening their hands wide enough to grasp a larger cart.  The cartridge has a pointed front with the game’s title on a label on the sloped surface receding from the point.  There was typically no artwork on Intellivision cartridges, simply because there wasn’t enough room.  In fact, there was hardly even enough real estate for the game title itself in some cases.  Some Intellivision cartridges had what can best be described as a “Fill Line”, instructing you just how far to stick the cartridge into the system.  Mattel was so enamored of the Intellivision cart’s form factor that they used the same shell for their Atari 2600 versions of Intellivision titles (Known as M-Network), simply sticking on a wider base to fit the 2600’s cartridge slot.  These M-Network carts even have the Fill Line.



Milton-Bradley Microvision


(Pictured: Star Trek Phaser Strike)

Milton-Bradley Microvision cartridges are large, but they’re large with a purpose.  You see, they’re not just the game cartridge, they’re also a face plate, controller overlay and screen overlay, all in one.  They mounted on the front of the Microvision handheld system.



Nintendo Entertainment System


(Pictured: Super Mario Bros./Duck Hunt)

The Nintendo Entertainment System was another obscure system which was released in the mid-80’s.  Presumably these games are so rare because they’re absolutely freaking huge.  Plus, no one really wants to play games which apparently promote cruelty to animals and pyromania.  Early Nintendo-produced NES games had a fairly standardized label design, with some real game graphics6 used for the artwork and the game title set beneath it at an angle, but this quickly gave way to a free-for-all anything goes approach to label design.  And they’re freaking huge.  I know I might be desecrating your memory of a classic here, but seriously.  Look at them.  They’re FREAKING HUGE.

Nintendo 64


(Pictured: The Legend of Zelda Ocarina of Time)

This cartridge almost killed Nintendo.  When the Nintendo 64 was released, its competitors were using CDs.  PC games were almost exclusively released on CDs.  Even third-rate lame ass systems like the Atari Jaguar had CD add-ons.  CDs let you have amazing sound, extensive videos, full voice-overs, and games of unlimited size, plus, they were really really cheap.  Nintendo, sensing a passing fad, decided to stick with the tried-and-true cartridge technology.  The N64 sold 33 million systems, the PSX sold 125 million.

Nintendo DS


(Pictured:  Rayman DS)

Too small.

Nokia N-Gage


(Pictured: Rayman 3)

Nokia thought they were going to take the world by storm and revolutionize the portable game market.  Let’s put games on the phone, so people only have to carry one thing around.  Let’s make the games 3D.  Let’s get big licenses to make games for us.  LET’S CRUSH THE GAME BOY!  Sadly, in all of their big plans, no one stopped to add the requirement “Let’s make it usable”.  In order to swap games in the original N-Gage, you had to pop the back cover of the phone off, then REMOVE THE BATTERY to reach the game card slot.  Yeah, and it sucked as a phone, too.  At any rate, I’m not sure if this one can even be legitimately included in this set, since N-Gage games came on a plain ordinary MMC card.



Magnavox Odyssey2


(Pictured: KC’s Krazy Chase!)

The main section of an Odyssey 2 cartridge was roughly the same size as an Atari 2600 cartridge, but there was a large handle on the top of the cart.  It’s unclear what prompted this design choice, but I suspect that O2 carts had the opposite problem from Game Boy and Astrocade carts, in that Odyssey2 carts would sometimes refuse to leave the nice warm cartridge slot and you’d need to grab a hold of the handle and pull as hard as you could to get them out.  Odyssey2 cartridges were black plastic, and had labels that were simplified monochrome renditions of the groovy black light art found on the boxes.  The labels also featured a large Superman flying style “Odyssey2” logo emerging from the center of the artwork.  Odyssey2 carts were all very exciting, as demonstrated by the use of an exclamation point in every single title for every game released on the system!

Sega Game Gear


(Pictured: Sonic the Hedgehog 2)

Like the Lynx, the Game Gear featured full color graphics and was more powerful than the Game Boy.  And like the Lynx, that didn’t matter because it didn’t come with Tetris.  Game Gear carts were larger than Game Boy and Lynx carts, but still small enough to be portable.

Sega Master System


(Pictured: Golden Axe Warrior)

SMS carts were smooth black plastic, with a small red label only large enough for the game title and Sega logo.  But who cares about the cartridge, when the game in the picture is Golden Axe Warrior?  If you have a Sega Master System, you need this game.  If you don’t have a SMS, then you need to buy one, then buy this game.  It’s that simple.  Golden Axe Warrior is a pure rip-off of The Legend of Zelda, but it’s one of the most pitch-perfect ripoffs ever made.  Change the main character to Link and the main enemy to Ganon, and you have the game that Zelda 2 should have been.

Sega Genesis


(Pictured: Sonic The Hedgehog 2)

Black and curvy.  Obtrusive branding on label.

Super Nintendo


(Pictured: Super Metroid)

The Super Nintendo cartridge is one of the most complex cart designs around.  The ridges and bevels of the NES cartridge weren’t enough for Nintendo, so they added screws, curves, divots, notches, and what appears to be aluminum siding.  The label has a huge amount of space devoted to branding, but still manages to have room for game artwork.  SNES carts aren’t quite as freaking huge as NES carts, but they’re still pretty big.  Also, just like NES carts, they’re mostly empty space inside.  Early Super Nintendo cartridges had a solid section across the base, which allowed a locking mechanism to fit into a hole on the front of the cartridge, but as time went on the carts evolved a way to free themselves from this captivity, leading to the gapped cart shown above.  In a rare gesture of defeat, later SNES consoles removed the locking arm entirely, thereby allowing even the early cartridges to escape.



Texas Instruments TI-99


(Pictured: Parsec)

Most cartridges are easy to stack in a pile.  The TI-99 doesn’t play like that.  Cartridges for that computer suddenly get fatter halfway up, complicating any standard stacking strategy.  Your only hope is a backwards/forwards alternation, but even that tends to be unstable.

Tomy Tutor


(Pictured: Traffic Jam)

Tomy Tutor cartridges7 are similar to Sega Master System carts with a different color scheme.  They’re white instead of black, have white labels instead of red, and the label has a notebook paper motif instead of Sega’s graph paper styling.  Almost makes up for the rubber keyboard.



Radio Shack TRS-80 Color Computer


(Pictured: Mega Bug)

Back in the history of ages past, Radio Shack did more than try to sell you cell phone contracts and RC cars.  At one point, they had a fairly popular line of computers.  No, they weren’t just computers.  They were COLOR Computers.  Special.  TRS-80 carts had a large section of game art on the front, while the end label was a standard design with the Radio Shack logo, game title, and, for easy reordering, the catalog number.

Turbo-Grafx 16


(Pictured: Blazing Lazers)

The TG-16 used cards that were pretty much the size of two credit cards stacked together.  The artwork was printed directly onto the card, rather than being an applied sticker like on pretty much every other game cartridge.  Most games had a large single colored patch with the game title in the TG-16 font and the TG-16 logo.  Above this is a black patch, presumably housing the actual game content.  When inserted into the system, the full label section remains visible.

Virtual Boy


(Pictured: Red Alarm)

Virtual Boy carts were larger than Game Boy cartridges and featured a label with a red and blue field and the stylized game title.  The lack of art on the label was mitigated by the fact that most users of the Virtual Boy lost their eyesight while playing, and were therefore unable to closely examine the cartridges.

  1. It later found love in the form of the similarly ignored NES expansion port. []
  2. Also known as Nebulus or Castelian []
  3. No one seems to actually know what this thing was called, not even the system itself. []
  4. or whatever []
  5. And when they were selling the Adam computer, the corporate labeling read “ColecoVision & ADAM”, making it even larger and even more obnoxious. []
  6. Although slightly enhanced by exciting motion lines. []
  7. All ten of them… []

September 26, 2009   4 Comments

Addendum to “Stop!”

Of course, I kept going a little bit after I told myself to stop, but I ended up with something that sort of resembles progress.  I was able to get it to turn pretty much the number of degrees I told it to turn, which is good.

Trouble is that now the motor appears to be doing a binary search to find the right angle.  I tell it to go 90, and it goes to 110, then 80, then 95, then 87, before eventually landing on 90.  It could just be a matter of too much power.  Perhaps if I drop it back a notch, it’ll hit the target more cleanly.  At any rate, it is moving a certain number of degrees when I tell it to.  Eventually…  I don’t know what this will look like in the game, though.


I still need to add another line to control the motor power remotely.  I also need to look into the ability to send a packet of information across, because I don’t want another Bluetooth recv block for every input variable.  That has to be expensive.

September 23, 2009   No Comments

Stop! Stand there where you are, before you go too far…

I spent some time last night diving back into the world of Lego Mindstorms, trying to solve one of the outstanding problems from the Pong Robot that I built a few weeks ago.  If you recall, one of the biggest problems I had was with precision movement of the servo that was rotating the paddle knob.  It would consistently overshoot the mark or simply not move at all.  This was very frustrating, and I tried several different movement techniques before coming across one that worked well enough…  Until I recharged the batteries.

One of the problems I had was that I couldn’t tell the motor how far to rotate.  I had to tell it to turn on, then, sometime later, tell it to stop.  This meant that processing delay could have been the cause of most of the overshoots.  By the time the stop message had gotten to the motor, the paddle had gone too far, leading the movement processor to send a command telling the paddle to move the other direction, where it promptly overshot the mark heading in the other direction.  What I needed was to tell the motor exactly how many degrees to turn.

Now, I tried the degree method before, sort of.  Unfortunately, I was limited to fixed angles that were set up in the Lego Graphical Program that I wrote (If you can call drag and drop programming “writing”…), so it didn’t end up working that well.  In general, it overshot worse using these angles.  And movements got stacked up in the queue, so it kept moving long after it should have stopped.  All in all, a failure.

I eventually bypassed the program altogether and moved to using the Mindstorms Direct Command API, which lets you send commands to the NXT over Bluetooth.  Using Direct Commands, I had slightly more control over when to start and stop, although the primary benefit was turnaround time.  All of my logic was centralized in the same C# application, rather than being spread between C# and the Lego Graphical Program.  I could tweak a setting and run and see what it did, rather than drag a couple of program bricks around, redeploy, run the program on the NXT, then run the desktop app, only to find out it didn’t work at all.  However, what I didn’t find was any way to provide a movement angle at all.  There were all sorts of settings, like power, braking mode, even some used for twin motor locomotion.  But no “rotation angle”.  So, I made do with simply adjusting the power.  The “success” of the robot was less a factor of going where it was supposed to go than luck in having it correct from its mistakes fast enough to hit the ball.

Obviously, luck wasn’t going to cut it.  The robot would perform reasonably at the standard speed, but as soon as you bumped up the speed a notch, it didn’t stand a chance.  I could tell that it would have problems if I wanted to expand to games like Breakout, where the speed is variable in normal play, and there was no way that the current movement would work for games like Kaboom!, where you have to have pinpoint movement control and be thinking five moves ahead to have any kind of hope of survival.

Yesterday, I decided to look at the problem again.  In reading what other people had written about the Direct Command API, I discovered that the semi-obscurely named “Tacho Limit” parameter that I’d been ignoring was actually the number of degrees to rotate with the movement command.  Great!  That’s precisely what I needed.  Except for the minor fact that this Tacho Limit was obeyed like someone on a crotch rocket obeys the speed limit.  Somewhere around the Tacho Limit the motor would sort of decide to stop moving, but would then coast for another random length of time.  At full power, this random length of time tended to mean somewhere in the neighborhood of 720 degrees.

Two full revolutions past the mark is not my idea of precise.
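For the curious, a Direct Command is just a small binary packet sent over the Bluetooth serial link. Here’s a sketch, in Python, of what I understand a SetOutputState message (the one carrying the Tacho Limit) to look like. The opcode, mode bits, and field layout here are from my reading of the NXT Bluetooth Developer Kit and my own notes, so double-check them against the official documentation before trusting this:

```python
import struct

# Constants as I understand them from the NXT Bluetooth Developer Kit;
# reproduced from my own notes, so verify before relying on them.
NO_REPLY = 0x80          # direct command, no response packet wanted
SET_OUTPUT_STATE = 0x04  # the opcode behind "move this motor"
MODE_MOTORON = 0x01
MODE_BRAKE = 0x02
MODE_REGULATED = 0x04
REGULATION_SPEED = 0x01
RUNSTATE_RUNNING = 0x20

def set_output_state(port, power, tacho_limit):
    """Build a SetOutputState packet.  tacho_limit is the number of
    degrees to rotate (0 means run forever), the parameter that
    turned out to be the rotation angle all along."""
    body = struct.pack(
        "<BBBbBBbBI",
        NO_REPLY, SET_OUTPUT_STATE,
        port,         # 0-2, or 0xFF for all motors
        power,        # -100..100, signed
        MODE_MOTORON | MODE_BRAKE | MODE_REGULATED,
        REGULATION_SPEED,
        0,            # turn ratio, only matters for synchronized motors
        RUNSTATE_RUNNING,
        tacho_limit)
    # Bluetooth framing adds a two-byte little-endian length prefix.
    return struct.pack("<H", len(body)) + body

# 90 degrees at power 75 on port A:
packet = set_output_state(0, 75, 90)
```

Of course, building the packet is the easy part; getting the motor to actually stop at the Tacho Limit is another matter entirely.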

And then, on top of that, there’s a counter in the motor that keeps track of how far you’ve wanted to go and how far it’s actually gone and won’t move again until you’ve told it to move past where it landed.  Which means, if you tell it to rotate 360 degrees, and it goes 1080, the motor won’t go anywhere at all until you’ve told it to move more than the difference of 720 degrees.  And then, it only travels the difference over the error.  So, needless to say, quite frequently, I was telling to move and it refused to go anywhere, and when it finally did decide to move, it didn’t move anywhere near the distance I wanted it to go.

Basically, I would have gotten more precise movement if I’d handed the paddle to a three-year-old that I’d pre-loaded with PixyStix.

I don’t understand it.  I’ve seen the motor move fast and stop with such force that the entire assembly shudders.  Why is it just gradually coasting to a stop now, when it should be moving precisely 30 degrees?  Not 34, not 37.  30.

I’m back in the graphical environment now, trying to drag and drop a program that’ll give me more control over the rotation, but which will hopefully cause the motor to actually stop when it’s supposed to stop.  I’m debugging using a happy face icon and a bunch of sleepy ZZzzzs.  This is all I will ever be able to think about the next time someone tries to show me a UML diagram.

And speaking of stopping, I really need to stop now, given that I happen to have something that could be described as “work” in the morning.   Just like the motor overshoot, this could easily continue for another hour or two if I let it.   When I’m half asleep at the morning meeting, I don’t think the excuse “I was playing with Legos all night” is gonna fly…

September 22, 2009   No Comments

This is not a test. I repeat, this is not a test.

This is not a test:

public void Test1()
{
    DataService target = new DataService(); // TODO: Initialize to an appropriate value
    DataObject data = null; // TODO: Initialize to an appropriate value
    Assert.Inconclusive("A method that does not return a value cannot be verified.");
}

It is, in fact, a complete waste of time.

I spent part of this week neck deep in unit tests that the development team has written for our latest project. At the start of this project, there was a big commitment made to have unit tests written for every component. “Test Driven Development” was tossed around in meetings, despite the fact that most of the people using it were unaware that they were using that phrase completely incorrectly.1 Anyway, I’d offered my services to help the devs develop unit tests, but, for the most part, I stayed out of the way and let them do their own thing.  This week, I decided to take a trip through what had been done.  Due to corporate confidentiality agreements, I cannot discuss what I found there, however, let’s just say it inspired this posting…2

For many developers, testing is beneath them.  They view testing as the refuge of the software engineer who couldn’t cut it as a real dev.  Testing is a simple, braindead task, requiring limited skills or intelligence.  Yet remarkably, developers who hold that view are also the ones who are invariably unable to write any kind of useful test.  Sometimes they’ll resist, claiming that “It’s not my job to write tests”.  When they reluctantly give in to the radical notion that it is, in fact, their job to write code that, you know, works, they rely on wizards or recorders to tell them what tests to write and how to write them, and frequently brag about the fact that they’ve spent all week writing unit tests, without regard to the quality of those tests. 

I have an announcement to make for the benefit of those people:

The letters “SDE” in my title are not honorary.

Writing good test automation is every bit as much software development as, say, writing a middle-tier WCF service.  You have to think of it that way or you will fail miserably.  In normal software development, you typically have a fixed set of requirements that you base your code on.  In test automation development, you also code to a requirement:  The requirement to make sure that the software works.  That particular requirement is infinite, highly dependent on context, and generally requires actual thought to implement.3  And somehow, many developers writing tests completely lose sight of that requirement.

Back to what I started with.  We’re using Visual Studio’s Unit Tests.  One of the much touted features is the fully integrated support of unit testing in the IDE.  You can even right click on a method you’ve written and generate unit tests!  Except…  Those tests are COMPLETELY WORTHLESS.  It doesn’t actually generate a test, it creates a stub and asks you to fill out the inputs and outputs.  It fails on a basic technical level, because it produces nothing useful and typically it takes you longer to delete all the crap it’s put in the method than it would have taken to simply write the test the right way from scratch in the first place.   However, more disturbingly, it fails on a philosophical level for many reasons.  Among them:

  • It enables and encourages laziness.  A few clicks and look at that, you’re a tester!  Now you don’t have to think about what you’re doing at all.  Isn’t that easy?
  • It encourages bad testing practices.  It will create a single method for a function, implying that you only have to do one thing to make sure that your code is good.  It makes it easy to produce a whole set of test stubs at once, but, oh crap, they’re all Asserting “Inconclusive” and causing the build to report errors, so better comment that out, now the build looks good and you’ve got 80 passing tests — SCORE!
  • It gives you idiotic advice, like:  “A method that does not return a value cannot be verified.”  List<T>.Clear(); doesn’t return a value, but I’m sure you can think of a way to verify that. Array.Sort(something); also has no return, and also has a clear verification.
  • It creates a worthless “ManualTests.mht” and “AuthoringTests.txt”, which are about as useful as the MFC “Readme.txt” from VS6 and should be deleted just as fast.
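To beat that third point to death, here’s what verifying a void method looks like in practice.  I’m sketching it in Python’s unittest rather than Visual Studio’s flavor, since the principle is identical across the xUnit family: you assert on the state the method changed, not on a return value.

```python
import unittest

class VoidMethodsAreVerifiable(unittest.TestCase):
    """'A method that does not return a value cannot be verified.'  Oh really?"""

    def test_clear_empties_the_list(self):
        items = [3, 1, 4, 1, 5]
        items.clear()  # the stand-in for List<T>.Clear(): returns nothing
        self.assertEqual(0, len(items), "Clear() should leave the list empty")

    def test_sort_orders_in_place(self):
        items = [3, 1, 2]
        items.sort()  # the stand-in for Array.Sort(): also returns nothing
        self.assertEqual([1, 2, 3], items, "Sort() should order ascending")

# Run the suite right here so the snippet stands alone.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(VoidMethodsAreVerifiable)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```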

And there’s more, but I’ll save it for some other time, because I think you’re getting my point.  Unfortunately, because it’s Visual Studio that’s doing it, people seem to think it’s automatically the right way of doing it, and don’t bother to think about it long enough to realize that they’re doing it wrong.  Therefore, I feel it is important to provide testing advice for developers who are attempting to write unit tests.  I think I sent most of these tips to my team at the start of our project, which means that they’re still in mint, unused condition for you!  In no particular order, but numbered for your reference:

  1. Unit tests are supposed to be self-contained.  They’re not supposed to run against external DBs, they’re not supposed to hit external services.  Use your DLL directly, don’t bother going through the service that exposes your DLL.  If you can’t check it in, you can’t use it.  Remember, the purpose of a unit test is to make sure your code works, not to make sure that the database server is configured correctly.  Read up on mock data providers to see how to design your code so that you can test it without talking to a DB or external service.
  2. Automation code is the same as regular code.  You can use helper methods and classes.  You can include libraries.  If you’re doing the same thing in fifteen tests, altering only a variable or two, then do the same thing you’d do in your regular code:  Write a method that does the work and pass those variables in as parameters.  It’s the same language you normally program in, and I’m assuming you already know how to use it.
  3. Always give yourself enough information in failure messages to find out why the test failed.  xUnit4 Assert methods typically have an optional “string message” parameter so you can put a note about what you’re checking.  As far as I’m concerned, that should not be an optional parameter.  Always use it.  If you’re catching an exception, print out the message AND the stack trace.  Add status writelines if you want.  Remember, you’re probably only going to look at the details of a test case when that test case is failing or acting strange, and when the test is failing or acting strange, you want to know why.  The more details and information, the better.
  4. The default status of a test should always be failure, whenever possible.5  Any time anything goes awry, it should fail.  If you have an if-else chain or a switch that’s supposed to cover every possible case, you should have the final else or the default of the switch make the test fail when you get an impossible case.  Basically, treat the code as guilty until proven innocent.
  5. Don’t forget the easy pickings!  Can you pass nulls?  Negatives?  Boundaries?  If your method has preconditions, try inputs that fail to meet those preconditions.
  6. You should generally have more than one test per method.  If your code has an if statement in it, then that’s a minimum of two tests right there.
  7. You should generally have more than one Assert per test.  For instance, if you have a function that fills a data object and returns it to you, then you should assert that the object isn’t null and you should assert that every field in the object is the value you expect it to be.  In some cases, it’s also prudent to make sure that it didn’t do anything you weren’t expecting it to do.  In the case of a delete method, for example, it’s often a good idea to make sure the thing you wanted to get deleted was deleted AND that the method didn’t go on a killing spree and delete everything else while it was at it.
  8. You should pretty much always end up with more code in your tests than in the code being tested, or you’re doing it wrong.
  9. You should pretty much always end up with more ideas for test cases than you can ever write, or you’re doing it wrong.
  10. Don’t write a bunch of unit test stubs for things you’d like to test.  It’s worthless.  Instead of test stubs, use comments to remember what tests you’d like to write.  In normal programming, you write a stub because you want to write some other piece of code first that will need a method, but you don’t want to actually write the function just yet.  That’s not the way it is in testing.  Nothing is going to call a test except the test harness.  A stubbed out test is not only a waste of time and space, it’s also dangerous.  If you stub out a bunch of tests that you’d like to write, something is going to happen that’s going to prevent you from getting to them right away, and you’re going to end up forgetting that you’ve left all the stubs there.  Then, sometime later, you’re going to review the tests that are being run, and you’ll see 20 test cases covering method X, and from the names of the test cases, you’ll assume that method X is adequately covered.  If, for some inexplicable reason, you feel you absolutely must have test stubs, then you MUST make them explicitly fail.  That way, you’ll know that they’re there.  Also, you’ll be annoyed by the large number of failing tests, and be less likely to put stubs in next time.  It’s better to have no test at all than a bad test.  At least you know where you stand when you have no tests.  If you have bad tests, they’re lying to you and making you feel good about the state of things.
  11. Use the most specific Assert available for what you’re checking.  The more specific the Assert is, the more specific the information you’ll get about what went wrong when it fails.  And if there isn’t an Assert for something that you’re checking frequently, then I recommend writing one.  They’re not magic functions or anything.  They’re typically just an if statement that throws an exception with a detailed message in certain circumstances.
  12. Avoid using try/catch blocks in your test code.  Yes, I know that sounds strange, and like it’s a bad coding practice, but you’re doing it for a reason.  First, Asserts communicate to the harness using exceptions, so if you’re catching exceptions, you risk swallowing the failure exception from an Assert.  Second, if your test is throwing an exception, you generally want to know about it and want it to fail the test.  xUnit frameworks consider any exception to be a failure, unless you’ve marked the test with an attribute telling the framework that you expect a certain type of exception to be thrown.  Now, try/finallys, on the other hand, are perfectly acceptable.  Just be careful with catches.
  13. Always make sure you’re actually testing the right code.  Don’t worry about testing the third party framework, don’t test the external class library, and don’t copy large sections of the code from your project into the test project for convenience.
  14. Don’t rely on the code you’re testing to test the code you’re testing.  Don’t call your method to make sure that your method is correct.  If your method is wrong, there’s a good chance that it will be wrong the same way both times.  Verify the code using independent means when possible.
  15. If you have an automated build setup, then integrate your unit tests such that a unit test failure causes a build failure.  Every last unit test you have should be passing at all times.  (And if you don’t have an automated build, then you need to set one up.  Seriously.)
  16. If a unit test fails, fix the code or fix the test.  You should consider a failing unit test to be the same level of seriousness as a compilation failure.  Do not ignore it and do not comment it out to make it work for now.  A unit test failure means that your code is broken.  If the test is obsolete, then delete it.
  17. Name your tests something meaningful.  Don’t call them “LoginTest1” to “LoginTestN“.  Name them something useful, like “Login_NullUsername” or “Login_ValidUsernameAndPassword” or “VerifiesThatLoginDoesNotWorkWhenUserIsNotFoundInDataStore”.  Anyone should be able to look at a test name and have some idea about what it is supposed to be testing.
  18. Test your tests.  Make sure that they’re doing what you expect them to be doing, and looking for what you expect them to be looking for.  Sometimes I’ll forcibly alter an important value, just to make sure that the test will fail.  You can even step through your test cases in a debugger and watch what’s going on, just like normal code!  (Because it is normal code!)
  19. Your test cases should never depend on one another.  Test A should never set up something for Test B, because you’re not guaranteed that Test A will run before Test B; in fact, you have no guarantee that Test A will be run at all.  xUnit frameworks typically have some sort of ClassInitialize and ClassTearDown and TestInitialize and TestTearDown methods that you can use to set up and clean up your test cases, if needed.  Read the documentation on these for your framework to be clear on exactly when these will be called.
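To make a few of these concrete, here’s a minimal sketch using Python’s unittest, though the same ideas map onto any xUnit framework.  The FakeUserStore class and its login method are invented purely for illustration:

```python
import unittest

class FakeUserStore:
    """A stand-in data store, invented for this example."""
    def __init__(self):
        self.users = {"alice": "s3cret"}

    def login(self, username, password):
        if username is None:
            raise ValueError("username must not be None")
        return self.users.get(username) == password

class LoginTests(unittest.TestCase):
    def setUp(self):
        # A fresh fixture for every test, so no test depends on another (#19).
        self.store = FakeUserStore()

    def test_login_valid_username_and_password(self):
        # Meaningful name (#17), specific assert (#11).
        self.assertTrue(self.store.login("alice", "s3cret"))

    def test_login_null_username(self):
        # Declare the expected exception instead of wrapping it in a try/catch (#12).
        with self.assertRaises(ValueError):
            self.store.login(None, "s3cret")

    def test_login_unknown_user(self):
        self.assertFalse(self.store.login("bob", "s3cret"))
```

Note that there isn’t a stub in sight (#10): every test here actually asserts something.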

And finally, the most important reason to be diligent about writing good unit tests:

  • If you write good unit tests, then you’re more likely to keep those annoying testers away from you, because you’re less likely to be giving them code that doesn’t work.
  1. I’m sure I’ll have something to say about TDD at some point. []
  2. To be fair, one group had their act together, producing about 480 out of the 500 or so unit tests that were there, and I didn’t see any glaring problems.  So, they’re either doing it right, or they’ve tricked me into thinking that they’re doing it right, either way, they deserve credit. []
  3. I know when a developer is starting to understand the true nature of unit testing when they cry out “Testing is hard” in anguish. []
  4. nUnit, JUnit, MSTest, MBUnit, etc. []
  5. Unfortunately, the xUnit frameworks are completely backwards in this regard and will pass any test if there isn’t an explicit failure.  This is wrong, and in my view, broken.  In the obvious case, a completely empty test method will count as a pass, even though it doesn’t do a damned thing.  In a more insidious case, an unintended code path could cause a test to pass, even though there is a bug in the system.  For instance, I recently wrote a test to verify that a method would throw an exception with a property set to a certain value.  I put the method call in a try block, my Assert on the property in the catch, and called it good.  A few days later, I realized that my test would pass if the method I was testing didn’t throw an exception at all, which was obviously incorrect. []

September 18, 2009   No Comments

Electric Curiosities: Motion Sensitive Controllers

The PS3 has one that, much like the system itself, no one cares about.

The Wii built their entire reason for existing around one.

And the XBox 360 has decided that they’re completely irrelevant, and is instead trying to pass off an EyeToy rip-off as “Innovation”.

I’m talking about motion sensitive controllers, and despite what Nintendo1 might have you believe, they’re about as innovative as Project Natal.  You see, the Wii isn’t the first Nintendo system to have a motion activated joystick. 

IMN Control Game Handler  GH-001

 The NES had one, called the “Game Handler”, from IMN Control.  The Game Handler is basically the top half of a flight stick, without that whole pesky base that’s normally attached.  The product code for this joystick is the overly optimistic “GH-001”, implying that not only were they expecting the line to produce a GH-002 and GH-003, but also a GH-100.

I have one.  Unfortunately, my apparently gargantuan hands are unable to comfortably hold the controller in such a way that I can press all of the buttons without finger contortions.  The main trigger and the primary thumb button are interchangeable between A and B, while the two difficult-to-reach, yet remarkably in-the-way side buttons are start and select.  They’re unlabeled, but that’s not a problem, because you’ll accidentally hit one of them rather frequently when you play, so you’ll soon learn which one is which.

IMN Control Game Handler

Of course, Game Handler itself was not terribly innovative.  The Atari 2600 even had a gravity controller, and I think it used real mercury switches, because it’s from back before mercury was dangerous.  I’ve seen it called “Le Stick” or the “Heyco Gravity Joystick”.  The joystick itself says “Heyco” on a small ring where the cord enters the base, but the plug appears to have been harvested off of a regular Atari joystick.  Although it’s decidedly phallic in appearance2, it’s much easier to hold this joystick than the Game Handler.  Much of the simplicity is owed to the fact that it only has one button, and that button can be placed on the top, where it’s easy to press. 

Atari Gravity Joystick

Thing of it is, this joystick is digital.  That means it’s on/off.  Which means you’re either tilted or not.  There’s no clear indication of when you’re about to tilt far enough to trigger the switch.  And it’s inconsistent.  Sometimes left will move you left, sometimes it will move you left and up, and sometimes down.  Just down.  Not left and down.  Just…  Down.  Down doesn’t move you down.  Down moves you right.  Of course.  That means you play like you’re having a seizure.

I don’t normally suck that much at Yars’ Revenge.

Now, that much awesome simply cannot exist on its own.  When you have an object this amazing, well, you just have to get two.

Atari Gravity Stick Dual Wield

Next up:  Robotron 2084.

So, in the end, it’s clear that the motion control of today’s consoles is not nearly as technologically innovative as you were led to believe.  It had all been done before, nearly 30 years ago.  The innovation made with this generation was that the designers decided that it would be a good idea to remove the suck from motion controllers and make something that actually worked.

  1. And Apple, for that matter. []
  2. Thankfully it was released before controllers had a rumble feature… []

September 16, 2009   1 Comment

Being Mean To The Average

I hate the average.

Specifically, I hate the use of the average as the predominant, sometimes the only, bit of information given when talking about software performance test results.  I will grant that it’s easy to understand, it’s easy to calculate, and I know that it’s one of the few mathematical concepts that can be passed along to management without much explanation.  However, it’s often misleading, presents an incomplete picture of performance, and is sometimes just plain wrong.  What’s worse is that most people don’t understand just how dangerously inaccurate it can be, and so they happily report the number and don’t understand when other things go wrong.

Calculating the average is simple.  You take the sum of a set of numbers and divide it by the number of numbers in that set.  But what exactly does that give you?  Most people will say that you get a number in the middle of the original set or something like that.   That’s where the faith in the number begins and, more importantly, where the mistakes begin.  The average will not necessarily be a number in the middle of your original set and it won’t necessarily be anywhere near any of the numbers in your original set.  In fact, even if the average turns out to be dead-center in the middle of your data, it doesn’t tell you anything about what that data looks like.

Consider, for a moment, the annual income of someone we’ll call Mr. Lucky.  Mr. Lucky’s salary starts at $50,000.  Every year, Mr. Lucky gets a $2500 raise.  So, for five years, here’s his income:  $50000, $52500, $55000, $57500, $60000.  Over that period, his average annual income is $55000.   Great, smack in the middle.  Now, in the sixth year, Mr. Lucky wins a $300 million lottery jackpot.  What’s his average income over all six years?  Over $50 million a year.  However, it’s clearly wrong to claim that Mr. Lucky made $50 million a year over six years, because once you look at the data, it is obvious that the $300 million is skewing the average well away from what he was actually making at the time.
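Mr. Lucky’s skew is easy to reproduce.  Here’s a quick sketch in Python, using the numbers from the story (and assuming his $2500 raises continue into year six):

```python
salaries = [50_000, 52_500, 55_000, 57_500, 60_000]
avg_five_years = sum(salaries) / len(salaries)
print(avg_five_years)  # 55000.0 -- smack in the middle

# Year six: a $62,500 salary plus the $300 million jackpot.
salaries.append(62_500 + 300_000_000)
avg_six_years = sum(salaries) / len(salaries)
print(f"{avg_six_years:,.0f}")  # 50,056,250 -- one outlier dwarfs everything else
```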

Let’s take another example, one closer to home.  Every month, you get an electric bill.  Since heating and cooling are often the largest chunks of power consumption, the bill will have the average temperature for the month, in order to help you make sense of the fluctuating charges.   This year, you get a bill for $200.  Shocked, you pull out last year’s bill to compare, and discover that you paid only $50 then.  Last year, according to the bill, the average temperature was 54.3 degrees, and this year it was 53.8 degrees.    The average temperature was roughly the same, so your heating/cooling shouldn’t have changed that much.  You didn’t buy a new TV or an electric car, you turn off the lights when you’re not in the room, you shut down the computer at night, you’ve got CFL bulbs everywhere, and as far as you know, the neighbors aren’t tapped into your breaker box to power the grow op in their basement.  So…  What happened?  Let’s take a closer look at that weather…

Daily Temperature Last Year:


Daily Temperature This Year:


Once you look at the actual daily temperature, it becomes clear what happened to your power bill.  Last year, the temperature was fairly constant, but this year, there were wild temperature swings.  You had your AC cranking full blast for the first part of the month, then you kept nudging up the thermostat during the end of the month.  However, since the temperature extremes offset one another, the average temperature makes it seem like both months had the same weather.

That’s the key:  Once you look at the data, it’s often clear that the average doesn’t tell the whole story, or worse, tells the wrong story.  And the problem is that people typically don’t look at the data.

Let me pull this back to the world of software performance testing and tell a story about a product I’ve worked on and how I first came to realize that typical performance testing was dead wrong.  The product was a keyword processor at the heart of a larger system.  Our customer had demanded a 1.5 second response time from our overall system, and that requirement got chipped down and used up by other components on its way to the keyword processor I was involved with.  By the time it got to us, we only had a response time cap of 250 ms in which to return our data, otherwise the larger system would give up on us and continue on its way.  So, great, I thought.  I’ll just load up the system with an ever increasing number of concurrent requests and find out how many requests we can process at the same time before our average hits 250 ms.  I did that and came up with 20 requests at once.

So we set up enough machines to handle the anticipated load with a maximum of 15 requests per box at one time, so we’d have some room to grow before we needed to add capacity.  All was well.

Until, that is, the day we launched and our keyword processor kept getting dumped on the floor by the system above us.

Something was obviously wrong.  We knew what our performance was.  At 20 concurrent requests, we had a 250 ms response time; at 15 requests, we had seen an average 200 ms response time.  We’re fast enough, and we’d proved that we were fast enough.  Statistics said so!

That right there was the problem.  We trusted the wrong information.  Sure, the average response time was 200 ms at the load we were seeing, but that said absolutely nothing about the roughly 30% of the requests that were hitting the 250 ms timeout.  We frantically reran the performance tests.  The results were stunning.  While the average response time did not hit 250 ms until we reached the 20 concurrent request level, we saw a significant (and SLA violating) number of requests that took more than 250 ms by the time we reached the 10 concurrent request level.

People aren’t very happy when you tell them that a cluster size has to double…

At the time, I thought I might have just made a rookie mistake.  It was the first major system I’d done the performance testing for, and I’d had no training or mentoring.  I did what I thought was right and ended up getting burned.  Surely, I thought, everyone else knows what they’re doing.  Real performance testers using real performance testing tools will get it right.  Trouble is, I’ve since discovered that’s not the case.  Everyone else makes these mistakes and they don’t even realize that they’re making any mistakes.  And performance testing tools actively encourage testers to make these mistakes by not giving the tester the information that they really need and, in some cases, giving testers information that is just plain invalid.1

So…  What do you do about it?  For starters, don’t use the average in isolation.  Pull in some other measurement.  I like using the 95th percentile for a second opinion.2  The 95th percentile means that 95% of all requests take less than that amount of time.  That’s really more what you care about, anyway.  You’re probably not really concerned with the 5% that lie beyond that point, since they’re usually outliers or aberrations in your performance anyway.  This will get rid of things like Mr. Lucky’s lottery winnings.   Additionally, you’re probably not really concerned with where the average response time lies.  People often use the results of performance testing to feed into capacity planning or SLAs.  When there are dollars on the line tied to the Five Nines, why do you care about a number that you think of as the middle of the road?  You care about the worst case, not the average case.  If we’d used the 95th or 99th percentile in our initial performance tests of that keyword processor, we would not have had the problem that we did.
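Computing a nearest-rank percentile takes only a few lines if your tool won’t do it for you.  Here’s a sketch in Python, with an invented 70/30 split of response times that mirrors the keyword processor story:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: at least pct% of samples are <= the result."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

# 70 fast responses and 30 that blow the 250 ms budget.
times = [150] * 70 + [300] * 30
print(sum(times) / len(times))   # 195.0 -- the average says everything is fine
print(percentile(times, 95))     # 300 -- the 95th percentile says otherwise
```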

But even so, the 95th percentile has its own set of issues and should also not be used in isolation.  It, too, does not tell you the complete story, and can easily hide important trends of the data.

There I go again with “the data”.  You have to look at the data.  In order to get the full picture of your system performance, you actually have to look at the full picture of your system performance.  You can’t go to single aggregate numbers here or there and call it good.

Of course, that leads to the obvious problem.  When you run a performance test, you’ll often end up running thousands upon thousands of iterations, collecting thousands upon thousands of data points.  You cannot wrap your head around that kind of data in the same way that you can see Mr. Lucky’s income or the daily chart of the temperature.

Well, not without help…

 Whenever I do performance testing now, I rely on a tool that I built that will produce several graphs from the results of a performance test run.  The simplest graph to produce is a histogram of response times.


A histogram will show you the distribution of the response times for the performance test.  At a glance, you can see where the fastest and slowest responses lie, and get a sense for how the service behaves.  Are the response times consistent, with most of them around 150 ms, or are they spread out between 100 and 200 ms?  A histogram can also show you when things are acting strangely.  One piece of software I tested had most of the response times centered below 400 ms, but there was a secondary bump up between 500-900 ms.  That secondary bump indicated something was strange, perhaps there was a code path that took three times as long to execute that only some inputs would trigger, or there were random slowdowns due to garbage collection or page swapping or network hiccups.
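Bucketing response times into a histogram takes almost no code.  Here’s a sketch in Python (the bucket width and sample times are arbitrary, not from any real test run):

```python
from collections import Counter

def histogram(times_ms, bucket_ms=50):
    """Count responses per fixed-width bucket, keyed by the bucket's lower edge."""
    return Counter((t // bucket_ms) * bucket_ms for t in times_ms)

print(histogram([120, 130, 145, 180, 620]))
# Counter({100: 3, 150: 1, 600: 1}) -- three responses in the 100-150 ms bucket,
# one in the 150-200 ms bucket, and one suspicious straggler out at 600 ms
```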


As you can see, there are a sizeable number of responses in that bump, enough to make you want to investigate the cause.  This potential problem would have been completely invisible if you were only concerned with the average, and the full extent would not have been known if you’d been looking at the 95th percentile.3  However, it’s plainly visible that something strange is going on when you see the graph like that.

While helpful, simple histograms like this are not enough.  In particular, they fall down if anything changes during the test run.  If something like network latency slows down your test for a brief period of time, that will be invisible in this graph.  If you’re changing the number of users or the number of concurrent requests, then the times from all of them get squeezed together, rendering the graph invalid.  What you’re missing here is the time dimension.

One way to bring in the time dimension is to animate the histogram.  You produce multiple histograms, each representing a slice of time in your test.  That way, you can watch the behavior of your service change over the length of the performance test.  The problem I have with an animated histogram is that you can’t just glance at the information.  You have to watch the whole thing, which can be time consuming for a long running performance test.

Instead of animation, I prefer to visualize the time in this way:


Going up the side, you have divisions for buckets of response times.  Going across, you have time slices.4   Essentially, this is what you’d see if you stacked a bunch of histograms together side by side, then looked at them from the top.  It’s basically like a heat map or a graph of the density of the response times.  The red zones have the most responses, while the green areas have the least.5  In the example above, you can see that there are a lot of responses in the 100 and 150 ms buckets and then it quickly trails off to zero.  There’s a lot of noise up to about 600 ms, and sporadic outliers above that.  The performance remains fairly stable throughout the test run.  All in all, this is a fairly standard graph for a well behaved piece of software.6
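The counting behind a density graph like this is just a histogram with a time dimension added.  Here’s a sketch of the idea in Python; this is not the code from my tool, and the sample data is invented:

```python
from collections import Counter

def density_grid(samples, slice_ms=1000, bucket_ms=50):
    """samples: (timestamp_ms, response_ms) pairs.
    Returns {(time_slice, latency_bucket): count} -- one count per heat-map cell."""
    grid = Counter()
    for ts, rt in samples:
        grid[(ts // slice_ms, (rt // bucket_ms) * bucket_ms)] += 1
    return grid

samples = [(0, 120), (500, 130), (1500, 400)]
grid = density_grid(samples)
print(grid[(0, 100)])  # 2 -- two fast responses in the first time slice
print(grid[(1, 400)])  # 1 -- one slow response in the second slice
```

From there, coloring each cell by its count is all that’s left to produce the graph.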

These density graphs aren’t terribly interesting when things are well behaved, though.  Here’s another graph I saw:


First, there are bands of slowness between about 210-280 ms and 380-450 ms.  These bands would appear in a histogram like the secondary hill shown above.  But what a histogram isn’t going to show you is the apparent pattern in the 380-450 ms band.  It appears that there are groups of slow responses in a slice of time, then none for the next couple of slices, then another group of slow responses, then none, and so on.  Seeing this kind of behavior can help you find the problem faster.  In this case, the slow responses may be caused by something else running on the box that’s scheduled to run at a regular interval, like an anti-virus scanner or a file indexer, or they can be caused by something like the garbage collector waking up to do a sweep on a somewhat regular basis.

Another benefit of a density graph is that they’re still useful, even if you change the parameters of the test during the test run.  For instance, a common practice in performance testing is to increase the load on a system during the run, in order to see how performance changes.


In this example, the number of concurrent users was steadily increased over the run of the test.  In the first part of the test, you can see that increasing the number of users will directly influence the response time.  It also increases the variation of those response times:  For the first part, all of the responses came in within 100 ms of one another, but pretty quickly, they’re spread over a 300-400 ms range.  And then, during the final third of the test, everything went all kerflooey.  I don’t know what happened, exactly, but I know that something bad happened there.

I think this graph is one of my favorites:


As you can see, this graph is distinctly trimodal.  There’s a steep, but well defined band shooting off the top, then a wide and expanding band in the middle, followed by a sharply narrow band with a shallow slope.  I like this graph because it doesn’t actually show anything that’s wrong.  What it illustrates is the huge impact that your test inputs can have on the results.  This test was run against a keyword index system.  The input I used was a bunch of random words or phrases.  Different words caused the keyword index system to do different things.  The shallow band at the bottom was created by the keywords for which the index system found no results.  When it didn’t find anything, the system simply returned immediately, making it very fast.  The middle band was filled with keywords that found a single result.  The top band was the set of keywords that found multiple results, which required extra processing to combine them into one before returning.


Performance tests are the same as any other test:  your goal is to find problems with the software.  You’re not going to find them if you’re only looking at the average.  So, the next time you’re involved with a performance test, remember: Look deeper.  There are more problems to be found under the surface.

  1. I’ve seen several cases of odd numbers coming out of VS Perf Tests, but the one I’m specifically thinking of here is the fact that VS will still report an average response time, even when you’re ramping up the number of users.  The performance of your system when you have a single user is vastly different than when you have 100 users, so the single “Average Request Time” number that it will report is just plain useless. []
  2. You can get the 95th percentile in Visual Studio, if you tweak a setting.  Set “Timing Details Storage” to “Statistics Only” or “All Individual Details” and it’ll start being recorded. []
  3. Likely, the 95th percentile would lie in the middle of the bump, leading you to believe that the system performance is slow overall, not that there’s an anomaly. []
  4. Time isn’t marked in these examples because the tool I use to generate them will usually generate other graphs, as well, making it possible to correlate the time on that graph with the time on this graph.  For these examples, it’s not really necessary to know how much time each column is or how many requests are represented by each block. []
  5. And the white zones are for loading and unloading only.  There is no parking in the white zones. []
  6. For comparison, this graph is from the same test run that produced the “Average Response Time: 149 ms” histogram that I showed earlier. []

September 13, 2009   No Comments


And with that, I’m declaring Crazy Project Weekend complete.  All in all, it was a success, although there were some things that didn’t work out.  I started on Friday morning with nothing but an idea and ended up successfully building a robot that could win a game of Pong, despite never having touched most of the major technologies (OpenCV, Mindstorms, Bluetooth) prior to this weekend.

I’m definitely going to have to do this sort of thing more often.

Just a reminder, the source code, if you’re interested, is available in SVN:  https://mathpirate.net/svn/

And now…  Sleep.

September 8, 2009   No Comments

Victory Video

Full Video

I’m working on a YouTube friendly condensed version at the moment.

[Edit: Here’s the YouTube version.]

September 8, 2009   No Comments

An Even Epicker Win

Epicker Win

21-9!  Take that and dance, 30+ year old technology!  I’ve got two words for you:  Inyo and Face!

Video coming soon.

September 8, 2009   No Comments