
Guidance System Video

Here’s a video of the game processing.

I don’t think it’s quite as fascinating to watch as the laser line and target circle trajectory projections of Pong, but I am happier that the game is much cleaner to work with.

This was a recording of the game processing a previously recorded play session.  It’s not a recording of the robot playing, nor is it a recording of me playing with the augmented display.

You may notice that I’m measuring distance from the edge of the bucket, not the center.  I’m hoping that will help alleviate some of the overrun effect, since it leaves the entire bucket width available as a buffer.

February 28, 2010

Guidance Systems Online

The computer can now tell you where to go.

Bombs that will be caught by the buckets in their current position turn green, bombs that will be missed turn red.  If the buckets will catch the lowest bomb, they turn green, otherwise they’re pink and give you a pointer as to how far you need to move and in which direction.  If there are no bombs detected, the buckets turn white.
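The bucket-coloring rule can be sketched as a tiny function.  This is a pure-Python illustration with invented names (the project itself draws this with OpenCV), not the actual code:

```python
# Hypothetical sketch of the bucket guidance rule described above.
# bomb_landing_xs: predicted landing x-coordinates, lowest bomb first.
# bucket_left / bucket_right: current horizontal extent of the buckets.
def guidance_color(bomb_landing_xs, bucket_left, bucket_right):
    if not bomb_landing_xs:
        return "white"                      # no bombs detected
    lowest = bomb_landing_xs[0]
    if bucket_left <= lowest <= bucket_right:
        return "green"                      # lowest bomb will be caught
    # Pink, plus a signed pointer: how far to move, and in which direction.
    nearest_edge = bucket_right if lowest > bucket_right else bucket_left
    return ("pink", lowest - nearest_edge)
```

So `guidance_color([30, 5], 10, 20)` says the buckets need to slide right by 10 to catch the lowest bomb.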

February 28, 2010

Calculating ROI

I’ve managed to cut the processing time from 160 ms down to 70 ms, and there’s still room for improvement.  I did it by using something called an ROI, or “Region of Interest”.

Pretty much every image operation in OpenCV can be restricted to the ROI of an image.  It’s a rectangular mask, specifying the region that you’re interested in.  If you want to blur out a box in an image, you set the ROI on the image, call the blur function, then unset the ROI, and you’ll have a nice blurred box in your image.  You can also use the ROI to tell OpenCV to only pay attention to a specific area of an image when it’s doing some kind of CPU intensive processing, which is what I did.
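The set-ROI / operate / unset-ROI pattern looks roughly like this.  This is a pure-Python stand-in for illustration (a nested list plays the image, and the “blur” just zeroes pixels so the restriction is easy to see), not OpenCV itself:

```python
# Apply an operation only inside roi = (x, y, width, height); everything
# outside the rectangle is left untouched, like OpenCV's ROI mechanism.
def apply_in_roi(image, roi, op):
    x, y, w, h = roi
    for row in range(y, y + h):
        for col in range(x, x + w):
            image[row][col] = op(image[row][col])
    return image

img = [[1] * 6 for _ in range(4)]
apply_in_roi(img, (2, 1, 3, 2), lambda px: 0)   # "blur" a 3x2 box at (2, 1)
```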

Look at the Kaboom! playfield for a moment.  What do you see?

For starters, there’s that big black border around the edge.  I don’t need to process any of that at all for any reason.  Any cycles spent dealing with that are a complete waste of time.

Then, each item I want to recognize lives in a particular part of the screen.  The bombs are only in the green area, the buckets live in the bottom third of the green zone, and the bomber lives up in the grey skies.

So why should I look around the entire screen for the buckets, when I know that they’re going to live in a strip at the bottom of the green area?  That’s where the ROI comes in.

I can tell the processor to only look within specific areas for a given object.  There will be no matches outside of that zone.  If a bucket ever ends up in the sky, it means my Atari is broken and I will be too sad to continue.  If it thinks that one of the bombs looks like the bomber, then it’s a false hit that I don’t want to know about.  By restricting the ROI on the detection, I’ve seen a significant boost in performance.
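Restricting a search to the zone where an object can legally appear can be sketched like this (again a pure-Python illustration; the names and the toy playfield are made up):

```python
# Search for a target value, scanning only the cells inside roi.
# Restricting the scan both skips work and rules out impossible matches:
# a "bucket" in the sky is never even examined.
def find_value(image, target, roi=None):
    height, width = len(image), len(image[0])
    x0, y0, rw, rh = roi if roi else (0, 0, width, height)
    for y in range(y0, y0 + rh):
        for x in range(x0, x0 + rw):
            if image[y][x] == target:
                return (x, y)
    return None

playfield = [["." for _ in range(4)] for _ in range(6)]
playfield[5][1] = "B"              # a bucket near the bottom of the screen
bucket_roi = (0, 4, 4, 2)          # only scan the bottom strip
```

Scanning `bucket_roi` touches 8 cells instead of 24, and nothing outside the strip can ever match.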

And that 70 ms is what I got using large ROIs, ones considerably larger than shown in the image above.  If I limit the processing to those tighter areas, I expect to be able to knock it down to around 40–45 ms, maybe even less.

Of course, in order to do that, I’d rather not hard code pixel dimensions.  I’m going to have to find the grass and the sky automatically.

February 28, 2010




Working for the TSA

That’s looking better.  This is an implementation of the strip method of bomb detection that a friend suggested, which I mentioned in the previous post.  It appears to be roughly twice as fast as the rescan method.

The assumptions are as follows:

  • There can only be one bomb in a horizontal row.
  • Not all rows will have a bomb.

To find all bombs:

  1. Partition the playfield into strips the height of a bomb.
  2. Look for a bomb in each strip.
  3. Blot out the bomb point, in case the match overlaps strip boundaries.

This ends up only having to scan the source image once.  It also reduces the amount of noise, because only one bomb is allowed per row.  However, you do end up making more calls to cvMinMaxLoc(), because you’re calling it once per strip, whereas in the rescan algorithm you stop making calls after you hit a cutoff threshold.

All in all, a very good result.

It also means that I’m now performing a strip search in an attempt to find bombs.  I should work at the airport.

Anyway, since I haven’t posted any code today, here’s the strip search algorithm.

IplImage bombMatches = new IplImage(image.Width - _bombTemplate.Width + 1,
                                    image.Height - _bombTemplate.Height + 1,
                                    BitDepth.F32, 1);
Cv.MatchTemplate(image, _bombTemplate, bombMatches, MatchTemplateMethod.SqDiffNormed);

double minVal, maxVal;
CvPoint minLoc, maxLoc;
for (int stripY = 0; stripY < bombMatches.Height; stripY += _bombTemplate.Height)
{
    // Search one bomb-high strip at a time (clamped at the bottom edge).
    int stripHeight = Math.Min(_bombTemplate.Height, bombMatches.Height - stripY);
    bombMatches.ROI = new CvRect(0, stripY, bombMatches.Width, stripHeight);
    Cv.MinMaxLoc(bombMatches, out minVal, out maxVal, out minLoc, out maxLoc);

    if (minVal < minThreshold)
    {
        // minLoc is ROI-relative; shift into image space, centered on the template.
        CvPoint adjusted = minLoc + new CvPoint(_bombTemplate.Width / 2,
                                                _bombTemplate.Height / 2 + stripY);
        image.Circle(adjusted, 10, CvColor.Red);
        bombMatches.Circle(minLoc, 5, CvColor.White, -1);   // blot, in case of overlap
    }
}

Now, the problem is that the cvMatchTemplate call seems to take about 50 ms to run.  At 30 FPS, I have only 33 ms to process a frame.  I might be able to live with 50 ms, but I haven’t even started to find the buckets yet.

February 27, 2010

Bomb Detection

I finally made some headway into detecting the bombs.

It missed one, though…

As I noted, you call cvMatchTemplate to find likely matches within a larger image, then you call cvMinMaxLoc to get the location of the best match.  But only the best match.  What about the other matches?  How do you find them?

Well, the method I used to get the points above is a fairly simple, but fairly dumb way of doing it.  You get the first point from cvMinMaxLoc, then you blot it out with a filled circle and repeat.  Your initial point, and the points around it, are no longer considered matches, so you get the second best match, and so on.  Trouble is, it’s a bit slow, since you’re constantly rescanning the entire image for the best match.

My friend Susan suggested scanning strips of the image instead.  Because there can only be one bomb in any given row, you can partition the screen into strips roughly the height of a bomb and know that there’s only one point in each strip that you’ll care about.  I’m going to try that out and see how it does.

By the way, there are so many false circles in the image above because I’m not throwing away matches worse than a certain threshold.  Obviously, I’ll have to do that if I want anything useful.  It looks like 0.2 might be a good cutoff.
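Here’s a pure-Python stand-in for that rescan loop, using a plain grid of match scores in place of OpenCV’s result image (lower score = better match; all the names here are mine, for illustration only):

```python
# Repeatedly take the global best (minimum) score, record it, then blot
# out its neighborhood so the next pass finds the runner-up.  Scores are
# assumed to lie in [0, 1], as with normalized square-difference matching.
def rescan_matches(scores, threshold, blot_radius=1):
    scores = [row[:] for row in scores]            # work on a copy
    found = []
    while True:
        val, bx, by = min((v, x, y)
                          for y, row in enumerate(scores)
                          for x, v in enumerate(row))
        if val >= threshold:
            break                                  # no good matches left
        found.append((bx, by))
        for y in range(max(0, by - blot_radius),
                       min(len(scores), by + blot_radius + 1)):
            for x in range(max(0, bx - blot_radius),
                           min(len(scores[0]), bx + blot_radius + 1)):
                scores[y][x] = 1.0                 # blotted: worst score
    return found
```

Every match costs a full scan of the score grid, which is exactly why this gets slow on larger images.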

February 27, 2010

More Results Like This

Playing around with the different parameters to cvMatchTemplate.

I think these are finding what I want them to find; now I just have to figure out how to get back the points where it found what it found.  Everyone says to use cvMinMaxLoc, but that seems to give back a single point, not the array that I need.

February 27, 2010

Well, that’s, uh, different…

I think the blacker points are the closer matches to the image.  I told it to find the bombs, and it looks like it managed to find them all.  I think.

Either that or my Atari is possessed.

February 27, 2010

The Game Part 1: Visualization

That’s the game.  Those are the elements I need to recognize and react to.  I’m going to need to find the bombs, find the buckets, and find the bomber.

I could try to do this in pretty much the same way that I did for Pong, but that was a bit hacky and prone to failure.  I’d like to learn a bit more about some of the object detection and recognition features in OpenCV, and see if there’s some way for it to identify the components directly, instead of just assuming that different boxes are different parts of the playing field.  I played with some of that on the Wesley Crusher thing, but that was using ready-made classifiers.  This time I’ll have to do it from scratch and see what happens.

At any rate, the game playing logic should be easier this time around.  I don’t have to deal with bouncing balls and linear regression and all of that.  The bomb tracking should be fairly straightforward.

Anyway, time to break out the OpenCV.

February 27, 2010

The Cupcake of Glory

Each Crazy Weekend Project has a crazy goal they’re aiming for.  The first was to get a perfect game in Pong.  The second was to eliminate Wesley Crusher.  For this one, it’s joining the Bucket Brigade.

Part of what set Activision apart, aside from making awesome games, was that for pretty much every game, you could get a patch to prove that you’d obtained some high score.  Just snap a picture of your score, mail it in, and a few weeks later you’d get a free three-inch patch commemorating your achievement.  Some games had multiple patches for different difficulty levels.

For Kaboom!, you could get the Activision Bucket Brigade patch for a score of 3000 or higher.  To reach this level, you had to be good, but you didn’t have to be perfect.  An ordinary player would be able to get this score, given enough practice.

My primary goal for this weekend is to build a robot capable of getting a score in Kaboom! that will qualify it to join the Bucket Brigade.  I’ll then send a picture in to Activision and see what happens…

The spirit of these patches lives on in the Xbox 360 and PS3.  Games for those systems feature Achievements or Trophies, which you get for doing something specific in the game.  However, instead of having just one, games will typically have up to 50.  And, unfortunately, they’re all electronic, so you can’t show them off.

February 25, 2010