I've got to hand it to the University, they do know how to bring out our childish sides. : ) The instructions for this practical were simple: "Here's a load of Lego and here's your Arduino. There is a football pitch in one of the rooms and a camera facing it from the ceiling. You have three months to build a football playing robot. GO!"
As someone who spent the better part of my childhood playing with Lego, I was naturally very excited.
Anyway, for those who don't know what an Arduino is: according to the Arduino website, it's "an open-source electronics platform based on easy-to-use hardware and software. It's intended for anyone making interactive projects." This is what it looks like:
We went through three different robot designs before finally settling on the cuboid holonomic design that struck fear into our opponents' hearts. Our initial idea was a differential-drive robot (two wheels for driving plus some balancers) with a massive grabber and kicker, but the holonomic robots had such an advantage in manoeuvrability that we switched to a holonomic design in the end. These are the three robots we built:
As you can probably tell, there was a big difference between Fred and the other robots, not only because of the holonomicity but also the kicking mechanism. Nobody really believed a propeller could work as well as an average grabber/kicker combination, yet with the right maths behind it, the kicker did quite well:
Not to mention that Fred made an incredible defender. Over the entirety of the practical, Fred remained undefeated. This is what an enemy robot would see when aiming at our goal:
Even though Fred was a very simple robot, his strength lay in the rest of the system backing him up. The Vision System ran in real time with no lag, and the Strategy Module was clever enough to guide him well. Fred was also the fastest robot of them all, thanks to this little beast of a board:
This board is called the Motor Multiplexor board and it allowed us to over-volt the motors, making Fred move ridiculously fast. Because who wants to watch slow robots?
Writing the strategy system was arguably the most interesting part of the course. I wanted a system that could reliably control any robot, not just the robot we made (because code reusability, right???). Thus we split the strategy system into modules:
The navigation system did not drive the robot (which may sound rather counter-intuitive). It was called with two arguments: the name of the robot being driven and the destination location. Its output was a vector specifying the absolute direction in which the robot should move at that very moment in order to follow the optimal path to its destination. This way, the logic of "which wheels do we turn to follow the path" could be implemented independently of any particular robot.
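To make the split concrete, here is a minimal Python sketch (our actual system was not written quite like this; the names `navigate` and `drive_differential` are illustrative, not our real API):

```python
import math

def navigate(robot_name, destination, world):
    """Return an absolute (dx, dy) unit vector toward the destination.

    Knows nothing about wheels -- only about positions on the pitch.
    """
    x, y = world[robot_name]                      # current robot position
    dx, dy = destination[0] - x, destination[1] - y
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    return (dx / dist, dy / dist)

def drive_differential(direction, heading):
    """Robot-specific layer: turn an absolute direction into wheel speeds.

    For a differential-drive robot, project the world-frame direction
    onto the robot's heading and split it between the two wheels.
    """
    dx, dy = direction
    # angle between where we want to go and where we're facing
    error = math.atan2(dy, dx) - heading
    forward = math.cos(error)
    turn = math.sin(error)
    return (forward - turn, forward + turn)       # (left, right) wheel power
```

The point of the split is that `navigate` stays identical whether the robot underneath is differential or holonomic; only the second layer changes.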
Navigating a robot around obstacles is not as trivial as it may seem. We implemented two approaches: A* navigation and Potential Field navigation.
A* search is very straightforward, and it was our navigation system of choice for long distances. The pitch was split into tiny squares, 10 by 10 centimetres in size; squares were marked as passable or impassable based on the locations and sizes of the other robots, and standard A* search found the shortest path across the pitch.
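A rough Python sketch of that idea (the obstacle model here is simplified to circles, and the heuristic and grid-building details are illustrative rather than what we actually ran):

```python
import heapq

CELL = 10  # centimetres per grid square

def build_grid(width_cm, height_cm, obstacles):
    """Mark squares impassable if they fall inside an obstacle's radius.

    obstacles: list of (x_cm, y_cm, radius_cm) for the other robots.
    """
    cols, rows = width_cm // CELL, height_cm // CELL
    passable = [[True] * cols for _ in range(rows)]
    for ox, oy, r in obstacles:
        for row in range(rows):
            for col in range(cols):
                cx, cy = col * CELL + CELL / 2, row * CELL + CELL / 2
                if (cx - ox) ** 2 + (cy - oy) ** 2 <= r ** 2:
                    passable[row][col] = False
    return passable

def a_star(passable, start, goal):
    """Standard A* over the grid; returns the path as (col, row) cells."""
    rows, cols = len(passable), len(passable[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, None)]
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # walk the parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        col, row = cell
        for dc, dr in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (col + dc, row + dr)
            if (0 <= nxt[0] < cols and 0 <= nxt[1] < rows
                    and passable[nxt[1]][nxt[0]]
                    and g + 1 < cost.get(nxt, float("inf"))):
                cost[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None  # goal unreachable
```

With 10 cm cells the whole pitch is only a few hundred squares, so a search like this runs comfortably inside one vision frame.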
Potential Field navigation is the more interesting system of the two. We can think of the football pitch as a potential field: if we give other robots and obstacles high potential (repulsive) and the destination location low potential (attractive), we can compute the net force on the navigated robot at its location. If the robot moves in this direction, it will be on the optimal path to the potential minimum.
The pros of potential field navigation are that it is continuous (no tiny squares) and easy to compute. The con is that the robot may get stuck in a local minimum rather than the global one. This is actually very funny to watch: the robot just kind of 'hovers' there looking all confused, the poor thing... We avoided this by using A* for long distances and potential fields to fine-tune the position once the robot was close to its destination. The robots were very happy with this and gladly made their way wherever we asked them to go. ; )
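A single potential-field step really is just a few lines. A hedged Python sketch (the `attract`/`repel` constants and the inverse-square repulsion are arbitrary choices for illustration):

```python
import math

def potential_field_step(robot, destination, obstacles,
                         attract=1.0, repel=5000.0):
    """Net force on the robot: an attractive pull toward the destination
    plus an inverse-square push away from every obstacle."""
    rx, ry = robot
    # attraction grows linearly with distance to the goal
    fx = attract * (destination[0] - rx)
    fy = attract * (destination[1] - ry)
    for ox, oy in obstacles:
        dx, dy = rx - ox, ry - oy
        d2 = dx * dx + dy * dy
        if d2 == 0:
            continue                      # sitting on an obstacle; skip it
        d = math.sqrt(d2)
        # repulsion falls off with the square of the distance
        fx += repel * dx / (d2 * d)
        fy += repel * dy / (d2 * d)
    norm = math.hypot(fx, fy)
    # a near-zero norm is exactly the local-minimum case described above
    return (fx / norm, fy / norm) if norm else (0.0, 0.0)
```

Note how the local minimum shows up in the code: when attraction and repulsion cancel, the net force is (close to) zero and the poor robot just hovers.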
Together, these two approaches made an excellent navigation system, winning us the 'best robot' award.
The high level strategy system worked with Actions and DynamicPoints.
DynamicPoints were classes with getX, getY and recalculate methods. These points recalculated themselves based on the state of the world and "followed" certain high-level locations on the field. We had simple points that followed a certain robot or the ball, as well as more complex points that followed the optimal location that would prevent a pass between two robots or the ideal point for defending the goal. The navigation system would guide the robots to these locations until told otherwise.
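For illustration, here is a Python sketch of that interface (our real implementation differed in detail, and `GoalDefencePoint` is a simplified stand-in for the real defending point):

```python
import math

class DynamicPoint:
    """Base interface: the strategy layer only ever calls these three."""
    def getX(self): return self._x
    def getY(self): return self._y
    def recalculate(self, world): raise NotImplementedError

class BallPoint(DynamicPoint):
    """The simplest kind of point: it just follows the ball."""
    def recalculate(self, world):
        self._x, self._y = world["ball"]

class GoalDefencePoint(DynamicPoint):
    """Sits on the line between the ball and our goal centre,
    a fixed distance out from the goal."""
    def __init__(self, goal, distance):
        self.goal, self.distance = goal, distance
    def recalculate(self, world):
        bx, by = world["ball"]
        gx, gy = self.goal
        d = math.hypot(bx - gx, by - gy) or 1.0
        self._x = gx + (bx - gx) / d * self.distance
        self._y = gy + (by - gy) / d * self.distance
```

The strategy layer never cares which subclass it is holding; it just tells the navigation system "drive to this point" every frame, and the point quietly keeps itself up to date.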
Actions were composed into hierarchical finite state machines, which observed the state of the world and called methods in the robots' Controllers. They told the robots which DynamicPoints to follow (and face, for holonomic robots), and decided on what high-level action the robots should be performing (defending, attacking, blocking etc.).
Controllers stitched the whole system together. They were implemented and customized for each physical robot and mediated between the rest of the system and the robot.
The whole idea of a Vision System seemed simple enough, until I found out that the camera output was streamed to all computers on the floor over some obscure ungrounded VGA network, making the image quality resemble something recorded by a budget camera from the 80s. But OK, let's treat it as a challenge and see what we can do, right? : )
In the standard Robocup competitions, the robots have a small vision plate mounted on top, making them easier to detect. The plates have a Team Spot in the centre and four Pattern Spots in the corners. Using just these spots, one can determine which robot it is and which way it is facing. This is what the plates look like:
The finished Vision System was a 5-stage pipeline, each stage of which squeezed more information out of the image. These were the stages:
This is the image's journey through the pipeline (left to right, top to bottom: preview, spot analysis, undistortion, robot detection):
The Vision System also let the user configure the colours, distortion and team configurations on the fly. For colour calibration, the user could simply click the preview window and the colour of the clicked pixel would be saved as the defined colour.
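The click-to-calibrate idea fits in a few lines. A Python sketch (averaging a small window around the click is an embellishment of mine to tame noisy pixels on that awful VGA feed; `frame` is assumed to be a 2-D array of RGB tuples):

```python
def sample_colour(frame, x, y, window=2):
    """Average the pixels in a small window around the click, so one
    noisy pixel doesn't skew the calibration.

    frame: 2-D list of (r, g, b) tuples, indexed as frame[y][x].
    """
    pixels = []
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            if 0 <= y + dy < len(frame) and 0 <= x + dx < len(frame[0]):
                pixels.append(frame[y + dy][x + dx])
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

calibrated = {}

def on_click(name, frame, x, y):
    """Preview-window click handler: store the sampled colour under the
    name currently being calibrated (e.g. 'team_spot' or 'ball')."""
    calibrated[name] = sample_colour(frame, x, y)
```

On-the-fly calibration like this mattered a lot in practice: the lighting over the pitch changed between sessions, and re-clicking a few spots was far quicker than editing config files.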