The Brains of the Operation
The software is the third main part of our team, alongside the electronics and hardware departments. The overall goal of the software is simple: gather all information about the current game state and send the right commands to the robots to score goals. But how can a computer program achieve this? Our software is divided into separate modules, each with distinct inputs and outputs.
Before getting to the modules, some information about the programming language and build tools: currently we’re using C++20 without any major framework to implement our modules. We build our software with CMake and use Conan for dependency management. For the graphics we use a custom OpenGL renderer with ImGui as the GUI backend. Apart from a few external dependencies (such as protobuf or asio), we try to write everything from scratch.
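A minimal build setup of this kind could look roughly as follows; the project name, target names, and dependency versions here are illustrative, not our actual configuration:

```cmake
# conanfile.txt (illustrative):
#   [requires]
#   protobuf/3.21.12
#   asio/1.28.0
#   [generators]
#   CMakeDeps
#   CMakeToolchain

cmake_minimum_required(VERSION 3.20)
project(team_software CXX)

# C++20, as used throughout the codebase
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Packages resolved by Conan's CMakeDeps generator
find_package(protobuf REQUIRED)
find_package(asio REQUIRED)

add_executable(software main.cpp)
target_link_libraries(software protobuf::protobuf asio::asio)
```

With Conan 2 this would typically be driven by `conan install . --build=missing` followed by a CMake configure step using the generated toolchain file.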
To communicate with the outside world (the robots and cameras) we have the “ssl_interface” and “robot_interface” modules. The “ssl_interface” module listens for data from the provided cameras and translates the vision frames into our custom format; it also reads the game state and messages from the referee. The “robot_interface”, on the other hand, translates software commands into a format readable by our RF base station, which lets us write data to the robots and read it back.
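The translation step can be sketched like this. Note that the structs and field names below are purely illustrative stand-ins for the real vision protocol and our internal format:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical wire-level detection as it might arrive from the cameras
// (field names are illustrative, not the actual protobuf schema).
struct RawDetection {
    float x_mm, y_mm;        // position in millimetres, camera frame
    std::uint32_t robot_id;  // pattern id of the detected robot
};

// Hypothetical internal format: SI units (metres).
struct RobotState {
    double x, y;
    unsigned id;
};

// Convert a batch of raw detections into the internal representation.
std::vector<RobotState> translate(const std::vector<RawDetection>& raw) {
    std::vector<RobotState> out;
    out.reserve(raw.size());
    for (const auto& d : raw) {
        out.push_back({d.x_mm / 1000.0, d.y_mm / 1000.0, d.robot_id});
    }
    return out;
}
```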
All the information coming from these different places needs to be gathered in a single place: the “game_data_provider”. It contains methods for filtering the incoming data (e.g., smoothing out noisy changes) and other functionality that makes it easier for the following modules to work with the data. For analysis it also provides the “Observer”, a tool that derives information about the current state of the game (ball possession, risk factor, target probability).
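One common way to smooth noisy measurements is an exponential moving average; the sketch below shows that idea, though the actual filters in the game_data_provider may differ:

```cpp
// Exponential moving average: state = alpha * new + (1 - alpha) * old.
// A smaller alpha smooths more aggressively but reacts more slowly.
class SmoothingFilter {
public:
    explicit SmoothingFilter(double alpha) : alpha_(alpha) {}

    double update(double measurement) {
        if (!initialized_) {
            // Seed the filter with the first measurement.
            state_ = measurement;
            initialized_ = true;
        } else {
            state_ = alpha_ * measurement + (1.0 - alpha_) * state_;
        }
        return state_;
    }

private:
    double alpha_;
    double state_ = 0.0;
    bool initialized_ = false;
};
```

In practice one such filter per tracked quantity (ball position, robot velocity, …) already removes much of the camera jitter.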
To calculate what each robot should do next, we use an AI trained with reinforcement learning. This happens in the “task_manager” module. The model receives all information about the game as input and tries to find the best task for each robot to maximize the reward (which ultimately means winning the game). The trained model is converted into a deployable format and executed in the final software through a tensor runtime.
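After inference, a policy of this kind typically yields one score per candidate task, and the task manager picks the best one per robot. The task names and the argmax selection below are an illustrative sketch, not the team’s actual task set:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical task set a policy network might choose between.
enum class Task { Attack, Defend, SupportPass };

// Pick the task with the highest model score (argmax).
Task pick_task(const std::vector<double>& scores) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < scores.size(); ++i) {
        if (scores[i] > scores[best]) best = i;
    }
    return static_cast<Task>(best);
}
```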
The results of the AI are tasks, which are then executed. The main problem here is movement, since the robots should not collide with each other. This is handled by the “localplanner” module, which controls the movement and behavior of the robots.
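A classic ingredient of such local planners is a repulsive term that pushes the commanded velocity away from nearby obstacles. The following is a minimal sketch of that idea (not the localplanner’s actual algorithm); the safety radius and gain are made-up tuning parameters:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Adjust the desired velocity: if another robot is inside the safety
// radius, add a component pointing away from it, scaled by how close
// it is. Outside the radius the desired velocity passes unchanged.
Vec2 avoid(Vec2 desired, Vec2 self, Vec2 obstacle,
           double safety_radius, double gain) {
    const double dx = self.x - obstacle.x;
    const double dy = self.y - obstacle.y;
    const double dist = std::sqrt(dx * dx + dy * dy);
    if (dist >= safety_radius || dist == 0.0) return desired;
    // Repulsion grows as the obstacle gets closer.
    const double push = gain * (safety_radius - dist) / dist;
    return {desired.x + push * dx, desired.y + push * dy};
}
```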
These final commands are sent back to the robots, and the basic structure is complete. Of course, debugging such an application would be very hard, so in parallel we’re writing our own visualization tool called “luhviz”, which can display all available information about each module and the decisions of the software.