Robotic Simulated Exploring and SLAM with A* - ROB 456 Final Project

Relevant Skills

Project Overview

The goal of this project was to implement a simulated robot that follows a path of waypoints toward a goal, generates a map of its environment as it explores (SLAM), and avoids obstacles along the way. The project was implemented in ROS Noetic using a provided simulated RViz world. It was a group effort and a synthesis of many of the concepts covered throughout the term.

There are three main nodes that comprise this project: a driver, a controller, and a node for generating map images. The controller node is responsible for generating the waypoints to send to the driver and the map generator. The controller uses the robot's current location and generates a "best point" to set as a goal. This goal is fed into the A* algorithm to generate a series of waypoints between the robot and its goal. Once the waypoints are generated, they are published to both the map node and the driver itself. Periodically, the controller receives updated map data from the map processor as the robot moves through the world. This allows us to replan our path if the robot cannot reach the specified goal.

The driver is responsible for actually driving the robot through its environment and for detecting obstacles to avoid using simulated lidar scans. If the lidar detects an object in the robot's path, the robot's speed is scaled in proportion to how close it is to the object, and it turns to avoid the object. If the robot gets stuck, a timer starts; when it times out, the driver publishes a signal to the controller to generate a new set of waypoints. The actual movement publisher uses the ROS Twist message.
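The proportional speed scaling described above can be sketched as a small helper. This is a hypothetical illustration, not the project's actual code; the parameter names (`stop_dist`, `slow_dist`) and the linear ramp are assumptions about how such scaling is typically done.

```python
def scaled_speed(closest, max_speed=0.5, stop_dist=0.3, slow_dist=2.0):
    """Scale forward speed in proportion to the nearest obstacle distance.

    closest   -- minimum lidar range (meters) in front of the robot
    stop_dist -- below this distance, stop and let the turn logic take over
    slow_dist -- above this distance, drive at full speed
    """
    if closest <= stop_dist:
        return 0.0
    if closest >= slow_dist:
        return max_speed
    # linear ramp between stop_dist and slow_dist
    return max_speed * (closest - stop_dist) / (slow_dist - stop_dist)
```

The scaled value would then be written into the `linear.x` field of the Twist message before publishing.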

The mapping node is responsible for generating map images as the robot moves. Every time the map is updated, it is saved by the controller and the updates are used to generate new waypoints. The map processing code has its own action server for better processing times. It uses binary dilation to thicken the walls around the edges of the map so the path planning algorithm doesn't generate waypoints that cause collisions.
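The wall-thickening step works like binary dilation from image processing (e.g. SciPy's `scipy.ndimage.binary_dilation`). Below is a minimal pure-NumPy sketch of the idea, assuming an occupancy grid where 1 marks a wall; the function name and radius parameter are illustrative, not the project's actual implementation.

```python
import numpy as np

def inflate_walls(occupancy, radius=1):
    """Thicken every occupied pixel by `radius` pixels in all directions,
    so planned waypoints keep at least that clearance from walls."""
    h, w = occupancy.shape
    out = occupancy.copy()
    for i, j in np.argwhere(occupancy == 1):
        i0, i1 = max(0, i - radius), min(h, i + radius + 1)
        j0, j1 = max(0, j - radius), min(w, j + radius + 1)
        out[i0:i1, j0:j1] = 1   # stamp a square of "wall" around the pixel
    return out
```

Choosing the radius to roughly match the robot's footprint (in pixels) lets the planner treat the robot as a point.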

Still images of the robot exploring a simple world

A screenshot of a simulated run of a robot exploring its environment. The path the robot takes is marked in blue and its direction is shown by a green arrow. The robot itself is represented by a small red box, navigating a box-shaped room. A second screenshot shows the robot later in its exploration: the map has updated to reflect the new information the robot obtained about its environment as it explored the last planned path.

If you'd like to see a video of the simulated robot working, one of our group members has a video of "walle" running in a complicated environment. It's linked in the relevant links section below!

Project Contributions

I was responsible for writing the path-planning code for the robot. The algorithm used to plan paths is A*, a best-first search algorithm. Essentially, the paths the robot could take from its starting position in the world to its goal are represented as a weighted graph. I was familiar with Dijkstra's algorithm and had done toy implementations of it before this class, but A* was not introduced to me until this course. A*'s main difference from Dijkstra's is that it uses a heuristic to determine which node in the graph to explore next.

Implementation-wise, the path planning code was written in the map coordinate plane, meaning the robot's location was represented as an index in an array that corresponded to a pixel in the map image; the same applies to the goal point. Graph locations are stored in a priority queue and visited nodes are stored in a dictionary. Each graph location pushed onto the queue is represented by a tuple: the first value is the weight/score of the node and the second is the actual pixel location [i,j] on the map. The visited-node dictionary stores a tuple that holds the distance to the node, its parent's location, and whether we've already seen the node. When a node is visited in the search process, it is set to closed and the Euclidean distance to the robot's location is calculated. The priority queue is initialized with the robot's location and a weight of 0.
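The bookkeeping described above can be sketched with Python's `heapq` module, which pops the lowest-scored tuple first. The names and the start coordinates here are illustrative placeholders, not values from the actual project.

```python
import heapq

# Priority queue of (score, (i, j)) tuples; heapq pops the lowest score first.
open_queue = []
start = (10, 12)                       # robot's pixel location in the map image
heapq.heappush(open_queue, (0.0, start))

# visited[(i, j)] = [distance_so_far, parent_location, is_closed]
visited = {start: [0.0, None, False]}

# Expanding a node: pop the best candidate and mark it closed.
score, node = heapq.heappop(open_queue)
visited[node][2] = True
```

Because tuples compare element-by-element, putting the score first makes the heap order nodes by cost without any custom comparator.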

For this implementation the diagonal edge weight of the graph is set to √2, since the distance between horizontally or vertically adjacent pixels is just 1. This would be more complicated in a real-world scenario. Cost was calculated by adding each node's score to the edge weight. The heuristic I used in this case was the Chebyshev distance, because the neighborhood for each node spans the eight-connected pixels around it. Since the robot is allowed to move diagonally, the heuristic becomes 4 times the cost of moving diagonally. If a neighbor in our search has not been visited, it first gets added to the visited dictionary, and then its location and weight (Chebyshev distance + cost) are pushed onto the priority queue. If the node has already been visited and the current cost is less than the saved cost, we update the cost in the visited dictionary. This process of exploring the eight-connected pixels and calculating their movement cost is repeated until the goal node/pixel is popped off the priority queue.
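The two cost functions involved, the per-step edge weight and the Chebyshev heuristic, can be sketched as follows. This is a hedged illustration of the quantities described above, not the project's exact code; function names are my own.

```python
import math

def chebyshev(a, b):
    """Chebyshev distance between pixels a=(i, j) and b=(i, j):
    the number of 8-connected steps needed if diagonals are free."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def edge_cost(node, neighbor):
    """Movement cost for one step on the pixel grid:
    1 for a horizontal/vertical step, sqrt(2) for a diagonal step."""
    di = abs(node[0] - neighbor[0])
    dj = abs(node[1] - neighbor[1])
    return math.sqrt(2.0) if di == 1 and dj == 1 else 1.0
```

During the neighbor loop, each candidate's priority would be its accumulated cost plus `chebyshev(candidate, goal)`, which keeps the search biased toward the goal.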

It is important to note that sometimes the goal location is not reachable or was never visited in the search. In that case, the path is planned to the closest visited node to the goal node. Once a suitable goal location (either the real goal or the closest visited node to it) is found, a path is returned. The path is represented as a list that stores the [i,j] location of each node in visited. This A* function is called in the controller to create a path, and the returned path is then converted into x-y coordinates for the robot to actually move to. Check out the GitHub repo and the file "path-planning.py" to see the actual code for my implementation.
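The final path-extraction step described above can be sketched by walking parent pointers from the goal (or the closest visited node) back to the start, then reversing. This is a minimal illustration assuming the visited-dictionary layout described earlier; it is not the project's exact code.

```python
def reconstruct_path(visited, goal):
    """Walk parent pointers from `goal` back to the start node.

    visited maps (i, j) -> (distance, parent_location, is_closed),
    with parent_location == None at the start node.
    Returns the path as a list of [i, j] pixel locations, start first.
    """
    path, node = [], goal
    while node is not None:
        path.append([node[0], node[1]])
        node = visited[node][1]        # follow the parent pointer
    path.reverse()                     # start -> goal order
    return path
```

Each [i, j] entry would then be mapped through the map's origin and resolution to get the x-y waypoints the driver actually follows.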

Lessons Learned/Outcome

Although my contribution to the project was small, seeing how my code interacted with other parts of the project, such as the map generator and the actual driver code, was a neat experience. It's important to mention that beyond exploring the concept of SLAM, this project was also geared toward familiarizing students with the ROS ecosystem. Many of the students in this course had limited coding experience and had never touched ROS before, whereas I was in the unique position of having prior ROS experience, as only a few of us in the class did. Because of that, my big takeaway from this project was not learning ROS itself: implementing the path planning code gave me a better idea of how path planning could be used in other projects, especially in the rover team's plans to develop a long-term simulation of the robot.

Relevant Links