Implementation
Hardware Used
We used a TurtleBot3 Waffle with an NVIDIA Jetson TX2, along with Marvelmind Indoor GPS beacons.
Software
Our code can be found at the link below and is also explained in detail in this section.
Link: https://github.com/jessicaleu24/NSF-T3
Launch files
research/launch/avoid_obstacles.launch
Brings up the TurtleBot and launches the Kalman filter, obstacle detection, and path planning/following nodes.
research/launch/avoid_obstacles_simulation.launch
Brings up a simulated environment used to test path planning/following. Includes a modified rviz config file and a fake obstacle.
research/launch/fake_turtlebot.launch
Used by avoid_obstacles_simulation.launch; similar to turtlebot3_fake but with custom rviz settings.
research/rviz/fake_turtlebot.rviz
An rviz config that visualizes the path and obstacles by default and shows only the current odometry; this was used to save time when debugging.
research/launch/stop.launch
Stops the TurtleBot.
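As a quick usage note: assuming the package above is named `research` (as the paths suggest) and has been built and sourced in a catkin workspace on the robot, the main pipeline is started with

```
roslaunch research avoid_obstacles.launch
```

and `roslaunch research stop.launch` halts the robot.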
Obstacle detection and visualization
research/src/find_lines_node.py
Launches the obstacle detector, found in find_lines.py.
research/src/find_lines.py
Subscribes to /scan to detect lines from LIDAR data (`find_lines`) and convert those lines into obstacles (`make_polys`). To do this, we perform a few steps (described in more detail in the Design section):
- Convert the polar LIDAR data to a black image in which a white pixel means an obstacle lies somewhere within that cell
- Detect edges within that image with a Canny edge detector
- Identify edges that could be obstacles with a probabilistic Hough transform
- Eliminate lines that appear to be duplicate readings of the same obstacle, to reduce computation later down the line (in path planning)
- Create polygons from each line, that represent the obstacles in the scene
- Publish those polygons as an array to the /obstacles topic, for the path planner to read in
Also creates a visualization on a separate thread (visualization logic in viz_obs.py), and updates it whenever a new path comes in from the /path topic.
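A minimal sketch of the pipeline above, assuming OpenCV and NumPy and using illustrative helper names and parameter values (the real implementation lives in find_lines.py and util.py):

```python
import numpy as np
import cv2

def scan_to_image(ranges, angle_min, angle_increment, resolution=0.02, size=400):
    """Rasterize polar LIDAR ranges into a binary image (white = obstacle hit)."""
    img = np.zeros((size, size), dtype=np.uint8)
    ranges = np.asarray(ranges, dtype=float)
    angles = angle_min + angle_increment * np.arange(len(ranges))
    valid = np.isfinite(ranges)
    # Convert polar (r, theta) to pixel coordinates, with the robot at the image center.
    xs = (ranges[valid] * np.cos(angles[valid]) / resolution + size / 2).astype(int)
    ys = (ranges[valid] * np.sin(angles[valid]) / resolution + size / 2).astype(int)
    keep = (xs >= 0) & (xs < size) & (ys >= 0) & (ys < size)
    img[ys[keep], xs[keep]] = 255
    return img

def image_to_obstacle_polys(img, pad=3):
    """Canny edge detection + probabilistic Hough, then inflate each line into a polygon."""
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=10,
                            minLineLength=5, maxLineGap=5)
    polys = []
    if lines is None:
        return polys
    for x1, y1, x2, y2 in lines[:, 0]:
        # The real code first drops near-duplicate lines (see util.py) before this step.
        direction = np.array([x2 - x1, y2 - y1], dtype=float)
        normal = np.array([-direction[1], direction[0]])
        normal = pad * normal / (np.linalg.norm(normal) + 1e-9)
        p1, p2 = np.array([x1, y1], dtype=float), np.array([x2, y2], dtype=float)
        polys.append([p1 + normal, p2 + normal, p2 - normal, p1 - normal])
    return polys
```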
research/src/util.py
Various utility functions, mostly for converting LIDAR data to image data and for deduplicating lines (as mentioned above).
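As an illustration of the deduplication idea (a heuristic sketch, not the exact implementation), two Hough lines can be treated as readings of the same obstacle when they are nearly parallel and their endpoints are close:

```python
import numpy as np

def similar(line_a, line_b, angle_tol=0.2, dist_tol=10.0):
    """Heuristic: same obstacle if nearly parallel and an endpoint nearly coincides."""
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = line_a, line_b
    ang_a = np.arctan2(ay2 - ay1, ax2 - ax1)
    ang_b = np.arctan2(by2 - by1, bx2 - bx1)
    d_ang = abs((ang_a - ang_b + np.pi / 2) % np.pi - np.pi / 2)  # undirected angle difference
    d_end = min(np.hypot(ax1 - bx1, ay1 - by1), np.hypot(ax1 - bx2, ay1 - by2))
    return d_ang < angle_tol and d_end < dist_tol

def dedupe(lines):
    """Keep only one representative of each group of similar lines."""
    kept = []
    for line in lines:
        if not any(similar(line, other) for other in kept):
            kept.append(line)
    return kept
```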
research/src/viz_obs.py
Visualizes the 2D scene from the perspective of the robot, which remains at the center of the image as it moves. Shows the robot's planned path and the obstacles it sees, along with all LIDAR data in the form of an image.
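A simplified sketch of that drawing logic (omitting the raw LIDAR overlay), assuming the obstacles and path are given as (x, y) points in the robot frame; the helper name and colors are illustrative:

```python
import numpy as np
import cv2

def draw_scene(obstacle_polys, path_xy, resolution=0.02, size=400):
    """Render the obstacles and planned path in an image centered on the robot."""
    img = np.zeros((size, size, 3), dtype=np.uint8)

    def to_px(points):
        # Robot-frame meters -> pixel coordinates, robot at the image center.
        return (np.asarray(points) / resolution + size / 2).astype(np.int32).reshape(-1, 1, 2)

    for poly in obstacle_polys:
        cv2.polylines(img, [to_px(poly)], isClosed=True, color=(0, 0, 255), thickness=1)
    if len(path_xy) > 1:
        cv2.polylines(img, [to_px(path_xy)], isClosed=False, color=(0, 255, 0), thickness=1)
    cv2.circle(img, (size // 2, size // 2), 4, (255, 255, 255), -1)  # the robot itself
    return img
```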
Path planning and following
research/src/optimizer.py
Subscribes to the obstacles (/obstacles), published in the TurtleBot frame, and the TurtleBot's location (/odom), published in the world frame. Converts the obstacles into the world frame and solves the optimization problem in a few steps:
- Find the direct path to the original line (x_target = x_current + 3, y_target = y_goal) and generate linearly spaced waypoints between the current and target points
- Feed this first path into the optimization, which enforces the obstacle avoidance constraints, while staying close to the direct path
- Repeat until the path stops changing, which usually takes 2-3 iterations
- Publish the final path as a Path object to /path
More detail on the path planner can be found on the Design page.
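A rough sketch of that warm-start-and-repeat loop, using scipy.optimize as a stand-in for the actual solver and a simplified obstacle model (center points with a clearance radius instead of polygons); the names and values here are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

def plan_path(start, goal, obstacle_centers, clearance=0.3, n_pts=10, max_iters=5):
    """Warm-start with the direct path, then re-solve until the path stops changing."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    obstacle_centers = np.asarray(obstacle_centers, float).reshape(-1, 2)

    # Linearly spaced waypoints between the current and target points.
    direct = np.linspace(start, goal, n_pts)
    path = direct.copy()

    def cost(flat):
        # Stay close to the direct path.
        return np.sum((flat.reshape(-1, 2) - direct) ** 2)

    def clearance_con(flat):
        # >= 0 when every waypoint is at least `clearance` from every obstacle center.
        pts = flat.reshape(-1, 2)
        d = np.linalg.norm(pts[:, None, :] - obstacle_centers[None, :, :], axis=2)
        return (d - clearance).ravel()

    cons = [{"type": "eq", "fun": lambda f: f.reshape(-1, 2)[0] - start},
            {"type": "eq", "fun": lambda f: f.reshape(-1, 2)[-1] - goal}]
    if len(obstacle_centers):
        cons.append({"type": "ineq", "fun": clearance_con})

    for _ in range(max_iters):
        res = minimize(cost, path.ravel(), method="SLSQP", constraints=cons)
        new_path = res.x.reshape(-1, 2)
        done = np.max(np.abs(new_path - path)) < 1e-3  # path stopped changing
        path = new_path
        if done:
            break
    return path
```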
research/src/follow_lines.py
Subscribes to /path, creates a pathmodule object, and publishes to /cmd_vel.
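In outline, the node wiring looks roughly like this (a sketch assuming the current pose comes from /odom; the PathFollower class and its method names are hypothetical stand-ins for the pathmodule object):

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Path, Odometry
from geometry_msgs.msg import Twist

from pathmodule import PathFollower  # hypothetical name for the controller object

def main():
    rospy.init_node("follow_lines")
    follower = PathFollower()
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/path", Path, follower.update_path)
    rospy.Subscriber("/odom", Odometry, follower.update_pose)

    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        cmd_pub.publish(follower.compute_twist())  # Twist using linear.x / angular.z only
        rate.sleep()

if __name__ == "__main__":
    main()
```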
research/src/pathmodule.py
Takes a current location and a set of waypoints and converts them into a Twist command (using the linear x and angular z components only), correcting for both heading error (deviation from the desired angle) and cross-track error (distance from the path). A few line-following algorithms are included here, including a basic go-straight/stop-and-turn approach, as well as the one we actually used (detailed on the Design page).
There are also a few safety features built in, including that the TurtleBot stops moving after it has traveled 3 meters (the length of the room we were testing in) and that it stops if it is picked up (detected as a change in z position).
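The core of the error-to-Twist conversion could look roughly like the following (a sketch of the idea with illustrative gains, not the exact controller; see the Design page for the one we used):

```python
import numpy as np
from geometry_msgs.msg import Twist

def line_follow_twist(pose_xy, yaw, wp_a, wp_b, k_heading=1.5, k_cross=2.0,
                      forward_speed=0.15, max_turn=1.0):
    """Combine heading error and cross-track error into a Twist command."""
    wp_a, wp_b = np.asarray(wp_a, float), np.asarray(wp_b, float)
    seg = wp_b - wp_a
    seg_yaw = np.arctan2(seg[1], seg[0])

    # Heading error: deviation of the robot's yaw from the segment direction.
    heading_err = np.arctan2(np.sin(seg_yaw - yaw), np.cos(seg_yaw - yaw))

    # Cross-track error: signed distance of the robot from the path segment.
    u = seg / (np.linalg.norm(seg) + 1e-9)
    rel = np.asarray(pose_xy, float) - wp_a
    cross_err = u[0] * rel[1] - u[1] * rel[0]  # positive when left of the path

    cmd = Twist()  # only linear.x and angular.z are used
    cmd.linear.x = forward_speed
    cmd.angular.z = float(np.clip(k_heading * heading_err - k_cross * cross_err,
                                  -max_turn, max_turn))
    return cmd
```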
Data collection and sensor fusion
robot_localization
The Unscented Kalman Filter package that we used in this project was developed by Charles River Analytics, Inc. (cra-ros-pkg/robot_localization) and includes the following files:
robot_localization/launch/ukf.launch
Launches the ukf_localization_node and creates a node called ukf_se. It loads the configuration file (also in the robot_localization package) that specifies all the required parameters and uses it to set everything up.
robot_localization/params/ukf.yaml
Specifies all the UKF parameters; in our case, it was modified to take in the odometry, IMU, and indoor GPS beacon data. It also specifies additional parameters such as the covariance matrices used to tune the filtering, and a differential mode for when we don't know a sensor's covariance information.
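For context, a trimmed-down example of what such a configuration can look like (the parameter names follow robot_localization conventions, but the topic names and boolean choices below are illustrative, not our tuned settings):

```yaml
frequency: 30
two_d_mode: true

# Each sensor config is 15 booleans:
# [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]
odom0: /odom
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  false,
               false, false, true,
               false, false, false]
odom0_differential: false

imu0: /imu
imu0_config: [false, false, false,
              false, false, true,
              false, false, false,
              false, false, true,
              true,  false, false]

pose0: /beacon_pose          # indoor GPS beacon position (illustrative topic name)
pose0_config: [true,  true,  false,
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]
pose0_differential: true     # differential mode when the sensor covariance is unknown
```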
robot_localization/src/ukf.cpp
Creates the UKF node and reads in the parameters
marvelmind_nav
The GPS system's default package, which publishes the position data from the beacons. We adapted it to publish using the message type the UKF node expects and to calculate the TurtleBot's position from the sensor's position, since the two frames were slightly offset. Link to the official repository: MarvelmindRobotics/marvelmind_nav
marvelmind_nav/src/hedge_rcv_bin.cpp
Constructs a subscriber that takes in position and orientation data from odometry and performs a transform between the GPS beacon coordinate frame and the TurtleBot's body frame, so that the position data from the GPS actually reflects the position of the TurtleBot itself. It also collects data from the GPS and puts it into a message type supported by the UKF node that performs the sensor fusion.
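The frame correction itself is a simple 2D lever-arm transform; in Python for brevity (the real node is C++, and the offset values here are placeholders, not our measured offset):

```python
import math

# Offset of the Marvelmind hedgehog from the TurtleBot's base frame, expressed in
# the body frame (placeholder values; the real offset was measured on the robot).
SENSOR_OFFSET_X = 0.05
SENSOR_OFFSET_Y = 0.00

def hedge_to_base(hedge_x, hedge_y, yaw):
    """Convert the beacon-reported sensor position into the robot base position."""
    base_x = hedge_x - (SENSOR_OFFSET_X * math.cos(yaw) - SENSOR_OFFSET_Y * math.sin(yaw))
    base_y = hedge_y - (SENSOR_OFFSET_X * math.sin(yaw) + SENSOR_OFFSET_Y * math.cos(yaw))
    return base_x, base_y
```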
Complete System Function
- Use LIDAR to scan for and identify obstacles
- Determine its location with the help of odometry and indoor GPS data
- Plan and then follow a path to the final location, avoiding obstacles