Development of an autonomous mobile base for navigation in an unknown, dusty environment
For my MSc thesis in Autonomous Systems, I developed from scratch all the modules needed for an autonomous mobile base to navigate safely in an unknown, dusty environment, as part of the prototype version of the startup Conbotics.
Conbotics aims to change how repetitive, tedious tasks are carried out in the construction industry.
The development took place in ROS1 with Gazebo simulation and was divided into three parts:
Part 1: Development of an autonomous mobile robot in simulation (Gazebo, ROS)
Part 2: Development of the perception module of the autonomous robot (ROS, YOLOv4)
Part 3: Development of the real autonomous robot (ROS, Innok Robotics mobile base)
PART 1: Development of an autonomous mobile robot in simulation (Gazebo, ROS)
The first part was implemented in a simulation environment (Gazebo), in order to conduct experiments and optimize the system before transitioning to the real robot. The modules that were developed for this purpose are:
Data acquisition: Filtering (using the laser_filters package) and merging (using the ira_laser_tools package) the readings from two lidars (RPLidar A2); a minimal filtering sketch appears after this list
SLAM: Tuned and optimized to work with either the hector_slam or the gmapping package
Odometry: Measurements from both the robot's wheel-encoder odometry and rf2o (range-flow-based 2D laser) odometry computed from the lidar data
Sensor fusion: Tuned to work with either the robot_pose_ekf or the robot_localization package
Feature extraction: The system supports line and corner extraction with both the RANSAC and the split-and-merge (laser_line_extraction) algorithms; a RANSAC sketch appears after this list
Navigation: Used the move_base package, tuned with the Dijkstra global planner (NavfnROS) and with neo_local_planner (a pure-pursuit controller; see the sketch after this list) as the local planner (teb_local_planner is also supported and optimized). The initial development targeted a holonomic robot, for which neo_local_planner achieves better accuracy than TEB; the local planner was then adjusted for the differential-drive robot.
Path planning: Implemented a PID controller for wall following (a minimal PID sketch appears after this list)
Decision making: Used finite state machines (SMACH package) to implement multi-step tasks (e.g., orient to the wall, then follow the wall); a minimal SMACH sketch closes this list.
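To make the data-acquisition step concrete, below is a minimal sketch of a dust-rejecting scan filter written as a standalone rospy node. It is an illustration only: the topic names and the intensity threshold are assumptions, and the actual system relied on the laser_filters and ira_laser_tools packages rather than custom code.

```python
#!/usr/bin/env python
# Minimal sketch: drop low-intensity returns (e.g. dust) from a LaserScan.
# Topic names and INTENSITY_MIN are illustrative assumptions; the actual
# project used the laser_filters / ira_laser_tools packages.
import rospy
from sensor_msgs.msg import LaserScan

INTENSITY_MIN = 10.0  # assumed threshold below which a return is treated as dust

def callback(scan):
    filtered = LaserScan()
    filtered.header = scan.header
    filtered.angle_min = scan.angle_min
    filtered.angle_max = scan.angle_max
    filtered.angle_increment = scan.angle_increment
    filtered.time_increment = scan.time_increment
    filtered.scan_time = scan.scan_time
    filtered.range_min = scan.range_min
    filtered.range_max = scan.range_max
    # Replace suspect readings with inf so downstream nodes ignore them.
    filtered.ranges = [
        r if inten >= INTENSITY_MIN else float('inf')
        for r, inten in zip(scan.ranges, scan.intensities)
    ]
    filtered.intensities = scan.intensities
    pub.publish(filtered)

if __name__ == '__main__':
    rospy.init_node('scan_intensity_filter')
    pub = rospy.Publisher('scan_filtered', LaserScan, queue_size=1)
    rospy.Subscriber('scan', LaserScan, callback, queue_size=1)
    rospy.spin()
```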
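For the feature-extraction item, the following sketch shows the core RANSAC loop for fitting a single line to 2D scan points. It is a simplified, NumPy-only illustration with assumed iteration count and inlier threshold, not the project's actual extractor.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_thresh=0.05):
    """Fit one line to Nx2 scan points with RANSAC.

    Returns (a, b, c) for the line ax + by + c = 0 (unit normal)
    and the boolean inlier mask. Parameter values are illustrative.
    """
    best_inliers = None
    best_model = None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        # Sample two distinct points and build the line through them.
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue
        # Unit normal of the line through p and q.
        a, b = -d[1] / norm, d[0] / norm
        c = -(a * p[0] + b * p[1])
        # Point-to-line distances; inliers are points near the line.
        dist = np.abs(points @ np.array([a, b]) + c)
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b, c)
    return best_model, best_inliers
```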
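The pure-pursuit method used by neo_local_planner boils down to picking a lookahead point on the path and commanding the curvature of the circular arc through it. The sketch below is the generic textbook version in the robot frame, not the planner's actual code; the lookahead distance and speed are assumed tuning values.

```python
import math

def pure_pursuit_cmd(path_xy, lookahead=0.5, v_lin=0.3):
    """Generic pure pursuit in the robot frame (robot at origin, facing +x).

    path_xy: list of (x, y) waypoints already transformed into the robot frame.
    Returns (v, w): linear and angular velocity commands.
    Lookahead and speed values are illustrative assumptions.
    """
    # Take the first waypoint at least `lookahead` away from the robot,
    # falling back to the final waypoint near the end of the path.
    goal = path_xy[-1]
    for x, y in path_xy:
        if math.hypot(x, y) >= lookahead:
            goal = (x, y)
            break
    gx, gy = goal
    L2 = gx * gx + gy * gy
    if L2 < 1e-9:
        return 0.0, 0.0
    # Curvature of the arc through the origin and the goal point:
    # kappa = 2 * y / L^2 (standard pure-pursuit geometry).
    kappa = 2.0 * gy / L2
    return v_lin, v_lin * kappa
```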
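The wall-following behaviour can be illustrated with a PID controller on the lateral distance error taken from a side-facing lidar sector. The node below is a minimal sketch: the gains, the setpoint, and the sector used to estimate the wall distance are all assumptions, since the real controller was tuned on the robot.

```python
#!/usr/bin/env python
# Minimal PID wall-follower sketch. Gains, setpoint, and the way the wall
# distance is extracted from the scan are illustrative assumptions.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

KP, KI, KD = 1.2, 0.0, 0.3   # assumed gains
TARGET_DIST = 0.6            # desired distance to the wall [m]

class WallFollower(object):
    def __init__(self):
        self.prev_err = 0.0
        self.integral = 0.0
        self.prev_time = None
        self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('scan', LaserScan, self.on_scan, queue_size=1)

    def on_scan(self, scan):
        # Crude wall distance: minimum range in the right-hand sector
        # (here simply the first quarter of the scan, an assumption).
        sector = scan.ranges[:len(scan.ranges) // 4]
        dist = min(sector)
        err = TARGET_DIST - dist

        now = scan.header.stamp.to_sec()
        dt = (now - self.prev_time) if self.prev_time else 0.0
        self.prev_time = now
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt if dt > 0 else 0.0
        self.prev_err = err

        # Positive error (too close to the right wall) steers left, away from it.
        cmd = Twist()
        cmd.linear.x = 0.2
        cmd.angular.z = KP * err + KI * self.integral + KD * deriv
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('pid_wall_follower')
    WallFollower()
    rospy.spin()
```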
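Finally, the decision-making layer can be sketched with SMACH, whose states are plain Python classes. The two states below mirror the "orient to the wall, then follow the wall" sequence; their bodies are stubs for illustration, standing in for calls into the controllers above.

```python
#!/usr/bin/env python
# SMACH sketch for the "orient to the wall -> follow the wall" task.
# State bodies are stubs; the real states drive the robot's controllers.
import rospy
import smach

class OrientToWall(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['oriented', 'failed'])

    def execute(self, userdata):
        rospy.loginfo('Rotating until the wall is on the side...')
        # ... rotate in place until the scan shows the wall laterally ...
        return 'oriented'

class FollowWall(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['done'])

    def execute(self, userdata):
        rospy.loginfo('Following the wall...')
        # ... run the PID wall follower until the segment is covered ...
        return 'done'

if __name__ == '__main__':
    rospy.init_node('wall_task_sm')
    sm = smach.StateMachine(outcomes=['succeeded', 'aborted'])
    with sm:
        smach.StateMachine.add('ORIENT', OrientToWall(),
                               transitions={'oriented': 'FOLLOW',
                                            'failed': 'aborted'})
        smach.StateMachine.add('FOLLOW', FollowWall(),
                               transitions={'done': 'succeeded'})
    sm.execute()
```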
PART 2: Development of the perception module of the autonomous robot (ROS, YOLOv4)
After verifying a fully functional autonomous mobile base in simulation, my next task was to develop from scratch the perception module that lets the robot navigate safely in an unknown, dusty environment. The development took place in ROS1.
To achieve this:
I used an Intel RealSense depth camera (realsense2_camera ROS driver)
I trained a YOLOv4 object detection neural network on door, window, and human classes, with training data taken from Google's Open Images Dataset. Training ran on an Nvidia GPU with CUDA 11 on Google Colab.
I integrated it into ROS using the darknet_ros package and tested it on real scenarios using rosbags recorded from a real autonomous mobile base; a minimal subscriber sketch appears after this list
I integrated a 3D semantic object-mapping module for accurate clustering of the detected objects; this package uses rtabmap for SLAM.
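To give a feel for the darknet_ros integration, here is a minimal subscriber to its detection topic. The topic name and message layout come from the darknet_ros_msgs package; the class names and confidence cutoff are assumptions standing in for the trained door/window/human classes.

```python
#!/usr/bin/env python
# Minimal consumer of darknet_ros detections. The class names we react to
# and the confidence cutoff are illustrative assumptions.
import rospy
from darknet_ros_msgs.msg import BoundingBoxes

CLASSES_OF_INTEREST = {'door', 'window', 'person'}
MIN_PROB = 0.5  # assumed confidence cutoff

def on_detections(msg):
    for box in msg.bounding_boxes:
        if box.Class in CLASSES_OF_INTEREST and box.probability >= MIN_PROB:
            rospy.loginfo('%s (%.2f) at [%d, %d, %d, %d]',
                          box.Class, box.probability,
                          box.xmin, box.ymin, box.xmax, box.ymax)

if __name__ == '__main__':
    rospy.init_node('detection_listener')
    rospy.Subscriber('/darknet_ros/bounding_boxes', BoundingBoxes,
                     on_detections, queue_size=1)
    rospy.spin()
```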
PART 3: Development of the real autonomous robot (ROS)
The final task was to port the work from the simulated autonomous mobile base to a real robot able to navigate safely in an unknown, dusty environment. The mobile base was built by Innok Robotics, and the software development took place in ROS1.