How to build autonomous mobile robots

We explain how to design and build autonomous mobile robots and what elements to choose.

Intro

Robotics is a very broad field – both wide and deep. For example:

– Tesla in self-driving mode – robot

– DJI/PixHawk/Marvelmind autonomous drone – robot

– Roomba – robot

– Marvelmind v100 – robot

– Honda Asimo – robot

– Sony Aibo – robot

– An autonomously driving advertising unit – robot

– Even Lego – robot

 

Soon, there will be hundreds of different types of robots around. They may differ so much in appearance and use such different combinations of technologies that it would be rather difficult to cover them all. We certainly don’t aim at that here.

 

We have been asked many times for our opinion on different subjects in robotics – first of all, precise indoor positioning for industrial applications, of course, since we have been active in this area for many years – but not only that. Thus, we collected some of the most typical questions and answered them on this page.

 

We cover just the few areas in which we as Marvelmind Robotics operate: industrial autonomous delivery and inspection robotics, warehouse delivery, and research or university robotics. The purpose of this note is to give an idea of where to start if you are building your own robot or choosing between the available options.

Using Marvelmind Indoor "GPS" for robots, vehicles and AGVs

Marvelmind indoor positioning system (MIPS or Marvelmind IPS), also known as Marvelmind Indoor “GPS” or Marvelmind RTLS, is widely used for different types of autonomous robots, autonomous vehicles, AGVs, and forklifts for various purposes:

– Autonomous navigation and positioning for robots indoors and outdoors

– Tracking AGVs, vehicles or forklifts

– Providing geo-fencing for robots, forklifts and people

– General robotics research and development

– Robotics education and competitions

– Swarm robotics
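As a tiny illustration of the geo-fencing use case above: given an (x, y) position stream from any positioning system, a fence check can be a plain point-in-polygon test. This is a generic sketch – the function name and fence coordinates are made up for illustration and are not part of any Marvelmind API.

```python
# Hypothetical geo-fencing check: ray-casting point-in-polygon test on
# (x, y) coordinates such as those streamed by an indoor positioning system.
# The fence polygon and coordinates here are illustrative.

def inside_fence(x, y, fence):
    """Return True if point (x, y) lies inside the polygon `fence`
    (list of (x, y) vertices), using the ray-casting algorithm."""
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A 10 m x 6 m rectangular safe zone
fence = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
print(inside_fence(3.0, 2.0, fence))   # robot inside the zone -> True
print(inside_fence(12.0, 2.0, fence))  # forklift outside -> False, raise alarm
```

The same check works for alarming when people or forklifts leave (or enter) a zone, since the positioning system tracks them with the same kind of coordinate stream.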

Which Marvelmind starter set to choose?

If you don’t have time to study the details, but need to choose fast and safely, choose the Starter Set Super-MP:

 

– MP stands for Multi-Purpose. The set indeed supports multiple architectures and multiple configurations, thus giving the highest flexibility:

– 3D with N+1 redundancy

– 2D with up to 2 submaps

– 2D with up to 3 mobile beacons (robots)

– 2D with Location+Direction

– 1D with up to 4 mobile beacons

– It supports different architectures: NIA, IA, and MF NIA

– It has a 900-1000 mAh LiPo battery inside – very easy and fast to deploy without an external power supply

– It has an external antenna – more robust radio connectivity with the modem

– It can receive and transmit ultrasound. Thus, it can work as a stationary or as a mobile beacon

– It has DSP (digital signal processing) inside. Thus, it can receive several ultrasound channels at once and work in IA

– Each beacon has an IMU (3D gyro + 3D accelerometer)

 

Remember that one mobile beacon per robot gives only location. For location and direction, you need Paired Beacons – two mobile beacons per robot. See the next variant.

 

For Location+Direction, you need more than one mobile beacon per robot. Thus, the easiest option is to add an additional Super-Beacon to the Starter Set Super-MP.

 

Robotics. Basics

Terminology

Robot = autonomous mobile robot

We call robots, first of all, autonomous mobile robots. Anything that is guided by humans is not a robot. Anything that is not mobile but has all robotic elements is still a robot, but we do not focus on assembly robots. Our robots are:

a) Autonomous

b) Mobile

Thus, when we refer to robots, we mean autonomous mobile robots even if we don’t use the long wording.

From this perspective, an autonomously flying copter is a perfect 3D moving robot. More about drones on our Drones page.

Examples of precise indoor positioning and navigation for robots

Autonomous Delivery Robot - car assembly plant demo

– Marvelmind Autonomous Delivery Robot v100

 

– IA with 15 stationary beacons and 1 modem for Indoor “GPS” coverage. See more: https://youtu.be/TWWg_8JHYzo

 

On top of the robot shown in the video, the same Indoor “GPS” map supports:

– Multiple mobile robots and forklifts tracking as well as people tracking. Altogether – up to 250 beacons/objects – stationary+mobile combined.

Robot v100 example with detailed explanations

This is the same demo as https://youtu.be/TWWg_8JHYzo, but with additional verbal comments explaining what is what on the video and in the system in general.

 

Configuration:

– Marvelmind Autonomous Delivery Robot: https://youtu.be/efOc-ItVvgg

– IA with 15 stationary beacons and 1 modem for Indoor “GPS” coverage.

 

Robot’s specs:

– Fully autonomous delivery between any points covered by Marvelmind Indoor “GPS”

– Up to 100kg payload

– Driving time more than 16h on a single charge: https://youtu.be/JaxRd_9D1fQ with 60+kg payload

– Automatic obstacle detection and avoidance

– The delivery route can be reconfigured with one button click in one second

– Charging time is less than 4h. So, 2-shift work (16h) with 1 shift (8h) of charging is supported

– Re-configurable capacity: from 1 large box of up to 65x65x160cm to up to 8 boxes of 65x65x15cm – one shelf vs. multiple shelves

The same Indoor “GPS” map supports:

– Multiple mobile robots and forklifts tracking and people tracking. Altogether – up to 250 beacons/objects – stationary+mobile combined.

Robot driving fully autonomously using Marvelmind Indoor "GPS"

A fully autonomous robot drives on its own relying on:

(1) Marvelmind Indoor “GPS”

(2) On-board odometry and inertial units

 

The robot receives the coordinates of the key points to visit from the user (table on the right) and then creates and follows the path by constantly correcting its position against it. The coordinates are formed automatically in the Dashboard by simply clicking on the map.
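The idea of following a path by constantly correcting position against it can be sketched as a simple proportional controller: steer toward the current waypoint in proportion to the heading error. Everything below – names, gains, and the simulated motion – is an illustrative assumption, not Marvelmind code.

```python
import math

# Illustrative waypoint follower: the robot steers proportionally to the
# heading error toward each key point. Position updates would come from the
# positioning system; here the motion is simulated. All names are hypothetical.

def follow_waypoints(pose, waypoints, speed=0.5, dt=0.1, k_turn=2.0, tol=0.10):
    """pose = (x, y, heading_rad); simulate driving through the waypoints
    and return the trace of (x, y) positions visited."""
    x, y, th = pose
    trace = [(x, y)]
    for wx, wy in waypoints:
        for _ in range(5000):  # safety cap on simulation steps
            if math.hypot(wx - x, wy - y) <= tol:
                break  # waypoint reached within tolerance
            # Heading error to the current waypoint, wrapped to [-pi, pi]
            err = math.atan2(wy - y, wx - x) - th
            err = math.atan2(math.sin(err), math.cos(err))
            th += max(-2.0, min(2.0, k_turn * err)) * dt  # bounded turn rate
            x += speed * math.cos(th) * dt
            y += speed * math.sin(th) * dt
            trace.append((x, y))
    return trace

trace = follow_waypoints((0.0, 0.0, 0.0), [(2.0, 0.0), (2.0, 1.0)])
fx, fy = trace[-1]
print(math.hypot(2.0 - fx, 1.0 - fy) < 0.11)  # True: ended near last waypoint
```

A real controller would also fuse odometry and IMU data between position fixes, but the correct-toward-the-path loop is the core of it.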

 

The distance between beacons is up to 36 meters. It is possible to cover a full campus with precise “GPS” by installing more beacons every 20-30 meters.

 

Fully autonomous small delivery robot moving in office environment

Demo: small delivery robot is moving fully autonomously in an office/factory environment using the Marvelmind Indoor Navigation System:
– Mobile beacon is installed on the robot
– Stationary beacons are installed on the walls
– Blue dots – location of the robot (mobile beacon) measured by the Marvelmind Indoor Navigation System
– Yellow dots – location of the robot obtained from its own inertial/odometry system
– Big green dots – stationary beacons installed on the walls
 
Note that the robot and the Marvelmind Indoor Navigation System handle the shadows of the ultrasonic signal from the beacons under the tables and chairs pretty well. This lets the robot perform its tasks in a real-life environment.
 
The distances between stationary beacons can be up to 30 meters. The general requirement of the Marvelmind Indoor Navigation System is visibility of the mobile beacon to three stationary beacons at any given time. But, as the demo shows, using other sources of information (IMU/odometry), the robot can handle 1-10 seconds without the full ultrasonic coverage needed by the Marvelmind Indoor Navigation System – relying purely on its own IMU/odometry.
 
However, IMU/odometry has inherent drift. This drift is measured and corrected with the help of the Marvelmind Indoor Navigation System when full and reliable data is available.
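The drift-correction loop described above can be sketched as a minimal predict/correct filter: integrate odometry between fixes, then pull the estimate toward the beacon fix when one arrives. A real system would typically use a Kalman filter; this sketch, with made-up names and gains, only shows the principle.

```python
# Sketch of the drift-correction idea: dead-reckon from odometry every step,
# and when an (x, y) fix from the positioning system is available, blend the
# estimate toward it. Gains and names are illustrative assumptions.

class FusedPosition:
    def __init__(self, x=0.0, y=0.0, blend=0.3):
        self.x, self.y = x, y
        self.blend = blend  # how strongly a beacon fix pulls the estimate

    def predict(self, dx, dy):
        """Dead reckoning: apply the odometry displacement (drifts over time)."""
        self.x += dx
        self.y += dy

    def correct(self, fix_x, fix_y):
        """Beacon fix available: pull the estimate toward the measurement."""
        self.x += self.blend * (fix_x - self.x)
        self.y += self.blend * (fix_y - self.y)

est = FusedPosition()
for _ in range(20):                 # 20 odometry steps of 0.1 m with a
    est.predict(0.1 + 0.005, 0.0)   # +5 mm/step systematic drift
print(round(est.x, 2))              # 2.1 -> 10 cm of accumulated drift
for _ in range(10):                 # ten beacon fixes at the true x = 2.0
    est.correct(2.0, 0.0)
print(round(est.x, 2))              # 2.0 -> pulled back to the true position
```

The `blend` gain trades responsiveness for noise rejection: a noisy beacon link calls for a smaller gain, a drifty odometer for a larger one.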

 

Fully autonomous robot driving demo: "8-loop" track

Marvelmind Indoor Navigation System + autonomous robot Marvelmind Hermes demo:

– “8-loop” (7x2m) track

– Fully autonomous driving

– The Marvelmind Indoor Navigation System Starter Set deployed in an 80m2 room

 

A mobile beacon is attached to the top of the robot. The robot receives its coordinates with ±2cm precision from the Marvelmind IPS and uses them to drive through the track fully autonomously.

 

There are intentional mild shadows for the Indoor Navigation System (a column, padded stools) imitating a real-life environment. While in the shadows, the robot relies on its inertial navigation system and odometry.

 

 

 

Robotics solutions

Localization

One of the biggest problems for any autonomous robot is answering the question: “Where am I?”. The question immediately explodes into a subset of questions:

– Where am I against my expected location at the moment?

– Where am I against my next waypoint?

– Where am I against other objects: robots, people, obstacles, charging stations, etc.?

But everything starts with localization against some reference – for example, the (0,0,0) coordinates, whatever they may be, the starting point, or similar. Many other questions are derivatives of that master question.

 

 

Localization against what?

There are several major options:

– Against myself – center of the robot, for example

– Against an external reference point

 

Against myself is easier in many cases, but it is about obstacle detection and avoidance rather than moving and navigating in space. Let’s discuss positioning and navigation against external references in more detail.

 

 

Localization inside-out, inside-in or mix?

If the robot carries everything required for localization onboard, it is inside-out localization. Humans and animals use inside-out localization – not only that, as we will discuss – but they do not normally need direct and constant information about where they are in the form of a constant stream of coordinates. They determine it “inside” based on different clues.

 

Some people call the process visual odometry. Of course, it works much better along with regular wheel- (or feet-) based odometry. This is why it is, to some extent, easier to make a robot with navigation than a universal navigation system for any kind of robot. Developers of such “universal positioning systems” struggle because the data coming from an odometer could be in virtually any format – analog with different values or digitally coded in an unknown format. It could have absolutely different resolutions, and many other parameters could differ as well.

 

Thus, most systems inside a robot are inherently linked. This shall be clearly understood and taken into account by anyone designing a robot, right from the beginning.

 

It could theoretically be possible to build some sort of converter – distantly resembling 50-Ohm high-frequency systems. Receivers, antennas, amplifiers, and transmitters may have drastically different impedances, but people agreed on a common 50-Ohm impedance. Thus, all units convert their impedance to 50 Ohms and from 50 Ohms back to their internal impedance. Yes, it brings losses, complexity, and cost, but it turned out to be the most workable solution in the radio-frequency field. Something similar could be implemented here as well.

It could be done for robots… but it is not done. Yes, there are “universal interfaces” – like USB… hm… but why are there so many different types of USB formats? And why are there so many other types of interfaces? Well, because of cost implications and other major constraints such as complexity, power consumption, and size – many indeed.

 

As a result, a universal interface remains distant and elusive. And the reality is rather messy and unique for each particular type of robot.

 

 

 

Choosing a reference point

In the case of GPS, the coordinates are available in regular latitude and longitude of the Earth. In some cases, this may be useful – when, for example, the robot moves in and out of the building. But for the majority of real indoor cases, we are really interested in local coordinates only.

Thus, we choose them at our convenience. In Marvelmind Indoor “GPS”, the system usually assigns one of the stationary beacons as (0,0,0) or (0,0). But you can set any point on the map as (0,0):

Moreover, it is possible to do geo-referencing and assign external GPS coordinates to the internal (0,0) point. After that, the stream of coordinates from the Marvelmind Indoor “GPS” will be in real GPS coordinates in NMEA 0183 format. Or in the internal format: https://marvelmind.com/pics/marvelmind_interfaces.pdf
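As a sketch of what geo-referencing involves: map local meters around the geo-referenced origin to latitude/longitude and format a NMEA 0183 GGA sentence. The flat-Earth conversion and all constants below are simplifying assumptions for illustration, not Marvelmind’s actual implementation.

```python
import math

# Illustrative geo-referencing: convert local (x, y) meters around a reference
# point assigned to the local (0,0) into latitude/longitude, then format a
# NMEA 0183 GGA sentence. The flat-Earth approximation and the fixed time,
# satellite count, etc. in the sentence are assumptions for this sketch.

EARTH_R = 6371000.0  # mean Earth radius, meters

def local_to_latlon(x, y, ref_lat, ref_lon):
    """x = meters east, y = meters north of the geo-referenced (0,0)."""
    lat = ref_lat + math.degrees(y / EARTH_R)
    lon = ref_lon + math.degrees(x / (EARTH_R * math.cos(math.radians(ref_lat))))
    return lat, lon

def nmea_gga(lat, lon):
    def dm(deg):  # degrees -> NMEA "degrees * 100 + decimal minutes"
        d = int(abs(deg))
        return d * 100 + (abs(deg) - d) * 60
    body = "GPGGA,120000.00,%09.4f,%s,%010.4f,%s,1,08,1.0,10.0,M,0.0,M,," % (
        dm(lat), "N" if lat >= 0 else "S", dm(lon), "E" if lon >= 0 else "W")
    checksum = 0
    for ch in body:
        checksum ^= ord(ch)  # NMEA checksum: XOR of all bytes between $ and *
    return "$%s*%02X" % (body, checksum)

lat, lon = local_to_latlon(15.0, 30.0, 52.5200, 13.4050)  # 15 m E, 30 m N
print(nmea_gga(lat, lon))
```

Streaming such sentences lets any GPS-aware consumer (mapping software, fleet trackers) ingest indoor positions without knowing anything about the indoor system.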

Direction

Unlike outdoors, where a magnetometer/compass is available, calculating direction indoors – particularly in static – is not a trivial task. For example, using our system, your robot can easily have a precise location. But if your robot doesn’t know its current direction – where it is facing – then it is difficult for the robot to decide where to drive.

 

It is possible to calculate the robot’s direction rather quickly by measuring its current location, driving 1m or so straight ahead – keeping the direction straight using the IMU/gyro – then measuring the new location, and, knowing the two points and that the path was a line, not a curve, calculating the robot’s current direction. Later, while driving, the same technique can be employed all the time.
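The drive-and-measure technique above reduces to a two-line calculation: the heading is the bearing of the displacement vector between the two fixes. A minimal sketch (the names and the minimum-baseline threshold are illustrative):

```python
import math

# Heading from two location fixes, assuming the robot drove straight
# between them. The 0.2 m minimum baseline is an illustrative threshold:
# with a very short drive, fix noise dominates the direction estimate.

def heading_from_two_fixes(p_start, p_end):
    """Return heading in degrees (0 = +x axis, counterclockwise)."""
    dx = p_end[0] - p_start[0]
    dy = p_end[1] - p_start[1]
    if math.hypot(dx, dy) < 0.2:
        raise ValueError("drive farther before estimating heading")
    return math.degrees(math.atan2(dy, dx))

print(heading_from_two_fixes((1.0, 1.0), (2.0, 2.0)))  # 45.0
```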

 

These older robots use this approach:

The method is simple and requires only one mobile beacon (tag), but it works only if you can drive. Very often, you can’t drive and need to localize right on the spot – in static. What to do?

 

 

The paired beacons

Our recommended way to get direction in static is to use a Paired Beacons configuration.
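The paired-beacons idea can be sketched in a few lines: with two mobile beacons mounted a known base apart, the heading in static is the bearing from the rear beacon to the front one, and the known base gives a built-in sanity check. The function and thresholds below are illustrative assumptions, not Marvelmind code.

```python
import math

# Paired-beacons direction in static: two mobile beacons are mounted on the
# robot a known base apart (e.g. ~0.6 m, as on the Robot v100 example below),
# and heading is the bearing from the rear beacon to the front one. The
# baseline check rejects fixes that disagree with the mounting distance.

def paired_heading(rear, front, base=0.6, tolerance=0.1):
    dx, dy = front[0] - rear[0], front[1] - rear[1]
    measured = math.hypot(dx, dy)
    if abs(measured - base) > tolerance:
        return None  # bad fix: measured base doesn't match the mounting
    return math.degrees(math.atan2(dy, dx))

print(paired_heading((1.00, 1.00), (1.00, 1.60)))  # facing +y: 90.0
print(paired_heading((1.00, 1.00), (1.00, 2.50)))  # implausible fix: None
```

A longer base between the beacons improves angular accuracy: with a few centimeters of position noise, a 60 cm base gives a much tighter heading estimate than a 20 cm one.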

 

Here is more about it. In NIA:

 

Another example: a self-driving autonomous Robot v100 with a base of ~60cm between the mobile beacons. In IA:

 

A similar configuration with external microphones on a single mobile beacon. Though it was done for VR, it could easily be a robot. The base between the microphones is ~20cm. In IA:

 

Obviously, there are many alternatives to each solution. For example, motion capture with external cameras. Is it a precise and good solution for both location and direction? Sure! Is it practical for industrial robotics? Not really:

– Costly. Very costly (in 2021)

– Is not really tuned to a harsh environment of factories or warehouses

– Prone to multiple limitations: too little light, too strong light, fog, temperature changes, power supply, etc

Thus, we are not touching on all possible options here – only what is relatively relevant and implementable.

Swarm robotics

Making a single robot drive autonomously is not a very easy task. But making a swarm of robots is even more challenging.

 

What are the challenges?

 

– When there are too many moving objects around – other robots – it is more difficult for each robot to make decisions, because the environment moves in an uncontrollable and unpredictable manner.

 

– If the robots have to stream out data or receive separate streams from a central computer, there may simply not be enough radio (or any other) bandwidth to serve them all.

 

– Since the robots are autonomous and independent, they may request access to the common communication channel randomly. If they do, and if the channel bandwidth is not 10-100 times higher than the required peak throughput, the chances of collision are high. Thus, either a special central controller or a mechanism to resolve the collisions is required. Both increase complexity and bring other limitations.

 

– Robots simply obstruct each other’s view of the objects around them. Robots mostly sense their neighbors, whereas external fixed references to position against are often not visible at all.
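The channel-access challenge above can be put in rough numbers. If N robots each transmit in one of S random time slots (a slotted-ALOHA-style model – an assumption for illustration, not Marvelmind’s actual protocol), the chance that a given robot avoids collision is (1 - 1/S)^(N-1):

```python
# Back-of-envelope for random channel access: N robots each pick one of S
# time slots at random (slotted-ALOHA style). The chance a given robot
# transmits without collision is (1 - 1/S)**(N - 1). Numbers are
# illustrative, not a model of any specific radio protocol.

def p_success(n_robots, n_slots):
    """Probability that no other robot picks the same slot as a given robot."""
    return (1.0 - 1.0 / n_slots) ** (n_robots - 1)

for slots in (10, 100, 1000):
    print(slots, round(p_success(10, slots), 3))
# With ~1x headroom (10 slots for 10 robots) collisions are common;
# with 10-100x more slots than transmitters they become rare.
```

This is why the 10-100x bandwidth headroom mentioned above matters: without it, a central scheduler or a collision-resolution mechanism becomes unavoidable.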

 

 

What are the solutions?

 

Marvelmind can help with the localization of robots in swarms, which is the starting and the most important point, because, if it is solved properly, many other difficulties of robot swarms simply don’t arise.

 

See swarm examples and solutions below.

 

Conclusion

The page will steadily grow in details and subjects based on your questions and our available time. Thus, please, send your questions to info@marvelmind.com and we will be happy to address them in detail here.