How to build autonomous mobile robots

Very quick hints for our customers

If you are building a robot or an AGV and have to very quickly decide what to choose as a positioning and navigation system, choose the following:

  1. Starter Set Super-MP-3D – the simplest and the most versatile set to start with
  2. Starter Set Super-MP-3D + Super-Beacon – if you want Location+Direction
  3. Starter Set Super-MP-3D + Super-Beacon + 2 x Omni-Microphones – if your area is large (20x20m or more) or your stationary beacons are 30 degrees from the horizon or lower. This set will let you build robots with advanced driving capabilities


We have been asked many times for our opinion on different subjects in robotics – first of all, regarding precise indoor positioning for industrial applications, of course, since we have been active in this area for many years – but not only. Thus, we have collected some of the most typical questions and answered them on this page.

Robotics is a vast field. Very wide and deep at the same time. For example:

  • Tesla in self-driving mode is a robot
  • DJI/PixHawk/Marvelmind autonomous drone is a robot
  • Roomba vacuum cleaner is a robot
  • Marvelmind v100 is a robot
  • Honda Asimo is a robot
  • Sony Aibo is a robot
  • An autonomously driving advertising robot is a robot
  • Even a Lego robot is a robot

There will be hundreds of robot types around soon. They may be so different in appearance and use so many different combinations of technologies that it would be rather difficult to cover them all. We certainly don’t aim to do that here.

We are covering just very few areas in which we, as Marvelmind Robotics, operate:

  • Industrial autonomous delivery
  • Inspection robotics
  • Warehouse delivery
  • Research or university robots or robotic platforms

And the purpose of this article is to give an idea of where to start if you build your robot or choose between the available options.


Using Marvelmind Indoor "GPS" for robots, vehicles, and AGVs

Marvelmind indoor positioning system (Marvelmind IPS), also known as Marvelmind Indoor “GPS” or Marvelmind RTLS, is widely used for different types of autonomous robots, autonomous vehicles, AGVs, and forklifts for various purposes:

  • Autonomous navigation and positioning for robots indoors and outdoors
  • Tracking AGVs, vehicles, or forklifts
  • Providing geo-fencing for robots, forklifts, and people
  • General robotics research and development
  • Robotics education and competitions
  • Swarm robotics

What Marvelmind starter set to choose?

If you don’t have time to study the details but need to choose quickly and safely, choose Starter Set Super-MP:

  • MP stands for Multi-Purpose. The set indeed supports multiple architectures and multiple configurations, thus giving the highest flexibility:

– 3D tracking with N+1 redundancy
– 2D tracking with up to 2 submaps
– 2D tracking with up to 3 mobile beacons (robots)
– 2D tracking with Location+Direction
– 1D tracking with up to 4 mobile beacons

  • Starter Set Super-MP supports different architectures: NIA, IA, and MF NIA
  • The beacons have a 900-1000 mAh LiPol battery inside, which allows you to deploy the system easily and quickly without an external power supply
  • Beacons have external antennas – more robust radio connectivity with the modem
  • Super-Beacons can both receive and transmit ultrasound. Thus, they can work as stationary or as mobile beacons
  • Super-Beacons have a DSP (digital signal processor) inside. Thus, they can receive several ultrasound channels at once and work in IA
  • Super-Beacons have an IMU (3D gyro + 3D accelerometer)

Remember that one mobile beacon per robot gives you only one location. For location and direction, you need the Paired Beacons configuration – two mobile beacons per robot. See the following variant.

For Location+Direction, you need more than one mobile beacon per robot. Thus, the easiest is to add to the Starter Set Super-MP an additional Super-Beacon.

Robotics. Basics


Robot = autonomous mobile robot

We call robots, first of all, autonomous mobile robots. Anything that is directly guided by humans is not a robot in this sense. Something that is not mobile but has all the robotic elements – an assembly robot, for example – is still a robot, but we do not focus on those. Our robots are:

  • Autonomous
  • Mobile

Thus, we mean autonomous mobile robots even if we don’t use lengthy wording when referring to robots.

From this perspective, an autonomously flying copter is a perfect 3D moving robot—more about drones on our Drones page. But a remotely controlled drone is not a robot. However, the same drone autonomously returning to the base using RTK GPS or visual guidance is a perfect 3D flying robot.

Examples of precise indoor positioning and navigation for robots

Autonomous Delivery Robot - car assembly plant demo

Marvelmind Autonomous Delivery Robot v100.

IA with 15 stationary beacons and one modem for Indoor “GPS” coverage.

The same Indoor “GPS” map, on top of the robot shown in the video, supports:

  • Tracking of multiple mobile robots and forklifts, as well as people – up to 250 beacons/objects altogether, stationary and mobile combined

Robot v100 example with detailed explanations

This is the same demo as above, but with additional verbal comments explaining what is shown in the video and how the system works in general.


  • Marvelmind Autonomous Delivery Robot:
  • IA with 15 stationary beacons and one modem for Indoor “GPS” coverage.

Robot’s specs:

  • Fully autonomous delivery between any points covered by Marvelmind Indoor “GPS”
  • Up to 100kg payload
  • Driving time of more than 16h on a single charge with a 60+kg payload
  • Automatic obstacle detection and avoidance
  • The delivery route can be reconfigured with one button click in 1 second
  • Charging time is less than 4h, so two-shift work (16h) with one shift (8h) of charging is supported
  • Re-configurable capacity: from 1 large box of up to 65x65x160cm to up to 8 boxes of 65x65x15cm – one shelf vs. multiple shelves
  • The same Indoor “GPS” map supports tracking of multiple mobile robots, forklifts, and people – up to 250 beacons/objects altogether, stationary and mobile combined

Robot Boxie

Demo: small autonomous delivery robots are moving fully autonomously in an office/factory environment using the Marvelmind Indoor Navigation System:
  • Mobile beacons in the Paired Beacons configuration for Location + Direction in IA mode, with two external Omni-Microphones, can be seen installed on the robot
  • Stationary beacons in 2D tracking are installed on the walls
Along with the core indoor positioning system, the robot uses odometry and an IMU for positioning, mainly to handle non-line-of-sight conditions and other interference.
The robot also has a visual closed-loop tracking system based on Intel RealSense, but that system wasn’t used in this demo.
Also, notice that multiple 1D LIDARs are onboard, but they are used for obstacle detection and avoidance – not for positioning.

Robot driving fully autonomously using Marvelmind Indoor "GPS"

A fully autonomous robot is driving on its own, relying on:

  • Marvelmind Indoor “GPS”
  • On-board odometry and inertial units (IMU)

The robot receives coordinates for the key points to visit from the user (table on the right) and then creates and follows the path by constantly correcting its position against the path. Coordinates are formed automatically in the Dashboard by simply clicking on the map.

Distances between beacons are up to 36 meters. It is possible to cover the entire campus with precise “GPS” by installing more beacons every 20-30 meters.
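The "constantly correcting its position against the path" step can be illustrated with a minimal proportional steering controller. The gain k and the function names are illustrative assumptions, not part of the Marvelmind API:

```python
import math

def steering_command(x, y, heading, wx, wy, k=1.5):
    """Turn-rate command proportional to the heading error toward
    the next waypoint (wx, wy). k is a hypothetical tuning gain."""
    desired = math.atan2(wy - y, wx - x)   # bearing to the waypoint
    error = desired - heading
    # wrap the error into [-pi, pi] so the robot takes the short way round
    error = math.atan2(math.sin(error), math.cos(error))
    return k * error

# Robot at the origin facing +x, waypoint at (1, 1): turn left (positive command)
cmd = steering_command(0.0, 0.0, 0.0, 1.0, 1.0)
```

In a real robot this loop would run at the position-update rate, with (x, y) supplied by the positioning system and the heading by the IMU or a paired-beacon setup.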

Fully autonomous small delivery robot moving in office environment

Demo: a small delivery robot is moving fully autonomously in an office/factory environment using the Marvelmind Indoor Navigation System:
  • A mobile beacon is installed on the robot
  • Stationary beacons are installed on the walls
  • Blue dots – location of the robot (mobile beacon) measured by the Marvelmind Indoor Navigation System
  • Yellow dots – location of the robot obtained from its own inertial/odometry system
  • Big green dots – stationary beacons installed on the walls
Note that the robot and the Marvelmind Indoor Navigation System handle the shadowing of the ultrasonic signal from the beacons under the tables and chairs. This lets the robot perform its tasks quite well in a real-life environment.
The distances between stationary beacons can be up to 30 meters. The general requirement of the Marvelmind Indoor Navigation System is visibility between the mobile beacon and three stationary beacons at any given time. But, as the demo shows, using other sources of information (IMU/odometry), the robot can handle 1-10 seconds without the full ultrasonic coverage needed by the Marvelmind Indoor Navigation System – relying purely on its IMU/odometry.
However, the IMU/odometry has inherent drift. And the drift is measured and corrected with the help of the Marvelmind Indoor Navigation System when robust and reliable data is available.
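The three-beacon visibility requirement exists because a 2D position is computed from ranges to at least three stationary beacons at known positions. Below is a minimal trilateration sketch – not Marvelmind's actual solver, which also handles redundancy and filtering:

```python
def trilaterate_2d(b1, d1, b2, d2, b3, d3):
    """Solve the robot's (x, y) from distances to three stationary beacons.

    Subtracting the circle equations pairwise linearizes the problem
    into a 2x2 linear system. b* are beacon (x, y) tuples; d* are ranges.
    """
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    # From (x-x1)^2+(y-y1)^2=d1^2 minus (x-x2)^2+(y-y2)^2=d2^2, etc.:
    A = 2 * (x2 - x1); B = 2 * (y2 - y1)
    C = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2); E = 2 * (y3 - y2)
    F = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = A * E - B * D
    if abs(det) < 1e-9:
        raise ValueError("beacons are collinear; position is ambiguous")
    return ((C * E - B * F) / det, (A * F - C * D) / det)
```

Note the collinearity check: it is the algebraic reason stationary beacons should not be installed in a straight line when 2D coverage is needed.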

Fully autonomous robot driving demo: "8-loop" track

Marvelmind Indoor Navigation System + autonomous robot Marvelmind Hermes demo:

  • “8-loop” (7x2m) track
  • Fully autonomous driving
  • The Marvelmind Indoor Navigation System Starter Set deployed in an 80m2 room

A mobile beacon is attached to the top of the robot. The robot receives its coordinates with ±2cm precision from the Marvelmind IPS and uses them to drive through the track autonomously.

There are intentional mild shadows for the Indoor Navigation System (column, padded stools), imitating real-life environments. While in the shadows, the robot relies on its inertial navigation system and odometer.
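The shadow-handling behavior described above – coasting on dead reckoning and pulling back toward beacon fixes – can be sketched as a simple complementary filter. The blending weight alpha is a hypothetical tuning parameter, not a Marvelmind setting:

```python
def fuse_position(dead_reckoned, beacon_fix, alpha=0.9):
    """Blend a smooth but drifting odometry/IMU estimate with a
    drift-free (but noisier) beacon fix.

    beacon_fix may be None while the mobile beacon is shadowed;
    in that case the robot coasts on dead reckoning alone.
    """
    if beacon_fix is None:
        return dead_reckoned
    x = alpha * dead_reckoned[0] + (1 - alpha) * beacon_fix[0]
    y = alpha * dead_reckoned[1] + (1 - alpha) * beacon_fix[1]
    return (x, y)
```

Each cycle with a valid fix nudges the estimate 10% of the way back toward the beacon position, so accumulated odometry drift decays exponentially once coverage returns.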

Domino robot with positioning and direction based on Marvelmind Indoor "GPS"

Marvelmind Indoor Navigation System has been used by an ingenious domino placing robot. The system was used for precise location and direction – see the mobile beacons placed on the base to achieve the best directional accuracy.

See the original video as well: World Record Domino Robot (100k dominoes in 24hrs)

Robotics solutions


One of the biggest problems for any autonomous robot is to answer the question: “where am I?”. The question immediately explodes into subsets of questions:

  • Where am I against my expected location at the moment?
  • Where am I against my next waypoint?
  • Where am I against other objects: robots, people, obstacles, charging stations, etc.?

But everything starts with localization against some reference – for example, the (0,0,0) coordinates, the starting point, or similar. Many other questions are derivatives of that master question.


Localization against what?

There are several primary options:

  • Against myself – the center of the robot, for example
  • Against an external reference point

Against myself is more straightforward in many cases, but it is about obstacle detection and avoidance rather than moving and navigating in space. Let’s discuss positioning and navigation against external references in detail.


Why not SLAM?

SLAM (Simultaneous localization and mapping) is a terrific method. Still, it doesn’t look like it is the most suitable method for real and practical industrial applications: warehouses, assembly plants, and intralogistics in general. It is more suitable for research projects and PhDs rather than for practical applications because:

  • It is simply more efficient to split the task into two stages: 1) Mapping, 2) Localization
  • Obstacle detection and avoidance is not about mapping and localization at all. It is a separate point 3) – a different task that must be solved differently. LIDARs are suitable for obstacle detection, but they are not particularly good for mapping, since in moving surroundings robots relying on LIDARs need many additional clues, or they make too many mistakes
  • The same limitation applies to visual SLAM systems – they get confused and have to rely on other methods to correct significant mistakes
  • In general, sensor fusion is the best approach and gives the best result

Thus, in short:

  • SLAM is excellent but unnecessarily complex, while its results are not guaranteed
  • Splitting the same operations in time (mapping separate from localization, and both different from obstacle detection and avoidance) returns more robust and predictable results while still having a low cost, particularly for multiple mobile objects. Whereas with SLAM, each intelligent agent must be very smart – i.e., expensive, complex, bulky, full of sensors, and power-hungry

Localization inside-out, inside-in or mix?

If the robot carries everything required for localization on board, it is inside-out localization. Humans and animals use inside-out localization. Note that they generally do not need direct and constant information about where they are in the form of a continuous stream of coordinates. They determine it “inside,” based on different clues.

Some people call this process visual odometry. Of course, it works much better combined with regular wheel (or feet) odometry. And this is why it is easier to make a robot with a navigation system than to make a navigation system for any robot. Developers of such “universal positioning systems” struggle because the data from an odometer can come in virtually any format – analog with different values, or digitally coded in an unknown format – with entirely different resolutions and many other varying parameters.

Thus, most systems inside robots are inherently linked. This must be understood and considered by anyone designing a robot, right from the beginning.

It would theoretically be possible to build converters of some sort – distantly resembling 50-Ohm high-frequency systems. Receivers, antennas, amplifiers, and transmitters may have drastically different internal impedances, but the industry agreed on a common 50-Ohm impedance. Thus, all units convert their impedance to 50 Ohms and back from 50 Ohms to their internal impedance. Yes, it brings losses, complexity, and cost, but it turned out to be the most workable solution in the radio frequency field. Something similar could be implemented here as well.

It could be done for robots … but it is not done. Yes, there are “universal interfaces” – like USB… hm… why are there so many different types of USB formats then?… 🙂 And why are there so many other types of interfaces? – well, cost implications and other significant constraints such as complexity, power consumption, and size.

As a result, a universal interface remains distant and elusive. And the reality is rather messy and unique for each type of robot.

Choosing a reference point

In the case of GPS, the coordinates are available in the regular latitude and longitude of the Earth. In some cases, that may be helpful – for example, when the robot moves from indoors to outdoors. But for the majority of real indoor cases, we are interested in local coordinates only.

Thus, we choose them at our convenience. In Marvelmind Indoor “GPS”, the system usually assigns one of the stationary beacons as (0,0,0) or (0,0). But you can set any point on the map as (0,0):

Moreover, geo-referencing is possible: external GPS coordinates can be assigned to the internal (0,0) point. After that, the stream of coordinates from the Marvelmind Indoor “GPS” will be in absolute GPS coordinates in NMEA0183 format, or in the internal format:
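Geo-referencing like this amounts to a conversion from local map meters to latitude/longitude around the reference point. Below is a minimal flat-earth sketch; the function name, the map-rotation parameter, and the Earth-radius constant are illustrative assumptions, not the Marvelmind API:

```python
import math

EARTH_R = 6378137.0  # WGS84 equatorial radius, meters

def local_to_gps(x, y, ref_lat, ref_lon, map_rotation_deg=0.0):
    """Convert local map meters (x, y) to latitude/longitude.

    ref_lat/ref_lon are the GPS coordinates assigned to the local (0, 0)
    point; map_rotation_deg aligns the map's y axis with true north.
    Flat-earth approximation: fine for maps a few hundred meters across.
    """
    rot = math.radians(map_rotation_deg)
    east = x * math.cos(rot) - y * math.sin(rot)
    north = x * math.sin(rot) + y * math.cos(rot)
    lat = ref_lat + math.degrees(north / EARTH_R)
    lon = ref_lon + math.degrees(east / (EARTH_R * math.cos(math.radians(ref_lat))))
    return lat, lon
```

With the rotation at 0, moving ~111.3 km north in local coordinates adds one degree of latitude, which is a quick sanity check for the conversion.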


Calculating direction indoors

Unlike outdoors, where a magnetometer/compass is available, calculating direction indoors – especially in static – is not a trivial task. For example, your robot can easily obtain a precise location using our system. But if the robot doesn’t know its current direction – where it is facing – it is hard to decide where to drive.

It is possible to calculate the robot’s direction rather quickly: measure its current location, drive 1m or so straight ahead – keeping the direction straight using the IMU/gyro – then measure the new location. Knowing the two points, and knowing that the path was a line – not a curve – you can calculate the robot’s current direction. Later, while driving, the same technique is employed all the time.
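In code, the two-point direction estimate described above reduces to an atan2 over the displacement between the fixes. The minimum-travel guard is a suggested safeguard against position noise, not part of any Marvelmind API:

```python
import math

def heading_from_motion(p_start, p_end, min_travel=0.5):
    """Estimate heading (radians, CCW from +x) after driving straight
    from p_start to p_end.

    min_travel guards against noise: with ~±2 cm position noise, a very
    short baseline gives a poor angle estimate.
    """
    dx = p_end[0] - p_start[0]
    dy = p_end[1] - p_start[1]
    if math.hypot(dx, dy) < min_travel:
        raise ValueError("drive farther before estimating heading")
    return math.atan2(dy, dx)
```

Driving from (0, 0) to (1, 1), for example, yields a heading of 45 degrees (pi/4).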

These older robots use this approach:

The method is simple and requires only one mobile beacon (tag), but it works only if you can drive. Often you can’t drive and need to localize right on the spot – in static. What to do then?

The paired beacons

Our recommended way to get direction in static is to use a Paired Beacons configuration.

Here is more about it. In NIA:


Another example is the self-driving autonomous Robot v100, with a ~60cm base between the mobile beacons. In IA:


A similar configuration uses external microphones on a single mobile beacon. Though it was done for VR, it could easily be a robot. The base between the microphones is ~20cm. In IA:
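With paired beacons, both location and heading fall out of two simultaneous position fixes – no driving required. A minimal sketch; the front/rear naming is illustrative, not Marvelmind terminology:

```python
import math

def pose_from_paired_beacons(front, rear):
    """Location (midpoint of the pair) and heading from two mobile
    beacons mounted front and rear on the robot, e.g. on a ~60 cm base.

    Works in static, unlike the drive-and-measure method: the two fixes
    are taken at the same moment, so no motion is needed.
    """
    x = (front[0] + rear[0]) / 2
    y = (front[1] + rear[1]) / 2
    heading = math.atan2(front[1] - rear[1], front[0] - rear[0])
    return (x, y), heading
```

A longer base between the beacons (or microphones) gives a better angular resolution for the same ±2cm position precision, which is why the ~60cm base above achieves better directional accuracy than a ~20cm one.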


There are many alternatives for each solution. For example, motion capture with external cameras. Is it a precise and suitable solution for both location and direction? Sure! Is it practical for industrial robotics? Not really:

  • Costly. Very costly (in 2021)
  • It is not tuned to a harsh environment of factories or warehouses
  • Prone to multiple limitations: too little light, too intense light, fog, temperature changes, power supply, etc

Thus, we are not touching all possible options here. Only what is relatively relevant and implementable.

Obstacle detection and avoidance

Obstacle detection and avoidance is a separate task from localization

As discussed above, the SLAM approach promises obstacle detection, mapping, and localization at the same time. It sounds like a dream, but the reality is harsher and less friendly.

In real-life conditions – poor lighting, many high-dynamic-range light sources (bright sun through windows next to the very dark shadows of a warehouse), and various other sources of light from headlights to all sorts of scanners – visual SLAM solutions can easily be confused to the point of a complete loss of localization. Additional methods for correcting major mistakes are required when SLAM systems can’t choose between different options correctly. Sensor fusion is the solution.

Additionally, even if you solve the task of localization optimally (technically and economically), it is difficult to simultaneously solve the task of obstacle detection optimally, simply because the tasks and requirements are different in nature:


But the SLAM approach unnecessarily complicates the task by loading obstacle detection on top of mapping and localization:

  1. Mapping
  2. Localization
  3. Obstacle detection

All three elements are crucially important for autonomous driving, but they are not required to be the same thing. They are not required to be done with the same methods and by the same sensors.

Integrated robots vs. Split robots approaches

One of the important points is to clearly distinguish between the robots and their payloads. It is very much the same as rockets and satellites. Two things are pretty separate and shouldn’t be mixed up. The same story with tractors and tractor mounted equipment.

Integrated robots approach


Very often, though, robots are fully integrated, i.e., the robot and its payload are tightly merged together. There are pros and cons to this integrated robotics approach.


Pros:

  • It can be easier to build because the robot is tuned for one task only. Very focused
  • Simpler to operate and integrate

Cons:

  • Inflexible
  • It can be more expensive in the long run due to inflexibility and the need for multiple different robots for different tasks

Split robots approach

Pros and cons of the split approach, where robots are “tractors” or “rockets” and payloads are provided based on the needs of each case:


Pros:

  • Flexible in usage. With a limited number of robot platforms and a limited number of types of mounted equipment, it is possible to build a virtually unlimited number of different configurations
  • Flexible in development, because the parts can be developed independently; only the interfaces (electrical, mechanical, SW) must remain compatible – and even they can evolve
  • The robotic platform can be simple, even primitive, and still very functional, because it is just a platform – a “tractor” or “rocket” – without sophisticated “satellites”
  • Less expensive per robot

Cons:

  • More complex integration. The robot consists of at least two parts: “tractor” and “equipment”
  • May be less robust, because the split approach has more variants, i.e., more testing is required, more parties are involved, etc.

Examples of robotic platforms and payload/equipment

Robotic platforms:

  • Autonomous delivery platform. It is really like a tractor, but it can carry different things – different payloads or different equipment
  • Drone itself

Payload or equipment:

  • Arms, for example, to take a box and put it on the robot
  • The basket on the robot
  • The camera on the robot or drone
  • All kinds of meters (chemical, radiation, noise, etc.)
  • Scanners (3D, bar/QR readers, etc.)
  • Fire-prevention equipment
  • Anti-COVID sprays or lamps and similar

Swarm robotics

Making a single robot drive autonomously is not an easy task. But making a swarm of robots work together is even more challenging.

What are the challenges?

  • When there are too many moving objects around – other robots – it is more difficult for each robot to make decisions, because the environment moves uncontrollably and unpredictably.
  • If the robots have to stream data out or receive separate streams from a central computer, there may not be enough radio bandwidth to serve them all.
  • Since the robots are autonomous and independent, they may randomly request access to shared communication channels. If they do, and the channel bandwidth is not 10-100 times higher than the required peak throughput, the chances of collision are high. Thus, a central controller or a mechanism to resolve the collisions is required. Both increase complexity and bring other limitations.
  • Robots obstruct each other’s view of the objects around them. Robots sense their neighbors, whereas something to position against – like external fixed references – is often not visible at all.
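The bandwidth point can be made concrete with a slotted-ALOHA-style estimate of collision probability on a shared channel. The model and numbers below are illustrative, not measurements of the Marvelmind radio:

```python
def collision_probability(n_robots, load_per_robot):
    """Chance that a given robot's random transmission collides with at
    least one other robot (slotted-ALOHA-style model).

    load_per_robot is the fraction of time slots each robot transmits in.
    Illustrates why shared-channel load must stay far below capacity.
    """
    p = load_per_robot
    return 1 - (1 - p) ** (n_robots - 1)

# 20 robots each using 5% of the slots: well over half of transmissions collide
heavy = collision_probability(20, 0.05)
# the same robots at 0.5% load (channel ~10x over-provisioned): collisions are rare
light = collision_probability(20, 0.005)
```

This is why the text above asks for 10-100 times more channel capacity than the peak throughput: collision probability falls off quickly only when each robot occupies a small fraction of the channel.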

What are the solutions?

Marvelmind can help with the localization of robots in swarms, which is the starting point and the most crucial one, because, if it is adequately solved, many other difficulties of robot swarms simply don’t arise.

See swarm examples and solutions below.


The page will steadily grow in detail and subjects based on your questions and our available time. Thus, please send us your questions, and we will be happy to address them in detail here.

If anything is unclear, contact us via