Line of sight is a must

Line of sight between mobile beacons (tags) and the stationary beacons (anchors) serving them is a must. It is the single most important requirement for any precise indoor positioning system. If a direct line of sight is provided, it doesn’t matter what materials are around (metal, wood, glass or nothing), and it doesn’t matter whether there are reflections from those surfaces or not. The surroundings do not affect tracking at all, and the performance of the precise indoor positioning system is perfect.
 
However, the opposite also holds true: if there is no guaranteed line of sight, all kinds of issues appear: no tracking, jumps, lower accuracy, higher latency, etc. The system tries to provide the best possible performance, but if there is not enough data to determine the location confidently, then it is simply physically impossible – and all the issues mentioned above arise.
 

We have addressed the subject of the paramount importance of direct line of sight several times already:

 
But it looks like we have to stress it again and again, because it is still a source of confusion, misunderstanding and overly rosy expectations.
 
Some systems claim that they can do precise tracking in non-line-of-sight configurations. When such claims are combined with an insufficiently deep understanding of the underlying technologies, they can lead to bitter disappointment with the tracking results of indoor positioning systems. Our recommendation is very simple:

Always build indoor positioning systems with line of sight between the mobile beacons and the stationary beacons serving them.

Why line of sight is the most important requirement

Think about the requirement of direct line of sight as you would think about GPS: if there is no direct visibility from a GPS tracker to the GPS satellites, you will have no tracking. Full stop.

Very often, the situation is even more complex. Suppose you don’t have a direct line of sight between the mobile and stationary beacons, but there is non-direct visibility – a reflection or echo, for example, due to a strong reflection from a neighboring wall, a corner or a reflecting object – and the difference in length between the two paths is small. In that case it is very difficult, and often impossible, for the positioning system to detect that something is wrong, i.e. that it is not a direct line of sight. The measured distances between the stationary and mobile beacons are then incorrect, and the resulting coordinates are simply wrong.

The only good solution to the problem is to design the system properly from the very beginning and to provide line of sight between the mobile beacons and the stationary beacons serving them. Then there is no need to fight the problem with complex and unreliable workarounds, because you won’t have the problem at all.

One more time about the underlying physics and technologies

Precise positioning systems use a time-of-flight and multilateration approach (a minimal sketch follows the list below):

  • GPS – time of flight of radio waves (1.1–1.6 GHz) from 4 or more satellites; time-synchronized atomic clocks
  • UWB – time of flight of radio waves (3–10 GHz); time synchronization over radio or Ethernet, or no synchronization needed
  • Marvelmind Indoor “GPS” – time of flight of ultrasound; time synchronization over radio in license-free ISM bands (mostly 868/915 MHz)
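To make the multilateration idea concrete, here is a minimal 2D sketch in Python: given known anchor coordinates and distances derived from time of flight, the tag position is found by linearizing the range equations into a least-squares problem. The anchor layout and numbers are made up for illustration; this is not Marvelmind’s actual solver.

```python
# Minimal 2D multilateration sketch (illustrative only, not Marvelmind's actual solver).
# Given known anchor coordinates and time-of-flight distances, the tag position is found
# by linearizing the range equations into a least-squares problem.
import numpy as np

def multilaterate_2d(anchors, distances):
    """anchors: (N, 2) known beacon positions; distances: (N,) measured ranges in metres."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first range equation from the others removes the quadratic terms.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical setup: three anchors in a 10 x 8 m room, tag at (4, 3).
anchors = [(0, 0), (10, 0), (0, 8)]
true_pos = np.array([4.0, 3.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(multilaterate_2d(anchors, dists))  # ~ [4. 3.]
```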

RSSI-based systems

Other technologies that rely on radio signal strength (RSSI) – BLE, WiFi, ZigBee, LoRa – are imprecise by definition, because indoors the RSSI changes so drastically due to multipath that only by employing very sophisticated algorithms is it possible to estimate the location at all. Still, the underlying physical principle – measuring RSSI and deriving the distance from it – results in much poorer performance than time-of-flight-based systems, which do not depend on the strength of the signal, because signal strength can fluctuate too much.
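For comparison, here is a minimal sketch of the log-distance path-loss model typically used to turn RSSI into distance. The reference RSSI and path-loss exponent are assumed values; the point is only to show how a few dB of multipath fading translates into metres of ranging error.

```python
# Illustrative log-distance path-loss model: why RSSI-based ranging is imprecise indoors.
# Assumed parameters: RSSI at 1 m = -45 dBm, path-loss exponent n = 2.5.
import math

RSSI_AT_1M = -45.0   # dBm, assumed calibration value
N_EXP = 2.5          # assumed indoor path-loss exponent

def distance_from_rssi(rssi_dbm):
    """Invert the log-distance model: rssi = RSSI_AT_1M - 10 * n * log10(d)."""
    return 10 ** ((RSSI_AT_1M - rssi_dbm) / (10 * N_EXP))

true_rssi = RSSI_AT_1M - 10 * N_EXP * math.log10(10.0)  # ideal reading at 10 m
for fade_db in (0, 3, 6, 10):                           # typical multipath fades
    d = distance_from_rssi(true_rssi - fade_db)
    print(f"fade {fade_db:2d} dB -> estimated distance {d:5.1f} m (true: 10 m)")
# A 6 dB fade alone moves the estimate from 10 m to roughly 17 m.
```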

Triangulation-based systems

Triangulation-based systems are not the same as trilateration-based systems.

Triangulation is a localization method based on the intersection of two or more lines (bearings). It requires precise knowledge of angles and does not require knowledge of distances. Of course, the coordinates of the points from which the lines are drawn must be known.
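A minimal sketch of the idea, assuming two known observation points and the bearing angle measured at each: the intersection of the two rays gives the position. The coordinates and angles are illustrative only.

```python
# Minimal 2D triangulation sketch: locate a point from two known observation points
# and the bearing angle measured at each (angles counted from the +x axis).
import math

def triangulate(p1, a1, p2, a2):
    """Intersect the ray from p1 at angle a1 with the ray from p2 at angle a2."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via Cramer's rule on the 2x2 system.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        raise ValueError("rays are (nearly) parallel - no reliable fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Hypothetical target at (4, 3) observed from (0, 0) and (10, 0).
a1 = math.atan2(3, 4)        # bearing from the first point
a2 = math.atan2(3, 4 - 10)   # bearing from the second point
print(triangulate((0, 0), a1, (10, 0), a2))  # ~ (4.0, 3.0)
```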

In today’s practice, these are typically scanning laser-based systems.

To some extent, optical systems that calculate mobile objects’ positions using cameras separated by a baseline are also triangulation systems. By the way, our binocular sight – our eyes – is a typical biangulation system.

Triangulation systems can be relatively precise, particularly when laser-based and with a proper baseline between the reflecting markers. However, their positioning accuracy depends on the distance, whereas for trilateration-based systems this is typically much less the case. To simplify: triangulation systems measure position very well in the vicinity and with a favorable baseline between the cameras or between the reflecting markers, but they become notably less precise at a distance.

The explanation for the relatively lower accuracy of triangulation-based systems as compared with trilateration-based systems is very simple: we can measure angles far less accurately than we can measure time.

For optical systems using cameras, the resolution of the cameras is the limitation. Thus, one has to balance the following (a rough numerical illustration follows the list):

  • Cost
  • Pixel resolution, which directly affects the angular accuracy, which in turn directly affects the location accuracy. Higher pixel resolution means higher angular resolution, but lower light sensitivity (smaller pixels) and a lower update rate (more latency)
  • Angular beam width, which determines how many cameras are required to cover the same area. A narrower beam, i.e. higher zoom, gives better angular resolution, but more cameras are required to cover the same area and, even worse, matching the data between the cameras becomes much more complex. As a result – higher cost
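A rough calculation of the distance dependence mentioned above, assuming a hypothetical camera with a 60-degree field of view and 1920 pixels across it: for a fixed angular error, the lateral position error grows roughly linearly with distance.

```python
# Why angle-based accuracy degrades with distance: lateral error ~ distance * angular error.
# Assumed example: a camera with 1920 px across a 60-degree horizontal field of view.
import math

fov_deg, pixels = 60.0, 1920
angular_res = math.radians(fov_deg) / pixels   # ~0.55 mrad per pixel

for distance_m in (1, 5, 10, 30):
    lateral_err_mm = distance_m * angular_res * 1000
    print(f"{distance_m:3d} m -> ~{lateral_err_mm:5.1f} mm per pixel of angular error")
# The same one-pixel error costs ~0.5 mm at 1 m but ~16 mm at 30 m, while a
# time-of-flight range error stays roughly the same regardless of distance.
```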

But why is direct line of sight a must?

Assumption 1:

Trilateration-based systems use the time of flight of something (radio, light, ultrasound) to measure distance. They do not measure the distance directly; they measure time, simply because we are able to measure time very precisely – picoseconds, if needed; nanoseconds, pretty comfortably; microseconds, very easily.

Thus, we must have precise clocks, for example atomic clocks, or synchronize the clocks so that the difference in time does not affect the expected accuracy.

For radio- or light-based systems, if you want to achieve 10 cm accuracy:

  • 1e-1 / 3e8 ≈ 0.3 ns – a pretty tough requirement, but doable

For an ultrasound-based system, if you want to achieve even better – 1 cm accuracy:

  • 1e-2 / 340 ≈ 30 µs – a very relaxed requirement, i.e. for the same target accuracy it is roughly a million times easier to make precise-enough clocks for ultrasound-based systems than for systems based on electromagnetic waves (radio or light); see the short calculation below
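The two figures above, plus the ratio between them, as a short calculation (340 m/s is the approximate speed of sound in air at room temperature):

```python
# Clock accuracy needed for a target ranging accuracy: t = accuracy / propagation speed.
C_LIGHT = 3.0e8   # m/s, radio or light
C_SOUND = 340.0   # m/s, ultrasound in air at roughly room temperature

print(0.10 / C_LIGHT)        # ~3.3e-10 s -> ~0.3 ns for 10 cm with radio or light
print(0.01 / C_SOUND)        # ~2.9e-05 s -> ~30 us for 1 cm with ultrasound
print(C_LIGHT / C_SOUND)     # ~8.8e5: for the same accuracy, the ultrasound timing
                             # requirement is roughly a million times more relaxed
```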

Assumption 2:

Trilateration-based systems assume that the propagation speed is a) known and b) the same over the whole propagation path. With these two assumptions in mind, it is possible to simply measure the time very precisely and then multiply it by the precisely known propagation speed (the speed of light or the speed of ultrasound).

And that is the biggest problem with non-line of sight. When you don’t have a line of sight, your signal – for example, the UWB radio signal – propagates through materials with unknown properties (concrete, wet wood, glass, etc.). But the propagation speed of radio waves through these materials differs from the speed of radio waves in vacuum or air! It may differ substantially – by a factor of several times, i.e. the speed of a radio wave can be, for example, 2–3 times slower than in vacuum. It means that 30 cm of path in vacuum corresponds to about 1 ns of flight time, but the same 30 cm through some ceramics can take as much time as 1 m would in vacuum, so the system reports roughly 1 m instead of 30 cm. Huge difference!
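A small illustrative calculation of this effect, assuming a wall with a relative permittivity of 6 (a made-up, concrete-like value) in an otherwise air-filled path:

```python
# Time-of-flight ranging assumes one known propagation speed along the whole path.
# If part of the path goes through a wall, the radio wave is slower there
# (v = c / sqrt(eps_r)), so the measured time - and the computed distance - comes out
# too long. eps_r = 6 is an assumed, concrete-like illustrative value.
C = 3.0e8            # m/s in air/vacuum
EPS_R_WALL = 6.0     # assumed relative permittivity of the wall
v_wall = C / EPS_R_WALL ** 0.5

air_path, wall_thickness = 9.7, 0.3               # metres
t = air_path / C + wall_thickness / v_wall        # actual time of flight
measured = t * C                                  # distance the system computes, assuming speed C
true_total = air_path + wall_thickness
print(f"true path: {true_total:.2f} m, measured: {measured:.2f} m, "
      f"error: {measured - true_total:.2f} m")
# A single 30 cm wall already adds ~0.4 m of error - far beyond centimetre-level accuracy.
```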

The problem is even worse. We can’t take this into account, because it is technically very challenging to measure those delays and somehow compensate for them. To our current knowledge, nobody does it, though it would be possible in principle. Moreover, the properties of real materials, i.e. the speed of radio waves through them, are poorly known even in lab conditions. In real-life conditions they are not known at all (thickness, types of materials, propagation direction and the resulting path length through the material, etc.).

Thus, the only practical solution is to build the system with line of sight between the stationary and mobile beacons and avoid the problem completely.

Non-line of sight indoor positioning systems do not exist

It is simply physically impossible to determine the location of a mobile beacon in a true non-line of sight situation. “Non-line of sight” should be read as “non-line of sight for visible light”, i.e. a human can’t see the mobile beacon but the system can. Yes, with that caveat, NOLS systems can exist – UWB, for example. But they still rely on line of sight for the UWB radio signal.

Simple references to UWB as a NOLS system lead to bitter disappointment for end users when they realize that only the thin walls of labs are transparent to the UWB signal. Real-life concrete and brick walls are not, or they produce such errors in the distance measurements that the tracking system becomes highly unreliable in practice.

The only recommended and working solution is to always build any precise positioning and tracking system with the line of sight requirement in mind.

P.S. Even the human body creates non-line of sight for UWB. The recommendation: place UWB mobile beacons (tags) on the shoulder, on top of the helmet or, in the worst case, on the wrist. Don’t put them in a chest pocket. The body will block the UWB signal coming from behind, with a very bad impact on performance.

Problems with non-line of sight (NOLS)

By now, it should already be evident that line of sight is a must for a precise indoor positioning system. However, let us add what else goes wrong when there is no direct line of sight (a NOLS situation).

Nearby reflective walls combined with NOLS

As mentioned above, when there is a non-line of sight situation (NOLS), the system may be designed and tuned to detect such events and filter those measurements out or, at least, to warn users or other elements of the system that the results cannot be trusted. It can even employ algorithms to calculate the distance or location differently or to adjust the calculations. Something can be done, at least. It is complex and not guaranteed, but still – it is something.

Such warning or filtering works reasonably well when there is a large enough difference between the expected LOS distance and the measured distance. The system then has enough trustworthy data to conclude that something doesn’t add up and to warn the user or filter the measurement out completely.

When the difference between the measured distance and the expected distance is small, the system has to choose between: a) a distance that was simply measured badly, b) a NOLS measurement that must be filtered out, and c) a mobile object that made a very unexpected move, so the distance is actually fine. Choosing is difficult, because many assumptions must be made and there is always a trade-off between false positives and false negatives.
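A minimal sketch of such a consistency check, assuming the last trusted position is known: each measured range is compared against the range predicted from that position and flagged if the residual exceeds a threshold. The threshold is an assumed tuning value and is exactly where the false-positive/false-negative trade-off lives.

```python
# Sketch of a simple NOLS consistency check: compare each measured range against the
# range predicted from the last trusted position and flag large residuals. The threshold
# is an assumed tuning value - a tight threshold rejects genuine fast moves (false
# positives), a loose one lets NOLS-inflated ranges through (false negatives).
import math

THRESHOLD_M = 0.5  # assumed, illustrative

def gate_ranges(last_pos, anchors, measured):
    """Return (accepted, rejected) lists of (anchor, range) pairs."""
    accepted, rejected = [], []
    for anchor, r in zip(anchors, measured):
        predicted = math.dist(last_pos, anchor)
        (accepted if abs(r - predicted) < THRESHOLD_M else rejected).append((anchor, r))
    return accepted, rejected

anchors = [(0, 0), (10, 0), (0, 8)]
last_pos = (4.0, 3.0)
measured = [5.0, 6.7, 9.5]   # third range inflated, e.g. by a reflected path
print(gate_ranges(last_pos, anchors, measured))  # the (0, 8) range is rejected
```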

Again, the only robust solution to the problem is not to have the problem at all – avoid NOLS as such and build systems with line of sight only.

Static or slow-moving mobile objects are even more problematic with NOLS

When there is NOLS and the object is moving relatively fast, it is possible to employ several techniques to predict the expected location. Based on the prediction, it is possible to extrapolate or to filter out wrong measurements.

Fast-moving mobile objects with NOLS produce jumping distances. The fact that the distances jump, combined with prior knowledge about the mobile objects (forklifts don’t jump), makes it relatively easy to filter out wrong distances; see the sketch below.
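A minimal sketch of such a gate, assuming a maximum plausible speed and a fixed update period (both values are made up): a range that changes faster than the object can physically move is discarded.

```python
# Sketch of a max-speed gate: between two consecutive updates a forklift can only move
# v_max * dt, so a range that changes by much more than that is treated as a
# NOLS/reflection artefact. v_max and the update period are assumed, illustrative values.
V_MAX = 3.0   # m/s, assumed maximum forklift speed
DT = 0.125    # s, assumed update period (8 Hz)

def filter_jumps(ranges):
    """Keep only samples whose change from the last accepted range is physically plausible."""
    accepted = []
    for r in ranges:
        if not accepted or abs(r - accepted[-1]) <= V_MAX * DT:
            accepted.append(r)
    return accepted

print(filter_jumps([5.02, 5.05, 7.90, 5.11, 5.08]))  # the 7.90 m jump is dropped
```

Note that the same gate is useless in the static case described next: a consistently biased range changes slowly, so it passes every plausibility check.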

However, when objects are static or slow-moving, the system will repeatedly return a measured distance with little variation between measurements, i.e. no jumps. The distances will have a low sigma, i.e. they will look like results that can be trusted. But the repeated measurements will be repeatedly incorrect and must not be trusted – and that is difficult to detect. Thus, without other sources of information (sensor fusion), it is difficult to warn the users or to filter out the wrong static measurements. The system will consistently and confidently locate the mobile object in the wrong place.

Slow update rate

When the update rate is high, the latency is low and there are many measurements per second. Thus, it is possible to accumulate some measurements, make decisions and filter out the bad ones – or conclude that everything is bad and filter out everything – and still keep the latency relatively low.

But when the update rate is high, the power consumption is high as well. To reduce power consumption, the update rate is reduced, and then the system can no longer accumulate or filter data in the same manner without making the latency unacceptably high.
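A back-of-the-envelope illustration, with an assumed 5-sample filtering window:

```python
# Latency cost of accumulating samples for filtering: roughly window_size / update_rate.
# The window size and the update rates are assumed, illustrative values.
WINDOW = 5  # samples the filter needs before it can make a decision

for rate_hz in (16, 8, 1):
    print(f"{rate_hz:2d} Hz -> ~{WINDOW / rate_hz:.2f} s of added latency")
# 16 Hz: ~0.31 s, 8 Hz: ~0.63 s, 1 Hz: ~5 s - at low update rates the same
# filtering strategy becomes unacceptably slow.
```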

The worst case: slow update rate and fast-moving mobile objects

The scenario of a slow update rate combined with a fast-moving mobile object is difficult even in good LOS conditions, because the object may make turns, loops, etc., and the slow update rate simply doesn’t allow the system to notice that there was a loop or a turn, because they happen between the location updates.

With NOLS and attempts to separate wrong measurements from good ones, the task becomes nearly mission impossible: even LOS tracking looks pretty chaotic if the object moves with turns and loops, and with NOLS the chaos of the movement is multiplied by the chaos due to NOLS. There is no pattern, there is no trail, and tracking simply becomes impossible even with pretty advanced additional methods.

The only real solution to all these troubles is to provide line of sight between the mobile beacon and two or more stationary beacons within 30 m for 2D, and three or more stationary beacons for 3D.

How to solve non-line of sight problems

Place the stationary beacons properly

Follow the Placement Manual and place the stationary beacons properly, thus providing the most confident coverage with the smallest number of beacons.

Clearly understand what sees what

What is line of sight? Between what and what? Between the stationary beacons and the mobile beacons. OK… but which parts of the stationary beacons and which parts of the mobile beacons?

The answer is actually not that simple:

  • In NIA and in MF NIA, the mobile beacon is emitting. Therefore, there must be a direct line of sight between the emitting transducers of the mobile beacons and the receiving sensors of the stationary beacons
  • In IA, the stationary beacons are emitting and the mobile beacons are receiving
  • NIA/MF NIA and IA are not always symmetrical from the line-of-sight point of view. Check carefully
  • The beacon’s own antenna can create non-line of sight for the microphone, because it casts a small shadow right in front of it. Usually it doesn’t, because the antenna is relatively small and not that close after all, but keep it in mind. It is not an obstacle for the emitting transducers

When stationary Super-Beacons automatically build the map of beacons, they act as stationary and as mobile at different times

One not so obvious, but rather typical, problem happens when one stationary beacon is behind another stationary beacon – when they are placed on the wall, but not exactly on the wall, i.e. at different distances from it.

When the map of stationary beacons is being built by Super-Beacons, for example, during one time slot one of the beacons acts as mobile and another as stationary, and during the next time slot they switch roles. The result: they must have a direct line of sight during this process.

If the beacons are side by side on the wall, it is already a challenging situation because of the microphone sensitivity diagram. See the detailed video about that.

But when one of the beacons sits behind the others, the angle from that beacon to the neighboring beacon becomes negative and a NOLS situation occurs! Yes, the bodies of the beacons “see” each other, but the microphone doesn’t see the transducer, i.e. it is clearly a NOLS situation.

The solutions:

  • Avoid such situations
  • Turn the beacons slightly towards each other so that the ultrasound transducers are not hidden behind the microphone
  • Enter the distances/coordinates manually

Build fully-overlapping submaps

If there is clearly a NOLS situation in your setup – for example, a column in the center of a room whose shadow creates NOLS for a robot behind it – just create a fully overlapping submap:

– Place two stationary beacons on the left wall to build a service zone for the whole room

– Place the other two beacons on the opposite wall to build another service zone – creating fully overlapping service zones

Since coverage will be provided from both directions, the mobile beacon will, at any given moment, see either both pairs of beacons or at least one pair, and with high confidence it will be tracked in either of the submaps or in both.

If you choose IA with different ultrasound frequencies, for example, 19+25 kHz and 31+37 kHz, the system will get location data from the left and the right submaps in the same time slot, i.e. with each location update.

If you choose the same ultrasound frequencies, for example, 25+31 kHz and 25+31 kHz, then you must choose TDMA=2, i.e. two time slots. In the first time slot the system will determine the location using the left submap; in the second time slot – using the right submap.

When there is LOS, you will have good tracking, just provided by different submaps.

When there is NOLS from one of the submaps, there will be no tracking during that time slot and good, normal tracking in the other time slot. Effectively, it looks as if you had simply halved the location update rate manually.
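A tiny numerical illustration, assuming a base location update rate of 8 Hz (the actual rate depends on the configuration):

```python
# Effective update rates with two fully overlapping submaps sharing ultrasound
# frequencies (TDMA = 2). The 8 Hz base rate is an assumed, illustrative value.
BASE_RATE_HZ = 8.0
TDMA_SLOTS = 2

per_submap_hz = BASE_RATE_HZ / TDMA_SLOTS
# Both submaps in LOS: fixes alternate between submaps, so updates still arrive at ~8 Hz.
# One submap in NOLS: only the other slot delivers fixes - effectively ~4 Hz, as if the
# update rate had been halved manually.
print(f"per-submap rate: {per_submap_hz:.1f} Hz")
```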

That is the good case. There can also be bad cases, when the tracking looks plausible both to the left submap (which is making a mistake due to NOLS) and to the right submap (which is tracking correctly with LOS). In such a bad case you may get a split track – different locations from the left and right submaps in different time slots. It is a pretty rare situation in practice, but it is possible. Some other source of information usually has to be employed to select which submap is right and to filter the bad one out.

The same logic can be further extended to more than two fully overlapping submaps if there are more potential obstacles. Remember, though, that it may come at the expense of the update rate, because you may soon run out of available ultrasound frequencies, or even at the expense of stability, because there may be too many opinions about the location of the mobile beacons, and those opinions may conflict, making the selection process particularly challenging for the system.

See example of fully-overlapping submaps.

The NOLS situation can be solved on the stationary beacons’ side, on the mobile beacons’ side or on both

It should be very clearly understood that additional visibility between the stationary beacons and the mobile beacons in challenging conditions with potential NOLS situations can be provided by additional solutions on the stationary beacons’ side (fully overlapping submaps, for example), on the mobile beacons’ side, or on both.

What can be done on the mobile beacons’ side?

Well, there are several clear solutions. For example, the Marvelmind Jacket. The Jacket has a Mini-RX on one shoulder and an additional microphone on the other shoulder. What for? To combat NOLS caused by a person’s head. If the stationary beacons are placed to the left of the head and the head obstructs the Mini-RX on the right shoulder, then the additional microphone on the left simply receives the signal – and receives it earlier than the potentially obstructed signal reaching the Mini-RX on the right shoulder. In this way, the NOLS is completely resolved: either both, or at least one, of the microphones (the external microphone or the one on the Mini-RX) will receive the signal.
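A minimal sketch of this “earliest arrival wins” logic, with made-up microphone names and readings: for each stationary beacon, the shortest measured range across the redundant microphones is kept, since an obstructed or reflected path can only be longer than the direct one, never shorter.

```python
# Sketch of the redundant-microphone idea: for each stationary beacon keep the earliest
# (shortest) measured range across the microphones, since an obstructed or reflected
# path can only be longer than the direct one, never shorter.
# Microphone names and readings are hypothetical.
def best_ranges(ranges_per_mic):
    """ranges_per_mic: dict mic_name -> {beacon_id: measured_range_m}."""
    best = {}
    for ranges in ranges_per_mic.values():
        for beacon, r in ranges.items():
            if beacon not in best or r < best[beacon]:
                best[beacon] = r
    return best

readings = {
    "left_mic":  {5: 4.12, 7: 6.30},
    "right_mic": {5: 4.15, 7: 7.85},   # beacon 7 shadowed by the head: inflated reflected path
}
print(best_ranges(readings))  # {5: 4.12, 7: 6.30}
```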

The same solution is employed in the Marvelmind Badge. It has two pairs of microphones – on the left side of the neck and on the right side of the neck. The proximity of the neck to the microphones is a big problem, because the neck blocks too large a part of the “sky” – the visibility towards the stationary beacons. But the special solution of placing the microphones slightly further from the neck and having two pairs of microphones solves the problem.

See more about the solutions against NOLS on the mobile beacon side:

FAQ

No, this is a requirement only for the serving stationary beacons. Each submap has 1–4 stationary beacons for 1D/2D/3D submaps. So the mobile beacons must have a direct line of sight to the serving stationary beacons in their submap. It is OK not to have LOS to stationary beacons in other submaps.

No, it is not a must. You can, for example, enter the distances between the stationary beacons or their coordinates manually. Of course, the Super-Beacons in such a setup can’t measure the distances automatically, and the map of stationary beacons won’t be built automatically. Moreover, it must be checked that the stationary beacons haven’t measured some distances automatically in a way that accidentally looks right to the system while being wrong in practice.

So it is not a requirement to have a direct line of sight between the stationary beacons, as long as line of sight is provided from the stationary beacons of the serving submap to the mobile beacons in the service area of that submap.

Yes, it does. The human body clearly creates non-line of sight for ultrasound, i.e. if a beacon is in front of a person, the serving stationary beacons are behind the person and the person casts a shadow for the ultrasound, it is a clear non-line of sight situation.

Additionally, remember that close proximity of, for example, a Mini-RX or the antenna of another beacon to the human body results in about 10 dB degradation of the radio signal strength in the communication channel between the beacon and the modem. Usually, radio is the less limiting factor, because line of sight for ultrasound is far more critical and NOLS for ultrasound will clearly lead to no tracking or to jumps. But for beacons placed farther away, the human body may block the radio connectivity between the modem and the beacon. That will also result in no tracking – not due to NOLS for ultrasound, but due to not getting synchronization, telemetry and the measured data exchanged between the modem and the beacon.
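To put the ~10 dB figure in perspective, a rough free-space estimate (illustrative only):

```python
# Rough impact of ~10 dB of body-induced loss, assuming free-space propagation
# (path loss grows as 20*log10(distance)): the usable radio range shrinks by 10^(10/20).
extra_loss_db = 10.0
range_factor = 10 ** (extra_loss_db / 20)
print(f"range reduced roughly {range_factor:.1f}x")  # ~3.2x
# E.g. a link comfortable at ~100 m of open space may become marginal at ~30 m
# when the body sits between the beacon's antenna and the modem.
```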

 
