3D Sensors and their Top 10 Biggest Challenges

 

19.02.2016

 

The dominant 3D sensor systems on the market right now are 3D laser and 3D camera systems. They work just fine when they are used for their original purpose and every required condition is fulfilled. However, the functionality of these technologies depends heavily on external factors, and they come with some really annoying weaknesses:

 

1) Lighting Conditions

Optical 3D sensor systems are very sensitive to lighting conditions such as direct sunlight, reflective surfaces, darkness, shadows or black surfaces. Under these conditions, most 3D sensors fail to work as intended and can no longer provide their promised usability. Especially for gesture control and safety-relevant systems this is a huge issue, and it is still not properly solved today.

 

2) Robustness

Besides depending on the lighting conditions, many sensors have another limitation: you can (and should) only use them indoors. If you want to use 3D sensors in a rougher environment with rain, snow, fog, dirt or the like, their performance will decrease drastically, if they do not fail completely.


3) Accuracy

Lots of applications do not need great accuracy. For most of them, recognizing objects with an accuracy of a few centimeters is enough. However, if, for example, your gesture control sensor still does not recognize your gesture after the fifth try, it can get a tiny bit unnerving.

 


4) Range

It is really nice to be able to operate an application, or even your whole computer, by gesture control. However, if you need to be as close to the sensor as 1 m (and positioned at just the right angle to it), you could just as well use your normal mouse and keyboard instead. Some systems, especially 3D camera systems, are very limited in range. Most of them do not exceed a distance of 5 m, and some can only “see” objects up to a distance of 1 m. The best range capabilities belong to 3D laser scanners, which can look as far as 300 m (e.g. SICK laser scanners). The short sketch below illustrates how narrow the usable interaction area becomes at such short working distances.
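As a rough illustration of why the positioning requirement is so restrictive, here is a minimal sketch relating a sensor's working distance and field of view to the width of the area it can cover. The 60° field of view and the usable_width helper are assumptions chosen purely for illustration, not the specification of any particular device.

```python
import math

def usable_width(distance_m: float, fov_deg: float) -> float:
    """Width of the strip a sensor can observe at a given distance,
    assuming a simple pinhole-style field of view (illustrative only)."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# Hypothetical 3D camera with a 60 degree horizontal field of view:
for d in (0.5, 1.0, 3.0, 5.0):
    print(f"At {d:.1f} m the sensor covers a strip roughly {usable_width(d, 60):.2f} m wide")
```

At the 1 m working distance mentioned above, such a sensor would only cover a strip a little over one meter wide, which is why you have to stand almost exactly in front of it.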

 


5) Price

Great if you have found a suitable 3D sensor that offers exactly the accuracy, range and robustness you need. Not so great if you then find out that this sensor costs several thousand euros. The most expensive systems even cost six-figure dollar amounts.

 


 

6) Size

With some exceptions (e.g. Leap Motion), 3D sensors are large and heavy. That is no problem if the sensor is permanently installed in one place and you have no desire to change its position. It can get inconvenient quickly, though, if you frequently want to move the sensor or integrate it into a mobile system like a drone.

 

7) Privacy Concerns

Even though 3D camera systems only aim to track positional 3D data, they also record everything a normal camera does. This footage can be viewed by whoever owns the camera's data stream, which can bring surveillance and privacy issues into areas where they do not belong (your home, for example).

 


8) Processing Power

Optical 3D sensor systems require a great amount of processing power. They record everything they see and then have to process all of that data to filter out the information that is actually relevant to them. This takes time and a lot of energy. In most cases, the data processing does not even happen within the sensor; instead, all of the data has to be sent to a server and processed there, which adds a potential security issue. The back-of-the-envelope calculation below gives a feel for the amount of raw data involved.
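To get a feeling for the data volumes, here is a back-of-the-envelope estimate of the raw data rate of a combined depth and color camera. The resolutions, frame rate and bit depths are assumed example values in the ballpark of consumer depth cameras, not measured figures for any specific product.

```python
def raw_data_rate_mb_per_s(width: int, height: int, fps: int, bits_per_pixel: int) -> float:
    """Raw, uncompressed data rate of a pixel stream in megabytes per second."""
    return width * height * fps * bits_per_pixel / 8 / 1e6

# Assumed example streams (roughly depth-camera-class figures, for illustration only):
depth = raw_data_rate_mb_per_s(512, 424, 30, 16)     # 16-bit depth frames
color = raw_data_rate_mb_per_s(1920, 1080, 30, 24)   # 24-bit color frames

print(f"Depth: ~{depth:.0f} MB/s, color: ~{color:.0f} MB/s, total: ~{depth + color:.0f} MB/s")
```

Under these assumptions, roughly 200 MB of raw data per second have to be moved and filtered before any 3D processing even starts, which explains both the processing load and the temptation to offload the work to a server.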

 

9) Energy Efficiency

Due to the use of optical components and the heavy processing requirements described in challenge #8, existing 3D camera and 3D laser systems need a lot of energy. The SICK laser sensors use approx. 8 W, and the Microsoft Kinect uses 12 W (plus the Kinect needs a computer to process the data it captures, so the effective energy consumption is even higher than those 12 W). For battery-powered platforms, that adds up quickly, as the rough estimate below shows.
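To put those power figures into perspective for a mobile use case, here is a rough runtime estimate on battery power. The 8 W and 12 W values come from the text above; the 60 Wh battery budget and the extra 15 W for a host computer are assumptions made purely for illustration.

```python
def runtime_hours(battery_wh: float, sensor_w: float, processing_w: float = 0.0) -> float:
    """Hours of operation from a battery, given sensor and processing power draw."""
    return battery_wh / (sensor_w + processing_w)

battery_wh = 60.0  # hypothetical battery budget reserved for sensing on a drone

print(f"Laser scanner (8 W):           ~{runtime_hours(battery_wh, 8.0):.1f} h")
print(f"Kinect + host (12 W + ~15 W):  ~{runtime_hours(battery_wh, 12.0, 15.0):.1f} h")
```

Under these assumptions, the Kinect setup alone would drain the 60 Wh budget in a little over two hours, before the platform's motors have used a single watt-hour.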

 

10) Integration

Available 3D sensors come in predefined designs with pre-set interfaces. This can complicate integration into other products: not every machine supports the hardware interface a given sensor provides. And what if you need the functionality of a Microsoft Kinect, for example, but do not have the space to integrate the whole piece of hardware? Existing technologies are simply not very flexible.

 

To sum it up, 3D sensors today still face some considerable limitations. However, since they rely on technologies that are fairly new to the market and are constantly being improved, it should not take too long for the issues mentioned above to be resolved, and then we can enjoy 3D sensors that perform great and are not disturbed by external factors.

 

In the meantime, you can check out our innovative Toposens 3D ultrasound sensor, which aims to solve most of these challenges…

 
