2D vision has been a longstanding productivity driver in industrial automation. Its use in inspection, identification, and guidance has led to widespread adoption and continued growth. But how do we check the height of parts? How can we guide a robot to a target that is rotated around X or Y? Enter the world of 3D.

Unlike its 2D cousin, which generally relies on a single underlying technology, three-dimensional data can be acquired in a variety of ways, each with its own features, advantages, and drawbacks.

Stereoscopic

By analyzing two or more images from calibrated 2D cameras, stereoscopic systems can build detailed 3D maps of their target objects. A stereo system can capture a snapshot of a target without having to move either the camera or the object, making it an ideal candidate for 3D vision guided robotics applications such as sorting or bin picking.

The 3D data can even be merged with 2D color data so that decisions can take color into account. This can be seen in the time lapse on our Contact Us page.
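The core of stereo ranging is that a point's apparent shift (disparity) between the two views is inversely proportional to its distance. A minimal sketch of that relation, Z = f · B / d, is below; the focal length and baseline values are illustrative assumptions, not from any specific camera.

```python
# Sketch of the pinhole stereo depth relation Z = f * B / d, where f is the
# focal length in pixels, B is the baseline between the two cameras, and d is
# the measured disparity. The default parameters are hypothetical examples.

def disparity_to_depth(disparity_px, focal_px=1400.0, baseline_m=0.10):
    """Depth in meters from the stereo relation Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A feature shifted 70 px between the two views with these parameters sits at
# 1400 * 0.10 / 70 = 2.0 m from the cameras.
print(disparity_to_depth(70.0))
```

Note that depth resolution degrades with distance: the same one-pixel disparity change spans a much larger depth range far from the cameras than near them, which is why baseline and focal length are chosen to match the working distance.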

Laser Triangulation

This method acquires 3D data by training a sensor on a projected laser line and observing its displacement and intensity. Since the laser line covers only a portion of the target at any given time, the object or camera must be moved across a level plane in order to build up a complete image buffer. Laser triangulation sensors can be extremely fast (tens of thousands of individual profiles per second) and are ideal for in-line quality inspections such as the one shown below.
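The height at each point along the line follows from simple trigonometry: a surface feature of height h shifts the laser line sideways in the camera's view, and for a laser projected straight down with the camera viewing at a known angle, h = shift / tan(angle). The sketch below assumes the lateral shifts have already been calibrated from pixels into millimeters in the object plane; the 45-degree camera angle is an illustrative choice.

```python
import math

def height_profile(shifts_mm, camera_angle_deg=45.0):
    """Convert per-column lateral shifts of the laser line (in mm, in the
    object plane) into heights via h = shift / tan(camera angle).

    Assumes the laser is projected normal to the travel plane and the
    camera views the line from camera_angle_deg off that normal.
    """
    t = math.tan(math.radians(camera_angle_deg))
    return [s / t for s in shifts_mm]

# At 45 degrees, tan = 1, so a 2 mm line shift means a 2 mm tall feature.
print(height_profile([0.0, 2.0, 3.5]))
```

Stacking one such profile per encoder tick as the part moves past the sensor is what builds the full 3D image buffer described above.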

Structured Light

Laser light again becomes useful in this form of data acquisition. Here a set pattern of light is projected onto a target scene and a sequence of images is taken. Deviations in the pattern over the course of multiple images help provide spatial data and can even be used to extrapolate motion. The most recognizable use of this technology is the first Microsoft Kinect camera, which used an IR dot-matrix pattern to feed data to its algorithms.
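One common pattern scheme (a simplification of what real structured-light scanners do, offered here only as an illustration) projects a sequence of binary stripe patterns. Each camera pixel then observes a bright/dark sequence that spells out a binary code identifying which projector column lit it, and that correspondence feeds the same triangulation math used elsewhere in this article.

```python
def decode_binary_patterns(bits_per_pixel):
    """Decode a sequence of N binary stripe observations per pixel into the
    projector column index that illuminated that pixel.

    bits_per_pixel: for each pixel, a list of N bits (1 = pixel was lit in
    that pattern, 0 = dark), most significant pattern first. This is a toy
    plain-binary code; production systems typically use Gray codes or
    phase-shifted patterns for robustness.
    """
    return [int("".join(str(b) for b in bits), 2) for bits in bits_per_pixel]

# A pixel that saw dark, lit, lit across three patterns maps to column 0b011.
print(decode_binary_patterns([[0, 1, 1], [1, 0, 1]]))
```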

Time-Of-Flight

TOF cameras calculate the time it takes for light to be emitted, strike an object, and return to a sensor. The longer the light takes to return, the farther away the object or feature is. Camera configurations are available that area-scan an entire scene at once (such as the image of our vision lab desk below) or capture a point (or a few) at a time, as LIDAR sensors do.

Resolution and noise performance currently lag the other methods, but TOF cameras are simple, efficient, and fast for the right applications.
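The underlying arithmetic is the simplest of the four methods: distance is half the round-trip flight time multiplied by the speed of light. A minimal sketch (the timing hardware that actually measures picosecond-scale intervals is, of course, the hard part):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_s):
    """One-way distance from a measured round-trip flight time: d = c*t/2."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A 10 ns round trip corresponds to a target roughly 1.5 m away, which is why
# TOF timing must resolve well under a nanosecond for mm-level precision.
print(tof_distance_m(10e-9))
```

The comment in the example also hints at the resolution limitation noted above: at ~15 cm of distance per nanosecond of round trip, fine depth resolution demands extremely precise (or phase-based) timing.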