Editorial Feature

Vision Systems for Robotics

Image credit: Andrea Danti/Shutterstock.com

What is Robotic Vision?

Robotic vision is similar to human vision – it provides valuable information that the robot can use to interact with the world around it. Robots equipped with vision can identify colors, find parts, detect people, check quality, process information about their surroundings, read text, or carry out just about any other function we might desire. Even though we refer to this as robotic vision, these systems often differ greatly from the way our eyes work. When learning about robotic vision, it is best to start with the basic parts of the system.

Vision System Components

All types of vision systems share some common components. One of the crucial components of any vision system is the camera. This is the part of the system that will take in light from the outside world and convert it into digital data that can be processed and analyzed by the system.

Originally, the cameras consisted of a small number of photocells (around 2,000 pixels) arranged behind a lens and worked with an 8-bit greyscale of 256 shades to determine the shape of images. Today, the cameras used in robotic vision range from 2 megapixels on up, with full color and 4,096 shades (12-bit depth) to work with. This wealth of data has made image processing more capable, as it provides far more information, but not necessarily faster.

This brings us to the next main component of the vision system, the processor. The processor converts all the raw data from the camera into something useful to the robot. There are two main methods of processing the information from the camera – edge detection and clustering.

With edge detection, the processor looks for sharp differences in the light data from the camera, which it then considers an edge. Once it finds an edge, the processor looks at the data from pixels nearby to see where else it can find a similar difference. This process continues until it has found the outline information for the image.
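
The idea can be illustrated in a few lines of Python. The sketch below is a generic example rather than any particular vendor's algorithm: it flags a pixel as an edge when the brightness difference to a neighboring pixel exceeds a threshold (both the threshold value and the neighbor rule are illustrative choices).

```python
import numpy as np

def detect_edges(gray, threshold=30):
    """Flag pixels where brightness changes sharply between neighbors.

    gray: 2D NumPy array of greyscale intensities (0-255).
    Returns a boolean mask of the same shape (True = edge pixel).
    """
    gray = gray.astype(np.int16)        # avoid uint8 wrap-around on subtraction
    dx = np.abs(np.diff(gray, axis=1))  # difference to right-hand neighbor
    dy = np.abs(np.diff(gray, axis=0))  # difference to lower neighbor
    edges = np.zeros(gray.shape, dtype=bool)
    edges[:, :-1] |= dx > threshold
    edges[:-1, :] |= dy > threshold
    return edges
```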

With clustering, the processor finds pixels that have identical data and then looks for other nearby pixels with the same or nearly the same data. This process builds up regions that, taken together, form an image of the scene. Once the processor has decided what the image is, it formats the information into something the robot can use and sends it to the robot’s system.
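
Clustering can be sketched the same way. This minimal region-growing example floods outward from a seed pixel, with a tolerance parameter standing in for "nearly the same data"; the 4-neighbor rule and tolerance value are illustrative assumptions, not taken from any specific product.

```python
from collections import deque
import numpy as np

def grow_cluster(gray, seed, tolerance=10):
    """Collect the connected region of pixels whose intensity is within
    `tolerance` of the seed pixel's value.  seed is a (row, col) tuple."""
    h, w = gray.shape
    seed_value = int(gray[seed])
    visited = np.zeros((h, w), dtype=bool)
    visited[seed] = True
    cluster, queue = [], deque([seed])
    while queue:
        y, x = queue.popleft()
        cluster.append((y, x))
        # Visit the four direct neighbors that are close enough in intensity.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                if abs(int(gray[ny, nx]) - seed_value) <= tolerance:
                    visited[ny, nx] = True
                    queue.append((ny, nx))
    return cluster
```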

This brings us to the last key piece of any vision system – cabling. In earlier technology, the communication cables used for vision systems were clunky and limited in how far they could send the data without loss.

Around 2009, Adimec developed a new way of sending data that allowed over 6 Gbps of transmission over coaxial cable, and named it ‘CoaXPress’. This protocol, and those that followed in its wake, ensured that we would be able to use a single coaxial cable for data transmission, even as the amount of data we need to transmit keeps growing.
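
A little arithmetic shows why that headroom matters. The camera figures below are illustrative, not taken from any particular model:

```python
def required_gbps(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video data rate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

# A 2-megapixel (1920 x 1080) camera streaming 12-bit frames at 60 fps
# needs roughly 1.5 Gbps - comfortably inside CoaXPress's 6+ Gbps.
print(required_gbps(1920, 1080, 12, 60))  # ~1.49
```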

Not all vision systems use just one coaxial cable for data transmission, so it is important for those working with vision systems to understand the specifics and limitations of the system they have.

Vision System Applications

When it comes to vision system applications, some of the exciting and popular options right now include facial recognition, safety systems, part finding, and quality control.

Facial recognition is the ability of robotic systems to match an image of a person to data stored in its memory. In many ways, this is just an adaptation of part recognition, but the result is a much more personal experience with the robot. For example, you can program Aldebaran’s NAO robot to recognize your face and then respond with a message using your name, creating a personalized experience when interacting with the robot.
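
NAO’s face recognition is accessed through Aldebaran’s own software, so as a generic illustration of the "find the face" step, the sketch below uses the pre-trained Haar-cascade detector that ships with OpenCV. Note that this only locates a face; matching it against stored identities is a separate recognition step.

```python
import cv2

# OpenCV bundles pre-trained Haar-cascade detectors with its data files.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def find_faces(image_bgr):
    """Return bounding boxes (x, y, w, h) of faces found in a BGR image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```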

Beyond the social uses, this technology also has great security applications. Instead of risking the lives of people, we can use a robot to deny entry or search for unauthorized persons based on a database of approved facial scans.

Robotic vision has helped to free the robot from the traditional safety cage that is common in industrial applications. The Baxter robot created by Rethink Robotics is a perfect example of this, thanks to its 360-degree sonar and front-facing camera.

Whenever Baxter senses a person, the robot slows to a safe speed and closely monitors system feedback for any sign of a collision, stopping all movement before anyone can get hurt. On top of this, Baxter uses its vision system to find parts and adjust positioning as needed.
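
Rethink’s control software is proprietary, but the behavior described above boils down to a simple supervisory loop. The sketch below is a hypothetical illustration of that logic, with invented sensor flags standing in for Baxter’s sonar and torque feedback:

```python
def safety_supervisor(person_detected, collision_feedback,
                      normal_speed=1.0, safe_speed=0.25):
    """Choose a speed scale from two (hypothetical) sensor flags.

    person_detected:    True if the sonar/camera reports someone nearby.
    collision_feedback: True if joint feedback suggests unexpected contact.
    """
    if collision_feedback:
        return 0.0         # stop all movement immediately
    if person_detected:
        return safe_speed  # slow to a collaborative speed
    return normal_speed    # full speed when the area is clear
```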

Video: Interacting with Baxter the Robot | FutureLearn | YouTube

Other robots have replaced the cage with an overhead camera that looks for anyone inside the work area. These overhead systems are not as flexible as Baxter’s system, but they are easier to adjust than the steel cage protection method.

Vision systems use clustering or edge detection to pick out specific parts from a complex image. Once the system finds a part, it uses the data gathered from the visual information to modify its program and complete tasks as directed. This allows the robot to work with parts that are offset, tilted, jumbled in a bin, or otherwise out of the optimal position. To use a vision system this way, there must be some form of calibration where the robot can relate the visual data to distance.
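
As a rough illustration of the part-finding step, the OpenCV sketch below thresholds an image, picks the largest blob, and reports its center and rotation – the kind of offset data a robot program would consume. The Otsu threshold and single-part assumption are simplifications for the example:

```python
import cv2

def locate_part(image_bgr):
    """Return the center (x, y) and rotation angle of the largest blob."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Separate the part from the background with an automatic threshold.
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    if not contours:
        return None
    part = max(contours, key=cv2.contourArea)
    # Fit a rotated rectangle to recover position and tilt.
    (cx, cy), _, angle = cv2.minAreaRect(part)
    return (cx, cy), angle
```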

With 2D vision, or a single camera, the camera needs to be in the same position each time it takes a picture, and there must be some form of calibration to find distance from this point. With 3D vision, two cameras, or images from two locations, determine the distance.
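
For a calibrated stereo pair, the relationship is the classic triangulation formula Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity (how far the same feature shifts between the two images). A minimal sketch:

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Depth from a calibrated stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the views")
    return focal_length_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline = 0.10 m, disparity = 16 px  ->  Z = 5.0 m
print(stereo_depth(800, 0.10, 16))
```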

The 3D system will require calibration as well, and in the case of two cameras, the location of the cameras with respect to one another is part of the calibration. This same type of system can measure part features down to the micron level, ensuring the quality of each part during operation.

What is Hot Right Now?

All-in-one vision systems that plug directly into the robot and handle all of the data processing are nothing new, whether the robot manufacturer or a third party provides them. What is new is that they are now targeting the hobby robotics market.

CMUcam5 Pixy is an all-in-one vision system that works with Arduino, Raspberry Pi, and BeagleBone for color and object recognition, with facial recognition on the way. Previously, it took either a large amount of work or a costly system to provide this functionality for hobby robots, but Pixy has made it easy to add vision to your homemade robot.

Video: Pixy Pet Robot | Bill Earle | YouTube

Another exciting option for vision systems is the addition of a laser to create a 3D scan of objects. These systems use a camera, structured laser light, and motion to create a set of data points describing the object.

Usually, the object rotates while the camera and laser stay stationary, but a robot may instead move the scanner around and over the object as needed. The camera records where the reflected laser light falls on its sensor and, from that offset, determines the distance to the part from a defined point. Software converts the resulting grid of points into a mesh that can either be 3D printed or utilized in some other way. These systems can gather dimensional information about parts as well as their shape and features.
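
The underlying geometry is triangulation. The simplified model below converts the laser spot’s sideways offset in the image into a distance; real scanners rely on a full calibration rather than this idealized camera-beside-laser setup:

```python
import math

def triangulate_depth(offset_px, pixel_size_mm, focal_length_mm,
                      baseline_mm, laser_angle_deg):
    """Distance to the surface hit by the laser, from its image offset.

    Assumes the laser sits `baseline_mm` beside the camera, tilted by
    `laser_angle_deg` toward the camera's optical axis.
    """
    # Angle of the camera ray through the imaged laser spot.
    alpha = math.atan(offset_px * pixel_size_mm / focal_length_mm)
    theta = math.radians(laser_angle_deg)
    # Intersect the camera ray with the laser beam.
    return baseline_mm / (math.tan(alpha) + math.tan(theta))

# e.g. 50 px offset, 5 um pixels, 16 mm lens, 100 mm baseline, 30 degrees
print(triangulate_depth(50, 0.005, 16.0, 100.0, 30.0))  # ~168.6 mm
```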

When it comes to research for the vision system of tomorrow, insect vision is the hot topic. Insects react to the world around them and chase prey using simple eyes and very little brainpower, which is something scientists hope to copy for vision systems. The idea is to use many simple vision cells and reshape the information they collect so that very little processing is needed.

One technique is to fix the background and track an object based on that fixed background. This allows for fast response to movement with minimal movement of the vision system. For high-speed processes, or situations where reaction time is limited to milliseconds, this technology could provide huge advancements.

Another application of this technology is flight in tight spaces. In this instance, the system tracks the changes in the environment from one pixel to the next to determine position and speed instead of using accelerometers. Three feedback loops control everything: one follows the roof or floor, one monitors speed and opening size, and one keeps the vision system pointed for the best possible view of the surroundings.
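
As a toy illustration of the core idea – estimating motion directly from pixel-to-pixel change rather than from inertial sensors – the sketch below slides one camera scanline against the previous frame’s and keeps the shift where they line up best. Real insect-inspired sensors do this in dedicated hardware rather than a brute-force search:

```python
import numpy as np

def estimate_shift(prev_row, curr_row, max_shift=8):
    """Estimate sideways image motion between two 1D scanlines.

    Tries every shift in [-max_shift, max_shift] and returns the one with
    the smallest mean squared difference between the overlapping parts.
    Dividing by the frame time turns the result into an image-plane speed.
    """
    prev_row = prev_row.astype(np.float64)
    curr_row = curr_row.astype(np.float64)
    n = len(curr_row)
    best_shift, best_error = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            error = np.mean((prev_row[s:] - curr_row[:n - s]) ** 2)
        else:
            error = np.mean((prev_row[:s] - curr_row[-s:]) ** 2)
        if error < best_error:
            best_shift, best_error = s, error
    return best_shift
```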

Conclusion

Vision systems have become a common feature of many robots and there is no end in sight to the possibilities these systems create. As this technology continues to evolve, robots will have access to new and exciting ways to interact with the world around them.

