Vision systems: what you need to know
By Christopher Chalifoux, International Applications Engineer, Teledyne DALSA
Thursday, 09 October, 2014
Vision systems are a primary consideration for any manufacturer who is looking to improve quality or automate production, but choosing systems that match your application and ownership requirements can be confusing.
Vision systems can be thought of as computers with eyes that can identify, inspect and communicate critical information to eliminate costly errors, improve productivity and enhance customer satisfaction through the consistent delivery of quality products. Primarily used for online inspection, vision systems can perform complex or mundane repetitive tasks at high speed with high accuracy and high consistency. Errors or deviations in the manufacturing process are immediately detected and relayed, allowing control modifications to be made on the fly to reduce scrap and minimise expensive downtime. Vision systems are also deployed for non-inspection tasks, such as guiding robots to pick parts, place components, dispense liquids or weld seams.
Vision systems come in all shapes and sizes to suit any application need, but they all have the same core elements. Every vision system has one or more image sensors that capture pictures for analysis and all include application software and processors that execute user-defined inspection programs or recipes. Additionally, all vision systems will provide some way of communicating results to complementary equipment for control or operator monitoring. That said, it is important to know that there are significant and important differences between vision systems that make one more suitable than another for any given application. It is equally important to know and appreciate the importance of choosing the optimal sensor, lighting and optics for the job. Failure to do so may result in unexpected false rejects or, even worse, false positives.
There are many variants of vision systems on the market, but for the purposes of this article we will classify them all into two categories - those with a single embedded sensor (also known as smart cameras) and those with one or more sensors attached (multicamera vision systems). The decision to use one or the other depends not only on the number of sensors needed, but also on a number of other factors including performance, cost of ownership and the environment in which the system must operate. Smart cameras, for example, are generally designed to tolerate harsh operating environments better than multicamera systems. Conversely, multicamera systems tend to cost less and deliver higher performance for more complex applications.
Another way to differentiate the two classes of systems is to think in terms of processing requirements. For many applications, such as car manufacturing, it is desirable to have multiple independent points of inspection along the assembly line. Smart cameras are a good choice here as they are self-contained and can be easily programmed to perform a specific task, and modified if needed, without affecting other inspections on the line. In this way processing is distributed across a number of cameras. Conversely, other parts of the production line may be better suited to a centralised processing approach. For example, it is not uncommon for final inspection of some assemblies to require 16 or 32 sensors. In this case, a multicamera system may be better suited as it is less costly and easier for the operator to interact with.
Perhaps the most important consideration when selecting any vision system is software. The capabilities of the software must match the application, programming and runtime needs. If they don’t, you will find yourself investing more time and expense than you anticipated in trying to make the system conform to your expectations. If you are new to machine vision or if your application requirements are straightforward, you should select software that is easy to use (doesn’t require programming), includes core capabilities (such as pattern matching, feature finding, barcode/2D recognition, OCR) and can interface with complementary devices using standard factory protocols. If your needs are more complex and you are comfortable with programming, you might look for a more advanced software package that offers additional flexibility and control. In either case, make sure that the software you choose is available across vision system platforms in case you need to migrate due to changing inspection requirements.
Implementation factors to consider
Image sensor resolution
Image sensors convert light collected from the part into electrical signals. These signals are digitised into an array of values called ‘pixels’, which are processed by the vision system during the inspection. Image sensors can be integrated into the system, such as in the case of a smart camera, or into a camera that attaches to the system. The resolution (precision) of the inspection depends in part on the number of physical pixels in the sensor. A standard VGA sensor has 640 x 480 physical pixels (width x height) and each physical pixel is about 7.4 microns square. From these numbers and the field of view, the resolution of the inspection can be estimated in real-world units.
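The estimate above is simple arithmetic: the field of view divided by the pixel count gives the real-world size of one pixel. A minimal sketch, using the article's VGA sensor and an assumed 100 mm field of view for illustration:

```python
# Spatial resolution estimate: real-world size covered by one pixel.
# The VGA pixel count is from the article; the 100 mm FOV is an assumed example.
sensor_px_w, sensor_px_h = 640, 480   # VGA physical pixels (width x height)
fov_w_mm = 100.0                      # assumed field-of-view width in mm

mm_per_pixel = fov_w_mm / sensor_px_w
print(f"Spatial resolution: {mm_per_pixel:.5f} mm/pixel")  # 0.15625 mm/pixel
```

In practice the usable measurement precision is coarser than one pixel for edge-based tools (or finer, with sub-pixel algorithms), so treat this as a first-order sizing check rather than a guaranteed accuracy figure.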
Image sensors used by vision systems are highly specialised, and hence more expensive than those in, say, a webcam. First, it is desirable to have square physical pixels, which makes measurement calculations easier and more precise. Second, the cameras can be triggered by the vision system to take a picture based on a part-in-place signal. Third, the cameras have sophisticated exposure control and fast electronic shutters that can ‘freeze’ the motion of most parts as they move down the line. Image sensors are available in many different resolutions and interfaces to suit any application need. In many cases, multiple image sensors are deployed to inspect large parts or different surfaces of the same part.
Sensor lens selection
Each sensor needs a lens that gathers light reflected (or transmitted) from the part being inspected to form an image on the sensor. The proper lens allows you to see the field of view (FOV) you want and to place the camera at a convenient working distance from the part. The working distance is approximately the distance from the front of the lens to the part being inspected. A more exact definition takes into account the structure of the lens and the camera body.
Consider this example: If a part to be inspected is 100 mm wide by 50 mm long, you would need an FOV that is slightly larger than 100 mm, assuming you can position the part within this FOV. In specifying the FOV you also need to consider the camera’s aspect ratio - the ratio of the width to the height of the view. The sensors used with vision systems typically have a 4:3 aspect ratio, so an FOV sized for the 100 mm width of the example part would also cover its 50 mm length, but a 100 mm x 90 mm part would require a larger FOV to be seen in its entirety.
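The aspect-ratio check above can be reduced to a small calculation: size the FOV from the part's larger demand, width or height, whichever drives it. A sketch using the article's two example parts; the 10% positioning margin and the helper name `fov_for_part` are assumptions for illustration:

```python
# Size a 4:3 field of view around a rectangular part, leaving a positioning margin.
# Part dimensions are the article's examples; the 10% margin is an assumption.
def fov_for_part(part_w, part_h, aspect=(4, 3), margin=1.1):
    fov_w = part_w * margin
    fov_h = fov_w * aspect[1] / aspect[0]
    if part_h > fov_h:                        # height drives the FOV instead
        fov_h = part_h * margin
        fov_w = fov_h * aspect[0] / aspect[1]
    return fov_w, fov_h

print(fov_for_part(100, 50))   # width-limited: (110.0, 82.5)
print(fov_for_part(100, 90))   # height-limited: (132.0, 99.0)
```

Note how the 100 x 90 mm part forces a wider FOV than its width alone would suggest, which in turn coarsens the mm-per-pixel resolution.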
From the FOV, working distance and the camera specifications, the focal length of the lens can be estimated. The focal length is a common way to specify lenses and is, in theory, the distance behind the lens where light rays ‘from infinity’ (parallel light rays) are brought to focus. Common focal lengths for lenses in machine vision are 9, 12, 16, 25, 35 and 55 mm. When the calculations are done, the estimated focal length will probably not exactly match any of these common values. The way around this is to pick a focal length that is close and then adjust the working distance to get the desired FOV.
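A thin-lens approximation is one common way to make this estimate: the magnification is the sensor width divided by the FOV width, and the focal length follows from the working distance. A sketch using the VGA sensor from earlier; the 110 mm FOV and 400 mm working distance are assumed example values, not prescriptions:

```python
# Thin-lens estimate of the required focal length.
# Sensor width follows from the article's VGA example (640 px x 7.4 um pixels);
# the FOV and working distance below are assumed for illustration.
sensor_w_mm = 640 * 0.0074            # ~4.74 mm sensor width
fov_w_mm = 110.0                      # desired field-of-view width
working_distance_mm = 400.0           # lens front to part

m = sensor_w_mm / fov_w_mm            # optical magnification
focal_mm = working_distance_mm * m / (1 + m)
print(f"Estimated focal length: {focal_mm:.1f} mm")  # ~16.5 mm; pick a 16 mm stock lens
```

As the article notes, the estimate rarely lands exactly on a stock focal length (9, 12, 16, 25, 35 or 55 mm), so you would pick the nearest one (16 mm here) and adjust the working distance slightly to recover the desired FOV.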
Most vision suppliers have tools that will help you calculate the closest lens to match your FOV and working distance. There are other important specifications for lenses, such as the amount and type of optical distortion the lens introduces and how closely the lens can focus.
Lighting source
The human eye can see well over a wide range of lighting conditions, but a machine vision system is not as capable. You must therefore carefully light the part being inspected so that the vision system can clearly ‘see’ the features you wish to inspect. Ideally, the light should be regulated and constant so that the light changes seen by the vision system are due to changes in the parts being inspected and not changes in the light source. While some vision algorithms can tolerate some variation in light, a well-designed implementation will remove any uncertainty. When selecting a light source, the goal is to amplify the elements of the part that you want to inspect and attenuate elements that you don’t care about. Proper lighting makes inspection faster and more accurate, whereas poor lighting is a major cause of inspection failure. Generally it is recommended to avoid relying on ambient light, such as overhead lighting, as this can vary over time. Factory lights can flicker, burn out, dim or get blocked. Similarly, if there are windows near the inspection station, changes in outside light can have a negative effect on system robustness. Selecting the proper lighting requires some knowledge and experience that most suppliers can provide during application evaluation.
Predictable part presentation
It is important to consider how parts will be presented to the vision system for inspection. If the part is not presented in a consistent way, you will not achieve the desired result. Therefore, you will need to ensure that the surface of the part you want to inspect is facing the sensor at runtime.
Next you will need to decide whether the part is to be inspected while in motion or stationary. If the part is moving, the motion will likely need to be ‘frozen’ by turning the light on briefly (strobing) or by using the high-speed electronic shutter feature of the sensor (standard on most industrial vision sensors). In this case you will need to provide a trigger to the sensor to let it know when to take a picture. The trigger is typically generated by a photoelectric sensor that detects the front edge of the part as it moves into the inspection area. If the part is stationary, for example indexed or positioned in front of the sensor by a robot, the sensor can be triggered to take a picture from a PLC or the robot itself.
Finally, if you are inspecting parts at very high speed, you will likely need to optimise part positioning to reduce processing time. Keep in mind when designing your system that everything consumes processing bandwidth. So, when considering a vision system for high-speed inspection, you should try to determine which of your requirements are critical or just ‘nice to have’.
Armed with knowledge, support and a reputable supplier, the cost of implementing vision solutions on the factory floor will be returned many times over through increased quality, production efficiency and scrap reduction.
Want to learn more?
If you want to learn more, particularly about machine vision interfaces, there is a new Global Machine Vision Interface Standards brochure jointly published by the Automated Imaging Association (AIA), European Machine Vision Association (EMVA) and the Japan Industrial Imaging Association (JIIA). The brochure contains detailed descriptions and comparison charts of the hardware and software specifications that power the machine vision industry around the world, including Camera Link, Camera Link HS, GigE Vision, USB3 Vision (AIA) and CoaXPress (JIIA). In addition, the software standards covered in the brochure include GenICam (EMVA) and IIDC2 (JIIA). It marks the first time all these industry groups have come together to issue a comprehensive global resource.