If you own a modern DSLR or video camera, you’ve already benefited from vast improvements in lens technology, sensors and the software/firmware that runs your camera. If you’ve been replacing older compact fluorescent lamps (CFLs) with LED lamps, both plain and decorative, you’ve seen similar strides in lighting technology. Put all these technologies together in a machine vision inspection system, add artificial intelligence (AI) to the mix, and you’ll be seeing some important advances in the next five years. In fact, some are already knocking on your door.


Applications

If you think about all the roles for machine vision inspection systems, one of the biggest is label verification. In fact, label problems remain a top reason for FDA recalls. Label errors range from the simple (e.g., a wrong ingredient listing) to the sublime (e.g., out-of-registration inks), yet vision systems can detect them all, and some of these problems will be solved with the help of AI.

Probably the most prevalent vision-system label applications, according to Craig Souser, president/CEO of JLS Automation, are checking that the label exists on a can or bottle, verifying it’s the correct label, making sure the information is correct and checking the date code.

Label issues occur everywhere in the industry, and Nelson Leite, director of sales and marketing, JMP Solutions Automation & Robotics Division, notes several secondary packaging applications where AI has been involved. “We have engineered many applications for customers who manufacture products such as ice cream, egg rolls, corn dogs, chicken nuggets and patties, cupcakes, brownies and onion rings.”

Besides having systems that can inspect for color, shape, surface defects, fill levels and all the label issues (e.g., date/lot codes, barcodes, etc.), Bradley Weber, machine vision product marketing manager for Datalogic, suggests that a pattern recognition tool can come in handy for secondary packaging. With this tool, the system’s database is populated with items based on their patterns; when an item shows up in an image, the system recognizes it. The tool also works well for robotic assembly of variety or multi-packs, where the vision system can recognize the correct item (even without a barcode) and make sure that, for example, the robotic system loads two each of three products into the variety pack.
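
To illustrate the general idea of such a pattern library (a generic sketch, not Datalogic’s actual tool), the snippet below matches an incoming camera frame against stored reference patterns using OpenCV template matching; the file names, product names and score threshold are placeholders.

```python
import cv2

# Hypothetical pattern library: one grayscale reference image per product
# (file and product names are placeholders).
pattern_library = {
    "chicken_nuggets": cv2.imread("patterns/chicken_nuggets.png", cv2.IMREAD_GRAYSCALE),
    "egg_rolls": cv2.imread("patterns/egg_rolls.png", cv2.IMREAD_GRAYSCALE),
    "onion_rings": cv2.imread("patterns/onion_rings.png", cv2.IMREAD_GRAYSCALE),
}

def identify_item(frame_gray, min_score=0.8):
    """Return the best-matching product name, or None if no pattern scores above min_score."""
    best_name, best_score = None, min_score
    for name, pattern in pattern_library.items():
        # Normalized cross-correlation: 1.0 is a perfect match, values near 0 are no match.
        result = cv2.matchTemplate(frame_gray, pattern, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder camera image
print(identify_item(frame))
```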

Though OAL makes inspection systems for label and date codes in packaging applications, the company has also created systems that can be used to identify product (such as distinguishing between red and white onions) within the final packaging rather than at the processing stage, according to Harry Norman, owner of OAL. “This would be an example of human oversight where it is an obvious error; however, boredom, tiredness, distraction can play a role and allow these sorts of mistakes to slip through the net to reach the end consumer,” Norman says. “Our vision system can prevent this happening, helping to protect the brand.”

While the above is a sorting application at the secondary packaging level, digital sorting in the processing stages can be configured with cameras, lasers and/or hyperspectral sensors to detect foreign material (FM) and defective products based on color, size, shape, structural and chemometric properties and/or internal conditions, says Marco Azzaretti, Key Technology advanced inspection systems product manager.

Belt-fed sorters are ideal for inspecting wet and frozen potato strips and specialty potato products as well as fresh and frozen fruits and vegetables, leafy greens, potato chips and other snack foods, confections, seafood and more, says Azzaretti. Chute-fed sorters are typically applied to inspect nuts, dried fruits, IQF products and other free-flowing and bulk particulate foods. Depending on the processor’s objectives, sorters can be found in the receiving area of the plant, where they remove FM and defects from incoming product prior to processing, and at various points along the processing line up to immediately prior to packaging, where they inspect final product quality. 

In many cases, sorting systems based on RGB colors work fine, but some applications call for different lighting. For example, Bühler uses InGaAs camera systems with short-wave infrared (IR) to detect FM whether or not there is a color difference at visible wavelengths, says Ben Deefholts, senior research engineer within Bühler’s Business Area Digital Technologies. The company makes sorting machines for mostly raw food products, such as seeds, nuts, grains and coffee, in their pre-packaging stages.


General advances in vision in the last five years

Regarding current technology, sensor performance and CPU processing speeds have increased significantly in the last five years, according to Steve Sollman, senior consultant at Matrix Technologies, Industrial Systems Division, a certified member of the Control System Integrators Association (CSIA). In the tools area, pattern-matching and OCR tools have continued to improve in both robustness and speed.

Most vision systems today offer a plethora of software tools. According to Allen Cius, Omron vision product manager, there are more than 100 off-the-shelf vision tools in Omron’s FH system to help accomplish almost any measurement or detection requirement so that no special or custom programming is needed. Most application challenges can be solved using standard vision tools, adds Cius.

Not only is software getting better, but hardware has seen many advances. “Thanks to consumer mobile products, we have been able to get more processing power into smart cameras,” says Datalogic’s Weber. More processing power means higher frame rates and improved throughput. Weber adds that with multicore processors, it’s now possible to have multiple cameras (also known simply as “sensors”) or multiple points of inspection going back to a single multicore vision processor. With the higher processing power, users can run higher-resolution cameras, allowing them to find smaller defects or see multiple products in one image, says Weber.

One method to connect high-speed cameras to processors uses open-systems, high-speed Ethernet dedicated to cameras. GigE Vision is a global camera interface standard, widely adopted because of its common protocol, digitization capabilities and reasonable cost, says Neil Chen, senior manager, Advantech/IIoT. The interface makes the connection between sensor, processor and algorithm efficient and smooth.

“We have an open systems approach, meaning we can utilize the latest processors to go faster or deal with complex tool configurations,” says Robert Conrad, Mettler Toledo Product Inspection regional sales manager. Mettler’s CIVCore software allows easy import and simplification of algorithms to capitalize on the best-of-breed tools available at any given time. The software has bi-directional data flow for easy connection to higher-level systems, enabling product selection, downloading of codes, and export of vision results and OEE statistics via the PackML standard (PackTags).

Some of the more important advances are made not just on the machines, but in the way application teams analyze products and determine end-user needs to optimize the camera to provide the highest sorting performance, says Deefholts. “You might hear a lot about RGB cameras, but the really interesting sorts use just two wavelengths. These two wavelengths have been very carefully selected from the range between 400 and 1000 nm, which show the maximum contrast between the good and bad product. This is one of the big differences between a commodity color sorter and our specialized optical sorters. We put the cameras together using custom designed prisms and filters to suit the customer’s application. In addition, we rely on our precision algorithms and use high-speed valves to ensure that once we have identified a defect, we remove it with surgical precision.”

In terms of improving detection accuracy, Key Technology developed a proprietary system called Pixel Fusion, according to Azzaretti. This detection technology combines pixel-level input from multiple cameras and laser sensors to differentiate FM and defects from good product more clearly. “With Pixel Fusion, a sorter consistently removes the most difficult-to-detect FM and defects without false rejects to improve quality while maximizing yield,” says Azzaretti. The system can also identify specific FM types and alert operators if a quality problem occurs so corrective action can be taken.


Sensor improvements/optics

Because image sensors are manufactured in ways similar to microprocessors, you might say that Moore’s law also applies to these sensors, and therefore, they enjoy many analogous benefits: more pixels, higher resolution, faster response time, more sensitivity to low light, etc. with each new generation. 

Just as important as the sensor is what happens with the optics around it. “From my perspective, the spectra is very important,” says Datalogic’s Weber. “It really helps in creating the contrast that you need in an image to ‘see’ the object or defect.” As sensors improve in seeing in different bands or using polarization, this helps to create the desired image before processing it. As sensors get better at producing contrast for the specific application for which they’re designed, then the processing algorithms can analyze the image, adds Weber.

The choice of spectra is always important, says Bühler’s Deefholts, who says the company uses a range of lighting from UV to short-wave IR. “UV lighting is used, for example, when detecting aflatoxin in maize. When maize is illuminated with UV lighting, the good grain glows blue, and the contaminated grains glow green. IR lighting and InGaAs cameras are used for many FM sorts, including detection of shells from nut meats and sorting of frozen vegetables, including veg mixes, to remove any type of FM.”
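
As a rough illustration of the UV example Deefholts describes (good maize fluorescing blue, contaminated grain green), the sketch below thresholds an image by hue and compares the two glows; the hue ranges and file name are illustrative assumptions, not calibrated values from Bühler.

```python
import cv2

# Illustrative hue thresholds (OpenCV hue runs 0-179); these are placeholders, and
# "uv_frame.png" stands in for an image captured under UV illumination.
frame = cv2.imread("uv_frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

green_glow = cv2.inRange(hsv, (40, 60, 60), (85, 255, 255))    # suspect (contaminated) glow
blue_glow = cv2.inRange(hsv, (95, 60, 60), (135, 255, 255))    # good-grain glow

# A rising share of green-glowing pixels would flag product for rejection or review.
ratio = cv2.countNonZero(green_glow) / max(cv2.countNonZero(blue_glow), 1)
print(f"green/blue glow ratio: {ratio:.3f}")
```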

“Selection of the optimal spectral regions for inspection is crucial when configuring the ideal detection method for a particular application,” says Key’s Azzaretti. The sorter manufacturer might use a spectrophotometer on the customer’s products, defects and FM to see how each of these objects responds to different wavelengths for maximum discrimination and the clearest contrast between each type. Armed with this information, the manufacturer will identify the ideal wavelengths or sets of wavelengths for the application—spanning from the visible color range into the near infrared (NIR) and ultraviolet (UV) spectrum—and recommend the most appropriate technology to achieve the desired results. For a wide variety of different food inspection applications, NIR spectroscopy is relevant and will continue to be a significant practice in the future.
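
A minimal sketch of that wavelength-selection step, assuming hypothetical spectrophotometer readings: rank the candidate bands by the reflectance contrast between good product and FM and keep the strongest ones. The numbers below are illustrative only.

```python
import numpy as np

# Hypothetical spectrophotometer readings (illustrative values, not real measurements):
# mean reflectance of good product and one foreign-material type at candidate wavelengths.
wavelengths = np.array([450, 550, 650, 750, 850, 950])          # nm
good_product = np.array([0.45, 0.52, 0.55, 0.60, 0.62, 0.58])
foreign_material = np.array([0.40, 0.50, 0.35, 0.20, 0.61, 0.30])

# Rank the bands by the reflectance difference between good product and FM; the bands
# with the largest separation are the strongest candidates for cameras, lighting and filters.
contrast = np.abs(good_product - foreign_material)
for i in np.argsort(contrast)[::-1][:2]:
    print(f"{wavelengths[i]} nm: contrast {contrast[i]:.2f}")
```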


Lighting: Flash or continuous?

Any great Ansel Adams scene would be nothing without lighting, and he often waited for days to get the light just right. With vision inspection systems, we don’t have to wait to get lighting right. Nevertheless, the optics of lighting are just as critical to inspection applications. “Resolution and process speed are all predictable and a matter of calculation, but optics are unpredictable,” says JMP Solutions’ Leite.

Should you use flash or continuous? “Sometimes this question is answered based on the electrical design of the system,” says Leite. Why would you toggle a light source? “The environment may not be acceptable for continuous light if the light requirements are extremely intense,” says Leite. “Heat generation could become an issue with extreme, intense light.” 

“We generally don’t strobe, but have on occasion to solve ambient [lighting] or product issues—seeing a clear clam shell package, for example, necessitates creative lighting, and sometimes strobing improves the image quality,” says JLS Automation’s Souser.

“Flash lighting has its uses, but is typically used in budget systems where a more basic sort is required,” says Bühler’s Deefholts. “In sorting systems where the product can move between flashes, registration between colors can be adversely affected, so it [flash] does not always provide the best solution.”

In general, continuous or IR light is a simpler solution, says Leite. When dealing with reflections, lens filters such as polarizing filters help reduce their effects. Software algorithms can also help distinguish between object deformity and reflections, adds Leite.

Lighting is critical for all machine vision solutions and is another technology that advances on a regular basis, especially with the use of LEDs, says Mettler’s Conrad. LEDs are much brighter, last longer, use less energy, come in smaller packages with built-in strobe controllers and are available in a variety of wavelengths.

Continuous lighting has become much better as manufacturers keep making great improvements in LED light output intensity, packaging, and shape and size options, says Matrix’s Sollman. “It’s great to see the industry standardizing on the new ‘Triniti’ strobe light control, a plug-and-play solution for which several lighting manufacturers now offer products.”

End users setting up their own systems need to realize that with higher-resolution sensors, pixels become smaller; therefore, these sensors often require more light than their lower-resolution counterparts, says Datalogic’s Weber. “Unless they get this correct, it doesn’t matter how good the algorithm is. The biggest challenge for us is trying to educate our users on how to set up the correct lighting.”
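
A back-of-envelope illustration of Weber’s point, using assumed numbers: on a fixed-size sensor, doubling the pixel count across the image halves the pixel pitch and quarters the light-collecting area of each pixel, so lighting or exposure has to make up the difference.

```python
# Assumed numbers only: a nominal 8.8 mm-wide sensor at two horizontal resolutions.
sensor_width_mm = 8.8

for pixels_across in (1280, 2560):
    pixel_pitch_um = sensor_width_mm * 1000 / pixels_across
    pixel_area_um2 = pixel_pitch_um ** 2          # light gathered per pixel scales with area
    print(f"{pixels_across} px across: {pixel_pitch_um:.2f} um pitch, "
          f"{pixel_area_um2:.1f} um^2 per pixel")
```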

Key Technology developed a versatile new LED lighting solution to support its Pixel Fusion system, says Azzaretti. Flexible lighting strategies let the intensity, frequency and timing of the pulsed light be controlled to support each system’s detection configuration.


Image processing

With image processing software, prices have not changed that much over the last five years, but what has changed is the newer, advanced algorithms being built in, coupled with ease of use, says Datalogic’s Weber. “The algorithms are getting better at setting themselves up. It takes fewer configurations by the users to set up the algorithms.”

However, there are still some issues, according to JMP Solutions’ Leite. “OCR has not been as successful as originally anticipated due to its inability to ensure consistent quality in the characters that are typically printed on a label.” The software runs well and is quite powerful, but false negatives become common when character quality diminishes.

Advantech’s Chen, however, thinks that instead of rule-based image processing algorithms, deep learning features will make OCR more accurate and adaptive.

Pattern matching software is one advancement in the industry that has been very successful, adds Leite. Advancements in blob detection, which include filtering methods such as dilation and erosion, have been deployed but add processing time to the inspections, says Leite.
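
A minimal sketch of that kind of blob inspection, using OpenCV: threshold the image, clean it with erosion and dilation, then measure the surviving blobs with connected-component analysis. The file name, kernel size and minimum blob area are placeholder values, not settings from any of the systems discussed.

```python
import cv2
import numpy as np

# "inspection_frame.png", the kernel size and the minimum blob area are placeholders.
image = cv2.imread("inspection_frame.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((5, 5), np.uint8)
cleaned = cv2.erode(binary, kernel)     # erosion removes small specks of noise
cleaned = cv2.dilate(cleaned, kernel)   # dilation restores the size of the surviving blobs

count, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
for i in range(1, count):               # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    if area > 100:                      # ignore blobs below a minimum pixel area
        print(f"blob {i}: area {area} px, centroid {centroids[i]}")
```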

“Color sensing has, to this day, been very difficult to inspect,” adds Leite. Gross differentiation is in place, such as distinguishing green from red, but inspecting a color shade on a product is extremely difficult: the system measures not one level but multiple levels of the spectrum, usually defined as RGB values, which combine to make up the colors the human eye can see. Lighting becomes extremely critical, as the light itself must be consistent in color or it may produce false outcomes, adds Leite.
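
One common way to make shade comparisons more repeatable (a general technique, not something Leite attributes to a specific product) is to average a region’s color and compare it against a golden sample in CIELAB space, where a simple distance tracks perceived color difference better than raw RGB values. The sketch below assumes placeholder image files and tolerance.

```python
import cv2
import numpy as np

def mean_lab(bgr_image):
    """Average color of an image region, expressed in OpenCV's 8-bit CIELAB encoding."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    return lab.reshape(-1, 3).mean(axis=0).astype(float)

reference = cv2.imread("golden_sample.png")    # placeholder: image of the approved shade
sample = cv2.imread("current_product.png")     # placeholder: product under inspection

# Euclidean distance in Lab (a simple delta-E style measure); 5.0 is a placeholder tolerance.
delta_e = np.linalg.norm(mean_lab(reference) - mean_lab(sample))
print("PASS" if delta_e < 5.0 else "FAIL", f"(color distance {delta_e:.1f})")
```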

“We are still looking for that big advancement in color vision that will allow us to measure shades of color accurately and repeatedly on the production floor,” says Leite. While costs have come down on software and cameras, Leite notes that vision system manufacturers used to produce one camera with a couple of resolution choices; today they produce many camera models, stripped down to perform just one or a few tasks such as barcode reading or pattern matching.

“High-resolution, very low-cost camera technology makes verification systems accessible to food manufacturers big and small,” says OAL’s Norman. Rather than using expensive OCR vision cameras, lower-cost cameras can help customers track their label and date codes to keep mistakes from entering the marketplace. Norman says machine learning algorithms and AI are important because they drive down the cost of ownership. As manufacturers change packaging and labels, machine vision systems must be able to “learn” new formats quickly and easily to keep up with demand and innovation while protecting the company’s brand and avoiding product recalls.


What about AI?

Can artificial intelligence (AI) improve vision inspection systems? It depends how you define it and how you use it. “AI covers a very broad spectrum,” says Bühler’s Deefholts. “We have used machine learning in our systems in one form or another since microprocessors were first introduced some 40 years ago. The early systems used the average color of the product as a reference for sorting thresholds; if the product color changed, or more likely with fluorescent lighting, if the lighting changed, the sorter would automatically change the reference for sorting threshold.” The SORTEX S sorting system for rice has sophisticated adjustment mechanisms that allow unskilled operators to work machines with very little intervention, thanks to machine learning, adds Deefholts. 
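
The early self-adjustment Deefholts describes can be pictured as a running-average threshold: the accept/reject reference slowly follows the measured product color, so gradual drift in product or lighting shifts the reference instead of causing false rejects. The sketch below is a generic illustration with made-up numbers, not Bühler’s algorithm.

```python
class AdaptiveThreshold:
    """Accept/reject threshold that tracks the running average of accepted product."""

    def __init__(self, initial_reference, tolerance=12.0, rate=0.01):
        self.reference = initial_reference   # running estimate of "normal" product gray level
        self.tolerance = tolerance           # placeholder tolerance band
        self.rate = rate                     # how quickly the reference follows the product

    def inspect(self, measured_value):
        reject = abs(measured_value - self.reference) > self.tolerance
        if not reject:                       # only accepted product updates the reference
            self.reference += self.rate * (measured_value - self.reference)
        return reject

sorter = AdaptiveThreshold(initial_reference=128.0)
for value in (130, 131, 129, 160, 132):      # illustrative gray-level readings
    print(value, "reject" if sorter.inspect(value) else "accept")
```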

“I believe that AI will find its place in industry, because it can account for normal production variants that our existing dedicated systems can’t handle without intervention,” says Matrix’s Sollman. “I believe AI can be used for trending problems; that is not easily done in today’s sensor programs.”

“‘Deep learning’ is the latest subset of ‘machine learning,’ which is a subset of artificial intelligence,” says Mettler’s Conrad. It requires neural networks with multiple layers and large quantities of training samples. It will be used to identify the “normal” variations of product and the point at which a product becomes “bad.” It will also assist in dealing with normal process variations during manufacturing. The technology has a lot of promise and is the future of machine vision, but it is still early in its development and adoption, adds Conrad.
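
As a rough sketch of the kind of multi-layer network Conrad describes, the snippet below defines a small convolutional classifier (in PyTorch) that could be trained on labeled product images to separate normal variation from defective product; the architecture, image size and dummy data are illustrative assumptions, not a production design.

```python
import torch
import torch.nn as nn

# Small convolutional classifier for 64x64 RGB product images; two output classes
# (acceptable vs. defective). Layer sizes are illustrative only.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

# One illustrative training step on a dummy batch standing in for labeled product images.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
print(f"training loss: {loss.item():.3f}")
```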

“Combining vision with AI makes our system more flexible and adaptable to changing conditions,” says OAL’s Norman. “For instance, AI can handle changing lighting conditions in factories and variations in print quality from inkjet printers.”

“I believe deep learning and artificial intelligence technology is poised to enable sorters to make even better accept/reject decisions and to further improve ease of use by delivering more advanced self-adjustment capabilities and streamlining system setup,” says Key’s Azzaretti. With the application of these technologies, digital sorters will be able to accommodate a broader range of normal changes in the product and the production environment to maximize the sorter’s performance over time and fully eliminate the need for operator supervision during normal production. Deep learning and AI technology will also simplify the generation of strong image processing algorithms, making sorting system setup faster and easier.


For more information:

Advantech, www.advantech.com
Bühler, www.buhlergroup.com
Datalogic, www.datalogic.com
JLS Automation, www.jlsautomation.com
JMP Solutions, www.jmpsolutions.com
Key Technology, www.key.net
Matrix Technologies, https://tinyurl.com/y9aql6hz
Mettler Toledo Product Inspection, www.mt.com/pi
OAL, https://connected.oalgroup.com
Omron, www.omron247.com