Besides taking on dangerous jobs, vision-guided robots can also help prevent food contamination.


The new delta robot platform LDx from Sigpack Systems is suitable for use in the food/confectionery industries. The platform is made up of compact cells with integrated delta robots, which can be adapted to suit a variety of applications. Source: Sigpack Systems.


Some robots may be out of work in the auto industry, but there are plenty of jobs in the food and beverage industry. And though robots will need training to learn these jobs, processors might find these untiring workers come a little cheaper than they thought, thanks to relatively inexpensive high-performance vision systems, controls and servos.

According to Steven West, development manager for vision-guided robotics at ABB Robotics NA, the essential applications for vision-guided robotics in the food industry are the three Ps: picking, packaging and palletizing. These three cover most applications: for example, tray-loading cookies, feeding thermoform machines or tray sealers with products like meats and frozen foods, and loading cartons onto skids or pallets.

According to Jon Hocker, marketing and business development manager at JBT FoodTech, another “P” could be added to the list: portioning. From the outside, JBT FoodTech’s portioner doesn’t look like a robot, but inside, it acts a lot like one. The machine uses 3-D vision to measure the length, width, height, contour and fat-versus-lean content of boneless meat pieces traveling on a conveyor, then mathematically determines how to get the greatest number of portions of predetermined size, shape and weight out of each piece. Finally, it uses water-jet technology to cut the meat into the calculated portions while minimizing waste.
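The yield calculation at the heart of such a portioner can be illustrated with a much-simplified one-dimensional model. The sketch below is hypothetical: a real portioner optimizes in three dimensions against weight, shape and fat content, while here a slab is reduced to a length and each portion spec to a required length and a value. A small dynamic program then chooses the cut plan that maximizes total value.

```python
# Simplified 1-D portioning model (illustrative only; a real portioner
# works with 3-D shape, weight and fat/lean data).

def best_cut_plan(slab_len, portions):
    """Maximize total value of portions cut from a slab of given length.

    portions: list of (name, length, value) tuples.
    Returns (total_value, list_of_portion_names).
    """
    # best[l] = (value, plan) achievable from a slab of length l
    best = [(0, [])] * (slab_len + 1)
    for l in range(1, slab_len + 1):
        best[l] = best[l - 1]            # leaving a unit as trim is allowed
        for name, length, value in portions:
            if length <= l:
                v, plan = best[l - length]
                if v + value > best[l][0]:
                    best[l] = (v + value, plan + [name])
    return best[slab_len]

# Hypothetical portion specs: (name, length units, relative value)
specs = [("8-oz fillet", 5, 8), ("6-oz fillet", 4, 6), ("nugget", 2, 2)]
print(best_cut_plan(17, specs))
# (26, ['nugget', '8-oz fillet', '8-oz fillet', '8-oz fillet'])
```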

When “portioning” or slicing larger unstructured slabs, a 3-D vision system can serve as a robot’s eyes as it cuts a side of pork or a rack of ribs. According to Steve Prehn, material handling/iRVision, Fanuc Robotics, a 3-D system not only provides x, y and z information, it also adds three more degrees of freedom (roll, pitch and yaw), and that information can allow a robot armed with a knife to slice the fat off an irregularly shaped slab of meat. Prehn adds that since a meat slab’s dimensions are not known or predefined (as opposed to a shape such as a block of cheese or a candy bar), the depth of the slab can be determined by using a second camera or laser sensors.
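In code, those extra degrees of freedom amount to estimating a surface orientation, not just a position. The following sketch, with made-up numbers and one common angle convention, fits a plane to three depth samples on a slab and converts the surface normal into the roll and pitch a robot would need to hold a knife flat against the surface.

```python
import numpy as np

# Three 3-D points sampled on the slab surface by a laser or depth
# sensor (hypothetical values, in millimeters).
p1 = np.array([100.0, 200.0, 42.0])
p2 = np.array([180.0, 200.0, 47.0])
p3 = np.array([100.0, 260.0, 39.0])

# Surface normal from the cross product of two in-plane vectors.
normal = np.cross(p2 - p1, p3 - p1)
normal /= np.linalg.norm(normal)
if normal[2] < 0:                # make the normal point up, toward the camera
    normal = -normal

# Tilt of the surface about the x and y axes; a perfectly flat slab
# gives roll = pitch = 0 (conventions vary by robot maker).
roll = np.degrees(np.arctan2(normal[1], normal[2]))
pitch = np.degrees(np.arctan2(-normal[0], normal[2]))

centroid = (p1 + p2 + p3) / 3.0
print(f"x,y,z = {centroid}, roll = {roll:.1f} deg, pitch = {pitch:.1f} deg")
```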

The Westfleisch meat center in Coesfeld, Germany, tested six-axis, jointed-arm robots to determine whether they could handle unstructured meat and carcass cutting. According to Karsten Turek, Coesfeld operations manager, two Kuka KR 30 and two KR 60 robots were integrated into the production line. A 3-D laser measuring system, software and conveyors were teamed with the robots to provide precise data about the carcasses and their conveyance to the cutting operation. The first robot in the line is equipped with a double shackle and removes the animals’ front extremities at a defined position, while the second removes additional parts not suitable for consumption. Following a second 3-D laser measurement, the third robot breaks bone with a cleaver-like cutting tool and scores the abdominal wall. The fourth robot pulls the abdominal wall of the carcass forward with a pin and opens the chest and abdominal cavity with a circular cutter. A guide mechanism ensures the sternum is always cut completely and precisely through the middle.

“The robots automatically disinfect their tools in water heated to 178°F after every process step. This ensures a germ-free environment throughout the entire process chain, guaranteeing long-term freshness,” says Turek. “Additionally, improved quality also translates to improved cost-effectiveness. We can achieve this, for example, with the cutter, which increases part yield by 5% over manual cutting.”

Using 3-D vision, the Fanuc M-710i robot cuts and trims ribs, or any other large unstructured slab of meat. Source: Fanuc Robotics.

Robotic eyes

In their infancy, vision systems consisted of three physical components: the camera or cameras, the lighting system and a controller housed in a metal box the size of a suitcase. Not only were these systems big, the price tag was also huge, with the controller alone costing anywhere from $10,000 to $50,000. While these vision systems could serve as the eyes of a robot, they often were used as pass-fail inspection devices, with their software determining whether a product met dimensional or surface specifications, which is what today’s vision sensors do.

Today, says Mark Lampert, Banner Engineering’s business development manager, a $1,500 vision sensor (camera plus electronics) typically includes a built-in touch screen and has a more powerful pattern-matching algorithm than the $20,000 vision system of 10-15 years ago. These sensors often provide discrete I/O for pass-fail inspections and may include EtherNet/IP, Modbus TCP, Profinet and other industrial protocols for video, OCR and coordinate outputs.

“People want to use vision technology today, but more in the sense where they would have used a photoelectric sensor a few years ago,” says Jim Anderson, SICK Inc. machine vision product manager. The lines blur, however, in comparing vision sensors with vision systems because the capabilities for both can exist in the same box with the camera and often are defined in software. Anderson recommends that users not overbuy functions they don’t need. That is, when pass-fail is all that’s necessary, coordinate data and Ethernet outputs are not required. A simple discrete output usually will do.

Anderson suggests combining pass-fail inspection with coordinate information for a robot in the same box. A vision sensor/system, or smart camera, could then use a discrete output for pass-fail, diverting a defective product rather than having the robot pick it up and send it on to processing or packaging. Smart cameras can combine imaging, lighting and analysis in a single housing, with the tools both to inspect and to provide 3-D coordinate information to a robot.
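At the integration level, handing those results to a robot can be as simple as reading a delimited string over Ethernet. The sketch below assumes a hypothetical smart camera that answers a trigger command with a pass/fail flag followed by x, y and angle; the command name, port and message format are invented for illustration, since every vendor defines its own protocol.

```python
import socket

# Hypothetical protocol: send "TRIGGER\r\n", receive a reply such as
# "PASS,132.4,87.1,12.5\r\n" (result, x mm, y mm, angle deg).
CAMERA_ADDR = ("192.168.0.50", 3000)     # assumed IP address and port

with socket.create_connection(CAMERA_ADDR, timeout=2.0) as cam:
    cam.sendall(b"TRIGGER\r\n")
    reply = cam.recv(256).decode("ascii").strip()

result, x, y, angle = reply.split(",")
if result == "PASS":
    # Hand the coordinates to the robot's pick queue.
    print(f"pick at x={float(x)}, y={float(y)}, angle={float(angle)}")
else:
    # Defect: a discrete output diverts the product instead.
    print("reject: diverting product")
```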

While some robotics companies have integrated their own vision systems to make setup and configuration a little easier, traditional vision-only companies such as Cognex offer both vision sensors and vision systems that plug into popular robots. John Lewis, Cognex market development manager, says that for many years his company supplied the eyes to a dozen or more robot brands before those brands began developing their own vision systems. To make vision systems nearly plug-and-play with each robotic system, it was necessary to develop software drivers and protocols for each, much as a scanner company develops driver software for various computer platforms. And because a vision company spends its time focused on vision systems, not robotics, it can develop more sophisticated vision application tools.

Robotics manufacturers and vision suppliers often team up to fine-tune software for robots. Matt Lorig, Mitsubishi Robotics product manager, describes his company’s vision software, MELFA-Vision, as the result of a Cognex-Mitsubishi collaboration. The alliance produced a system that makes vision/robotics integration easy for end users to set up. Lorig says large food companies with in-house engineering staffs often do the integration themselves, while smaller processors typically turn to system integrators with expertise in both the food industry and robotics.

ABB’s delta-class robot, the FlexPicker, handles four different frozen pizza products in triangular, circular and oval shapes at Panidea, an Italian pizza producer. The robot has a hygienic design with no painted surfaces and is rated IP67 for washdowns. Source: ABB Robotics.

Quick and determined

While it may seem that a vision system would have to deliver constant, streaming video to a robot, this is usually not the case. Speed of vision transfer is important, but “the vision-guided robot system needs the image capture to be deterministic,” says Jon Otto, Bosch Packaging Technology sales engineering/product manager. “The transfer of the image data is less critical,” he adds.

Lampert says that even though robots move fast, the vision system doesn’t have to keep up in real time. For example, once coordinates have been sent to a palletizer to pick up a box, the next picture can be taken and its coordinate information processed while the robot is still moving, delivering the previously scanned package to the pallet.
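That pipelining maps naturally onto a producer-consumer pattern: the vision side captures and processes the next image while the robot executes the previous move. Here is a minimal sketch with stand-in capture and move delays; the timing numbers are arbitrary.

```python
import queue
import threading
import time

coords = queue.Queue(maxsize=1)           # one image "in flight" at a time

def vision_task():
    """Capture and process the next image while the robot is moving."""
    while True:
        time.sleep(0.05)                  # stand-in for capture + processing
        coords.put((132.4, 87.1, 12.5))   # x, y, angle of the next package

def robot_task():
    while True:
        x, y, angle = coords.get()        # coordinates are already waiting
        time.sleep(0.3)                   # stand-in for the pick-and-place move
        print(f"placed package picked at ({x}, {y})")

threading.Thread(target=vision_task, daemon=True).start()
threading.Thread(target=robot_task, daemon=True).start()
time.sleep(2)                             # let the pipeline run briefly
```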

New vision technologies and algorithms that deliver data to the robot in real time are driving suppliers to expand into new and faster applications in food processing, says ABB’s West. For readers who may have seen ABB’s FlexPicker (architecturally a delta-class robot) in action, the recently released second-generation machine offers 30-60% faster throughput and 50% higher payload than the first generation.

“The faster the robot, the faster the vision system must be,” says Roland Czuday, Sigpack product manager for delta robot systems. For very fast delta-class pick-and-place applications (say, 150 pick-and-place cycles per minute), the time from taking the picture to delivering the coordinates of a good product to the robot should be only a few dozen milliseconds. The arithmetic is unforgiving: at 150 cycles per minute, each complete pick-and-place cycle lasts just 400 milliseconds, so even 40 milliseconds of vision latency consumes a tenth of the cycle.

Country Fresh’s dairy plant in Flint, MI, installed a Kuka KR180 PA robot in its palletizing operations. The split-head end-of-arm tooling from Dyco allows the robot to pick multiple products at once and place products singly. Source: Kuka Robotics Corp.

Give a robot a job

“Typically, a robot improves workforce productivity and quality,” says West. “A robot normally pays back in 18 months or less in most economies, even where labor rates are low. Most global companies see robots as a way to solve ergonomic, safety and quality problems, and they factor this together with the labor savings robots provide.” The business case for robots in food industry applications is excellent, he says.

Another benefit of letting a robot do the work may not be so obvious. Lewis remembers why IBM bought an early Cognex vision system for a robotics cell in a wafer fab clean room: to keep humans out so they couldn’t contaminate the process. Keeping the viruses and bacteria humans carry away from food is arguably just as good a reason.

Robots usually are far better than humans at performing monotonous, repetitive tasks, but some jobs can be problematic for robots, says Czuday. Difficulties arise from how the product is presented and from the restrictions imposed by gripping tools. Robots may not be appropriate where many different handling tasks must be performed in short cycles, or where frequent format changes demand several different gripping tools.

Robots, however, can provide other, less obvious benefits. Processors often don’t realize how much time is lost to changeovers in non-robotic systems, says Dick Motley, account manager with Fanuc Robotics’ packaging integration network. In addition, non-robotic automation may have several hundred points of failure among its sensors, motors, belts, chains and valves. In many cases, these systems can be replaced with a robot for which a mean time between failures (MTBF) of more than 90,000 hours is realistic.

Keeping it under control

To get the performance robots need, controllers have largely been dedicated and often proprietary. Gradually, though, some suppliers are seeing the benefits of combining disciplines (motion, PLC, computer numerical control, etc.) on a single platform and using more generic PC-based technology. Czuday says robots can be operated with PC-based hardware, which means lower costs. In addition, today’s powerful PCs allow one controller to run many robots.

Lorig says Mitsubishi’s robot controller is a dedicated system, not a PLC. But he sees no reason why robotic control couldn’t be combined with PLC functionality and motion in a multi-disciplinary platform such as the company’s iQ Automation controller, which fits into the relatively new category of programmable automation controllers (PACs). By adding motion, programmable logic and robot control to the PAC’s feature set, the controller eliminates much of the engineering work needed to integrate disparate control disciplines within the same system. All the CPUs in the rack share the same I/O bus and network interfaces.

Rami Al-Ashqar, Bosch Rexroth electric drives and controls product manager, suggests opening up the robotic controller. Instead of a dedicated robot control in a black box, where nobody knows what’s inside, why not use an everyday PAC to run the robot, the conveyor belt and the I/O as well? For the end-user, he says, this open system is much easier to maintain and retrofit.

Robots on the move

According to a recent PMMI study, Robotics: Usage and Trends in Packaging Applications, 22% of the food and food-preparation industries and 30% of the beverage industry use robotics in packaging. Robotics usage continues to grow, with palletizing and case/carton packing the most widely used applications for packaging robots. PMMI members named the food/food-prep industry as the primary market for robotics. More than 90% of packagers say robots are successful at performing their jobs, and nearly 80% feel robots are important to their packaging operations. The study expects vision systems, faster robots and greater flexibility to be the trends of the near future.

Improvements that make robotic applications easier to set up and increase performance are in the works as suppliers attempt to address more unstructured applications, such as slaughtering and processing chicken, hogs and beef. Prehn describes a camera-based system in which a single camera can serve several robots on a line. In addition, a major step for Fanuc has been automating the calibration procedure: a calibration grid is placed under the camera, the robot moves around its workspace, and at the end of the process the camera is calibrated with respect to the robot’s frame. Prehn suggests that even novice users should be able to get the robot moving to the right spot.
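Fanuc’s calibration routine is proprietary, but the underlying problem, recovering the camera-to-robot transform from a set of robot poses and matching grid observations, is classic hand-eye calibration. The sketch below illustrates the idea with OpenCV’s calibrateHandEye on synthetic, noise-free poses; a real system would obtain the grid poses from images of the calibration target.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_pose():
    """Random rotation (via a Rodrigues vector) and translation."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    return R, rng.uniform(-0.2, 0.2, (3, 1))

R_cg, t_cg = rand_pose()   # ground-truth cam-to-gripper transform to recover
R_tb, t_tb = rand_pose()   # fixed grid (target) pose in the robot base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):                       # robot visits ten stations
    Rg, tg = rand_pose()                  # gripper pose in the base frame
    # target2cam = inv(cam2gripper) @ inv(gripper2base) @ target2base
    R = R_cg.T @ Rg.T @ R_tb
    t = R_cg.T @ (Rg.T @ (t_tb - tg) - t_cg)
    R_g2b.append(Rg); t_g2b.append(tg)
    R_t2c.append(R);  t_t2c.append(t)

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print(np.allclose(R_est, R_cg, atol=1e-4), np.allclose(t_est, t_cg, atol=1e-4))
```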

Other advances, says Motley, include edge-based geometric pattern-matching algorithms and electronic shutters in cameras. The first is very forgiving of gray-level changes in lighting, which simplifies setup. The second lets the camera adjust automatically to lighting changes; Motley says demonstrations have shown the system works in reduced-light environments. He also sees 3-D algorithms becoming important in pallet stacking, creating stable, conveyable loads at maximum density while taking delivery destinations and configurations into account.
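A toy way to see why edge-based matching tolerates gray-level changes is to match on edge maps instead of raw intensities. The OpenCV sketch below is a simplification (commercial systems match geometric features rather than templates, and the file names and Canny thresholds are assumptions), but it shows the idea: the part is still found after the image is darkened, because edges largely survive the brightness change.

```python
import cv2
import numpy as np

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)     # assumed test images
template = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

dark = (scene * 0.6).astype(np.uint8)    # simulate reduced lighting

def find_part(img, tmpl):
    """Template-match on Canny edge maps rather than raw gray levels."""
    scores = cv2.matchTemplate(cv2.Canny(img, 40, 120),
                               cv2.Canny(tmpl, 40, 120),
                               cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(scores)
    return loc, score

print(find_part(scene, template))        # location and confidence
print(find_part(dark, template))         # similar location despite darkening
```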

Not long ago, the patents on the delta-style robot expired, and many suppliers have since released first-generation products. According to Otto, the future for delta-style robots looks strong, especially in the food industry, with large growth potential in bakery. Traditionally, the food market has relied on hand-pack stations requiring people to load cartons; with a vision-guided delta robotic system, handling randomly oriented product becomes a very efficient means of loading cartons. “With the patents now expired, the competition becomes significantly tighter with new delta-style arms entering the market. The key factor to continued success in the delta-style robotic market is application knowledge: the ability to apply delta-style robot technology in the correct application,” concludes Otto.

ABB, which held rights to the original patent, has not stood still with its second generation. In addition to the improvements already noted, the software running the robot can control up to eight robots and eight cameras working together in one application or in multiple, independent processes.

Key challenges in many food applications include withstanding washdowns and a crevice-free design that avoids harboring bacteria. According to Motley, robots have survived nasty environments like foundries, but the chemical attack and temperature swings of washdowns present challenges unique to the food industry that must be solved to eliminate contamination issues. While the robot must be protected from the environment, the food must also be protected from the robot, meaning the robot must be easy to clean, must not harbor bacteria and must use food-grade lubricants.

For more information:
Jim Anderson, SICK Inc., 952-941-6780, jim.anderson@sick.com
Jon Hocker, JBT FoodTech, 419-626-0304, jon.hocker@jbtc.com
Jon Otto, Bosch Packaging Technology, 715-243-2497, jon.otto@bosch.com
Steve West, ABB Robotics, 248-391-9000, steven.w.west@us.abb.com
Mark Lampert, Banner Engineering, 763-513-3067, mlampert@bannerengineering.com
Matt Lorig, Mitsubishi Electric Automation, Inc., 847-478-2667, matt.lorig@meau.com
Dick Motley, Fanuc Robotics, 248-377-7000, marketing@fanucrobotics.com
Steve Prehn, Fanuc Robotics, 248-377-7000, marketing@fanucrobotics.com
Roland Czuday, Sigpack, +41 52 674 6654, roland.czuday@bosch.com
Rami Al-Ashqar, Bosch Rexroth, 847-645-3746, rami.al-ashqar@boschrexroth-us.com



The top 10 robotics application mistakes

1:     Underestimating payload and inertia requirements: The most common cause is failure to include the weight of the end-of-arm tool in the payload calculation; the second most common is underestimating or completely ignoring the inertia forces generated by off-center payloads (see the worked example after this list).

2:     Trying to do too much with the robot: Sometimes the awesome capability and flexibility of a robot can tempt a designer to overtask the robot and make the work cell too complex.

3:     Underestimating cable management issues: Optimizing cable routing to end-of-arm tooling or peripheral devices is crucial for unrestricted movement of the robot mechanism.

4:     LOSTPED, or failure to consider all application elements before choosing a robotics system: Carefully consider Load, Orientation, Speed, Travel, Precision, Environment and Duty cycle.

5:     Misunderstanding accuracy versus repeatability: Repeatability is returning to the same taught point; accuracy is hitting a commanded position in space. An accurate mechanism can be repeatable, but a repeatable mechanism may or may not be accurate.

6:     Choosing a robotics system based solely on the control system: This is ironic because once the robot is deployed, uptime depends mostly on the robustness of the mechanism, not the controller.

7:     Failure to accept robotics technology: If the end-user fails to embrace robotics technology, the project is doomed.

8:     Overlooking the need for crucial robot options or peripheral devices: Teach pendants, communication cables and even special software options are all examples of items that may be needed but forgotten at the time of the initial order.

9:     Under- or overestimating the capabilities of a robot controller: Underestimating the controller’s capabilities can lead to duplicated systems and unnecessary costs; duplicating safety circuitry is especially common.

10:  Failure to consider using robotics technology: The size of the initial investment, lack of familiarity with robot technology and past failed attempts are all reasons people sometimes shy away from robotics technology.

Source: Bosch Rexroth
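To make mistake No. 1 concrete, here is a back-of-the-envelope check with assumed numbers. It treats the load as a point mass, which is a simplification; actual sizing should use the robot maker’s tools.

```python
# Payload/inertia sanity check for mistake No. 1 (point-mass model).
product_mass = 4.0    # kg, the item being picked (assumed)
tool_mass = 3.5       # kg, end-of-arm tool, the part most often forgotten
offset = 0.20         # m, load center offset from the wrist flange
g = 9.81              # m/s^2

total_mass = product_mass + tool_mass
print(f"Effective payload: {total_mass} kg, not {product_mass} kg")

# Point-mass inertia about the wrist axis: I = m * r^2
print(f"Moment of inertia at wrist: {total_mass * offset**2:.2f} kg*m^2")

# Static moment the wrist must hold against gravity: M = m * g * r
print(f"Static moment: {total_mass * g * offset:.1f} N*m")
# Both values must stay inside the wrist's rated moment and inertia limits.
```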