
Machine Vision: An Industry 4.0 Driver

EP Editorial Staff | September 1, 2022

Serving as a robot's "eyes" is one of many applications in which machine-vision technology can play a significant role in production accuracy and reliability.

MV systems are rapidly proving to be a versatile tool for increasing automated system reliability.

By Navneet Nagi, Motion Ai

In the age of readily available and inexpensive computer resources, many advances play a significant role in commerce. These include high-bandwidth data-transfer technologies and easy access to vast data storage, combined with advanced image-processing algorithms and the development of robust imaging hardware. Today's manufacturing industry leverages these advances to improve product quality and reduce waste. As shop-floor assets, contemporary machine-vision (MV) systems generate valuable, unparalleled data for beneficial insights into the production process.

What is a machine vision system and how does it add value to an industrial process?

Fast and accurate MV systems enable 100% product inspection with low false-failure rates for real-time quality control. As more industries adopt artificial intelligence to gain competitive advantage, MV systems also leverage deep learning (DL) to expand the scope of inspection types beyond what is possible using discrete, rules-based techniques. MV systems can also output data that can be fed into centralized data-analytics and visualization tools to derive rich insights related to product quality, yield maximization, and process improvement. In other applications, this data can be used to build and validate models enabling predictive maintenance, a principal trend of the modern manufacturing process.

MV systems implement multidisciplinary technologies, such as digital imaging, lighting, optics, image processing, and programmable logic controllers (PLCs), to analyze digital images of a product. This qualitative analysis is based on predefined thresholds for the inspection metrics. The acquired images are processed using computer-vision algorithms. These systems can range from smart cameras that fit in the palm of a hand to large inspection cells with multiple inspection stations, including electromechanical actuators, conveyors, and robots for part handling. Large systems perform extremely complex inspections and may use imaging-sensor technologies from across the spectrum.
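To make the threshold-based inspection idea concrete, the following is a minimal sketch of a rules-based check in Python with OpenCV; the image file name and the acceptance limit are illustrative assumptions, not values from the article:

```python
# Minimal sketch of a rules-based inspection: threshold a grayscale image
# and pass/fail the part on the fraction of dark pixels.
# "part.png" and the limits below are illustrative only.
import cv2
import numpy as np

MAX_DARK_FRACTION = 0.02          # hypothetical acceptance threshold
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Pixels darker than 60 (on a 0-255 scale) are treated as potential defects.
_, dark = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
dark_fraction = np.count_nonzero(dark) / dark.size

result = "PASS" if dark_fraction <= MAX_DARK_FRACTION else "FAIL"
print(f"dark fraction = {dark_fraction:.4f} -> {result}")
```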

Other tasks of MV systems include:

• outputting more complex part orientation data for robot guidance
• performing complex part measurements
• reading 1D/2D machine-readable codes (see the code sketch after this list).
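As an illustration of code reading, the short Python sketch below uses OpenCV's built-in QR-code detector; the file name is hypothetical, and production systems often rely on dedicated barcode-reading libraries instead:

```python
# Sketch of 2D-code reading with OpenCV's built-in QR detector.
# "label.png" is an illustrative file name.
import cv2

img = cv2.imread("label.png", cv2.IMREAD_GRAYSCALE)
detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(img)

if data:
    print("decoded:", data)
else:
    print("no QR code found")
```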

MV systems automate various industrial processes such as quality control, part handling/tracking, and process control.

As one technique for measuring quality, these systems generate data that is typically used to automate the separation and isolation of non-compliant product before it leaves the plant. The inspection data can be further analyzed in tandem with the non-compliant products to identify process deficiencies such as failing machinery, improper process-parameter setup, or foreign-material introduction. Additionally, this analysis can be essential for improving waste-reduction methods. Manual inspections are ambiguous, slow, and inaccurate, whereas MV systems typically run in near-real time, allowing process issues to be identified and corrected quickly.

MV applications

Defect detection is the most common MV application. These MV systems implement discrete or analytical image-processing algorithms for relatively simple flaws but may also use deep-learning techniques to detect highly subjective or weakly defined defects. Application examples include finding anomalies on automotive parts, missing paint/primer on oriented strand board (OSB) panels, incomplete thermal seals, and font-agnostic OCR.
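For the simple flaws handled by discrete algorithms, a defect check can be as basic as thresholding and blob filtering. The Python/OpenCV sketch below illustrates the idea; the file name and minimum defect area are assumptions, not values from the article:

```python
# Sketch of discrete defect detection: find dark blobs larger than a
# minimum area on an otherwise uniform surface. Values are illustrative.
import cv2

MIN_DEFECT_AREA = 50                       # pixels, hypothetical
img = cv2.imread("surface.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Invert-threshold so darker-than-background regions become white blobs.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

defects = [c for c in contours if cv2.contourArea(c) >= MIN_DEFECT_AREA]
print(f"{len(defects)} defect candidate(s) found")
```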

MV systems excel at performing real-time part measurements with high accuracy and repeatability. Known as MV gauging systems, these capture an image using specialized lenses designed to minimize distortions and then place virtual calipers to measure various features extremely quickly. These systems can be installed in-line to measure critical features of every part being produced.
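A gauging step can be approximated in a few lines once the image is calibrated. The sketch below measures the width of the largest feature and converts pixels to millimeters; the calibration factor and file name are hypothetical:

```python
# Sketch of an MV gauging step: measure the width of the largest feature
# in pixels and convert to millimeters with a calibration factor.
# MM_PER_PIXEL would come from a calibration target; the value is illustrative.
import cv2

MM_PER_PIXEL = 0.05                        # hypothetical calibration
img = cv2.imread("gauge_view.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

part = max(contours, key=cv2.contourArea)      # largest blob = the part
(_, _), (w, h), _ = cv2.minAreaRect(part)      # rotated bounding box
width_mm = min(w, h) * MM_PER_PIXEL
print(f"measured width: {width_mm:.3f} mm")
```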

One class of MV systems is designed to serve as a robot's "eyes" to provide guidance. Typically, a vision sensor images a bin full of parts; the vision algorithms identify candidate parts in the visible layer of the bin and determine their location and orientation in space. This location and orientation data is then transferred to the robot so the part can be efficiently picked up and placed in a specific location. Examples include product orientation on a conveyor, part transfer and singulation from bin to conveyor, and part alignment. This solution is also implemented for automated assembly and machining.
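A simplified version of the locate-and-orient step might look like the Python sketch below, which reports a part's centroid and rotation in image coordinates; hand-eye calibration and the transfer to robot coordinates are omitted, and the file name is illustrative:

```python
# Sketch of vision-guided picking: locate the largest visible part in a
# bin image and report its centroid and rotation for the robot.
import cv2

img = cv2.imread("bin_view.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

part = max(contours, key=cv2.contourArea)
m = cv2.moments(part)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid in pixels
(_, _), _, angle = cv2.minAreaRect(part)            # rotation in degrees
print(f"pick pose (pixels): x={cx:.1f}, y={cy:.1f}, angle={angle:.1f} deg")
```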

MV systems can also provide vision-based feedback to control processes. The MV-generated measurement data can be analyzed for trends, and this feedback used to correct process parameters online, with minimal or no intervention from operators or quality personnel.
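One simple form of such feedback is a rolling-mean drift check on the measurement stream, sketched below with hypothetical nominal, tolerance, and window values:

```python
# Sketch of trend-based process feedback: flag drift when the rolling mean
# of MV measurements leaves a tolerance band. All values are illustrative.
import numpy as np

NOMINAL_MM, TOL_MM, WINDOW = 12.70, 0.05, 20     # hypothetical values
measurements = np.loadtxt("widths_mm.csv")       # one MV measurement per line

rolling_mean = np.convolve(measurements, np.ones(WINDOW) / WINDOW, mode="valid")
drifting = np.abs(rolling_mean - NOMINAL_MM) > TOL_MM
if drifting.any():
    print("process drift detected; adjust process parameters")
```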

Advanced applications use 3D sensors to generate a point-cloud representation of the object such as this automotive engine. Image: Motion Ai

Technologies used

An MV system's primary component is a vision sensor or industrial camera. Depending on the application, sensors capable of operating within specific electromagnetic-spectrum bands can be chosen to best show the features of interest with maximum contrast. Most applications implement sensors operating in the visible-light range of 380 to 700 nanometers, like the human eye. Applications that require inspecting the integrity of a thermal signature implement a camera operating in the medium-wave infrared (MWIR) range; examples include inductively sealed caps on food/beverage and pharmaceutical packaging and the heating coil in a heated car seat. Pharmaceutical, produce, and meat inspection applications may implement hyperspectral imaging technologies to isolate defects with optimal accuracy.

Sensors typically acquire images in a rectangular format. These are called area-scan images, and the sensors are categorized as area-scan sensors. Specialized applications such as continuous-web, round-bottle, or label inspections use a specialized line-scan sensor; the line-scan image is generated by combining all of the acquired lines. Both area-scan and line-scan sensors generate 2D images.
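The line-combining step can be pictured as stacking single rows of pixels into a 2D array, as in the sketch below; acquire_line() is a hypothetical stand-in for a camera-SDK call:

```python
# Sketch of line-scan image assembly: each acquisition returns one row of
# pixels, and stacking the rows yields the 2D image.
import numpy as np

def acquire_line(width=2048):
    # Placeholder: a real system would read one line from the line-scan camera.
    return np.random.randint(0, 256, size=width, dtype=np.uint8)

lines = [acquire_line() for _ in range(1000)]    # e.g. 1000 encoder-triggered lines
image = np.vstack(lines)                         # shape: (1000, 2048) 2D image
print(image.shape)
```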

Advanced applications use 3D sensors to generate a point-cloud representation of the object. The point-cloud or the height-to-grayscale representation can be used to process 3D images and measure features of varying heights. Typical applications for this technology include pallet inspection, package dimensioning, feature-height measurements for automotive parts, tire DOT code and tread inspection, surface roughness inspection, and robot guidance.  
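The height-to-grayscale representation mentioned above amounts to rescaling a depth map into an 8-bit image so that 2D tools can measure feature heights, as in this sketch (the input file name is illustrative):

```python
# Sketch of a height-to-grayscale conversion: scale a depth map (height in mm
# at each pixel) into an 8-bit image. "depth_mm.npy" is an illustrative name.
import numpy as np

depth_mm = np.load("depth_mm.npy")               # 2D array of heights in mm
z_min, z_max = depth_mm.min(), depth_mm.max()
height_img = ((depth_mm - z_min) / (z_max - z_min) * 255).astype(np.uint8)
print("grayscale range now spans", height_img.min(), "to", height_img.max())
```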

Discrete computer-vision algorithms are extremely powerful and solve most applications. They typically need only a small set of images and can be deployed in production on vision-inspection servers without GPUs. However, these solutions may become highly complex and hard to maintain and scale when production shows variation. OCR applications are one example: training and maintaining them becomes prohibitively complex when additional fonts are introduced. Such scenarios have a high likelihood of reduced accuracy, a huge risk for validated systems, especially in the pharmaceutical, automotive, and aerospace industries.

Deep-learning application 

The application of DL models has proven effective where discrete, rules-based computer-vision algorithms reach their limits, driven by increasing demand to solve some of the tougher MV applications, specifically classification and anomaly detection. Deep learning is a subset of machine learning, which in turn falls under the broader scientific discipline of AI.

Industries require extremely low false-failure rates for MV systems to minimize waste. DL deployments typically require a large, good-quality dataset to train accurate production-deployment models. Such datasets are meticulously and manually generated by personnel who know the defect representations and the industrial process. This can become cumbersome and result in project time and capital overruns. Deployment of such algorithms typically takes longer to validate due to their iterative nature. In the current state, deep-learning implementations are best suited to narrow problems that are extremely difficult for discrete algorithms. Hybrid approaches that combine discrete algorithms with DL models, leveraging the strengths of both techniques, have proven immensely successful.
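A hybrid pipeline of the kind described can be sketched as a cheap rules-based screen that defers only ambiguous parts to a DL classifier; classify_with_dl() below is a hypothetical placeholder for a trained model, and the thresholds are illustrative:

```python
# Sketch of a hybrid inspection pipeline: a rules-based screen handles the
# obvious cases, and only ambiguous parts go to a deep-learning classifier.
import cv2
import numpy as np

def rules_based_score(gray):
    # Fraction of dark pixels as a simple, discrete defect indicator.
    _, dark = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    return np.count_nonzero(dark) / dark.size

def classify_with_dl(gray):
    # Placeholder for a trained model (e.g. a CNN returning "PASS"/"FAIL").
    return "FAIL"              # conservative default until a model is plugged in

def inspect(gray, clear_pass=0.005, clear_fail=0.05):
    score = rules_based_score(gray)
    if score <= clear_pass:
        return "PASS"          # clearly good: no DL call needed
    if score >= clear_fail:
        return "FAIL"          # clearly bad: no DL call needed
    return classify_with_dl(gray)   # ambiguous: defer to the DL model
```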

Machine-vision systems can play a highly effective role in making modern manufacturing more efficient. These systems can be implemented at almost every stage of manufacturing across various industries and serve as hubs that generate rich data to help quality engineers, manufacturing engineers, production managers, and executives better understand the process. MV systems are indispensable in the modern factory, eliminating slow and inaccurate manual inspection processes. AI and DL technologies add capabilities to tackle some of the toughest inspection requirements while maintaining or improving existing accuracy. EP

Navneet Nagi is a Senior Machine Vision Application Engineer at Integro Technologies Corp., a division of Motion Ai, Birmingham, AL (ai.motion.com). He has developed, scaled, and helped deploy numerous automated inspection systems in production across industries ranging from agriculture to aerospace. Nagi has extensive experience developing algorithms for automated industrial inspection applications using traditional computer vision (CV) techniques and deep learning (DL).
