Geometric object location is highly robust

Aug. 1, 2000

Jean-Noel Berube

Geometric object location algorithms are insensitive to factors that include variations in lighting and background, as well as object scale, orientation, and overlap.

Machine vision became popular in the early 1980s. At the time, much academic research was done to explore various algorithms or approaches that could improve the capability of vision systems to locate objects (often manufactured parts) reliably. These early researchers had envisioned various approaches that could lead to enhanced vision robustness, reliability, and speed. Unfortunately, due to the lack of general availability of suitable computing platforms, the industry had to rely first on easy-to-implement recognition techniques such as blob analysis and normalized gray-scale correlation algorithms. The performance limitations of these traditional algorithms have been much documented in the past.

The rapid rise in computer processor speed is transforming the machine-vision industry. Thanks to the advent of the Intel Pentium II, along with its MMX multimedia instruction set, it has today become practical to implement algorithms that allow computers to locate objects based on their geometric contour characteristics. Compared to traditional approaches, geometric object location (GOL) offers important advantages, including robustness to nonlinear lighting and varying shading, part occlusions, background scene variations, low-contrast images, and poorly defined image edges.

Traditional techniques

Blob analysis is a technique that involves classifying image pixels as background or object using a thresholding algorithm, then joining the classified pixels into discrete objects based on neighborhood connectivity rules. Finally, various moments of the connected objects are computed to determine object position, size, and orientation. The technique is fast and, as long as images are not degraded, has subpixel accuracy. It also handles some variation in part orientation and size. However, it does not tolerate touching or overlapping parts, works poorly when confronted by image degradation, and cannot handle all object shapes.
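The three blob-analysis steps described above can be sketched in a few dozen lines. This is an illustrative toy (the function names and the tiny test image are assumptions, not part of any commercial package): pixels above a threshold are flood-filled into 4-connected components, and the zeroth- and first-order moments of each component give its area and centroid.

```python
# Minimal blob-analysis sketch: threshold, 4-connected labeling, moments.
# All names here are illustrative; a production system would also compute
# second-order moments to recover orientation.

def blob_analysis(image, threshold):
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and labels[r][c] == 0:
                # Flood-fill one connected component (4-neighborhood).
                stack, pixels = [(r, c)], []
                labels[r][c] = len(blobs) + 1
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = len(blobs) + 1
                            stack.append((ny, nx))
                area = len(pixels)                      # zeroth moment
                cy = sum(p[0] for p in pixels) / area   # first moments / area
                cx = sum(p[1] for p in pixels) / area
                blobs.append({"area": area, "centroid": (cy, cx)})
    return blobs

# A 4 x 4 gray-scale image containing two separate bright regions.
image = [
    [0, 0, 200, 200],
    [0, 0, 200, 200],
    [90, 0, 0, 0],
    [90, 90, 0, 0],
]
print(blob_analysis(image, 50))  # two blobs, with areas and centroids
```

Note how the threshold choice drives everything that follows; this is exactly why the technique degrades when lighting varies across the scene.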

FIGURE 1. In the first step of geometric object location, a raw image is acquired by a camera (top left). Contours are extracted, from which a contour image is created (bottom left). A previously acquired model image serves as a reference (top right). A fit between the model image and the contour image gives the position of the object, as depicted in a so-called instance image (lower right).

Normalized gray-scale correlation improves on blob analysis by removing the thresholding operation and by discriminating parts using the full gray-scale intensity information. It is a mathematical process that identifies a region in an image that "looks most like" a given reference model image. The model image is overlaid on the image at many different locations; at each, the degree of match is evaluated, and the position with the best match represents the part location.
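The overlay-and-score process described above can be sketched as follows. This is a generic normalized cross-correlation toy (the helper names and sample data are assumptions): the model is slid over every offset of the search image, each overlap is scored with the correlation coefficient, and the offset with the highest score is reported as the part location.

```python
# Sketch of normalized gray-scale correlation: exhaustive sliding search
# scored by the normalized cross-correlation coefficient. The normalization
# is what makes the score tolerant of uniform brightness/contrast changes.

import math

def ncc_score(patch, model):
    n = len(patch)
    mp = sum(patch) / n
    mm = sum(model) / n
    num = sum((p - mp) * (m - mm) for p, m in zip(patch, model))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch)
                    * sum((m - mm) ** 2 for m in model))
    return num / den if den else 0.0

def locate(image, model):
    ih, iw = len(image), len(image[0])
    mh, mw = len(model), len(model[0])
    flat_model = [v for row in model for v in row]
    best = (-2.0, None)
    for y in range(ih - mh + 1):           # overlay model at every offset
        for x in range(iw - mw + 1):
            patch = [image[y + j][x + i] for j in range(mh) for i in range(mw)]
            score = ncc_score(patch, flat_model)
            if score > best[0]:
                best = (score, (y, x))
    return best  # (peak correlation, top-left offset of best match)

image = [
    [10, 10, 10, 10, 10],
    [10, 50, 90, 50, 10],
    [10, 90, 200, 90, 10],
    [10, 50, 90, 50, 10],
    [10, 10, 10, 10, 10],
]
model = [
    [50, 90, 50],
    [90, 200, 90],
    [50, 90, 50],
]
score, pos = locate(image, model)
print(pos, round(score, 3))  # -> (1, 1) 1.0
```

The exhaustive search also makes clear why the technique struggles with rotation and scale: each new orientation or size would require its own full sweep.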

Traditional vision algorithms usually require that the part be more or less fixtured in a known position. Even when the part is mechanically fixtured, however, an optical confirmation of its precise position is still needed before inspection can proceed. This approach has major drawbacks. For example, parts must be fixtured to stay within ±7° of the trained position. The tool configuration is unique to each part; when new parts are introduced, a new configuration must usually be set up, adding significant development effort and cost. In addition, because only small portions of a part are used for location confirmation, significant position information is lost, making the application more sensitive to external environmental conditions.

GOL is discriminating

An image-processing program developed by HexaVision Technologies offers an example of a software implementation of true GOL algorithms. Based on open PC standards, it is intended to deliver many of the long-hoped-for advantages of GOL techniques. For example, using GOL, parts can be identified and located regardless of their orientation, and the orientation can then be measured to within 0.01°. Scale variations of 10% to 1000% with respect to the model can be detected and measured precisely. Touching and overlapping parts can be detected and located.

FIGURE 2. In a calibration-control window, a template of regularly spaced dots imaged by the camera is used to compensate for nonsquare pixels, lens distortion, and perspective errors. Once the calibration is done, all acquired images are calibrated in world units, such as millimeters or inches, instead of pixel units.

In addition, simultaneous identification of many parts (of the same model or not) is possible with GOL. This feature is especially interesting for sorting applications in a mixed-flow production environment. For unambiguous identification, part models that are almost identical can be easily discriminated by specifying, for a given model, precise features that must be found to identify that model. Similar to capabilities available in more traditional software packages, GOL techniques allow for the automatic generation of models with optional manual fine-tuning capability.

These algorithms are insensitive to nonlinear lighting variations caused, for instance, by specular reflection on the work surface; they are also the basis for a complete family of model-based inspection tools that perform at subpixel precision (from 0.1 pixel down to as fine as 0.025 pixel).

Geometric object location simplifies vision-system integration, in large part because the algorithm itself is simple. In the first step of GOL, a gray-scale image is acquired with the camera. Edge-detection techniques are applied on this image to generate a contour description (see Fig. 1). Next, a model of the part is created using contour features that best describe this part (the system can support as many models as required by the application).

Parts are located by finding occurrences of the model contours in the image. Model-based inspection tools are then added using the model image as a reference; as many tools can be added as the application demands. The programmer does not need to worry about moving the tools to where the parts are found, because the vision software package handles this automatically. Finally, code is written to provide measurements to human operators or to other machine interfaces.
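The contour-extraction and contour-matching steps of the workflow above can be illustrated with a highly simplified sketch. This is not the commercial algorithm (which also handles rotation, scale, and occlusion); all names and the tiny images are assumptions. Step one marks contour points wherever the gray-level gradient is strong; step two finds the translation at which the model's contour points best coincide with the scene's contour points.

```python
# Toy GOL sketch: gradient-based contour extraction, then a translation
# search that scores each offset by the number of coinciding contour points.

def contour_points(image, grad_threshold):
    pts = set()
    for y in range(len(image) - 1):
        for x in range(len(image[0]) - 1):
            gx = image[y][x + 1] - image[y][x]   # horizontal gradient
            gy = image[y + 1][x] - image[y][x]   # vertical gradient
            if gx * gx + gy * gy >= grad_threshold ** 2:
                pts.add((y, x))
    return pts

def find_occurrence(scene_pts, model_pts, max_shift):
    best = (-1, None)
    for dy in range(max_shift + 1):
        for dx in range(max_shift + 1):
            hits = sum((y + dy, x + dx) in scene_pts for y, x in model_pts)
            if hits > best[0]:
                best = (hits, (dy, dx))
    return best  # (matched contour points, model offset in the scene)

model_img = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
scene = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 9, 9, 0],
    [0, 0, 0, 9, 9, 0],
    [0, 0, 0, 0, 0, 0],
]
model_pts = contour_points(model_img, 5)
scene_pts = contour_points(scene, 5)
hits, offset = find_occurrence(scene_pts, model_pts, 3)
print(hits, offset)  # -> 7 (2, 2): the square is found shifted by (2, 2)
```

Because matching operates on contour geometry rather than raw gray levels, a uniform change in scene brightness leaves the result unchanged, which is the root of GOL's lighting robustness.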

Calibration removes image distortions

Proper use of GOL requires a calibration procedure. Rather than being restrictive, however, calibration is a simple step that brings flexibility as well as higher accuracy to machine vision. Calibration permits correction of image errors and allows portability from site to site.

Optical subsystems used in machine vision typically introduce three types of image errors. The first arises because most camera pixels are not square, inducing a scale in the x-axis that differs from that in the y-axis. The effect becomes apparent when an object is rotated in the field of view of the camera. Although this sort of distortion may not be apparent to the human eye, it is sufficiently strong to reduce precision. The second occurs because cameras are rarely perfectly perpendicular with respect to the work surface, a nonorthogonality that generates perspective distortion. Third, camera lenses induce some amount of radial distortion. Such distortion is generally stronger with lenses that have a short focal length.

These three error sources must be corrected to ensure the robustness and accuracy of the object-location tools. The calibration techniques now available make the corrections and take only a few minutes to perform. Typically, a calibration target consisting of an array of evenly distributed dots is placed on the work surface. The user enters the nominal distance between each dot and clicks a button in the calibration interface, initiating an automatic calibration (see Fig. 2). This procedure ensures proper object location and accurate inspection procedures. To allow for flexible setups—particularly when dealing with multiple camera or robotic applications—multiple calibrations are intrinsically supported.
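The simplest part of the dot-grid calibration, the correction for nonsquare pixels, can be sketched as follows. This is an assumed, minimal setup, not a product interface, and it deliberately handles only the first error source; correcting perspective and radial distortion would require fitting a homography plus a lens-distortion polynomial to the full dot grid.

```python
# Minimal calibration sketch: from the detected pixel positions of one row
# and one column of calibration dots with known world spacing, estimate
# separate x and y scale factors (mm per pixel), then report measurements
# in world units. Dot positions below are invented example data.

def estimate_scales(row_dots_px, col_dots_px, dot_pitch_mm):
    # Average pixel distance between consecutive dots along each axis.
    dx = (sum(b - a for a, b in zip(row_dots_px, row_dots_px[1:]))
          / (len(row_dots_px) - 1))
    dy = (sum(b - a for a, b in zip(col_dots_px, col_dots_px[1:]))
          / (len(col_dots_px) - 1))
    return dot_pitch_mm / dx, dot_pitch_mm / dy  # (mm/px in x, mm/px in y)

def pixel_to_world(x_px, y_px, sx, sy):
    return x_px * sx, y_px * sy  # world coordinates in mm

# Dots every 10 mm; this camera has slightly nonsquare pixels, so the
# per-axis pixel pitch differs (80 px vs. 75 px between dots).
row_x = [100.0, 180.0, 260.0, 340.0]   # x pixel coords along one dot row
col_y = [50.0, 125.0, 200.0, 275.0]    # y pixel coords along one dot column
sx, sy = estimate_scales(row_x, col_y, 10.0)
print(pixel_to_world(80.0, 75.0, sx, sy))  # a displacement of 10 mm on each axis
```

Even this reduced model shows why calibrated measurements travel well between sites: the scale factors absorb the camera-specific pixel geometry, so the application logic deals only in millimeters.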

The recent advent of GOL algorithms is simplifying the implementation of machine-vision systems. For most applications, the added cost of the object-location tools is more than offset by simpler application coding and lower long-term application maintenance expenses. Furthermore, more robust operation will usually result. These new tools—and continual improvements to their performance—have led industrial machine-vision users to consider many new applications for machine vision.

JEAN-NOEL BERUBE is vice president of sales and marketing at HexaVision Technologies Inc., 200-1020 Route de l'Eglise, Sainte-Foy, QC G1V 3V9, Canada; e-mail: [email protected].

