Machine vision system

Presentation Transcript

Robotic Vision System:

Robot vision may be defined as the process of extracting, characterizing, and interpreting information from images of a three-dimensional world. The process can be divided into the following principal areas:
- Sensing
- Preprocessing
- Segmentation
- Description
- Recognition
- Interpretation

Vision system:

A vision system builds a two-dimensional or three-dimensional model of the scene. According to their gray levels, images are classified as:
- Binary image
- Gray (grayscale) image
- Color image

Vision System – Stages:

The main stages of a vision system are:
- Analog-to-digital conversion
- Noise removal
- Finding regions (objects) in the scene
- Taking measurements and relationships between regions
- Matching the resulting description with similar descriptions of known objects
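As a rough illustration of how these stages chain together, the sketch below uses standard OpenCV calls (cv2.GaussianBlur, cv2.threshold, cv2.findContours). The file name, threshold value, and printed measurements are illustrative assumptions, not details from the slides.

```python
import cv2

# Load an already-digitized grayscale image (stands in for A/D conversion).
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Remove noise with a small Gaussian filter.
smoothed = cv2.GaussianBlur(image, (5, 5), 0)

# Find regions (objects): threshold to a binary image, then extract contours.
_, binary = cv2.threshold(smoothed, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Take measurements of each region; these descriptions would then be
# matched against stored descriptions of known objects.
for contour in contours:
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    print(f"region: area={area:.0f}, perimeter={perimeter:.1f}")
```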

Block Diagram Of Vision System:

Block diagram of a vision system (figure): light from the scene enters the camera, which delivers an analog signal through an interface (I/F) to the frame grabber.

Slide 5:

Functions and typical techniques:

1. Sensing and digitizing image data
   - Signal conversion: sampling, quantization, encoding
   - Image storage / frame grabber
   - Lighting: structured light, front/back lighting, beam splitter, retro-reflectors, specular illumination, other techniques

2. Image processing and analysis
   - Data reduction: windowing, digital conversion
   - Segmentation: thresholding, region growing, edge detection
   - Feature extraction: descriptors
   - Object recognition: template matching, other algorithms

3. Applications
   - Inspection
   - Identification
   - Visual servoing and navigation
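As a small illustration of the data-reduction step listed above, the sketch below crops a region of interest (a window) out of a larger image with NumPy slicing; the image size and window coordinates are illustrative assumptions.

```python
import numpy as np

# A synthetic 480x640 grayscale image (values 0-255).
image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Windowing: keep only the region of interest to reduce the data
# that later processing stages have to handle.
top, left, height, width = 100, 200, 120, 160   # assumed window position
window = image[top:top + height, left:left + width]

print(image.shape, "->", window.shape)          # (480, 640) -> (120, 160)
```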

The Image and Conversion:

The image presented to a vision system's camera is nothing more than light, varying in intensity and wavelength. The designer must ensure that:
- the pattern of light presented to the camera is one that can be interpreted easily;
- the image the camera sees has minimum clutter;
- extraneous light (sunlight, etc.) that might affect the image is blocked.

Conversion:
- Light energy is converted to electrical energy.
- The image is divided into discrete pixels.

Note: A color camera can be considered as three separate cameras, one for each basic color. The best portion of the image is produced by light passing through the lens along the lens's axis. A pixel is generally thought of as the smallest single component of a digital image.
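To make the "three separate cameras" note concrete, the following sketch treats a color image as three stacked single-color planes and reads one pixel from each; the array shape and pixel coordinates are illustrative assumptions.

```python
import numpy as np

# A synthetic 480x640 RGB image: three color planes stacked along the last axis.
image = np.zeros((480, 640, 3), dtype=np.uint8)

# Each plane behaves like the output of a separate single-color camera.
red, green, blue = image[:, :, 0], image[:, :, 1], image[:, :, 2]

# A pixel is the smallest addressable element; here is one pixel per plane.
row, col = 240, 320                               # assumed pixel location
print(red[row, col], green[row, col], blue[row, col])
```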

The camera:

Common imaging devices used in robot vision systems:
- Charge-coupled device (CCD) camera
- Vidicon camera
- Solid-state camera
- Charge injection device (CID) camera
- Pinhole camera

Charge couple device (CCD):

The charge-coupled device (CCD) is a silicon-based integrated circuit, provided to the user as a single chip.

Vidicon Camera:

Vidicon Camera

Pin Hole Camera:

A pinhole camera is a simple camera without a lens and with a single small aperture – effectively a light-proof box with a small hole in one side. Light from a scene passes through this single point and projects an inverted image on the opposite side of the box. The human eye in bright light acts similarly, as do cameras using small apertures. Up to a certain point, the smaller the hole, the sharper the image, but the dimmer the projected image. Optimally, the size of the aperture should be 1/100 or less of the distance between it and the projected image.

Because a pinhole camera requires a lengthy exposure, its shutter may be manually operated, as with a flap of light-proof material to cover and uncover the pinhole. Typical exposures range from 5 seconds to several hours. A common use of the pinhole camera is to capture the movement of the sun over a long period of time; this type of photography is called solargraphy.

The image may be projected onto a translucent screen for real-time viewing (popular for observing solar eclipses; see also camera obscura), or it can expose photographic film or a charge-coupled device (CCD). Pinhole cameras with CCDs are often used for surveillance because they are difficult to detect.

Frame Grabber:

A frame grabber is a hardware electronic device used to capture and store a digital image. It captures individual digital still frames from an analog video signal or a digital video stream. Frame grabbers were long the predominant way to interface cameras to PCs.

Analog frame grabbers accept and process analog video signals; digital frame grabbers accept and process digital video streams. Circuitry common to both types includes:
- A bus interface through which a processor can control the acquisition and access the data
- Memory for storing the acquired image
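On a modern PC the same role is usually played by a camera driver plus a capture API. As a hedged sketch, the OpenCV VideoCapture interface below grabs one still frame from the first attached camera; the device index 0 and output file name are assumptions about the setup.

```python
import cv2

# Open the first attached camera (index 0); a frame grabber card or USB
# camera driver performs the actual acquisition behind this interface.
capture = cv2.VideoCapture(0)

ok, frame = capture.read()           # grab and retrieve one still frame
if ok:
    cv2.imwrite("frame.png", frame)  # store the acquired image
capture.release()
```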

Functions of Machine vision system:

The functions of a machine vision system are:
- Image formation
- Processing of the image
- Analysis of the image
- Interpretation of the image

Image formation:

There are two parts to the image formation process:
- The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.
- The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.

The image sensor collects light from the scene through a lens onto a photosensitive target and converts it into an electronic signal.
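The geometric part can be sketched with the standard pinhole (perspective) projection model; the focal length, principal point, and example 3-D point below are illustrative values, not parameters from the slides.

```python
def project(point_xyz, focal_length, cx, cy):
    """Pinhole projection: map a 3-D scene point (camera coordinates, Z > 0)
    to its (u, v) location on the image plane."""
    x, y, z = point_xyz
    u = focal_length * x / z + cx
    v = focal_length * y / z + cy
    return u, v

# Example: focal length of 800 pixels, principal point at the centre of a 640x480 image.
print(project((0.1, -0.05, 2.0), focal_length=800, cx=320, cy=240))
```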

Processing of Image:

An analog-to-digital converter is used to convert the analog voltage of each pixel into a digital value. In a binary system, each pixel is assigned either 0 or 1, depending on a threshold value. A gray-scale system, on the other hand, assigns up to 256 different values to each pixel, depending on its intensity.
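A minimal sketch of this binary versus gray-scale distinction, assuming an 8-bit gray-scale image and a threshold of 128 (both values are illustrative):

```python
import numpy as np

# 8-bit gray-scale image: up to 256 intensity values per pixel.
gray = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Binary system: each pixel becomes 0 or 1 depending on a threshold.
threshold = 128                      # assumed threshold value
binary = (gray >= threshold).astype(np.uint8)

print(gray.min(), gray.max())        # gray-scale range, e.g. 0 255
print(np.unique(binary))             # [0 1]
```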

Image digitization:

Image digitization Sampling means measuring the value of an image at a finite number of points. Quantization is the representation of the measured value at the sampled point by an integer.
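A hedged NumPy sketch of both operations, using an assumed continuous test pattern, a 64x64 sampling grid, and 8 quantization levels (none of these values come from the slides):

```python
import numpy as np

# "Continuous" image: intensity defined at any (x, y) in [0, 1) x [0, 1).
def scene_intensity(x, y):
    return 0.5 + 0.5 * np.sin(8 * np.pi * x) * np.cos(8 * np.pi * y)

# Sampling: measure the image value at a finite 64x64 grid of points.
xs = ys = np.arange(64) / 64.0
sampled = scene_intensity(xs[None, :], ys[:, None])

# Quantization: represent each sampled value by one of 8 integer levels (3 bits/pixel).
levels = 8
quantized = np.floor(sampled * levels).clip(0, levels - 1).astype(np.uint8)

print(sampled.shape, quantized.dtype, np.unique(quantized))
```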

Slide 16:

Image digitization

Slide 17:

Image quantization (example figure): the same image at 256 gray levels (8 bits/pixel), 32 gray levels (5 bits/pixel), 16 gray levels (4 bits/pixel), 8 gray levels (3 bits/pixel), 4 gray levels (2 bits/pixel), and 2 gray levels (1 bit/pixel).

Slide 18:

Image sampling (example figure): the original image and versions sampled down by factors of 2, 4, and 8.

Analysis of Image:

Image analysis is the extraction of meaningful information from images prepared by image processing techniques, in order to identify objects or facts about the object or its environment. This analysis takes place in the central processing unit of the system. Three important tasks are performed here:
- Measuring the distance of an object (one-dimensional)
- Determining object orientation (two-dimensional)
- Defining object position
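As a sketch of the position and orientation tasks, the snippet below uses image moments of a binary region, a standard technique the slides do not name explicitly; the rectangle placed in the test image is an illustrative assumption.

```python
import numpy as np

# Binary image containing one object region (a horizontal bar for illustration).
binary = np.zeros((200, 200), dtype=np.uint8)
binary[80:120, 40:160] = 1                      # assumed object

ys, xs = np.nonzero(binary)

# Object position: centroid of the region.
cx, cy = xs.mean(), ys.mean()

# Object orientation: angle of the principal axis from second-order central moments.
mu20 = ((xs - cx) ** 2).mean()
mu02 = ((ys - cy) ** 2).mean()
mu11 = ((xs - cx) * (ys - cy)).mean()
angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

print(f"position=({cx:.1f}, {cy:.1f}), orientation={np.degrees(angle):.1f} deg")
```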

Interpretation of Image:

The most common image interpretation technique is template matching. In a binary system, the image is segmented on the basis of white and black pixels. More complex images can be interpreted using gray-scale techniques and algorithms.
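A hedged sketch of template matching using OpenCV's normalized cross-correlation; the file names are illustrative assumptions.

```python
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)       # hypothetical file names
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score the match at every position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)

# The best-matching location indicates where the known object appears.
_, best_score, _, best_location = cv2.minMaxLoc(scores)
print(best_location, best_score)
```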

Image Understanding:

A computer needs to locate the edges of an object in order to construct drawings of the object within a scene; edges lead to shapes, and shapes lead to image understanding. The final task of robot vision is to interpret the information (such as object edges, regions, boundaries, colour and texture) obtained during the image analysis process. This is called image understanding or machine perception. A robot vision system must interpret what the image represents in terms of information about its environment. A threshold decides which elements of the differentiated picture matrix should be considered as edge candidates.
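A minimal sketch of differentiating the picture matrix and thresholding the result to pick edge candidates, here using Sobel gradients; the input file name and threshold value are illustrative assumptions.

```python
import numpy as np
import cv2

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # hypothetical file name

# Differentiate the picture matrix: horizontal and vertical intensity gradients.
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
magnitude = np.sqrt(gx ** 2 + gy ** 2)

# The threshold decides which elements are kept as edge candidates.
edge_candidates = magnitude > 100                            # assumed threshold
print(edge_candidates.sum(), "edge-candidate pixels")
```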

Vision System and Identification of Objects:

A vision system is concerned with the sensing of vision data and its interpretation by a computer. The typical vision system consists of a camera and digitizing hardware, a digital computer, and the hardware and software necessary to interface them. The operation of the vision system consists of the following functions: (a) sensing and digitizing image data; (b) image processing and analysis; (c) application.

Possible Sensors for Identification of Objects:

Interfacing a robot system to a vision system can provide an excellent opportunity to produce better-quality output.

Use of a sensing array to determine the Orientation of Object moving on a conveyor belt:

Use of a sensing array to determine the Orientation of Object moving on a conveyor belt

Sensing array to identify the presence of an object moving on a conveyor belt and to measure the width of the object:

Sensing array to identify the presence of an object moving on a conveyor belt and to measure the width of the object

Robot Welding System with Vision:

A teach box is used to position the end-effector at various points. The terminal is used for communicating with the robot and also for indicating system conditions, editing, and executing the robot work program. The welding path traversed by the robot manipulator can be programmed using programming languages such as VAL, RAIL, etc. The various welding parameters, such as feed rate, voltage, and current, can be incorporated in the program.

Robot Welding System with Vision:

The data is processed by a set of algorithms; the relevant information is analyzed by the computer and compared with the programmed welding path. Any deviation from the programmed path can be corrected by the system itself, giving welds of uniform and consistent quality.

Slide 29:

THANKS M.GANESH MURUGAN