This was my university project, and I am now sharing it with you. Unfortunately, the paper is only available in Hungarian, but I hope that at least the VIs can be useful to non-Hungarian readers as well.
My paper is about machine vision in the LabVIEW environment. It presents various image processing methods, chosen so that they can also be used outside industrial settings, for example in low-budget university and student projects.
First, I review the basics of LabVIEW programming and the dataflow programming model. I cover this only briefly, just enough to make the following chapters clear for readers with no previous LabVIEW experience.
Then I start with the simpler image processing methods. The first is color matching: the program looks for a predefined color pattern and returns its coordinates. Next comes pattern matching on a black-and-white image: the picture is first converted to monochrome using one of several options, and then the program searches for a predefined shape or geometric pattern.
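The core idea of color matching can be sketched in a few lines. This is not the thesis's LabVIEW VI (which uses NI Vision), just a minimal pure-Python illustration; the function name `find_color` and the per-channel `tolerance` parameter are my own choices for the example.

```python
# Conceptual sketch of color matching on a raw RGB pixel grid.
# Illustration only; the actual thesis uses LabVIEW / NI Vision.

def find_color(image, target, tolerance=10):
    """Return (row, col) coordinates of pixels whose R, G and B
    values are all within `tolerance` of the target color."""
    matches = []
    for r, row in enumerate(image):
        for c, (red, green, blue) in enumerate(row):
            if (abs(red - target[0]) <= tolerance
                    and abs(green - target[1]) <= tolerance
                    and abs(blue - target[2]) <= tolerance):
                matches.append((r, c))
    return matches

# A tiny 2x3 test image: only the first pixel is close to pure red.
image = [
    [(250, 5, 5), (0, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (128, 128, 128), (255, 255, 255)],
]
print(find_color(image, target=(255, 0, 0)))  # -> [(0, 0)]
```

A real implementation would of course work on camera frames and use a more robust color distance, but the principle of scanning for pixels near a predefined color and reporting their coordinates is the same.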
After that come the more sophisticated image processing methods. I write about scanning two-dimensional barcodes (QR Code and Data Matrix), which are used more and more frequently both in industry and in everyday life.
Next, I present how to use two horizontally offset webcams to create a depth image, similar to human vision. Correct calibration is very important; it is done with a black-and-white grid shown to the cameras at different angles. The calibration can of course be saved, so it only has to be performed once per camera setup. For more info, see this video:
After that, the program is ready to process the images into a depth image in real time. The depth image is presented on a colored graph, where each color represents a depth, and the actual depth value can be read by hovering the mouse over a point.
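The color-coded display boils down to mapping each depth value onto a color scale. A minimal sketch of that idea, assuming a simple linear red-to-blue gradient (the actual color scale of a LabVIEW intensity graph is configurable and not necessarily this one):

```python
# Illustrative depth-to-color mapping for a depth image display.
# The gradient (near = red, far = blue) is an assumption for this
# example, not taken from the thesis.

def depth_to_color(depth, d_min, d_max):
    """Linearly map a depth in [d_min, d_max] to an RGB triple."""
    t = (depth - d_min) / (d_max - d_min)
    t = max(0.0, min(1.0, t))  # clamp out-of-range depths
    return (int(255 * (1 - t)), 0, int(255 * t))

print(depth_to_color(0, 0, 100))    # -> (255, 0, 0)  nearest: red
print(depth_to_color(100, 0, 100))  # -> (0, 0, 255)  farthest: blue
```

Applying this per pixel turns a depth map into the colored graph described above; the hover readout simply reports the underlying depth value instead of the color.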
Of course, this technology has its limitations. A perfect 3D image cannot be expected from two mid-range webcams, but the depth image can help us select the closer, and therefore more important, part of the scene, where the previously described image processing techniques should then be applied. This saves resources, and combined with a mobile robot, it allows the robot to turn its head towards the closest activity. The measurement range depends heavily on the camera configuration: closely spaced cameras give better results for nearby targets, while cameras placed farther apart are better suited to distant targets.
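The baseline-versus-range tradeoff follows from standard stereo geometry: depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity (horizontal pixel offset) of a point between the two images. A short sketch, with example numbers that are mine, not from the thesis:

```python
# Standard pinhole stereo depth-from-disparity relation: Z = f * B / d.
# The focal length and baseline values below are made-up examples.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth (same unit as the baseline) for a given pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at infinity
    return focal_px * baseline_mm / disparity_px

# Example: f = 700 px, baseline = 60 mm
print(depth_from_disparity(700, 60, 35))  # -> 1200.0 mm
print(depth_from_disparity(700, 60, 7))   # -> 6000.0 mm
```

Since Z is proportional to B for a given disparity, a short baseline packs the usable disparity range into nearby depths, while a wider baseline keeps distant points at measurable disparities; this is exactly the tradeoff described above.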
All of my VIs and my paper are available here: https://1drv.ms/f/s!An5KBEiOStfLgvAf6yFFrktZR4LmqQ
Special thanks to: