----------
This was my university project, and now I am sharing it with you. Unfortunately the paper is only in Hungarian, but I hope that at least the VIs can help non-Hungarian readers as well.
My paper is about machine vision in the LabVIEW environment. Various image processing methods are presented, chosen so that they can also be used outside industrial environments, for example in low-budget university and student projects.
First I review the basics of programming in LabVIEW and the dataflow programming model. I cover this only briefly, just enough to make the following chapters clear for readers who have no previous experience with LabVIEW.
Then I start with the simpler image processing methods. First comes color matching: the program looks for a predefined color pattern and returns its coordinates. Then comes pattern matching on a black-and-white image: the picture is first converted to monochrome with one of several thresholding options, and then the program searches for the predefined shape or geometric pattern.
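For readers who would like to try the same two ideas outside LabVIEW, here is a minimal Python/OpenCV sketch; it is not the NI Vision implementation used in the paper, and the file names and the color tolerance are placeholders I made up:

# Rough stand-ins for pattern matching and color matching.
import cv2
import numpy as np

scene = cv2.imread("scene.png")            # image to search in (placeholder name)
template = cv2.imread("template.png")      # predefined pattern (placeholder name)

# Pattern matching on a monochrome image: convert, then slide the template.
scene_gray = cv2.cvtColor(scene, cv2.COLOR_BGR2GRAY)
tmpl_gray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
result = cv2.matchTemplate(scene_gray, tmpl_gray, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)          # best score and its coordinates
print("pattern match at", top_left, "score", round(score, 3))

# Very simple color matching: take the template's mean HSV color and find
# the centroid of the scene pixels that are close to it.
scene_hsv = cv2.cvtColor(scene, cv2.COLOR_BGR2HSV)
target = np.array(cv2.mean(cv2.cvtColor(template, cv2.COLOR_BGR2HSV))[:3])
lower = np.clip(target - 20, 0, 255).astype(np.uint8)  # +/-20 tolerance is arbitrary
upper = np.clip(target + 20, 0, 255).astype(np.uint8)
mask = cv2.inRange(scene_hsv, lower, upper)
ys, xs = np.nonzero(mask)
if len(xs):
    print("color match centroid:", (int(xs.mean()), int(ys.mean())))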
After that come the more sophisticated image processing methods. I write about scanning two-dimensional barcodes (QR Code and Data Matrix), which are used more and more often in industry and in everyday life.
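As a point of comparison, QR Code reading can also be sketched in a few lines of Python/OpenCV. This is not the NI Vision barcode VI from the paper, the file name is a placeholder, and plain OpenCV does not decode Data Matrix codes (a separate library such as pylibdmtx would be needed for those):

import cv2

img = cv2.imread("label.png")                  # placeholder file name
detector = cv2.QRCodeDetector()
text, corners, _ = detector.detectAndDecode(img)
if corners is not None and text:
    print("decoded text:", text)
    print("corner coordinates:", corners.reshape(-1, 2))
else:
    print("no QR code found")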
Next I present how to use two horizontally offset webcams to create a depth image, similar to human vision. Correct calibration is very important; it is done with a black-and-white grid shown to the cameras at different angles. Of course the calibration can be saved, so it has to be done only once per camera setup. For more info, see this video:
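The same calibration idea can be sketched in Python/OpenCV as well: show a chessboard to both cameras in several poses, detect its corners in each image pair, then compute and save the stereo parameters. The grid size, square size, image file names and output file name below are assumptions, not values from the paper:

import glob
import cv2
import numpy as np

pattern = (9, 6)                       # inner corners of the chessboard (assumed)
square = 0.025                         # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
size = None
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    size = gl.shape[::-1]
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:                    # keep only pairs where both cameras see the grid
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

# Intrinsics of each camera first, then the extrinsics between the two.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
np.savez("stereo_calib.npz", K1=K1, d1=d1, K2=K2, d2=d2, R=R, T=T)  # saved once per setup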
After that the program is ready to process the images into a depth image in real time. The depth image is presented on a colored graph, where each color represents a depth, and the actual depth value can be read by hovering the mouse over a point.
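A minimal Python/OpenCV sketch of this step is shown below; it uses the parameters saved above, simple block matching and a color map, standing in for the LabVIEW implementation described in the paper. The camera indices, the 640x480 resolution and the file name stereo_calib.npz are assumptions:

import cv2
import numpy as np

calib = np.load("stereo_calib.npz")                    # produced by the calibration sketch
K1, d1, K2, d2, R, T = (calib[k] for k in ("K1", "d1", "K2", "d2", "R", "T"))
size = (640, 480)                                      # assumed webcam resolution

# Rectify both views so that matching rows correspond to the same scene line.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
map1l, map2l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map1r, map2r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

left_cam, right_cam = cv2.VideoCapture(0), cv2.VideoCapture(1)
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

while True:
    okl, left = left_cam.read()
    okr, right = right_cam.read()
    if not (okl and okr):
        break
    gl = cv2.remap(cv2.cvtColor(left, cv2.COLOR_BGR2GRAY), map1l, map2l, cv2.INTER_LINEAR)
    gr = cv2.remap(cv2.cvtColor(right, cv2.COLOR_BGR2GRAY), map1r, map2r, cv2.INTER_LINEAR)
    disparity = matcher.compute(gl, gr).astype(np.float32) / 16.0  # fixed point -> pixels
    # Color-code the result: nearer points (large disparity) get "hotter" colors.
    disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imshow("depth (color-coded disparity)", cv2.applyColorMap(disp_vis, cv2.COLORMAP_JET))
    if cv2.waitKey(1) == 27:                           # press Esc to quit
        break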
Of course this technology has its limitations. We cannot expect a perfect 3D image from two mid-range webcams, but it can help us select the closer, and therefore more important, part of the image, where the previously described image processing methods should then be applied. This saves resources, and combined with a moving robot it lets the robot turn its head towards the closest activity. The measurement range depends strongly on the camera configuration: cameras placed close together give better results for nearby targets, while cameras placed further apart can be used for more distant targets.
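The reason for this trade-off is the standard stereo relation depth = focal_length * baseline / disparity; a tiny Python illustration with made-up numbers:

focal_px = 800.0       # focal length in pixels (assumed)
baseline_m = 0.10      # distance between the two webcams in metres (assumed)
for disparity_px in (64, 16, 4):
    depth_m = focal_px * baseline_m / disparity_px
    print(f"disparity {disparity_px:>2} px -> depth {depth_m:.2f} m")

A wider baseline produces larger disparities at a given distance, so far-away objects can still be resolved, but very close objects may then fall outside the disparity range the matcher can handle.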
All of my VIs and my paper are available here: https://1drv.ms/f/s!An5KBEiOStfLgvAf6yFFrktZR4LmqQ
Special thanks to:
- my tutor: Dr. Aradi, Petra, BME-MOGI
- Kl3m3n from ni.com
Best regards:
Mark
Hi, I was trying to have a look at your VIs but the Dropbox links don't seem to be working.
Sorry, I switched to OneDrive in the meantime and forgot to update the links. I have updated them now, check again :) I also just realized that the names of the VIs are in Hungarian, so if you are planning to do something with stereo processing, you should check 5.1 for calibration and 5.2 for generating the depth image.
If you have any questions, please feel free to ask :)
Hi Mark
It would be nice to have the file names in English :-)
Hi!
It seems that your links are not working, could you please update them?
Thanks a lot!
Regards,
Vincent
Sorry, I was AFK for the last few days. The link should work now :)
Hi Mark,
It seems that your links are not working, could you please update them again?
Thanks a lot.
Good morning. Thank you for the VIs.
Hello, the OneDrive link does not seem to be working, I would be glad to try your VIs.
Hi, the cartoon was great, could you please give me a sample of this program so that I can learn from it?
Hi, can you provide me with a stereo vision program?
Hello, I have a problem with stereo vision in LabVIEW: the image is not displayed in 3D. Please help, contact e-mail Arisara.khawdokmai@gmail.com. Thank you.
Hi! Can you tell me how to do the stereo vision correction? I need material on the correction.
Thanks!
Hi,
It seems that your links are not working, could you please update them?
Thanks a lot!