After about a gazillion fits and restarts associated with reading the manual and getting syntax and basic concepts wrong, I have finally gotten the bean sorter web interface up and running. The idea is to be able to control the bean sorter vision system from a cell phone, tablet, or PC. I have been going around in circles for about two weeks now, but it appears I am ready to move on to the next thing. Here is a shot of my phone accessing the site from outside the LAN. I am glad to be done with this. The next step is to allow users not just to load a previously captured image, but to capture images by clicking a link.
Kiwi and I continue to program hard on our Raspberry Pi and coffee bean project. You can see from the image at the right that we made a small breakthrough. The capture control now runs on the Raspberry Pi, as does the ability to display the last image via the web. I suppose it is not such a big deal, but since I have generally worked on image processing algorithm development and the analytics that go along with that, I am not much of a web programmer. It has been frustrating but fun to work through the syntax and the idiosyncrasies of embedded web servers and their programming tools. Nevertheless, the main part of that struggle is behind me, and I look forward to returning focus to the application at hand. The camera is set up with the calibration target underneath. The next step will be to create a way for the web server to talk to the camera control program so a user can capture an image on demand, calibrate the system, run a batch of beans, look at logs to assure everything is working right, etc. etc. I have done more of that kind of thing than web programming, but I am sure there is a lot of minutiae that I have forgotten.
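To make the "capture an image by clicking a link" idea concrete, here is a minimal sketch of the kind of glue I have in mind, using only Python's standard library. The route name and the body of capture_image() are placeholders; on the real Pi, capture_image() would hand the request off to the camera control program (via a shell command, a socket message, or similar) rather than returning canned bytes.

```python
# Minimal sketch: a web endpoint whose /capture link triggers an image
# capture on demand. capture_image() is a stand-in for the real camera call.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


def capture_image() -> bytes:
    # Placeholder: on the Pi this would invoke the camera control program
    # and return the freshly captured image bytes.
    return b"fake-image-data"


class BeanSorterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/capture":
            data = capture_image()
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the console quiet


def start_server(port: int = 0) -> HTTPServer:
    """Start the server on localhost; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), BeanSorterHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A browser on the LAN (or through the port forward) would then hit /capture to pull a fresh image; everything else about routing and authentication stays with whatever login page sits in front of it.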
After a struggle that took way longer than it should have, I was able to get the Raspberry Pi ready for development. It is now exposed to the outside world, so our partners in Texas and our mechanical guru in Montana can access the Raspberry Pi from their cell phones, tablets, or other connected devices. The web page is just a placeholder for other things we will do with it, but everyone needs a login page.
You can say a lot of things about this image: it is blurry, it is too dark, it manifests the starry night problem, etc., etc. Still, it is our first image out of the bean sorter cam connected to a Raspberry Pi. I am going to do some infrastructure work so I can pull files down easily from the embedded computer, but I will be moving on to the lights Gene sent me within a few days. Of course those days stretch out quite a bit because I have a day job. Nevertheless, one has to take satisfaction where one can get it, and this is satisfaction any engineer might understand.
Today was a good day at work. It all had to do with sorting and measuring spuds on a conveyor, but that is a story for when we see each other face to face. The other reason it is a good day is the image below. I spoke prematurely when I said I had everything ready to go with the bean sorting development environment for the Raspberry Pi. I was wrong. It turns out the tools on my development computer were incompatible with those on the Raspberry Pi, and it took me until about 15 minutes ago to get it all sorted out. Hopefully I now have a shot at getting the camera going on the RPi, and maybe even getting started on controlling the lights we need for the project with the RPi. Another fun-filled weekend!
A couple more hours and remote debug is up and running. Develop on my desktop, deploy and debug over Wi-Fi to the Raspberry Pi. It took about 14 hours all told, but it was interesting and well worth the investment. Now it is on to getting the camera working on the RPi.
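For anyone curious what the deploy-and-debug loop looks like in practice, here is a sketch of the usual pattern; the hostnames, paths, and port are placeholders, and Qt Creator largely automates these steps once a remote device kit is configured.

```shell
# Copy the cross-compiled binary to the Pi, then launch it under gdbserver
# so the debugger on the desktop can attach over the network.
scp build-rpi/beansorter pi@raspberrypi.local:/home/pi/beansorter
ssh pi@raspberrypi.local 'gdbserver :2345 /home/pi/beansorter'
# ...then on the desktop, in gdb: target remote raspberrypi.local:2345
```

Breakpoints, stepping, and variable inspection all happen on the desktop while the program actually runs on the Pi.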
I got up to my office about 7:00 AM this morning and have been programming steadily since then. Well, I call it programming. Really, I was figuring out how to get the Raspberry Pi programs I write and build on my laptop (which I use as a desktop) to cross compile with Qt Creator so they will run on the Raspberry Pi, which is what we started with on our coffee bean sorting project because it is cheap and we are cheap. I finally got it all to work about 12 hours later. I am wildly happy to have the bulk of this out of the way. Now I can get back to thinking about coffee beans, and the program I previously compiled directly on the Raspberry Pi should be fundamentally easier to debug.
The one good part about all this is that when I am programming I am generally not eating and the time flies. I did a pretty good job of staying on my diet.
Things are happening fast and furious with the bean sorter project. The pre-prototype lights Gene made for me arrived. They are just perfect. He had them all hooked up, so all I have to do is flip a switch. In addition, two light controllers arrived in a separate package. Both of them are Pulse Width Modulated (PWM) dimmers, but one is controlled manually with a dial and the other digitally via a computer, in this case a Raspberry Pi. I can hardly wait for the weekend to see if I can get the control part of this thing going.
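A rough sketch of what driving the digital dimmer from the Pi might look like, assuming it accepts an ordinary PWM signal on a GPIO pin; the pin number and frequency here are guesses, not values from the dimmer's datasheet.

```python
# Sketch: drive a PWM light dimmer from a Raspberry Pi GPIO pin.
# Pin number and frequency are placeholder assumptions.

def brightness_to_duty(percent: float) -> float:
    """Clamp a 0-100 brightness request to a valid PWM duty cycle."""
    return max(0.0, min(100.0, percent))


def set_light(percent: float, pin: int = 18, freq_hz: int = 1000) -> float:
    duty = brightness_to_duty(percent)
    try:
        import RPi.GPIO as GPIO  # only available on the Pi itself
    except ImportError:
        return duty  # off the Pi, just report the duty cycle we would use
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    pwm = GPIO.PWM(pin, freq_hz)
    pwm.start(duty)
    return duty
```

The nice part of PWM dimming is that the LEDs are fully on or fully off at any instant, so they run cooler at a given perceived brightness than they would under a reduced constant current.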
Yesterday, I spent my spare time on creating a camera calibration for our bean sorter project. The purpose of the calibration is to convert measurements of beans in captured images from pixel units to mm units. Images are made up of pixels, so when measurements are performed we know how big things are in terms of pixels. Something might be 20 pixels wide and 17.7 pixels high (subpixel calculations are a topic for another day). Knowing the width of something in an image, by itself, is pretty worthless because the real world width (e.g. in millimeters) of that object will vary greatly based on magnification, camera angle, and a bunch of other stuff. That is a big problem if the camera moves around a lot.
Fortunately, in our case, the camera will be in a fixed location and the distance to the falling beans will always be the same. That allows us to make some fixed calculations to convert pixel units to millimeters. To that end, we put a "calibration target" in the camera's field of view at the position through which the beans will fall. In our case that calibration target is a checkerboard pattern with squares of a known size. We take a picture of the checkerboard pattern, find the location of each square in the image in pixels, and store that information away.
Notice the red marks at each intersection of squares in the checkerboard; those are the found pixel positions (e.g. 133.73 pixels from the top of the image and 214.5 pixels from the left edge of the image). We can then convert the positions and sizes of found beans in the image from pixel units to mm units by using equations derived from the known mm sizes of the squares and the found positions of the squares in the image as measured in pixel units. I used to have to hand write the equations to do this, but now there are open source libraries for it, so I was able to do the whole thing in an evening.
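The arithmetic underneath is simple once the corners are found (in practice an open source routine such as OpenCV's findChessboardCorners does the corner finding). Here is a sketch assuming a fixed camera and a flat target, so a single scale factor per axis is enough; the square size and corner coordinates below are made-up numbers, not from our actual target.

```python
# Sketch: pixel-to-mm conversion from checkerboard corner positions,
# assuming a fixed camera and flat target. All numbers are illustrative.

def scale_mm_per_px(corner_a_px: float, corner_b_px: float,
                    square_mm: float, squares_between: int) -> float:
    """mm-per-pixel scale from two corner positions along one axis."""
    return (square_mm * squares_between) / abs(corner_b_px - corner_a_px)


def px_to_mm(size_px: float, scale: float) -> float:
    """Convert a measurement in pixels to millimeters."""
    return size_px * scale


# Example: two corners 5 squares apart land 400 px apart in the image,
# and each square is 4 mm wide -> 20 mm over 400 px = 0.05 mm per pixel.
scale = scale_mm_per_px(100.0, 500.0, 4.0, 5)
bean_width_mm = px_to_mm(180.0, scale)  # a 180 px wide bean is 9 mm wide
```

A full calibration would also correct for lens distortion and any camera tilt, which is exactly where the open source libraries earn their keep.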
Our bean sorter project has heated back up, literally and figuratively. Gene finished the prototype light stands and shipped them out. Here is a picture of them working. They heat up a ton when they are under constant current, so I have ordered a manual PWM light driver (with a dial for brightness) and a PWM driver that I should be able to control from one of the GPIO pins on the Raspberry Pi. All of that should arrive within the next few days, which means I am the one holding everyone up, because there are so many things I have to do before I can test them. I can hardly wait to get them in my hands.
In the meantime, I am working on the pixel-to-world-coordinate calibration model for the beans. I hope to have that done sometime today, by Thursday at the latest. Then I can work on controlling the lights and capturing images with the Raspberry Pi.
Some good news and some good news arrived yesterday. The first is that my participation in the sickle cell disease diagnostic project is wrapping up. I will still be on call for the machine vision elements of the project, but I will not be tasked with the day-to-day programming any longer. The second is that a good friend (Gene C.) I have known since I was a child has agreed to work with me on a side project. We are going to make a "cheap but good" coffee bean inspection machine. There are lots of machines that do that, but none of them are particularly cheap in the way we want our machine to be cheap. We hope to do this for another friend who lives in Dallas.
I bought two lights I plan to use for the project. One of them is a backlight and one of them is a ring light. I am pretty sure we will not be able to use these in our finished instrument, but they will certainly help me with development of lighting and optics. I still need to buy (at least) a few M12-mount lenses and a cheap USB microscope. I already have a camera with the wrong lens, but it has allowed me to start writing the program I will use for image processing and classification algorithm development. I got it to take pictures before I went to bed last night.