The browser-based GUI for the bean sorting project is now up and running, served from the Raspberry Pi. I only have one camera running right now because I only have one camera, but it does everything that needs to be done. There is a lot under the hood on this thing, so it should serve as a good base for other embedded machine vision projects besides this one.
In terms of particulars, I am using a Flask (Python 3)/uWSGI/nginx based program that runs as a service on the Raspberry Pi. Users access this service wirelessly from anywhere on the internet. The service passes these requests to the C++/OpenCV based vision application, which is also running as a service on the Raspberry Pi. Currently, we can snap images, show “live” video, read the C++ vision log, and do other such tasks. We will probably use something other than a Raspberry Pi for the final product, something with a USB 3.0 port and the specific embedded resources we need, but the Raspberry Pi has been great for development and will do a great job for prototypes and demonstration work.
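To make the architecture concrete, here is a minimal sketch of the browser-GUI side of that arrangement, assuming Flask. The route name, the command format, and `send_vision_command()` are hypothetical stand-ins for the real IPC to the C++/OpenCV vision service, which is not shown.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def send_vision_command(cmd):
    # In the real system this would forward cmd to the C++ vision
    # service (e.g. over a local socket or FIFO); stubbed out here.
    return {"command": cmd, "status": "ok"}

@app.route("/snap")
def snap():
    # Ask the vision service to capture one image and report back.
    return jsonify(send_vision_command("snap"))
```

In the actual deployment, a request like this would arrive through nginx, be handed to the Flask app by uWSGI, and only then be passed along to the vision service.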
The reason I put “live” in scare quotes is that I made the design decision not to stream the video with GStreamer. In the end applications I will be processing 1-megapixel images at 20-30 frames per second, which is beyond the bandwidth available for streaming at any reasonable rate. The purpose of the live video is for camera setup and to provide a bit of a reality check at runtime by showing the results for every 30th to 100th image along with the sort counts. There is no way we could stream the images at processing rates, and we want to see something better than a degraded streamed image anyway.
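The arithmetic behind that decision is easy to sketch. The numbers below assume uncompressed 8-bit mono pixels, which is a simplification of the real pipeline:

```python
def stream_bandwidth_mbit(megapixels, fps, bytes_per_pixel=1):
    """Raw bandwidth (Mbit/s) needed to push every frame, uncompressed."""
    return megapixels * 1e6 * bytes_per_pixel * fps * 8 / 1e6

# 1 MP mono at 30 fps needs 240 Mbit/s of raw pixels -- far more than
# a wireless link to the Pi can sustain, hence showing only every
# 30th to 100th result frame instead of a true stream.
print(stream_bandwidth_mbit(1, 30))  # 240.0
```

Compression would reduce that, but only by degrading exactly the image detail we want to inspect, which is the other half of the argument against streaming.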
This last weekend, I spent most of my time taking Grandpa Milo and Grandma Sarah around and working on the learning/development stuff I have described here over the last few weeks. It has felt like I have been trying to drink from the proverbial fire hose in an effort to learn too much stuff at once, so I have started to break it up into bite-size chunks. When I did that, I realized I needed to do some infrastructure work before I even started. So this weekend, I decided to spend most of my time getting set up to work rather than investing a lot of time in learning. I held to that for the most part, the exception being that I started in on a set of tutorials on how to use Git.
So, here is what I did:
- Decided to use Dropbox as a way to back up and share a bunch of stuff (bought a terabyte for a year).
- Set up a web server with WAMP on the new (cheap) desktop computer we had Fry’s make for us (on a special).
- Made it available from other places with the help of Duck DNS (awesome free service).
- Added an ftp server to that.
- Installed Ubuntu LAMP server on the old desktop (32-bit i386)
- Set up a Git repository on that.
- Made it available in other places with Duck DNS
- Installed R and RStudio on all the computers
- Went through the first third of a Git tutorial because I am so pathetic at it. It was great and I am up and going now.
- Added Qt, Qt Creator and OpenCV to the Linux server
- Added XMing to my laptop
- Learned how to SSH to the Linux box to perform code testing remotely
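That last item boils down to a command like the following, where the user name and hostname are made-up placeholders. The `-X` flag turns on X11 forwarding, so a GUI program launched on the Linux box renders through XMing on the Windows laptop:

```shell
# X11-forwarded SSH session; GUI apps on the remote Linux box display
# locally via XMing (user and hostname here are assumptions).
ssh -X user@mylinuxbox.duckdns.org
qtcreator &   # opens on the laptop's screen through the forwarded display
```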
Next, I am going to start working up the learning curve on machine learning with R and continue to code on my previous projects. All in all, it was a great weekend. Lorena and I even went out to eat a couple of times. Now, all I have to do is start working in a few walks and my life might arrive at a sense of normalcy again.
Betty Blonde #303 – 09/15/2009
Click here or on the image to see full size strip.
My daughter Kelly drew a comic strip called Betty Blonde five days per week for two years, starting when she was thirteen years old. I wrote a program called BleAx a few years back to help her accumulate the four hand-drawn panels of her daily comic strip into a single image with a title, date, copyright, borders, and that sort of thing. The program allowed her to upload the strip automatically to a website for display. I did the whole thing by hand for about a year, then spent about six months writing BleAx whenever I had an hour or so free, here and there. BleAx stands for Betty Blonde Aggregator of Comix.
I wrote BleAx in Python and still have it, but have decided to rewrite it as a learning exercise. I normally write programs in C/C++ in my day job, but have recently been wrapping some of the time-critical stuff I write in C++ in a Python wrapper so engineers who do not normally write in a “non-garbage-collected” language can use it easily. I have now started using a set of libraries called PySide to write Qt GUIs in Python. It took me a bit of time and hassle to get my environment set up to automate the GUI development and C/C++ wrapping so that I did not have to go through a ton of manual processes to build the programs and put the results where they needed to be. I do a lot of work with OpenCV, so I will talk about how to use that effectively in this environment, too.
I am sure my process is not perfect, and that is part of the reason I am doing this publicly: some of the people who might read this can beat up my process and tell me how to do it better. To that end, I am going to start rewriting BleAx. I do not have a ton of time, so this will be a little bit of a slow process. I am mostly doing it just for fun and documentation, but if it helps anyone else, that will be great.
Betty Blonde #222 – 05/22/2009
Click here or on the image to see full size strip.
We received a great Christmas gift last night. The big, big boss of our company in Sydney (not just the big boss from Prescott) wrote a letter and gave us an extra five days of vacation over the holidays because we had such a tough year and because of some health issues in our new executive team. The reality is that no one has taken much of a vacation over the last two years and most of us have worked just about every weekend. It will be nice to spend a couple of unfettered weeks with the family.
Just as good, I have been put on a project that involves writing programs in two different languages using a couple of libraries I really like in both of those languages.
“Why two languages?” you ask.
Well, C++ is a language that is very good for doing things efficiently and effectively, but it can really get you in trouble if you do not know what you are doing. Well-written C++ code generally runs much faster than code written in higher-level languages like Python. It lets you do just about anything you want and imposes no restrictions to keep you from leaking memory or jumping off into areas of memory that are totally unrelated to what you are doing. Python is a great language for people who are not so comfortable with the freedom of C++. It also allows users to write a lot of functionality fast and has lots and lots of add-on libraries to do lots and lots of things easily.
I normally use C++ because of the need for speed. Other members of my team need to use my code in programs they can develop rapidly, both for scientific experiments and for production code for the instruments we make. So we have decided that I will write my machine vision code in C++, then wrap it in a Python wrapper using a tool called SWIG. All the tools I normally use in C++ to build GUIs (Qt) and perform image processing tasks (OpenCV) are available in Python as libraries. The Qt libraries we use are called PySide, and the OpenCV libraries are just called Python OpenCV.
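For anyone who has not seen SWIG, the heart of the process is a small interface file. This is only a sketch; the module name and header are hypothetical, not the actual project files:

```swig
/* vision.i -- minimal SWIG interface sketch (names are made up). */
%module vision
%{
#include "vision.h"   /* pulled into the generated wrapper code */
%}
%include "vision.h"   /* declarations SWIG should expose to Python */
```

Running `swig -c++ -python vision.i` generates a C++ wrapper source file and a `vision.py` module; the wrapper compiles and links against the C++ library into a shared object that Python imports like any other module.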
I have set up my environment so that whenever I write a C++ library, the Python-wrapped results are automatically built and put into the correct directory for use by the rest of the team. In addition, when I build a GUI with Qt Designer, I can run a batch file that turns the GUI description into a Python program. I have to do a little merging if I change the GUI, but it is all quite painless. I think I might write up what I have done and post it here. I am sure I have some inefficiencies, and someone might be able to make some suggestions.
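The GUI half of that batch file amounts to a single command. The file names below are assumptions; `pyside-uic` is the tool that ships with PySide for turning a Qt Designer `.ui` file into an importable Python class:

```shell
# Regenerate the Python GUI module after editing the form in
# Qt Designer (file names here are placeholders).
pyside-uic mainwindow.ui -o ui_mainwindow.py
```

Keeping the generated `ui_mainwindow.py` separate from the hand-written application logic is what makes the merging after a GUI change mostly painless.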
Betty Blonde #219 – 05/19/2009
Click here or on the image to see full size strip.
Day 84 of 1000
I wrote a post a few days back about C++ programming. I have been trying to figure out whether the language has enough legs for me to make a living with it for the next 10-20 years. I have reflected on that post quite a bit since the day I wrote it, and I think the answer is most certainly yes. That was solidified even more after a recent visit to Charlotte, where I went to see some new friends who need some help with machine vision. They have other devices besides machine vision. Every one of those devices has some kind of GUI, spreadsheet, or scripting language to handle the bulk of its applications. Many of them also have a software development kit (SDK) so that all the functionality of the device is accessible programmatically via libraries.
That is the thing that reinvigorated my enthusiasm for C++. All the easy functionality is available via the easy programming methods (GUIs, spreadsheets, scripting languages, etc.). The problem is that many of the devices generate a ton of data. That opens up two opportunities: 1) development of new functionality inside the device, and 2) analysis of the data generated by the device on an external computer. Number 1 is exciting because the computational capability (processor speed, memory, etc.) is so limited that a very efficient, machine-centric language (C++) is the best option. Number 2 is exciting because the devices in question are shoveling lots and lots of data around and need real-time calculation results. That also calls for a lot of efficiency in memory management and speed. It is possible to use Java, C#, Python, and even BASIC, but C++ works great and will always have an edge when it comes to those topics.
So, on Saturday at the NCSU D.H. Hill Library, I updated Qt Creator on my laptop, and I will update the OpenCV libraries to the latest version. When I am all set up, I will download the Intel IPP libraries and prepare for my next project. In C++!!!