"In the world ye shall have tribulation: but be of good cheer; I have overcome the world." –John 16:33


New computers are not as fun anymore

Our friend Bonnie took delivery of the computer that arrived at our house while we were off visiting Boston and Tempe last week. Lorena met her for lunch yesterday to pick it up. It is a beautiful, brand new Dell 5491: a 14″ touchscreen, i7 laptop with all the requisite memory and drive space to handle the relatively large images I work with in my job. I love the computer, but it is a hassle to move over all the work I had been doing on the personal computer I used while I waited for my work machine.

First, I am reminded of the invasive nature of Windows (not to suggest Apple is any less so–they are probably even worse). I work in Linux, so I have to install that, but leave the computer dual-booted because I write cross-platform software. Then I need to install Qt, OpenCV, Boost, and a ton of tools like Gimp, ImageJ, Filezilla, the Brave browser, Git, VirtualBox, etc., etc., etc. AND then I get to do it all again so all this stuff is available on both Windows and Linux. It will be really nice when it is complete, but it will be a full day of work to get back to where I was with the previous computer.

I am not complaining TOO much–it will be really nice when I am up and running, and I really do not mind this kind of brain-dead work, but I lose a day and there is a lot to do.

Pixel 2 XL: Stitching images for a panorama


I thought this was very cool. I took a set of pictures with my new Pixel 2 XL cellphone to make a panorama of our roof as it was getting installed. The picture in the previous post is from the sequence. Before I got a chance to stitch the images together, the camera did it for me without my even asking. It also made an animation. I was pretty impressed with the quality of the image stitching, too.
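For the record, stitching a set like this by hand is only a few lines with OpenCV these days. A minimal sketch of what I would have done, assuming the shots are saved as numbered JPEGs (the filenames here are hypothetical):

```python
# Minimal panorama stitch with OpenCV; filenames are hypothetical.
import cv2

images = [cv2.imread(f"roof_{i}.jpg") for i in range(6)]
stitcher = cv2.createStitcher()  # cv2.Stitcher_create() on OpenCV 4.x
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("roof_panorama.jpg", pano)
else:
    print("stitching failed:", status)
```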

Reengaging with water research

Over the last couple of days, I had a couple of long and interesting talks with my old friend Troy, with whom I worked on the GaugeCam project when we lived in North Carolina. Troy is now an Assistant Professor at the University of Nebraska with lots of interesting research going on. We discussed the idea of my reengaging with some of his research as I approach retirement. Well, retirement is rapidly approaching, and it looks like the stars might be starting to align. This is still just wishful thinking, but we have talked about a few specific ideas, and I even called and talked to my old Master's degree professor, Carroll Johnson, long retired from the University of Texas at El Paso. We have hope we can make something happen. If this idea comes to fruition, I hope to be writing about it here on a semi-regular basis.

Upgrade with Ubuntu Bionic Beaver

The new Ubuntu operating system (Bionic Beaver 18.04 LTS) came out yesterday and I installed it today. So far it is great. I had been using Xubuntu up until now, but have decided I am going to try Ubuntu for a while, then Linux Mint (Cinnamon) for a while when it comes out, before I decide where to settle for the next few years. I am really glad Ubuntu went back to Gnome and away from Unity. That was the main reason I switched to Xubuntu in the first place–so I did not have to deal with Unity. My good buddy Lyle W. has been raving about Mint for quite a few years now, as have many others, so I think I need to give that a try.

A “for personal use” 3D/RGB camera

I bought a RealSense 3D/RGB camera today from Intel. I have wanted to get one and try it out for a while, but now I have an actual reason. I am working with a friend from an old job on a small project, and we are actually using them in my day job. The camera takes aligned 2D and 3D images. It is (relatively) cheap and has an SDK that will allow me to pull the images into some fun environments where I can use OpenCV and the PCL on them. I am looking forward to it, but the sad part is they are so popular the camera is on backorder. I will have to be patient.
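The SDK also has Python bindings (pyrealsense2), so grabbing an aligned depth/color pair should be something like this sketch (the resolutions and frame rate are just the defaults I expect to use, not tested yet):

```python
# Sketch: grab one aligned depth/color pair with pyrealsense2.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth to the color frame

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    color = np.asanyarray(frames.get_color_frame().get_data())
    print(depth.shape, color.shape)
finally:
    pipeline.stop()
```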

Programming as a second career

I have a good friend who has had a long and successful career in a very specific kind of Information Technology services. He is retired from that now, as the constant travel and search for new consulting opportunities were fairly onerous. He is older, but does not want to retire completely, so he wants to pivot his career to something that allows him to use his skills and experience in a way that will not require that grind. In talking, he mentioned he has some ideas for niche software tools for which only someone with his level of experience would even know there is a need. But there is a big hole to fill, because, although he has programmed and been in that world for decades, he has never been a production programmer himself.

He called me and asked how I would go about it if I were him. Since he already has many of the SQL skills he needs to do the job, he really only needs something to glue his idea together. I suggested he build a rough prototype of his product in Python as a mechanism to learn. Then he should go learn best practices, have some experienced programmers do code reviews on what he has done, and rewrite the thing a couple of times from the ground up based on those reviews.
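To make the "glue" idea concrete, here is the sort of minimal Python-around-SQL skeleton I had in mind; the database and query are hypothetical stand-ins for whatever his niche tool actually needs:

```python
# Hypothetical sketch: Python as glue around SQL he already knows.
import sqlite3

conn = sqlite3.connect("consulting.db")  # hypothetical database
query = """
    SELECT client, SUM(hours) AS total_hours
    FROM engagements
    GROUP BY client
    ORDER BY total_hours DESC
"""
for client, total_hours in conn.execute(query):
    print(f"{client}: {total_hours} hours")
conn.close()
```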

I told him a good language choice right now is Python. I hope this was all good advice. He has time on his hands and can take a year (or more) if needed to put this thing together. It sounds like my kind of project and I am a little envious. The combination of skills he brings to this is something I don't have–he has domain skills, he has identified a specific real need, and he has the background to program in that domain with some intense preparation.

Beansorter: GUI and live video up and running

The browser-based GUI for the bean sorting project is now up and running and being served from the Raspberry Pi. I only have one camera running right now because I only have one camera, but it does all the things that need to be done. There is a lot under the hood on this thing, so it should serve as a good base for other embedded machine vision projects besides this one.

In terms of particulars, I am using a Flask (Python 3)/uWSGI/nginx based program that runs as a service on the Raspberry Pi. Users access this service wirelessly (from anywhere on the internet). The service passes these requests to the C++/OpenCV based vision application, which also runs as a service on the Raspberry Pi. Currently, we can snap images, show “live” video, read the C++ vision log, and do other such tasks. We will probably use something other than a Raspberry Pi for the final product, something with a USB 3.0 port and the specific embedded resources we need, but the Raspberry Pi has been great for development and will do a great job for prototypes and demonstration work.
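For a flavor of the Flask side, here is a stripped-down sketch of the snap-an-image endpoint; the route name and shared file path are hypothetical, and in production it sits behind uWSGI and nginx rather than the development server:

```python
# Sketch of the Flask service; path and route names are hypothetical.
from flask import Flask, send_file

app = Flask(__name__)
LATEST_FRAME = "/var/beansorter/latest.jpg"  # written by the C++ vision service

@app.route("/snap")
def snap():
    # Hand back the most recent frame captured by the vision application.
    return send_file(LATEST_FRAME, mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # uWSGI/nginx handle this in production
```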

The reason I put “live” in scare quotes is that I made the design decision not to stream the video with gstreamer. In the end applications, I will be processing 1-megapixel images at 20-30 frames per second, which is beyond the bandwidth available for streaming at any reasonable rate. The purpose of the live video is for camera setup and to provide a little bit of a reality check at runtime by showing the results for every 30th to 100th image along with the sort counts. There is no way we could stream the images at processing rates, and we want to see something better than a degraded streamed image anyway.
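The decimation itself is trivial; something like this sketch (the callables are hypothetical stand-ins, since the real work happens in the C++ service):

```python
# Hypothetical sketch of the every-Nth-frame "live" view logic.
PUBLISH_EVERY = 30  # anywhere from 30 to 100 in practice

def run(frames, process, publish):
    """process() runs on every frame; publish() only on every Nth result."""
    for index, frame in enumerate(frames):
        result = process(frame)
        if index % PUBLISH_EVERY == 0:
            publish(frame, result)
```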

Bean sorter–First image from the Raspberry Pi oCam combo

You can say a lot of things about this image–it is blurry, it is too dark, it manifests the starry night problem, etc., etc. Still, it is our first image out of the bean sorter cam connected to a Raspberry Pi. I am going to do some infrastructure work to be able to pull images down easily from the embedded computer, but I will be moving on to work on the lights Gene sent me within a few days. Of course, those days stretch out quite a bit because I have a day job. Nevertheless, you have to take your satisfaction where you can get it, and this is satisfaction any engineer might understand.

Bean sorter–Groundhog day success

Today was a good day at work. It all had to do with sorting and measuring spuds on a conveyor, but that is a story for when we see each other face to face. The other reason it is a good day is the image below. I spoke prematurely when I said I had everything ready to go with the bean sorting development environment for the Raspberry Pi. I was wrong. It turns out the stuff I had on my development computer was incompatible with the stuff on the Raspberry Pi, and it took me until about 15 minutes ago to get it all sorted out. Hopefully, I now have a shot at getting the camera going on the RPi and maybe even getting started on controlling the lights we need for the project with the RPi. Another fun-filled weekend!

Bean sorter–Remote (wireless) debug on Raspberry Pi

A couple more hours and remote debug is up and running. Develop on my desktop, deploy and debug over wifi to the Raspberry Pi. It took about 14 hours all told, but it was interesting and all worth the investment. Now it is on to getting the camera working on the RPi.

Bean sorter–Cross compiling for the Raspberry Pi

I got up to my office about 7:00 AM this morning and have been programming steadily since then. Well, I call it programming. Really, what I was doing was figuring out how to get the Raspberry Pi programs I write and build on my laptop (which I use as a desktop) to cross compile with Qt Creator so they will run on the Raspberry Pi. The Raspberry Pi is what we started with on our coffee bean sorting project because it is cheap and we are cheap. I finally got it all to work about 12 hours later. I am wildly happy to have the bulk of this out of the way. Now I can get back to thinking about coffee beans, and the program I previously compiled directly on the Raspberry Pi should be fundamentally easier to debug.

The one good part about all this is that when I am programming I am generally not eating and the time flies. I did a pretty good job of staying on my diet.

Bean sorter camera calibration

Yesterday, I spent my spare time creating a camera calibration for our bean sorter project. The purpose of the calibration is to convert measurements of beans in captured images from pixel units to millimeter units. Images are made up of pixels, so when measurements are performed, we know how big things are in terms of pixels. Something might be 20 pixels wide and 17.7 pixels high (subpixel calculation is a topic for another day). Knowing the width of something in an image, by itself, is pretty worthless because the real-world width (e.g. in millimeters) of that object will vary greatly with magnification, camera angle, and a bunch of other stuff. That is a big problem if the camera moves around a lot.

Fortunately, in our case, the camera will be in a fixed location and the distance to the falling beans will always be the same. That allows us to make some fixed calculations to convert pixel units to millimeters. To that end, we put a “calibration target” in the camera's field of view at the position through which the beans will fall. In our case, that calibration target is a checkerboard pattern with squares of a known size. We take a picture of the checkerboard pattern, find the location of each square in the image in pixels, and store that information away.
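The corner-finding step is only a few calls in OpenCV, which is what I use; a minimal sketch (the pattern size and filename are hypothetical):

```python
# Sketch: find checkerboard corners with OpenCV (subpixel refined).
import cv2

img = cv2.imread("calibration_target.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

pattern = (9, 6)  # inner corners per row and column
found, corners = cv2.findChessboardCorners(gray, pattern)
if found:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    cv2.drawChessboardCorners(img, pattern, corners, found)
    cv2.imwrite("corners_marked.png", img)
```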

Notice the red marks at each intersection of squares in the checkerboard–those are the found pixel positions (e.g. 133.73 pixels from the top of the image and 214.5 pixels from the left edge of the image). We can then convert the positions and sizes of found beans in the image from pixel units to millimeter units using equations derived from the known millimeter sizes of the squares and the positions of the squares in the image as measured in pixels. I used to have to hand-write the equations to do this, but now there are open source libraries for it, so I was able to do the whole thing in an evening.
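With the camera and the drop plane fixed, the conversion can be as simple as a scale factor; a toy version with hypothetical numbers:

```python
# Toy pixel-to-millimeter conversion for a fixed camera and plane.
SQUARE_MM = 5.0   # printed size of one checkerboard square (hypothetical)
SQUARE_PX = 41.8  # measured corner-to-corner spacing in the image (hypothetical)

mm_per_px = SQUARE_MM / SQUARE_PX

bean_width_px = 20.0
bean_width_mm = bean_width_px * mm_per_px
print(f"bean width: {bean_width_mm:.2f} mm")  # about 2.39 mm
```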

Dropping Beans: Finding the beans

Gene and I continue to make progress on our bean inspection project. Here is the first pass at measuring bean size as beans drop past the camera. This includes finding the bean in the image, calculating its contour, and measuring how big it is in the image. The next step is to convert the bean size in the image, measured in pixel units, to the size in millimeters. I am halfway into that.
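The find-the-bean part is classic OpenCV thresholding and contours, along these lines (the threshold choice and filename are hypothetical):

```python
# Sketch: find the bean and measure it in pixels with OpenCV.
import cv2

img = cv2.imread("falling_bean.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# OpenCV 3.x returns (image, contours, hierarchy); 4.x drops the first.
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
if contours:
    bean = max(contours, key=cv2.contourArea)  # assume the biggest blob is the bean
    x, y, w, h = cv2.boundingRect(bean)
    print(f"bean bounding box: {w} x {h} pixels")
```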

The other thing I did this weekend was load up a Raspberry Pi with the latest Raspbian OS and get it running on the network. Right now, I am doing all my work on my Linux PC, but the idea is to move everything over to the Raspberry Pi once the project is a little further down the development path, because the Raspberry Pi is a much cheaper computer. There are some other options that might be even better (cheaper and faster), but I have a Raspberry Pi, so that is where we will start.

Gene is not sitting still either. He has built me up some prototype lighting, but I will save that for a post of its own.

Hooking up the new camera

I bought a global shutter camera from Ameridroid for Gene's and my new project. It is a pretty amazing little camera, especially for the price. It is USB 3.0, so it runs fast. I do not have the lens I need for our application, so I ordered a three-lens kit (I need it anyway). I hope to be able to start testing beans falling past the camera before the end of the holiday, but that might be a little ambitious.

The other really good thing about this camera, compared to the ov5640 cameras I have been using, is that the Korean company that makes it, WithRobot, provides great, freely available libraries to control everything the camera does. If I can get the camera control into our prototype program, we will have made a major step toward the point where we can actually start developing a product.

The value of a vision system

Yesterday I bought a machine vision camera for the project my buddy Gene and I are doing to build a (semi-)cheap little machine to inspect coffee beans. We need something called a global shutter camera because the beans will be in motion when we capture their images. In the past, a camera like this would have cost in the $1000 range. Over the years, the price dropped to $300-$400. Yesterday, I paid $135 for this camera–quantity 1–and that included shipping. Coupled with a Raspberry Pi and OpenCV (~$200 with a power supply, heat sink, and other necessary stuff), it is possible to build a vision system that is faster (by a lot) and smarter (by a lot) than the vision systems we sold for $30k (~$74k in today's dollars) when I started at Intelledex in 1983. The upshot is that it is now possible to do tasks cheaply that no one would have ever thought possible. There are large categories of machine vision problems that companies are accustomed to paying through the nose to solve. That is truly not necessary anymore if one is smart enough to put the pieces together. I hope we are smart enough.

Sorting coffee beans

Some good news and some more good news arrived yesterday. The first is that my participation in the sickle cell disease diagnostic project is wrapping up. I will still be on call for the machine vision elements of the project, but I will no longer be tasked with the day-to-day programming. The second is that a good friend (Gene C.) I have known since I was a child has agreed to work with me on a side project. We are going to make a “cheap but good” coffee bean inspection machine. There are lots of machines that do that, but none of them are particularly cheap in the way we want our machine to be cheap. We hope to do this for another friend who lives in Dallas.

I bought two lights I plan to use for the project. One of them is a back light and one of them is a ring light. I am pretty sure we will not be able to use these in our finished instrument, but they will certainly help me with developing the lighting and optics. I still need to buy (at least) a few M12-mount lenses and a cheap USB microscope. I already have a camera with the wrong lens, but it has allowed me to start writing the program I will use for image processing and classification algorithm development. I got it to take pictures before I went to bed last night.

Statistical chart that changed my approach to analytics for machine vision

My buddy (the brilliant) Andrew B. posted the following image on his Twitter feed along with a link to the article from which it came. Those who work in this arena will understand. I get angsty about whether I have chosen the right model. Most of the time, it turns out that, if I did not choose the best one, I got pretty close. Thanks, Andrew.

Which model should I use?

Using Mattermost

I have installed a program named Mattermost on my home server. I have been using it for a couple of weeks and it is very powerful. At my previous job, we used a similar program named Slack extensively. Both of these are super-capable chat clients. I figured out how to do task lists in Mattermost. This is a life saver for a lot of the stuff I am doing.

I like Mattermost best because it is free at the low level I need, easy to use, I can run it on my home server, and it does everything I want it to do.

Cheap cameras used for unintended purposes

I will have one more work week in Texas after today. I enjoy my job and the people where I work a lot, and it was agonizing to turn in my notice. The part of the job I love the most is the requirement to create sophisticated machine vision and video analytics applications with cheap USB cameras and ARM embedded computers that run embedded Linux, usually Debian. We prototype a lot of the stuff on Raspberry Pis, which is great because there is such a big user community that it is easy to get quick answers about just about anything. The four cameras in the image accompanying this post range in value between $20 and $50.

All of the cameras work just fine right out of the box for the purpose for which they were designed–generally streaming video, with the camera controlling the capture gain and offset. However, leaving the offset, gain, and lighting controls to the camera rather than the application reduces the repeatability and precision of most machine vision applications. So it has been part of my job to dive into the driver code far enough to figure out how to set the registers that control these cheap cameras well enough to meet the stringent requirements of many machine vision applications. That takes a lot of patience and, although it is not exactly rocket science, it is very rewarding when the last piece of minutiae is chased down and the stuff starts working.
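When a driver does expose the controls through V4L2, you can sometimes avoid the register-poking entirely and set them from the application. A hedged sketch with OpenCV (property support and the magic values vary a lot by driver, and these numbers are hypothetical):

```python
# Sketch: force manual exposure/gain from the application with OpenCV.
import cv2

cap = cv2.VideoCapture(0)  # assumes the V4L2 backend on Linux
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)  # 1 = manual mode on many V4L2 drivers
cap.set(cv2.CAP_PROP_EXPOSURE, 100)     # units are driver specific
cap.set(cv2.CAP_PROP_GAIN, 32)

ok, frame = cap.read()
if ok:
    cv2.imwrite("manual_exposure_test.png", frame)
cap.release()
```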

One thing I have learned is that this “big data” thing is here to stay, at least in my world of machine vision, embedded computing, and video analytics. There are tons of problems you can almost solve deterministically that become tractable when enough data and machine learning are thrown at them. I am loving working with Weka, R, and the machine learning functionality in the OpenCV library because they open up new vistas, not to mention that I can more frequently say, “I think I can do that,” without squinting my eyes and wondering whether I am lying.

What I do on Saturday

I got up early and walked to work, where I spent several hours figuring out that the technology we have available on our project will not allow us to do what we want to do. Lorena and I had a late breakfast and I walked back to the apartment to work on the sickle cell diagnosis project for CWRU. We are at the point where we need to start testing our system in the field in order to continue to receive grant money. That means I am on a strict time schedule with a fairly continuous stream of small but important short-term deliveries. It is a little bit of a challenge right now with my day job, the house purchase, and the move, but all I have to do is survive three months of this and I might actually have done something good for humanity–Africa and India in particular. Of course, lots of people have the skills to do this kind of thing, so I am grateful to get the chance to do it.

All that in addition to having lots of technical help from Kiwi.

