Category Archives: Software

Sorting coffee beans

A cheap ring light

Some good news and some good news arrived yesterday. The first is that my participation in the sickle cell disease diagnostic project is wrapping up. I will still be on call for the machine vision elements of the project, but I will no longer be tasked with the day-to-day programming. The second is that a good friend (Gene C.) I have known since I was a child has agreed to work with me on a side project. We are going to make a “cheap but good” coffee bean inspection machine. There are lots of machines that do that, but none of them are particularly cheap in the way we want our machine to be cheap. We hope to do this for another friend who lives in Dallas.

Cheap back light

I bought two lights I plan to use for the project. One of them is a back light and one of them is a ring light. I am pretty sure we will not be able to use these in our finished instrument, but they will certainly help me with developing the lighting and optics. I still need to buy (at least) a few M12-mount lenses and a cheap USB microscope. I already have a camera with the wrong lens, but it has allowed me to start writing the program I will use for image processing and classification algorithm development. I got it to take pictures before I went to bed last night.
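Just to show the flavor of that first step, here is a minimal sketch of grabbing a snapshot from a USB camera with OpenCV. It is not the project program, just the general idea; the device index and file name are placeholders.

// Minimal snapshot grab with OpenCV (illustrative only).
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap( 0 );            // first USB camera on the system
    if ( !cap.isOpened() )
        return -1;

    cv::Mat frame;
    cap >> frame;                         // grab a single frame
    if ( frame.empty() )
        return -1;

    cv::imwrite( "bean_snapshot.png", frame );   // save it for later inspection
    return 0;
}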

Using Mattermost

Using Mattermost from home server

I have installed a program named Mattermost on my home server. I have been using it for a couple of weeks and it is very powerful. In my previous job, we used a similar program named Slack extensively. Both of these are super-capable chat clients. I figured out how to do task lists in Mattermost. This is a lifesaver for a lot of the stuff I am doing.

I like Mattermost best because it is free at the usage level I need, it is easy to use, I can run it on my home server, and it does everything I want it to do.

Cheap cameras used for unintended purposes

Cheapy USB cameras

I will have one more work week in Texas after today. I enjoy my job and the people where I work a lot, and it was agonizing to turn in my notice. The part of the job I love the most is the requirement to create sophisticated machine vision and video analytics applications with cheap USB cameras and ARM embedded computers that run embedded Linux, usually Debian. We prototype a lot of the stuff on Raspberry Pis, which is great because the user community is so big it is easy to quickly get answers about just about anything. There are four cameras in the picture accompanying this post that range in price from $20 to $50.

All of the cameras work just fine right out of the box for the purpose for which they were designed–that is, generally streaming video with the camera controlling the capture gain and offset. Conversely, it reduces the repeatability and precision of most machine vision applications if the offset, gain and lighting controls are not managed by the application. So, it has been part of my job to dive into the driver code far enough to figure out how to set the registers that need to be set to control cheap cameras well enough to work in accord with the stringent requirements of many machine vision applications. That takes a lot of patience and, although it is not exactly rocket science, it is very rewarding when the last piece of minutiae is chased down and the stuff starts working.
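To give a feel for it without posting any work code, here is a generic sketch of taking exposure and gain away from a UVC camera through the Linux V4L2 control interface so the application owns them. The device path, control IDs and values are illustrative; plenty of cheap cameras only expose a subset of these controls, so the return values matter.

// Illustrative only: force manual exposure/gain on a V4L2 (UVC) camera.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>

static int SetControl( int fd, unsigned int id, int value )
{
    v4l2_control ctrl = {};
    ctrl.id = id;
    ctrl.value = value;
    return ioctl( fd, VIDIOC_S_CTRL, &ctrl );    // negative return means the camera refused it
}

int main()
{
    int fd = open( "/dev/video0", O_RDWR );
    if ( fd < 0 ) { perror( "open" ); return -1; }

    // Take exposure and gain away from the camera so repeated captures are repeatable.
    SetControl( fd, V4L2_CID_EXPOSURE_AUTO, V4L2_EXPOSURE_MANUAL );
    SetControl( fd, V4L2_CID_EXPOSURE_ABSOLUTE, 250 );   // device-dependent units
    SetControl( fd, V4L2_CID_AUTOGAIN, 0 );
    SetControl( fd, V4L2_CID_GAIN, 32 );                 // device-dependent range

    close( fd );
    return 0;
}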

One thing I have learned is that this “big data” thing is here to stay, at least in my world of machine vision, embedded computing and video analytics. There are tons of things you can almost do deterministically that become tractable when enough data and machine learning are thrown at them. I am loving working with Weka and R and the machine learning functionality in the OpenCV library because they open up new vistas, not to mention I can more frequently say, “I think I can do that” and not squint my eyes and wonder whether I am lying.
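For anyone curious what the OpenCV side of that looks like, here is a toy sketch of its ml module (OpenCV 3 or later assumed): train a little random forest on made-up feature vectors and classify a new one. Real feature extraction is, of course, the hard part.

// Toy example of OpenCV's ml module with made-up data (illustrative only).
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <iostream>

int main()
{
    // Four 2-D feature vectors with class labels 0 and 1.
    float featureData[ 4 ][ 2 ] = { { 1.f, 1.f }, { 1.f, 2.f }, { 8.f, 8.f }, { 9.f, 7.f } };
    int   labelData[ 4 ]        = { 0, 0, 1, 1 };
    cv::Mat features( 4, 2, CV_32F, featureData );
    cv::Mat labels( 4, 1, CV_32S, labelData );

    cv::Ptr<cv::ml::RTrees> forest = cv::ml::RTrees::create();
    forest->train( features, cv::ml::ROW_SAMPLE, labels );

    cv::Mat sample = ( cv::Mat_<float>( 1, 2 ) << 8.5f, 7.5f );
    std::cout << "Predicted class: " << forest->predict( sample ) << std::endl;
    return 0;
}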

What I do on Saturday

Working on sickle cell disease on Saturday with Kiwi

I got up early and walked to work, where I spent several hours figuring out that the technology we have available on our project will not allow us to do what we want to do. Lorena and I had a late breakfast and I walked back to the apartment to work on the sickle cell diagnosis project for CWRU. We are at the point where we need to start testing our system in the field to continue to receive grant money. That means I am on a strict time schedule with a fairly continuous stream of small but important short-term deliveries. It is a little bit of a challenge right now with my day job, the house purchase and the move, but all I have to do is survive three months of this and I might actually have done something good for humanity–Africa and India in particular. Of course, lots of people have the skills to do this kind of thing, so I am grateful to get the chance to do it.

That in addition to having lots of technical help from Kiwi.

Working on sickle cell disease

Sickle cell disease vision software

This weekend, I put the final touches on the prototype/demo version of the sickle cell disease software I am developing for Case Western Reserve University and HemexHealth. It will be demonstrated to potential partners this week. I am not sure how much longer I will be needed for this project other than for some tweaks to make it work better and be easier to use, but it has been one of the most gratifying projects on which I have ever worked. It has huge potential to do good. I hope I get the opportunity to do more projects like this again in my lifetime.

Sickle cell disease diagnosis project

HemexHealth sickle cell anemia diagnosis device

Down 7.4 of 60

After things started to settle down a little in our lives after the funeral, I had been trying to figure out what to do next. The folks were gone and the kids are on their own and are way too low maintenance for our taste (still going through withdrawal from their going off to college three years ago). Fortunately, I was recently selected to help a group of researchers at Case Western Reserve University and a company named HemexHealth develop a product with an incredible social mission. I really do not know much about how it all works (after all, I type for a living), but the product is designed to rapidly and inexpensively diagnose sickle cell disease. I DO know how to do my part of the product and am thankful for the opportunity to contribute to such a noble endeavor.

It is going to be a ton of work, but this is exactly the type of project I love. If this is not a good hobby project, I do not know what is. The other thing it will do is take up enough time that maybe Lorena will feel some modicum of guilt about browbeating me into exercising so much. “It’s for a good cause, honey, and you know I program better with a belly full of biscuits and gravy!”

Continued work on GaugeCam

Kiwi and Dad work on GaugeCam together

Kiwi continues to help me with my work on the GaugeCam project. We all received an email yesterday describing some of the new information that will appear in the next refereed journal article. Some of it will have an impact on my work–we will know what to do to make the system even more accurate under changing conditions. It is slow work since I have so much other stuff going on, but my hope is that I can turn this into my retirement project. I hope to have a demo of some of the stuff we are doing to put up here within the next few months.

YUYV (YUV422) to BGR/RGB conversion (for Logitech C270 camera using OpenCV)

I had an irritating problem doing a simple image conversion for my GaugeCam project, where I am capturing images with a USB camera that I want to process with OpenCV on a BeagleBone Black embedded computer. I am using a Logitech C270 camera for my development work on the desktop, but we will be using a different, more industrial quality camera when we get ready to put the devices we are building in the field. At any rate, I usually can just do an Internet search and find some code I can cut and paste to do these simple types of conversions, so I thought I would just put this out there in case anyone wants to use it. If you have questions on how to use it with OpenCV, just ask. Feel free to just cut and paste as needed–use at your own risk; it works in my application. This is not a tutorial, just a convenience for whoever can use it. I know the format is not great–I will get around to adding something to the blog for code pasting if I ever do any more of it.

A couple of additional notes:

  • I am converting this to BGR (for OpenCV) rather than the RGB specified in Wikipedia.
  • I am using the boost::algorithm::clamp method to do the clamping (using namespace boost::algorithm). You can do clamping with something like this if you like: MIN( 255, MAX( 0, x ) )
  • You might have to convert “u_char” to “unsigned char” depending on what other includes you use.
  • I am assuming neither the source nor the destination buffer has any row padding, i.e., each stride is the width times the bytes per pixel.
  • I am assuming the output buffer has been allocated.
  • I am assuming the input buffer is a YUYV buffer that is two-thirds the size of the output buffer in the format specified in the Wikipedia link.
  • The way I am using this is passing the cv::Mat data pointer into the method as the output buffer.

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// Conversion algorithm from: https://en.wikipedia.org/wiki/YUV
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
int ConvertYUYV_2_BGR( const int nWidth, const int nHeight,
                       u_char *pPixSrc, u_char *pPixDst )
{
    if ( NULL == pPixSrc || NULL == pPixDst )
    {
        cerr << "FAIL: Cannot convert YUYV to BGR from/to NULL pixel buffers" << endl;
        return -1;
    }

    int nStrideSrc = nWidth * 2;    // YUYV source: two bytes per pixel
    int nStrideDst = nWidth * 3;    // BGR destination: three bytes per pixel
    u_char *pSrc = pPixSrc;
    u_char *pDst = pPixDst;
    int nRow, nCol, nColDst, c, d, e;
    for ( nRow = 0; nRow < nHeight; ++nRow )
    {
        // Each four-byte YUYV group (Y0 U Y1 V) yields two BGR pixels (six bytes)
        for ( nCol = 0, nColDst = 0; nCol < nStrideSrc; nCol += 4, nColDst += 6 )
        {
            d = ( int )pSrc[ nCol + 1 ] - 128;    // d = u - 128;
            e = ( int )pSrc[ nCol + 3 ] - 128;    // e = v - 128;

            // c = 298 * ( y' - 16 ) (for first pixel)
            c = 298 * ( ( int )pSrc[ nCol ] - 16 );

            // B - Blue
            pDst[ nColDst     ] = ( u_char )clamp( ( c + 516 * d + 128 ) >> 8, 0, 255 );
            // G - Green
            pDst[ nColDst + 1 ] = ( u_char )clamp( ( c - 100 * d - 208 * e + 128 ) >> 8, 0, 255 );
            // R - Red
            pDst[ nColDst + 2 ] = ( u_char )clamp( ( c + 409 * e + 128 ) >> 8, 0, 255 );

            // c = 298 * ( y' - 16 ) (for second pixel)
            c = 298 * ( ( int )pSrc[ nCol + 2 ] - 16 );

            // B - Blue
            pDst[ nColDst + 3 ] = ( u_char )clamp( ( c + 516 * d + 128 ) >> 8, 0, 255 );
            // G - Green
            pDst[ nColDst + 4 ] = ( u_char )clamp( ( c - 100 * d - 208 * e + 128 ) >> 8, 0, 255 );
            // R - Red
            pDst[ nColDst + 5 ] = ( u_char )clamp( ( c + 409 * e + 128 ) >> 8, 0, 255 );
        }
        pSrc += nStrideSrc;
        pDst += nStrideDst;
    }
    return 0;
}
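In case it helps, here is a minimal sketch (not lifted from my application) of how the routine gets used with OpenCV: the cv::Mat owns the BGR buffer and its data pointer is passed in as the destination, as mentioned in the notes above. pYuyv stands in for whatever YUYV frame buffer you captured, with nWidth and nHeight as before.

// Illustrative usage: pYuyv points to a captured nWidth x nHeight YUYV frame.
cv::Mat imgBGR( nHeight, nWidth, CV_8UC3 );
if ( 0 == ConvertYUYV_2_BGR( nWidth, nHeight, pYuyv, imgBGR.data ) )
{
    cv::imwrite( "converted.png", imgBGR );   // or hand imgBGR to the rest of the OpenCV pipeline
}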

Good progress on webification of GaugeCam software

My work to develop a camera with a web interface for GaugeCam is progressing nicely. Right now, I am just working on GUI kinds of things. I have live images and snapshots from the camera working and have moved on to a good little chunk of work to get region-of-interest selection and ruler tool setup working on the web as it worked in the original software. Sadly, the hard drive on my computer at home went bad so I am fighting through that for a little while.
GaugeCam webification progresses

Beaglebone Black development — Bringing up a website

I got my GaugeCam development site, http://gaugecam-dev.duckdns.org/, that I run from my home office on the BBB up and going again. You can see it here. It is pretty rudimentary right now, but I will start moving the new GaugeCam software there as I get it written. I am, again, putting up a list of the things I did to get there for my own sake so I can duplicate it when I get to the next project. This post is going to be a list of links to a couple of videos and the stellar duckdns site that provides free dynamic DNS services for hobby and volunteer projects like this. So here is the list that got me up and running:

Pushback on Christian’s latest blog post

Christian recently wrote a technical post on his blog about demosaicing of images captured with Fujifilm’s new X-Trans sensor. He tested some methods to perform the demosaicing, wrote a first pass of his own demosaicing code and then posted about it all on the blog. That kind of thing is pretty interesting to guys who work in that area and/or have cameras. He got a couple of nice comments on the blog post itself, but what boggled my mind was that some guy wrote this over on Hacker News where his article got some coverage:

    [sic] someone is wasting a [sic] phd scholarship to solve a problem that only [sic] exist because people keep dumping money on a company that damages their own product by now releasing source or specs?

What a tool. That is like saying people are wasting McDonald’s, Amazon’s or the local donut shop’s money to solve a problem just because the guy solving the problem happens to work for that company. PhD scholarship students are not slaves. Some of their time is their own. Besides that, Christian is not on scholarship. He is a Research Fellow and a Dean’s Fellow, so he is an employee, just like if he were working at McDonald’s, Amazon or the local donut shop. And who cares how the problem was caused. People have the problem and it is an interesting problem. Why not solve it? What kind of a waste of oxygen writes a comment like that?

CoffeeSig web GUI

CoffeeSig single camera GUI start

I spent much of the day today trying to figure out how to use CSS to control the way the web pages look in the Wt application I am writing. This is the one I hope to use to learn how to capture images from 1-n cameras to the web with analysis in real time. It is fun, interesting and frustrating all at the same time. The funny thing is that the frustrating part is getting all browser types to behave the same way. I have decided I will just aim at Firefox and Chrome because they are the most ubiquitous in my little world. The companies that make those browsers have, in my humble opinion, very sketchy reputations, but that is another story for another time.
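For reference, here is a bare-bones sketch of how an external stylesheet gets hooked into a Wt application. This is not the CoffeeSig code; it assumes Wt 4, and the class name, CSS path and widget are placeholders.

// Minimal Wt 4 application that pulls its look from an external CSS file (illustrative only).
#include <Wt/WApplication.h>
#include <Wt/WContainerWidget.h>
#include <Wt/WText.h>
#include <memory>

class CoffeeSigApp : public Wt::WApplication
{
public:
    explicit CoffeeSigApp( const Wt::WEnvironment &env )
        : Wt::WApplication( env )
    {
        setTitle( "CoffeeSig" );
        useStyleSheet( "css/coffeesig.css" );       // external CSS controls the page look
        root()->setStyleClass( "main-panel" );      // class defined in that CSS file
        root()->addNew<Wt::WText>( "Camera view goes here" );
    }
};

int main( int argc, char **argv )
{
    // Wt picks up --docroot/--http-address/--http-port from the command line.
    return Wt::WRun( argc, argv, []( const Wt::WEnvironment &env ) {
        return std::make_unique<CoffeeSigApp>( env );
    } );
}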

The next step will be to start hooking up the camera. I think I might go back to the license plate reader as my first application for this thing, but I am not sure yet. When I get a camera hooked up that is controllable from my local network with a browser, I will port it over to the little BeagleBone Black I have been running from the apartment for quite a while now as a web server that does not do much. My buddy John H. from Arizona is helping with this whole thing. He will be a big help because we will be getting into some pretty serious 3D/time-domain image processing here as we get past the one-camera application.

Machine learning

Professionally, I have to make a (semi) dramatic change in direction to learn some new stuff so I can do my job. I have to drop my work on my EKG project and GaugeCam for the next few months because I need to learn more about machine learning. I have done a little of it with R, Weka and OpenCV, but I have a need to delve into it more deeply to build a product that is commercially viable now, so I am going to choose between learning more about R or learning scikit-learn with Python. I am leaning toward scikit-learn because they say it is easier to learn for someone who is used to procedural languages like C/C++/Python/etc. I am actually kind of excited. I have real data with which I can get started and real problems I can try to solve that might be a help both commercially and altruistically. I will try to put some of my results up here as I go along.

Video of the EKG running with my own software


My hard work paid off this weekend. I am working with my long-time friend and colleague, Frank, to develop some EKG software for our $27 EKGs. Actually, the EKG part has gone up now to $51 and the Arduino needed to run it costs another $20. At any rate, the software shown here accommodates six channels (even though that has not yet been tested because I only have one channel). It needs some cleanup, but it works great.

Strip charts for the EKG

When I started building my $27 EKG, I just assumed there would be an excellent library to chart the output to the screen in a compelling and useful way. There are a couple of libraries that are pretty good, but they are either really old, have bad open source licenses, are not fast enough (we need to eat a lot of data in real time) or they do not do exactly what we want. It is a little bit of a hassle to write something like this when in a rush, but it could not be helped. That is what I did most of the day yesterday. I hope to have the thing all up and running in the next few days. It will be useful to have an unencumbered library for a lot of the things we want to do with this little project and probably for future projects, too, so it is not a loss.

Endianness “bytes” me one more time.

I had a little bit of a breakthrough on my EKG project last night. I actually had the idea when I was completely away from the project for a few days. It caused me to re-read the manual where it said the readings from the EKG are sent down the serial cable in big endian order. Each value for a 10-bit number takes up two bytes. The high order byte can either be first or last. The receiving computer expected little endian order. I now swap the bytes before they are plotted or recorded and we get the beautiful plot above. You can barely see four little lines below the left side of the signal plot. Those lines make up the legend for the electrode channels. The system can handle six channels, but we are going to try to do just four on this setup. The next step is to get the graph to be a moving strip chart. The graph, as it is right now, just writes over itself.
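Just to make the fix concrete, here is a tiny sketch of the idea (not the project code): the 10-bit sample arrives high byte first, so the receiving side has to put that first byte in the high-order position rather than reading the pair straight into a little-endian 16-bit value.

// Assemble a big-endian two-byte sample on a little-endian receiver (illustrative only).
#include <cstdint>

inline uint16_t AssembleSample( uint8_t firstByte, uint8_t secondByte )
{
    // The byte that arrives first is the high-order byte, so shift it up.
    return static_cast<uint16_t>( ( firstByte << 8 ) | secondByte );
}

// Equivalent fix if the two bytes were already read into a 16-bit word in the wrong order.
inline uint16_t SwapBytes( uint16_t v )
{
    return static_cast<uint16_t>( ( v >> 8 ) | ( v << 8 ) );
}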

I completely duplicated my current setup for a friend, Frank, who is joining this project. He is way more skilled than I in a lot of this stuff–especially the electrical engineering parts. I need to order myself an additional three channels of electronics, but that is on its way to Frank right now.

P.S. We are thinking of cross-platforming (Windows/Linux) and open sourcing (free as in both freedom and beer) the software and writing a user guide/tutorial on how to set the thing up if anyone shows any interest, because there does not seem to be anything out there that is really hobby friendly. If I am wrong, maybe someone can correct me. Because of our day jobs, we are still months away from that.

Sometimes a lot of work does not manifest much

One of the most painful aspects of the work I do is that I need to learn to work with new software libraries on a regular basis. The pain is associated with learning new syntax, parameters, and usages. One generally knows what the libraries are supposed to do, but cannot get them to work until all of the nuances, idiosyncrasies and minutiae are well understood. For extensive libraries, that just takes a lot of time–at least for me. There are some libraries I have used for so long (OpenCV, Boost, Qt, etc.) that I can rapidly do the vast bulk of what needs to be done in a new application because I am intimate with the minutiae. But there is always something–obsolescence, license changes, functionality changes and that sort of thing–that requires the adoption of new libraries. I actually kind of enjoy learning new stuff, but it is a lot more fun when there is no schedule or budget to create stress.

What was that all about? I have found some libraries I want to use to plot my EKG. They look great and I wish I would have started working with them sooner. I am confident now (well, not 100%, but very confident) they will be an excellent fit for this and future projects, so I am starting to use them. Last night I spent three hours to get from the top images to the bottom image, then discovered I was probably using the wrong chart type for the thing I wanted to do, so I spent another hour starting to get the new graph type in place, but never got it quite working. This kind of thing is normal for me. Maybe I am just slow, but perseverance counts both in software development and in learning. Maybe I will be able to get the chart going tonight.

Technology caught up with us (that is a good thing)

I have had little time to work on the GaugeCam project due to other responsibilities. We got a helping hand with this product when we found that there are now cameras available that do precisely the part of the product we did not want to do and at which we were not that good. The camera in this post is an example of that. Before, we had to put together a cellphone-enabled remote camera with mounting systems, batteries, a solar setup, etc. Now, you can just buy it and install it yourself. So now I think we will be able to concentrate on the software and the water level data that is accumulated from the product, which is really our strong point anyway.

    Now I will be able to concentrate on my EKG project a little more before I go back to GaugeCam. Also, I will be able to use the BeagleBone Black I purchased on the EKG if I want. I am hoping to communicate between the Arduino/EKG electronics and the mothership computer via Bluetooth, but I am not sure I can get it to go fast enough. The Bluetooth will handle it, but I do not know if the Arduino can shovel the bits fast enough for the EKG sample rate I need (1K Hz). We shall see!