I’ve written a great image detection program to make cool timelapses of my Sintratec Kit 3D prints. Check it out.
Recently, I’ve been reviewing the massive pile of footage and timelapses that I’ve been collecting over the last year while using my Sintratec Kit. I was planning on making a massive video of all my timelapses and posting it to my channel. However, after loading up tens of thousands of images, the content was really choppy and seizure-inducing.
A week ago, I uploaded my Dumbbell Ends experiment (where I used 100% used powder and made a new set of dumbbell end nuts for my exercise set). While showing the timelapse, I zoomed in on only the best parts of the collection. It created a really neat growth video of the build. It’s comparable to how a strobe light helps to determine top-dead-center while timing an engine, or to how a CAT scan cuts through a body. Instead of showing frame after frame of warm-up time, fin application, powder heating, outline creation, and partial hatching, my process would basically take only the best image frames from the timelapse and render them into a video.
I figured it would make a lot more sense to post a video trying to isolate these moments, rather than make an hour long video of random 3D prints. It would make for a more compelling timelapse video, and I could use this method for future timelapses as I continue to make progress using my Sintratec Kit for various automotive, architectural, and experimental uses.
The problem is, there are tens of thousands of images. Literally hundreds of gigabytes of images. Some long print jobs, amassing over 1,000 layers each, can sometimes take 24 to 27 hours, and with a photo being taken every five or ten seconds, that adds up to a large quantity of photos to examine. The best solution is to develop a program to determine which images are good and which are inadequate.
You might be asking, why am I photographing all my 3D printing jobs? The answer is simple. You can’t learn from the machine if you aren’t paying attention. Since I can’t spend most of my day staring at the machine, I set up a timelapse camera to record every few seconds to inform me of any issues or successes. In addition to my recording, I have a live camera linked to my smartphone for real-time monitoring to make sure everything is functioning properly. If something happens, I can use the real-time camera to recognize and remedy the immediate problem, and use the timelapse to review the last few minutes of layers to see if something can be diagnosed from the data.
For months, I’ve been quietly reviewing the images and saving them on a backup hard drive. I honestly didn’t think they would be used for much other than my verification process, and since 99% of my 3D print jobs go without a problem, the images are usually archived just to free up the SD card. It wasn’t until the quarantine started and I finished up my Sintratec Kit Assembly videos that I returned to the timelapse footage to add some supplementary footage between major shots. I compiled a bunch of timelapses and used them here and there.
However, no matter how I dolled up the timelapse videos with gradual zooms and pans, I never really liked that aesthetic. Sintratec Kits have IR lights located on the ceiling of the printer “hat,” and these lights are localized heaters that warm up the top layer of the print bed powder. They are constantly flickering, adjusting their luminosity to achieve the proper print surface temperature before the laser is allowed to make a path. The flickering is not that prevalent in real time, but in a timelapse it can appear almost strobe-like. Also, the applicator arm regularly moves from left to right to apply one-tenth of a millimeter of powder for each layer, and it can obstruct the final laser-hatch pattern if the camera takes a shot at the wrong time. Lastly, each layer of the build is not uniform, resulting in different laser durations for each layer. That means one layer might take 6 seconds to run, while another might take 16. All of this factors into the irregularity of the timelapse video, making it choppy and ugly.
The solution? Author a program that can identify good layers from bad layers. Save the good images in a dedicated location and leave the unwanted photos behind. Seems simple, but it’s not.
The first place to start was to search each image for the color of the sintered powder. If the image has a lot of that color, then it’s probably a good candidate for selection. However, as mentioned in the previous section, the IR lights are constantly flashing, which means the color is constantly changing. One frame of the timelapse might have a brighter sintered layer than the last, making it hard to tell what is what. Also, the window of the kit has an orange-tinted layer, used to protect people from laser refraction that could damage one’s eyes. That cuts out a lot of color differentiation. The last hurdle is that the placement of the camera is slightly different every time. That leads to different color scale variations per print job, as well as different regions of the image where the printable area might have been captured. All those considerations make this a more complicated project than it seems.
After two days of sporadic programming, I was able to make something that captures about 98% of the good images and saves them into a directory. It’s really spiffy. This is going to save me hours of time in the future.
Here’s how it works. The easiest way to describe it is as a smart copy-and-paste tool. The major inputs are the “Source” and “Destination” folders. It reports progress in two status bars, and it has a UI that lets the user select acceptable color values and reject unacceptable ones.
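At its core, the job is: walk the source folder, test each frame against the filter criteria, and copy the keepers into the destination. Here’s a minimal Python sketch of that loop (my actual tool isn’t necessarily written this way; the is_good_frame callable just stands in for the filtering described below):

```python
import shutil
from pathlib import Path

def sort_timelapse(source_dir, dest_dir, is_good_frame):
    """Copy the frames that pass the filter from source_dir to dest_dir.

    `is_good_frame` is any callable that returns True for a frame worth
    keeping; the actual criteria are sketched in the sections below.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    frames = sorted(Path(source_dir).glob("*.jpg"))
    for i, frame in enumerate(frames, start=1):
        if is_good_frame(frame):
            shutil.copy2(frame, dest / frame.name)
        # in the real UI this feeds the two progress/status bars
        print(f"processed {i}/{len(frames)}: {frame.name}")
```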
After picking a source directory, it will load the first image. Since the Sintratec Kit has a lengthy warm-up and layer schedule, this sorting tool allows the user to skip ahead to later images. Once a typical layer is found, the user can right-click on a pixel that appears to be representative of that particular sintering color. Every image is made up of a grid of pixels, each with a corresponding R (Red), G (Green), and B (Blue) value. The program uses that selected RGB value as its target, and the sampled color values are stored in the UI for review. A buffer region surrounding the target color is also stored to allow for variation in the color spectrum. JPG images are not populated with pixels of homogeneous colors; if you zoom in, you will find a variety of colors in what appears to be a uniform region. The buffer values also allow for the variation in IR color between frames.
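In code, that color test boils down to checking whether each channel of a pixel falls within the buffer around the sampled target. A rough sketch, assuming Python with Pillow (the filename, coordinates, and buffer size below are just placeholders):

```python
from PIL import Image

def matches_target(pixel, target_rgb, buffer=25):
    """True if the pixel's R, G, and B are each within `buffer`
    of the sampled target color (the right-clicked pixel)."""
    return all(abs(p - t) <= buffer for p, t in zip(pixel[:3], target_rgb))

img = Image.open("layer_0421.jpg").convert("RGB")   # hypothetical frame
target = img.getpixel((640, 480))                   # the right-clicked pixel
print(matches_target(img.getpixel((642, 481)), target))
```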
The next option is the count of acceptable pixels. Sintratec Kits make every layer by starting with an outline and filling it in with a hatch pattern. My goal is to have a picture of the finished layer, not a bunch of incomplete or unhatched layers. So, if one image shows 15% sintered powder and the next shows 98%, I’d rather accept the latter. This conditional allows the program to find layers of acceptable color but skip any that have minimal sintering completed (there’s a short sketch of this check after the next paragraph).
Lastly, I made a selection box for testing. Scanning the entire pixel grid for colors would be a waste of resources and time, so I implemented a simple selection rectangle that can be drawn anywhere on the frame. This proves useful because each print job can place the build area in a different region of the shot. The X and Y values of the selection are saved and scaled to match the image dimensions, which gives the total number of pixels to be scanned within the selection box. The proportion of target pixels to scanned pixels is important for estimating the acceptable pixel count.
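Putting the last two ideas together: crop to the selection box, count the pixels that match the sintering color, and accept the frame only if the count (or proportion) clears a threshold. Another hedged sketch building on matches_target above (the box coordinates and the 60% threshold are made-up numbers):

```python
def count_in_selection(img, box, target_rgb, buffer=25):
    """Count pixels inside the selection rectangle `box`
    (left, top, right, bottom) that match the target sintering color."""
    region = img.crop(box)
    hits = sum(
        1 for pixel in region.getdata()
        if matches_target(pixel, target_rgb, buffer)
    )
    total = region.width * region.height
    return hits, hits / total   # raw count and proportion of scanned pixels

hits, proportion = count_in_selection(img, (400, 300, 900, 700), target)
is_complete_layer = proportion >= 0.60   # tune per print job
```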
I figured that would suffice, and I ran my early version of the code. It didn’t really work. It turned out that choosing by RGB value alone allowed for crazy things to occur. Sometimes a completely blank bed would get in, because the IR light dimmed to just the right value. Another annoying thing was that many “good” frames would not be selected because the color buffer couldn’t find them. Also, the applicator arm would get in the way, and the program would mistake the shadow along its side for a dark sintered area of powder. I needed a few more tests before it could properly distinguish a good layer from a bad one.
The first enhancement was setting up an “ignore” pixel condition. If the bed had too many pixels of an unacceptable color (like white), that would indicate something was totally wrong. This helped to remove some of the applicator arm interference images. But it didn’t always help.
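The ignore condition is essentially the same counting test run in reverse: sample a color that should never dominate the shot, and reject the frame if too much of the selection matches it. A tiny sketch reusing count_in_selection (the white sample color and the 5% limit are only guesses):

```python
def too_many_ignored(img, box, ignore_rgb=(255, 255, 255), buffer=25, limit=0.05):
    """Reject the frame if more than `limit` of the selection matches
    an 'ignore' color, which signals something is wrong in the shot."""
    _, proportion = count_in_selection(img, box, ignore_rgb, buffer)
    return proportion > limit
```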
The second enhancement fixed everything and was basically the key to near-perfect layer identification. The crux of the solution was to look at the change in color, not the color itself. While finding the sintering color helps in the beginning, the major way this tool works is by sorting images based on how sharp the change in color is. I wanted to accomplish this using the fastest math I could employ, so I treated each pixel’s RGB values as a vector. I’m confident you get this concept, because, well, it should be obvious to even the most dimwitted individual who holds an advanced degree in hyperbolic topology. It’s simple: find the vector differential between neighboring pixels and calculate its magnitude, then set up a textbox that lets the user filter based on that rate of change. Not only did this work amazingly well for finding the sintered layer (regardless of IR color), but it also skipped the applicator frames, usually because the motion blur of the swipe produces a gradual gradient and therefore a slower rate of pixel change.
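Here is roughly what that looks like with NumPy, treating each pixel as an RGB vector and comparing it with its right-hand neighbor (my program may walk the neighbors differently, and the magnitude threshold and edge count below are purely illustrative):

```python
import numpy as np

def sharp_change_count(img, box, min_magnitude=60):
    """Count pixels in the selection whose color changes sharply from the
    pixel next to them: subtract the neighboring RGB vector and measure
    the magnitude of the difference."""
    region = np.asarray(img.crop(box), dtype=np.float32)   # H x W x 3
    diff = region[:, 1:, :] - region[:, :-1, :]            # neighbor differential
    magnitude = np.linalg.norm(diff, axis=2)               # vector length per pixel
    return int((magnitude >= min_magnitude).sum())

# A crisply hatched layer produces many sharp edges inside the selection box;
# a motion-blurred applicator swipe produces far fewer.
if sharp_change_count(img, (400, 300, 900, 700)) >= 5000:
    print("keep this frame")
```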
The program takes less than a second to process a large region, and it goes faster if fewer pixels are found or used. It reports all the loading, processing, and removing of each image, and it reports all the found values for each frame. This is useful for the times when I estimate what should be a good acceptable pixel count but guess a little too high or too low; I just restart the session with amended values.
The latest enhancement I’m developing is exporting an XML file of the filter criteria to the destination directory so that the tool can reload the previous session’s data. I’d also like to add a percentage estimation for a target color, so I don’t have to keep estimating the proportion by hand.
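The session export could be as simple as dumping the current settings to a small XML file alongside the copied frames. A sketch with Python’s standard library (the element names and values here are placeholders, not my actual schema):

```python
import xml.etree.ElementTree as ET

def save_session(path, settings):
    """Write the current filter criteria to an XML file so a
    later session can reload them from the destination folder."""
    root = ET.Element("FilterSession")
    for key, value in settings.items():
        ET.SubElement(root, key).text = str(value)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

save_session("session.xml", {
    "TargetColor": "168,96,52",        # sampled sintering color (example only)
    "ColorBuffer": 25,
    "MinAcceptedPixels": 5000,
    "SelectionBox": "400,300,900,700",
})
```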
So far, it’s working really well, and I’ve already converted several existing timelapses into new directories. I’ll probably make a long video displaying a bunch of different print jobs I’ve done over the last year. I’ll conclude this video with my favorite profile that I printed on the machine to date. This was the dumbbell end nut and bicycle flag shaft extension video. Really cool.
Our department is looking at getting a Sintratec Kit for teaching 3D printing, and your website has been fantastic for getting up to speed on the Kit construction, thank you. Great description of how you tried to wipe out the ‘rubbish’ time-lapse images. Do you think it would be possible to put a small webcam (i.e. an RPi camera) through the roof of the hat to get a top-down view of the powder bed? Could you use a signal from the stepper controller for the powder delivery system to trigger image acquisition just before it moves across the bed?
I thought about setting up a switch that would snap a photo every time the wiper finished its pass, but there are two problems. First, the chamber is hot, like 175 deg C. That would rule out most plastic switches. Also, the wiper doesn’t always return to the end stop, so it’s not a reliable trigger position.
The problem with putting a camera in the hat is the temperature and the space. The laser shines through a tiny 2 cm by 2 cm hole in the hat, and I’d be very reluctant to put another hole in it. Also, there isn’t a lot of room for a spare camera. Lastly, whatever camera you use, you’d have to have it near the chamber, and it can get very hot. Just putting my GoPros outside the window is kind of sketchy, because the heat emanating from the viewport is kind of impressive. The first time I installed my GoPro at that spot, I thought my case might melt or warp, but thankfully it didn’t. So a camera (even behind a piece of glass) inside the hat would have a tough time resisting the heat.
This utility I wrote seems to do the trick just fine, and I don’t have to change my workflow or risk damaging the machine. But maybe an ingenious programmer/hacker/machinist could install a sturdy camera in the Sintratec Kit and do what you are saying. If you do it, send me a message and I’d love to interview you for this blog. 🙂