I am implementing the ASCII decoder as a follow-up to an earlier exercise, the STL binary decoder (26 Sept 2012). The STL ASCII file specification I am using is available on the wiki. A few C# 6.0 feature enhancements have been added (null propagation, string interpolation, expression-bodied members). The files tested are available in the assets directory under the project solution. Next steps are to add color decoding and logging, and to test more thoroughly with defective STL files. The decoder also needs to be more flexible; currently the normal needs to be on the first line.
A few years late to this technology, but I was still giddy assembling the Google Cardboard. To my surprise, the price of this cardboard viewer varied from $5.00 to $20.00, and there were many fancier headsets priced in the hundreds.
I bought the generic brand of cardboard viewer. It arrived with two magnets, lenses and perforated fold lines, minus the 2D barcode. There was very little assembly required.
Before attempting any exciting development projects, my daughter and I have already found hours of entertainment with the various demos provided. My favorite was the northern lights over the lighthouse; it reminded me of Split Rock Lighthouse, near Duluth.
Does anyone have a recommendation on purchasing a ‘real’ viewer? For development, is iOS or Android the better platform? I saw amazing 3D painting tools from Vocativ on Facebook. So cool!
Reading Google Cardboard and SDK documentation.
I love the Red Green show. “If women don’t find you handsome, they should at least find you handy.” Well, inspired by Red Green, I made this Tacoma tonneau cover with three Extrutech 24″ vinyl tongue-and-groove panels, six aluminum U-bars, twelve 1/4″ x 3″ bolts and fourteen 1/4″ x 1-1/2″ bolts. To cut the vinyl panels, I used a handheld DeWalt 3.5″ circular saw. The edges are wrapped under J-channels designed for the panels.
Thanks to the great designers at Toyota, the truck bed came pre-assembled with a rail system and four adjustable cleats. The 1/4″ x 3″ bolts secure to the cleats perfectly. I just had to purchase two additional cleats for the top rail.
No duct tape necessary.
As a next step from my last blog post (the utility app), the challenge is to write a replacement demo app.
Thanks to Mightex’s software engineering, sample code in C++, C# and VB for interfacing with the driver and camera already exists. You can download it from the Mightex TCE1209-U page.
My personal copy of Visual Studio is 2008, and it is able to load the CSharp_Application solution under the SDK directory. The SDK is written in C++, so I am building this as an x86 app for interop.
The sample application is threaded and already has code for interfacing with the driver. The user interface even has a button to start and stop frame grabs. All I need to do are three tasks: display video rendering, save the buffered image to file, and offer a ‘run-on’ mode to save frames indefinitely.
The sample code provides an unsafe pointer to the 16-bit pixels after each frame grab. All I have to do is blit the pixels to the screen continuously. For C#, I chose a PictureBox as my video display, and I create two bitmap buffers to handle the alternating render-and-display process.
// create 2 buffers for preview
bmps = new Bitmap[2];
bmps[0] = new Bitmap(picBox.Width, picBox.Height, PixelFormat.Format24bppRgb);
bmps[1] = new Bitmap(picBox.Width, picBox.Height, PixelFormat.Format24bppRgb);
rect = new Rectangle(1, 0, picBox.Width - 1, picBox.Height);

// shift the existing preview left by one column into the back buffer
g = Graphics.FromImage(bmps[notCurrent]);
g.DrawImage(bmps[current], 0, 0, rect, GraphicsUnit.Pixel);

// sample frame-grab pixels into the new buffer
for (i = 0; i < frameSize; i++, frameptr++)
{
    // bit shift - 12bpp down to 8bpp
    byte p = (byte)((uint)*frameptr >> 4);

    // preview - sample 1 out of 10 pixels
    if (i % 10 == 0)
    {
        *Bmpptr = p; Bmpptr++;                       // blue
        *Bmpptr = p; Bmpptr++;                       // green
        *Bmpptr = p; Bmpptr += (Bmpdata.Stride - 2); // red, then advance to next row
    }
}

// display the new buffer in the pictureBox
picBox.Image = (Image)bmps[current];
Sized at 500 x 205, the preview scales the specified 2048 pixels/frame down roughly 10X, so I am sampling 1 pixel out of every 10 for the video preview. All previous frames (1 to n-1) are copied toward (0,0) to produce a left-moving video.
Buffered Image to file
The main goal of this application is to give Andy a way to specify the image size and then save the image in a common format. The Buffered tab offers a standard SaveFileDialog to choose the file name and destination. The .NET Bitmap class saves the 8bpp image as gif, bmp, jpg, png or tif.
How large can our Bitmap be? Here is the Microsoft documentation on System.Drawing.Bitmap. The constructor takes width and height as Int32, which has a maximum of 2,147,483,647. A 32-bit address space with an image height of 2048 leaves about 1,048,575 pixels for the width (minus header and padding). In my tests, writing a 100,000 x 2048 bitmap crashes at Bitmap.Save() regardless of destination format. The largest bitmap I am able to save is 80,000 x 2048 pixels as bmp.
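That width ceiling can be sanity-checked with simple arithmetic: an 8bpp bitmap must fit inside the roughly 2 GB reachable by a 32-bit process, so the bytes available per row bound the width. A rough back-of-the-envelope sketch (ignoring the exact header size; the numbers are illustrative, not from the SDK):

```java
public class BitmapLimit {
    public static void main(String[] args) {
        long addressSpace = 1L << 31;  // ~2 GB addressable by a 32-bit process
        int height = 2048;             // camera line length, used as image height
        int bytesPerPixel = 1;         // 8bpp grayscale

        // Bytes per column of the image bound the achievable width.
        long maxWidth = addressSpace / (height * bytesPerPixel);
        System.out.println(maxWidth);  // 1048576, i.e. ~1,048,575 after header/padding
    }
}
```

That the save actually fails well below this bound (at around 100,000 columns) suggests fragmentation or GDI+ limits kick in long before raw address space runs out.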
Inherently, the camera is 12bpp and produces 16bpp frames (4 bits blank). Logically I should be able to render and store 16-bit png or bmp with indexed colors. Also, another little quirk about working with an 8bpp indexed bitmap palette: one cannot directly set the palette values. Instead, the following order of assignments is required. Read more in Charles Petzold’s article.
// set gray scale palette
ColorPalette pal = buffer.Palette;   // Palette getter returns a copy
for (int i = 0; i < 256; i++)
    pal.Entries[i] = Color.FromArgb(255, i, i, i);
buffer.Palette = pal;                // assign the modified copy back
One advantage of a line-scanning camera is that there is no fixed image width; one may wish to scan indefinitely or for a very long time. So it would be nice to use this ‘run-on’ mode to continuously scan and save each frame to a separate file. To combine the frames, use the utility written for the last blog post; I will update it to handle png input next. It should be noted that run-on mode is a secondary feature because of a performance issue: writing an image to file is slow, and other frame grabs get dropped in the meantime. It takes about 500ms per frame save on my MacBook Pro running Vista in VMware. We may revisit this feature later if it proves desirable.
To use this feature, you should use an empty directory. If you prefer otherwise, the selected directory is scanned and an error popup is displayed if any file with the prefix “frame” is found. Sorry, I am not offering a choice of name or file format; I am sticking with png files for now.
Here are some images I created at Lake Calhoun today; the originals are bundled in the GitHub repository under the directory exampleResults.
It was a fun day making panoramic images at Lake Calhoun. I am learning that the software exposure value can speed up the frame rate and change the image brightness. Unfortunately, a value of 1 locks up the application for an unknown reason.
Here is a video demo. Thanks to my better half for filming.
1) Update utility to handle png inputs with prefix name “frame”.
2) Debug the above-mentioned error with an exposure value of “1”. The problem is in the driver; block the user from setting a value of 1 for now.
3) Debug and implement the ability to use an 8bpp gray scale bitmap if possible. Done!
4) Send it back to Andy for feedback!
1) Setting Bitmap Palette – Charles Petzold http://www.charlespetzold.com/pwcs/PaletteChange.html
Application usage directions
- plug in camera.
- execute application.
- select ‘camera1’ in left-upper combo box.
- start preview by clicking upper-right-button, ‘start preview’.
Buffered image mode
- to save scans by buffering, click on the ‘buffered tab’ (bottom).
- enter the numeric line count, (image width).
- select or enter destination file name path.
- check all the file types that apply (bmp, png, tiff, gif, etc).
- if not already selected, select radio button (upper-right) for “buffered image”.
- click on button “save” to start recording.
- a popup dialog displays when the image is successfully scanned and saved to file.
- to save scans by ‘run-on’, click on the ‘run-on tab’ (bottom).
- select a destination directory (preferably empty).
- select radio button (upper-right) for “run-on”.
- click on button ‘save’ to start recording.
- click on button ‘stop’ (previously ‘save’) or ‘stop preview’ to exit run-on mode.
Greatly appreciate my professor, Andy Davidhazy for loaning his line scan camera for an ‘enrichment’ project.
The challenge: figure out why (possibly fix) the camera-demo app is producing images with poor tonal quality.
Given: Mightex TCE1209-U camera with demo application, as well as sample project source in C++, C#, VB.
After much anticipation, the camera arrives with a 50mm lens adaptor already assembled. No power supply needed; USB will do! Software installation includes a device driver for Windows 7 and the demo app (available from the website). The demo application displays image results like an oscilloscope, one line per time interval. Output options are Windows bitmap or raw ASCII files.
With the Windows bitmap option known to be a problem, I am diving in to work with the ASCII output option. Here is one of my ‘better’ results from the first try; I am swiping the camera view across my kitchen. You can see a blurred version of my daughter in silhouette.
For a better test, I printed the following test page with grayscale (short of buying a Kodak IT-8 target).
The ASCII output files each contain 2048 lines (the camera's pixel width). Each pixel has a depth from 0 to 4095 (12 bits per pixel) and is represented on its own line, delimited by a carriage return. Since displays and common pictorial file formats are 8 bits per pixel, I am writing the utility to requantize the 12bpp ASCII into an 8bpp binary image. Within the .NET library, the System.Drawing.Bitmap class offers all the features I need to manipulate and save the files as bmp, gif, tif, png, jpg and others. For a better visual result, I am applying histogram equalization to minimize the loss from the bit-depth compression (12bpp to 8bpp). Here is my result.
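The requantization step can be sketched as follows. This is not the utility's actual code, just a minimal illustration (with names of my own choosing) of a histogram-equalization look-up table that maps 12-bit samples down to 8-bit:

```java
import java.util.Random;

public class Requantize {
    // Build a 4096-entry LUT via histogram equalization: the cumulative
    // histogram is scaled so output levels spread evenly over 0..255.
    static int[] buildLut(int[] pixels12) {
        int[] hist = new int[4096];
        for (int p : pixels12) hist[p]++;
        int[] lut = new int[4096];
        long cum = 0;
        for (int v = 0; v < 4096; v++) {
            cum += hist[v];
            lut[v] = (int) (cum * 255 / pixels12.length);
        }
        return lut;
    }

    public static void main(String[] args) {
        // Simulate one camera line of 2048 random 12-bit samples.
        Random rnd = new Random(42);
        int[] line = new int[2048];
        for (int i = 0; i < line.length; i++) line[i] = rnd.nextInt(4096);

        // Requantize 12bpp -> 8bpp through the equalization LUT.
        int[] lut = buildLut(line);
        int max = 0;
        for (int p : line) max = Math.max(max, lut[p]);
        System.out.println(max);  // 255: equalization stretches to the full 8-bit range
    }
}
```

The benefit over a plain `>> 4` shift is that sparsely used tonal ranges are compressed while heavily used ones keep more of their distinct shades.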
The utility app has three tab pages: Source, Image processing and Storage. The source code in C# (a Visual Studio 2008 solution) is available on GitHub. Included in the solution are a debug executable, test source files and results.
Image processing – Internally, it loads all the source files to find the image black point, white point, mode and number of shades. It also builds a histogram and a look-up table for the 12bpp -> 8bpp conversion. The user may write the histogram to an ASCII file for more detailed analysis, and may override the histogram-equalization look-up table by changing the white point and black point.
Storage – Internally, it loads all the source files and applies the histogram equalization (mentioned above) to assemble an 8 bit-per-pixel bitmap, which is then saved as bmp, gif, tif, png or jpg.
Conclusion / Next step:
I probably should invest more time investigating the tone reproduction issue. Instead, a quick utility is devised and I am moving onward to the next solution: writing a replacement demo app. Most of the features in this utility will be ported to it.
Future enhancements are as follow:
1) an auto-gamma correction
2) a display of the cumulative histogram.
3) an FFT filter to remove the sine wave, if the vertical lines in the image are not artifacts by yours truly.
4) save image into png16.
5) high dynamic range feature to blend the additional 4 bits of data (alternative bit depth conversion 12bpp -> 8bpp).
6) write a demo-app replacement that displays a live image and outputs better pictorial binary image files.
1) Peripheral photography article by Andrew Davidhazy.
2) Digital Image Processing by Gonzalez and Woods – class text for Digital Image Processing I with Dr. Rao at Rochester Institute of Technology.
3) My viewer exercise project in C#, Visual Studio 2005
4) Mightex TCE1209-U camera documentation.
5) Photographic Materials & Processes by Dr. Strobel – class text for M&P with Dr. Strobel, Jack Holm and Russ Kraus at Rochester Institute of Technology.
I found a great article titled “Developing Android* Applications with Voice Recognition Features”, by Stanislav of Intel. Included is a SpeechRecognitionHelper class which instantiates and invokes the Google SpeechRecognizer service. Unlike the iOS OpenEars library, the free Google SpeechRecognizer service requires internet access, but it is impressively faster in performance and accuracy. Apparently offline processing is possible for some devices with Jelly Bean and later: go to Settings -> Language and input -> Voice search -> Offline speech recognition and install your language pack. Inside the Dictation project manifest file, comment out this line for internet permission.
<uses-permission android:name="android.permission.INTERNET" />
For features and UI, there are four buttons: Dictate, Last, Clear and Save. Dictate invokes the Google SpeechRecognizer service and renders/appends the interpreted text onto the page. I assume each dictation is a complete sentence and insert a period after it. The Last button removes your last dictation instance. The Clear button removes all dictations. The Save button writes/appends all on-screen text to a file along with a time stamp. Additional features such as editing are probably a good idea but are not implemented. Below are some screenshots of the app in operation.
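The button behavior maps naturally onto a list of sentences. Here is a minimal sketch of that model (the class and method names are my own, not taken from the Dictation source):

```java
import java.util.ArrayList;
import java.util.List;

public class DictationModel {
    private final List<String> sentences = new ArrayList<>();

    // Dictate: append recognized text, treated as one complete sentence.
    public void dictate(String text) { sentences.add(text + "."); }

    // Last: remove the most recent dictation instance, if any.
    public void removeLast() {
        if (!sentences.isEmpty()) sentences.remove(sentences.size() - 1);
    }

    // Clear: remove all dictations.
    public void clear() { sentences.clear(); }

    // Render all sentences for the on-screen text view.
    public String render() { return String.join(" ", sentences); }

    public static void main(String[] args) {
        DictationModel m = new DictationModel();
        m.dictate("hello world");
        m.dictate("scratch that");
        m.removeLast();                  // the Last button undoes one dictation
        System.out.println(m.render()); // prints "hello world."
    }
}
```

Saving then reduces to writing `render()` plus a time stamp out to a file.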
The complete Dictation project source may be found on GitHub.
According to the Samsung SPen developer documentation, we should be using the latest Samsung Mobile SDK, version 3.0; SPen SDK version 2.3 will no longer be supported after the 2014 calendar year. The developer website is full of examples of SPen usage for native mobile development in Java. Here is an SCanvasView intro page. For the simplest pen drawing exercise with SPen SDK version 2.3, I am creating this SCanvasView exercise, Note2Stylus. All the smarts like anti-aliasing, smoothing and line thickness are handled by the SCanvasView right out of the box. Very cool. This app is created with Eclipse Juno (Android developer tools) with SDK version 23. The target hardware device is a Samsung Note 2 with Android 4.4.2.
Prototyped and tested on Android 4.4.2 on a Samsung Note II, the MachWaves app is built with Eclipse ADT and consists of three tab fragments: camera, configuration and an about (help) page.
On tab 1, the camera feature currently only supports still photos in portrait mode. The highlight defaults to a green color, but that setting should move to tab 2, the configuration page, in the next iteration. What else would be helpful in configuration? (Highlight stroke thickness, saved file style, auto-save, angle in radians, ...?) Tab 3 is the help page with a browser link to the generic support page.
Saved files are currently placed in the /storage/emulated/0/Pictures/ directory. The file name follows the pattern d MMM yyyy HH:mm:ss.png. There is currently no way of saving the highlighted image with the angle and Mach number information. Would that feature be of interest? Maybe I should offer the option to save the file as jpeg or gif too.
Touch start/end events define two lines, which are drawn as highlights on the Canvas layer. For the Mach angle, I take the two intersecting vectors and calculate their dot product. The Mach number is then estimated from the relationship sin(Mach angle) = 1 / Mach number. For more information, see the reference Compressible-Fluid Dynamics by Philip A. Thompson, 1988.
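As a sketch of that calculation (my own minimal version, not the app's code): the angle between the two touch-drawn vectors comes from the dot product, and the Mach number follows from M = 1 / sin(mu).

```java
public class MachAngle {
    // Angle in radians between vectors (x1,y1) and (x2,y2) via the dot product.
    static double angleBetween(double x1, double y1, double x2, double y2) {
        double dot = x1 * x2 + y1 * y2;
        double mag = Math.sqrt(x1 * x1 + y1 * y1) * Math.sqrt(x2 * x2 + y2 * y2);
        return Math.acos(dot / mag);
    }

    public static void main(String[] args) {
        // Two unit vectors 60 degrees apart: the full wedge angle of the wave cone.
        double wedge = angleBetween(1, 0, Math.cos(Math.PI / 3), Math.sin(Math.PI / 3));
        double mu = wedge / 2;             // half the wedge is the Mach angle
        double mach = 1.0 / Math.sin(mu);  // sin(mu) = 1 / M
        System.out.printf(java.util.Locale.US, "%.3f%n", mach);  // 2.000 for a 30-degree Mach angle
    }
}
```

Whether the app should use the full wedge angle or its half depends on how the user traces the wave; the half-angle convention above matches the standard definition of the Mach angle.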
Download it from Google Play here, or pull the source code from GitHub. Feel free to use it in any way you see fit, though higher quality code will be available next iteration. Any suggestions are greatly appreciated.
Thank you, Professor Andy Davidhazy of RIT, for teaching the schlieren method among many other flow visualization techniques. Thank you, Dr. Brown of NASA Ames Research Center, for including me in your experiments.
Decades ago, I was privileged to work (co-op) at the Imaging Technology Branch of NASA Ames Research Center. For most of the experience I was in awe, fascinated by the brilliant minds, facilities and experiments, much like a child in Disney World.
In one particular photographic assignment at the High-Reynolds laboratory, I was allowed to propose a moiré schlieren technique I had read about in an old text, Schlieren Methods by Douglas William Holder, 1963 (rare, but available at the RIT library). As knife-edge replacements, I created various Ronchi rulings, with the following results (best case): (a) parallel Ronchi rulings, supersonic flow; (b) rotated Ronchi rulings, no flow; (c) rotated Ronchi rulings, supersonic flow.
Recently, I developed these mobile apps, MachWaves, to offer simple photo/video capability and to measure the Mach angle and Mach number. As I have no qualifying fluid dynamics experience, I welcome any input or corrections on this application. Source code in Objective-C and Java is freely available. Information about the app and download sites is available here.