Friday, March 22, 2013

Week of 3/18

Scanning has begun, but there is still plenty of work to be done.

Since my last post, I was able to refactor a lot of the main scanning code to break up the for loops, which makes for cleaner code.  I also set up the overlay view so that I could test the scan.  When running through a scan (with just video of myself -- no laser), I got some memory exceptions.  It turns out you cannot multiply a float matrix (32F) by a double matrix (64F), so I fixed those errors and was able to run through a scan with no errors.
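
For anyone who hits the same mismatch, here is a minimal sketch of the fix (assuming OpenCV, given the 32F/64F type names -- this is not our actual scanning code): convert one operand so both matrices share a depth before multiplying.

    #include <opencv2/core/core.hpp>

    int main() {
        cv::Mat a = cv::Mat::eye(3, 3, CV_32F);   // float matrix
        cv::Mat b = cv::Mat::ones(3, 3, CV_64F);  // double matrix

        // a * b would fail: both operands must have the same depth.
        // Convert one side first so the types match.
        cv::Mat a64;
        a.convertTo(a64, CV_64F);
        cv::Mat product = a64 * b;  // fine: both CV_64F now
        return 0;
    }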

On Tuesday, Tyler and I came up with a makeshift scanning stage idea.  I put it together and attempted a scan...the results were definitely not good.  Wednesday, I came back, started working through it, and determined there were some logic errors in our red midpoint calculation.  One error: when checking for a zero crossing, I also had a check for whether the value was exactly zero...but if a pixel is black, its midpoint is zero (the max and min red components are both zero), so that check produced false hits.  I took out this piece (at least for now).  Our idea of calculating the midpoint at each pixel was also causing problems.  If a black pixel had a min red value of 0 and a max of 1, its midpoint was 0.5, and we would find a zero crossing between a -0.5 and a 0.5 value.  This means that if we find the midpoint at each pixel, every pixel must be hit by the laser (as the Brown/SIGGRAPH notes mention for the shadow).  Instead, I changed it to find the midpoint across a row of pixels.  When the lights are off, this should work fine.  For the SIGGRAPH structured-light shadow it probably wouldn't, since the object itself casts the shadow; but when red is the only visible component, it is okay if not every pixel is hit by the laser.
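
Roughly, the change looks like this (a sketch with hypothetical names like maxRed/minRed, assuming OpenCV and single-channel float images of per-pixel red values in [0, 1] -- not our exact code):

    #include <opencv2/core/core.hpp>
    #include <vector>

    // maxRed/minRed: single-channel CV_32F mats holding, per pixel, the
    // max/min red value seen across all frames of the scan.
    // The midpoint is now computed per row, not per pixel.
    std::vector<float> rowMidpoints(const cv::Mat& maxRed, const cv::Mat& minRed) {
        std::vector<float> mid(maxRed.rows);
        for (int r = 0; r < maxRed.rows; ++r) {
            double rowMax, rowMin, unused;
            cv::minMaxLoc(maxRed.row(r), &unused, &rowMax);
            cv::minMaxLoc(minRed.row(r), &rowMin, &unused);
            mid[r] = 0.5f * static_cast<float>(rowMax + rowMin);
        }
        return mid;
    }

    // A zero crossing in (red - midpoint) along a row marks the laser edge.
    // Requiring a strict sign change (rather than a check for exactly zero)
    // avoids the false hits from black pixels described above.
    int findZeroCrossing(const cv::Mat& redRow, float midpoint) {
        for (int c = 1; c < redRow.cols; ++c) {
            float prev = redRow.at<float>(0, c - 1) - midpoint;
            float curr = redRow.at<float>(0, c) - midpoint;
            if (prev < 0.0f && curr > 0.0f)
                return c;
        }
        return -1;  // the laser never crossed this row
    }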

When I made that change, this is what resulted from a scan of a coffee cup, shown in a program called MeshLab:

[Image: point cloud of the coffee cup scan, rendered in MeshLab]
This is actually an inverted view.  Since we are converting into the back plane's world coordinates, this is a view from behind.  So if we want to plot the view from the front, we will probably want to rotate the points around the Y-axis by 180 degrees.
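
That flip is cheap to apply.  A minimal sketch, assuming the cloud is stored as a vector of cv::Point3f (an assumption -- I haven't shown our data structures here):

    #include <opencv2/core/core.hpp>
    #include <vector>

    // Rotating 180 degrees about the Y-axis negates X and Z and leaves Y alone:
    // R_y(180) = [-1 0 0; 0 1 0; 0 0 -1]
    void flipAroundY(std::vector<cv::Point3f>& cloud) {
        for (cv::Point3f& p : cloud) {
            p.x = -p.x;
            p.z = -p.z;
        }
    }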

The next problem is the depth.  This scan isn't really that great depth-wise.  It is more of a 2D image with a coffee cup sticking out (in, from this view).  In the scan, the stage pieces come out not perpendicular but at more like 130-degree angles.  I spent 5+ hours thinking about and debugging this, including scanning just our stage.  I thought it might be how we are obtaining the extrinsic values...but I'm not convinced of this.  I looked at the SIGGRAPH notes, but I'm still not sure what the problem is.  Does it really matter where the checkerboard is on the stage when you do the extrinsic calculation?  The lambda value is n^T(plane_point - camera_origin) / (n^T ray_direction), where the ray direction points from the camera origin through the pixel...so do our assumptions of (0,0,0) as the plane origin and (0,1,0) [or (0,0,1)] as the plane normal work?  I want to talk to Dr. Wolff about this problem, as so far I haven't been able to find the bug.
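
For reference, here is that ray-plane intersection written out as a minimal sketch (illustrative names, not our code):

    #include <opencv2/core/core.hpp>

    // Intersect the camera ray through a pixel with a world plane.
    // The plane is defined by a point on it and its normal n;
    // the ray is camera_origin + lambda * ray_dir.
    cv::Point3f intersectRayPlane(const cv::Point3f& cameraOrigin,
                                  const cv::Point3f& rayDir,
                                  const cv::Point3f& planePoint,
                                  const cv::Point3f& planeNormal) {
        // lambda = n^T(plane_point - camera_origin) / (n^T ray_dir)
        float lambda = planeNormal.dot(planePoint - cameraOrigin)
                     / planeNormal.dot(rayDir);
        return cameraOrigin + lambda * rayDir;
    }

Under our assumption of a plane through the origin, planePoint is just (0,0,0), so the numerator reduces to -n^T(camera_origin).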

Meanwhile, Tyler has been working on fixing some things in the overlay view and the scanning code, and on preparing for the progress bar.  He changed the overlay view to use an alpha channel.

Grady got the interrupt to work correctly and has written a serial library.  He is working on integrating it into the app.  He also experimented with a DC motor instead...but we might just stick with the stepper motor if we can get smoother motion out of it.

During spring break, we may meet once or twice.  We need to get our scanning stage built.  After spring break, crunch time gets closer.  We need to figure out the progress bar, sort out the scan problems, and get the stepper motor integrated.  The scan problems concern me most, since logic errors like these are very difficult to debug.  But hopefully I can find what is wrong and/or get some advice/help from Dr. Wolff.

Spring 'break' is here.
