Wednesday, May 15, 2013

Week of 5/13

Our capstone project is officially DONE.  We submitted our paper around 6:15 PM today.  It is posted on our website at cs.plu.edu/~scanners.

It has been a great ride...sure there were some stressful moments and some struggles, but in the end, I am very proud of the project we were able to put together and of the results that we obtained.

I hope readers of this blog have gained a better understanding of 3D scanning and a deeper interest in the world of 3D graphics.

Thanks for reading.

Jeff

Thursday, May 9, 2013

Week of 5/6

Well, Academic Festival is behind us...and boy was our presentation a success.

We filled the room with quite a few people and gave an hour-long presentation, including questions.  We got quite a few questions, which is not too surprising given the nature of our topic.  Overall I've heard positive things about the presentation.  I am proud of how we did and of our project and results overall.

We are now working on our final document which is due next Wednesday.  I will likely have one more blog post following that.

Check out our updated website with the presentation slides and video at www.cs.plu.edu/~scanners.

Until next week!

Thursday, May 2, 2013

Week of 4/29

Academic Festival is just about here!

We have prepared our slides, practiced, and run some good scans.  We present Saturday at 10:45 AM.

Since my last post, we did have some problems...It turns out that OpenCV looks for a certain color pattern on the chessboard to know where to start the calibration.  If the board isn't oriented consistently between the intrinsic and extrinsic calibrations, there will be problems.  Anyway, I finally figured out how we needed to do it and drew arrows on the chessboard so we know the correct way to orient it for the calibrations according to our code.
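For the curious, here is a rough sketch (not our exact code) of the OpenCV corner detection involved.  The key point is that the order of the corners that findChessboardCorners returns depends on how the board is oriented, which is why the orientation has to be consistent between the intrinsic and extrinsic calibrations:

```cpp
#include <vector>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>

bool detectCorners(const cv::Mat& image, cv::Size patternSize,
                   std::vector<cv::Point2f>& corners)
{
    // The order of the detected corners depends on how the board is
    // oriented, so the board must face the same way for the intrinsic
    // and extrinsic calibrations.
    bool found = cv::findChessboardCorners(image, patternSize, corners,
        cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
    if (found) {
        cv::Mat gray;
        cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
        // Refine to subpixel accuracy; corners[0] should land on the same
        // physical corner every run when the orientation is consistent.
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
    }
    return found;
}
```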

Also, Grady built a gearbox out of Lego Mindstorms.  That, combined with the servo motor, provides a good, slow motion of the laser, and we have gotten pretty good results.

Hopefully our presentation will go well!  Come out and see it!

Thursday, April 25, 2013

Week of 4/22

We are now just 9 days away from presenting!

Since my last post, we have mostly been putting some finishing touches on the code.  I tried adding some camera settings to the code but ended up reverting most of them.  I was able to figure out how to turn off the auto-focus (sort of--by setting the focus to its current value), but the user will need to set things like exposure and gain for their own camera.
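Here is roughly what the autofocus workaround looks like (a sketch, not our exact code; whether writing the focus value back actually locks it depends on the camera and driver):

```cpp
#include <opencv2/highgui/highgui.hpp>

// Lock focus by reading the current (auto-chosen) value and writing it
// back.  Support for CV_CAP_PROP_FOCUS varies by camera and backend.
void lockFocus(cv::VideoCapture& capture)
{
    double focus = capture.get(CV_CAP_PROP_FOCUS);
    capture.set(CV_CAP_PROP_FOCUS, focus);
}
```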

I refactored the write-to-file method; we now have messages showing where the scan is in the process (e.g. "Finding Red Points").  I also added another check to eliminate more noise: anything greater than -0.1 in the z-component (the z-axis of the plane points away from the camera) is not included.  This means points on the plane or behind it are excluded.
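The check itself is simple; a sketch of the idea:

```cpp
#include <opencv2/core/core.hpp>

// The plane's z-axis points away from the camera, so anything with
// z > -0.1 sits on the plane or behind it and is treated as noise.
bool inFrontOfBackPlane(const cv::Point3d& pt)
{
    return pt.z <= -0.1;
}
```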

Tyler made a few more GUI changes, and the hardware code is now integrated with the main code.  Thus our coding is complete! (Other than the C code to be put on the dev board, which Grady is/will be tweaking.)  We have a new servo motor on its way that will hopefully allow us to get small, smooth increments with pulse-width modulation.

On Wednesday, I brought in various items to scan and I was pleased overall with most of the results!

We have started getting our slides ready and spent some time figuring out how our slideshow will be ordered, as well as time estimates for the various segments of the presentation.

In the next week, we will continue working on our presentation, set up the scanning environment, and do the final scans of objects to show as results, as well as one to use as our demo (likely of a wooden kangaroo).  This will be done using our actual stage, a higher-quality camera, and the motor (along with the laser powered through the board).  We plan on filming the motor/laser as well to use in conjunction with the screen capture.

We are closing in!  Hope everyone will enjoy the project and results!

Thursday, April 18, 2013

Week of 4/15

We are now just 16 days from presenting our capstone!

Last Saturday, we met and made significant headway (after helping Dr. Wolff with passport weekend).  We added additional region clicks to limit the data to the region where the object is located, added a filename field for output, updated (most of) the floats to doubles, and finished up the destructors.

This week, we made more good progress.  Grady and I met with Dr. Wolff on Tuesday to discuss what we have been working on and what we planned to do this week.

Tuesday, Grady cleaned up some Git problems we were having due to line endings.  He has also been working on integrating a QThread into the serial code.  This will allow the motor to spin in its own thread while the video is being captured in a separate thread.  He had numerous problems getting it to work, but Thursday night, I'm pleased to say he did get it working! So we now have another very key piece of the hardware integration ready.  We will wait to merge it with the main code so that I can continue testing the scan.
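The pattern, roughly (a sketch with illustrative names, not Grady's exact code):

```cpp
#include <QThread>

// Hypothetical stand-in for the serial/motor routine:
void spinMotorUntilDone();   // blocks until the rotation completes

// Running the motor on its own QThread lets the main thread keep
// grabbing frames from the camera at the same time.
class MotorThread : public QThread
{
    Q_OBJECT
signals:
    void motorFinished();    // emitted once the rotation is complete
protected:
    void run()
    {
        spinMotorUntilDone();
        emit motorFinished();
    }
};

// Usage: start the thread and keep capturing video on the main thread.
//   MotorThread* t = new MotorThread();
//   QObject::connect(t, SIGNAL(motorFinished()), view, SLOT(onMotorDone()));
//   t->start();
```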

Tyler meanwhile has been working on improving the GUI.  He has made it look quite a bit better (rather than the classic Windows look).  The color scheme and buttons are better, and the error message functionality is cleaner and more informative.  There are still a few things to fix (e.g. some directory fields are allowing focus, clearing error messages), but overall, the look of our application is coming along nicely.

I added bounds checking to eliminate points that lie on either plane (within z = +/- 0.1), as well as any data points outside the sphere whose radius equals the distance from the camera to the back-plane origin.  With the object regions and these restrictions, we improve the processing time a bit.  I also did some code cleanup, such as eliminating repeated identical calculations and changing AB - AC to A(B - C), where A, B, and C are matrices.  We also now clear data from the scanModel as soon as possible rather than calling a resetScan method.

I started working on some camera settings as well; I have implemented getting the current settings with the lights on, doing a scan with the lights off using custom settings, and then resetting the camera to the pre-scan settings.  I tried using a larger version of the image (one that shows more), but it slowed down processing, so I will likely leave it at 640x480 (though I may play with the FPS settings and try combining that with a larger image).  Hopefully I will be able to find a flag to turn off the autofocus on Grady's camera, but we will see...
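A sketch of the sphere check (illustrative names, not our exact code):

```cpp
#include <opencv2/core/core.hpp>

// Squared distance between two points (avoids the square root).
static double dist2(const cv::Point3d& a, const cv::Point3d& b)
{
    cv::Point3d d = a - b;
    return d.x * d.x + d.y * d.y + d.z * d.z;
}

// Keep only points no farther from the camera than the back plane's
// origin; anything outside that sphere is an extreme outlier.
bool withinSphere(const cv::Point3d& pt,
                  const cv::Point3d& cameraOrigin,
                  const cv::Point3d& backPlaneOrigin)
{
    return dist2(pt, cameraOrigin) <= dist2(backPlaneOrigin, cameraOrigin);
}
```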

Last but not least, I did a scan of Dr. Wolff's orange miniature brain, and was quite pleased with how the reconstructed mesh turned out:

A photo of the brain

A reconstructed model of the brain

For the final scans, we will likely use something a bit larger with more pronounced features.

Until next time :)

Thursday, April 11, 2013

Week of 4/8

Since my last post, we haven't done a bunch of coding, but have done some.

This week was consumed mostly with presentations: going to class on Tuesday and meeting with Dr. Wolff, preparing for the presentation on Wednesday, and then presenting and seeing other presentations on Thursday.  For our presentation, we were able to give a demo of the scan as well as display the result in MeshLab.

As far as coding goes, Grady has been working on figuring out threading so we can check for the serial flag to stop the motor while also collecting the images from the camera.  We figured out today that we can simply use the timeout of the timer within the scanningView rather than a while loop in the controller.  When the timer times out, we check whether the motor is finished rotating.  If it isn't, we send the image through the controller to the model.  If it is, we stop the timer and begin processing the scan data.
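Here is a sketch of that timeout-driven loop (Qt 4 style, with illustrative class and method names, not our exact code):

```cpp
#include <QObject>
#include <QTimer>
#include <opencv2/highgui/highgui.hpp>

// Hypothetical controller interface standing in for ours:
class Controller
{
public:
    bool isMotorFinished() const;
    void sendImage(const cv::Mat& img);
    void processScan();
};

class ScanningView : public QObject
{
    Q_OBJECT
public:
    ScanningView(Controller* c) : controller(c), timer(new QTimer(this))
    {
        capture.open(0);
        connect(timer, SIGNAL(timeout()), this, SLOT(onTimeout()));
        timer->start(33);                      // roughly 30 frames per second
    }
private slots:
    void onTimeout()
    {
        if (!controller->isMotorFinished()) {  // motor still rotating?
            cv::Mat frame;
            capture >> frame;                  // grab the next camera image
            controller->sendImage(frame);      // pass it through to the model
        } else {
            timer->stop();
            controller->processScan();         // begin processing the scan data
        }
    }
private:
    Controller* controller;
    QTimer* timer;
    cv::VideoCapture capture;
};
```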

Also, I have improved the progress bar so that it obtains from the model the number of loops that will be run: the number of rows to process, plus two times the number of images, plus 1 for writing to the file.  This may get updated with horizontal bounding (see below).
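In other words (illustrative names, not our exact code):

```cpp
#include <QProgressBar>

// One step per row to process, two passes over the images, plus one
// final step for writing the output file.
void initProgress(QProgressBar* bar, int numRows, int numImages)
{
    bar->setRange(0, numRows + 2 * numImages + 1);
    bar->setValue(0);
}
```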

Tyler has worked on some memory management, and I rewrote the createPointCloud method (adapted from OpenCV's) so that we can use double points rather than floats.  I also started working on getting the filename for saving the points rather than hard-coding it; there will be an additional field for the filename to append to the directory.  (Note: this isn't an ideal way of specifying a filename, but due to time constraints, it is probably the easiest with what we have.)  I plan on changing as much as possible to use double data rather than float so that we can maintain as much accuracy as possible.

In the weeks left for implementation, we plan on doing the following:

    • Integrating Hardware: Get the motor worked into the project
    • Complete Memory Deallocation and Management
      • Finish up destructors
      • Clearing data as soon as possible during/after scan
    • GUI Improvements
      • Qt Stylesheets (if time)
      • Threading for progress bar (maybe)
    • Implementing additional region clicks: Horizontal bounding on object to reduce error
    • Use double calculations whenever possible
    • Attempt to get better and more consistent scan results
      • Bounds checking for object points
        • Don't include plane points
        • Ignore extreme outliers
    • Set up final scanning environment
    • Other (e.g. output file, cleanup/refactoring)
    • Mesh work (merge if time allows)
We only have a couple more weeks before we need to prepare for our presentation.  Hopefully we can successfully implement most of the above and have a more solid product for Academic Festival.

Until next time.

Thursday, April 4, 2013

Week of 4/1

Phew! Debugging the scan took a while, but I FINALLY figured out the logic error.

I was able to narrow down what appeared to be an error in our calculations of the points on the back plane. It turns out I was right, and indeed it was an error in our lambda calculation.

The lambda calculation has the following form:
lambda = (n^t (point_on_plane - camera_origin)) / (n^t image_point)

We can use the plane origin as the point in the plane and (0,0,1) as the plane normal vector.  We know the camera origin and calculated the image point.

What we were doing was converting the world pieces to camera coordinates.  This is a PROBLEM, because we were converting the normal vector while treating it as a point, using RP + T to bring it into camera coordinates.  Wednesday, I had the idea of converting things to world coordinates instead, allowing me to use the normal vector as-is.  I converted both the image point and the camera origin to world coordinates.  I thought I had a breakthrough, obtaining a z-coordinate of zero.  However, the numbers didn't look quite right.  I used my graphing calculator to do matrix multiplications to check things.  I finally left around 9:45 Wednesday night.

Thursday I was hoping to get it!  I felt I might be on the right track (after looking at my calculator some more at home Wednesday night), but I wasn't getting very far.  Then I started looking at the lambda calculation and the associated geometric pieces.  I realized something, started working it out on the whiteboard, and was gratified when my calculator produced the correct answer.  Here is what it was: lambda*u is a ray, NOT a 3D point.  The 3D point is origin + lambda*u, and this must be converted to world coordinates all at once.  That comes out as the following: R^t*origin + lambda*R^t*u + R^t*T.  I then applied the same ray-plane equation and re-solved for lambda.

I went through the calculations, and when I got lambda, I converted back to camera coordinates, divided the x and y by z, and verified that they matched the image point coordinates. BINGO! I made the changes to the code as needed, rescanned, and got not only perpendicularity of the planes, but also 3D curvature of a scanned coffee cup (there is some noise, but that's likely expected).  It's about time :)
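In code form, the corrected calculation looks roughly like this (a sketch assuming 3x1 CV_64F matrices and the conventions above, where the camera origin in world coordinates comes out to R^t*T; not our exact code):

```cpp
#include <opencv2/core/core.hpp>

double computeLambda(const cv::Mat& n,   // plane normal in world coords, e.g. (0,0,1)
                     const cv::Mat& p,   // point on the plane (the plane origin)
                     const cv::Mat& R,   // extrinsic rotation
                     const cv::Mat& T,   // extrinsic translation
                     const cv::Mat& u)   // image point (x, y, 1) in camera coords
{
    cv::Mat Rt = R.t();
    cv::Mat originWorld = Rt * T;        // camera origin in world coordinates
    cv::Mat uWorld = Rt * u;             // ray direction in world coordinates
    // Ray-plane intersection: lambda = n^t(p - origin) / (n^t u), in world coords
    return n.dot(p - originWorld) / n.dot(uWorld);
}

// The reconstructed world point is then originWorld + lambda * uWorld.
```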

Other happenings: Grady merged his branch from serial and all our code is back in one repo.  Tyler implemented the progress bar (it shows up but has some bugs to work out) and destructors (these are not all correctly working yet...some of them are commented out until Tyler can look at them more).

We plan to meet Saturday.  Hopefully we can get everything working on all three computers (e.g. updated serial lib and files, includes in VS) and continue making headway.  Grady is/will be looking at pulse-width modulation; hopefully he can do it in time and we can have smoother motion with our motor.  Tyler will work on the progress bar and destructors more.  And I will continue working on the scan, likely working to exclude the plane points (things that have z~0) so the object remains as the primary piece.
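The rough idea for the pulse-width modulation (setServoPulseMicros and delayMillis are hypothetical stand-ins for the dev board's C interface):

```cpp
// Hypothetical stand-ins for the dev board's servo interface:
void setServoPulseMicros(int micros);  // set the PWM high time for the servo
void delayMillis(int ms);              // wait roughly one servo period

// Sweep the servo in small pulse-width increments instead of jumping
// straight to the target position, giving slower, smoother rotation.
void sweepServo(int startMicros, int endMicros, int stepMicros)
{
    for (int pw = startMicros; pw <= endMicros; pw += stepMicros) {
        setServoPulseMicros(pw);
        delayMillis(20);               // typical ~20 ms servo frame
    }
}
```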

In all, Thursday was a good day.  We have a presentation next Thursday that will take some time to prepare for, but I'm feeling better about the project overall now that the scanning bug is fixed.  Hopefully we'll even reduce the noise in the scan if possible.

Until next time...

UPDATE (4/5/13):
Here is what a scan of the coffee cup on its side now looks like: