Wednesday, March 21, 2012

e-con Launches Stereo Camera Reference Design with TI OMAP and Aptina WVGA

PRWeb: e-con Systems, an embedded design services company specializing in the development of advanced camera solutions, announces what it says is the world’s first stereo vision camera reference design based on TI’s OMAP/DM37x family of processors and Aptina's 1/3-inch global shutter monochrome WVGA image sensor, the MT9V024. The Capella reference design targets machine vision, robotics, 3D object recognition, and other applications.


A YouTube demo tells more about the reference design:

19 comments:

  1. Monitors don't refresh at 1 kHz; the displayed time is just whenever the timer application chose to draw it. So the synchronization could still be off: those cameras are capturing the same laptop frame, not the same millisecond.

  2. The frame rate is low. And for 3D stereo matching the disparity can be very small, so pixel-level synchronization is needed; millisecond synchronization precision is far from enough (see the worked example after the comments).

  3. The video clip shows severe vertical misalignment.

  4. The MT9V024 has a stereo mode in which two sensors are pixel-clock synchronized.

  5. As far as I know, the MT9V024 provides pixel-synchronous video. I also saw one more video on their website; here is the link: http://www.youtube.com/watch?v=KJjWZ6LvmGw&feature=player_embedded. That video says they achieve 45 µs accuracy between the two frames. Also, looking at the docs, they claim VGA at 30 fps from both cameras simultaneously. The lower frame rate shown in the video might be due to the GUI; the frames might be grabbed before that.

    It would be good to have color; they have only monochrome, which is not good!

    Replies
    1. Yes, we transmit one pixel of each frame at a time to the OMAP pipeline; hence this timing is possible.

  6. This looks like a joke. At least the power button works.

    Replies
    1. We would like to understand what you are looking at and whether something is missing in our product. Could you please explain further?

  7. A monochrome sensor should be good enough for the majority of stereo applications, but the resolution is not sufficient.

  8. What's the difference between this and the Bumblebee?

    Replies
    1. The Bumblebee is USB-based, whereas this one is standalone. It also looks like the Bumblebee doesn't let you develop your own algorithms, whereas this one does, and hence allows you to integrate stereo vision into your products.

      The Bumblebee supports 1.3 MP.

  9. The MT9V024 has a color CFA variant.

  10. Not sure this is that big a deal. So they attached two cameras to a processor; that's the easy part... As others have mentioned, even synchronization is already supported by the sensor hardware.

    Replies
    1. The DM3730 has only one parallel camera interface, and here there are two camera inputs. I think the combination of two cameras into one input is what makes this hardware different.

      The OMAP4 has two camera interfaces, so with the OMAP4 we wouldn't need this circuit at all.

    2. The tricky part is interfacing the OMAP, which has only one camera interface, with two incoming camera streams while still achieving pixel-synchronous capture. The two camera streams are combined into one stream, passed through the OMAP pipeline, and then split again (a sketch of this merge/split idea appears after the comments). This is something we wanted to do because it helps our customers build stereo products. Our customers have tried this in the past with two USB cameras and had a lot of performance issues, hence we launched this solution.

    3. No, this is easy with the sensor used. This particular Aptina sensor allows two sensors to be linked together, combining their outputs into one stream.

      So basically all you have here is an adapter board with two sensors and a level shifter.

    4. There are two things to look at: the hardware block and the OMAP ISP block. What we have achieved is merging the two streams into one on the camera stream side and then breaking the merged stream back into two separate streams, while still achieving 30 fps on both the left and right sides. This ensures you get real-world data so that you don't miss anything in the depth calculation. It involves some extensive work on the OMAP ISP. Finally, if this were that simple, someone would have done it before, and it would not have taken us around six months to develop!

    5. Do we get the full source for the OMAP ISP-side code that you have developed? Is it provided as part of a kernel driver?

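A quick back-of-the-envelope check on the synchronization comments above. The C sketch below uses the standard pinhole stereo relation Z = f·B/d; all numbers in it (focal length, baseline, distance, object speed) are illustrative assumptions, not Capella specifications. It shows how much depth error a single-pixel disparity error causes, and how far a moving object slides between two exposures offset by 1 ms.

/*
 * Illustrative sketch, not taken from the article: why tight
 * synchronization matters for stereo matching. All rig parameters
 * below are assumptions chosen for the example.
 */
#include <stdio.h>

int main(void)
{
    double f_px     = 500.0;  /* assumed focal length in pixels */
    double baseline = 0.06;   /* assumed stereo baseline, meters */
    double Z        = 5.0;    /* assumed object distance, meters */

    /* Pinhole stereo: Z = f * B / d, so d = f * B / Z. */
    double d = f_px * baseline / Z;

    /* Depth recovered if the match is off by one pixel. */
    double Z_err = f_px * baseline / (d - 1.0);
    printf("true disparity %.2f px; depth with 1 px error: %.2f m (true %.2f m)\n",
           d, Z_err, Z);

    /* Pixel shift caused by a 1 ms capture offset for an object
       moving laterally at 2 m/s: dx = f * v * dt / Z. */
    double v = 2.0, dt = 0.001;
    printf("1 ms offset at 2 m/s lateral motion: %.2f px shift\n",
           f_px * v * dt / Z);
    return 0;
}

At these assumed values a 1 ms offset costs about a fifth of a pixel, which already matters once the matcher works at sub-pixel precision; faster motion or closer objects make it worse.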

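On the merge/split scheme discussed in the replies, here is a rough C sketch of one plausible reading: the two pixel-synchronous sensors are interleaved pixel-by-pixel into a single double-width stream so that one camera port can carry both, and the stream is split back into left and right frames downstream. This illustrates the general technique only; it is not e-con's actual ISP code, and the buffer layout and function names are assumed.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define WIDTH 752   /* MT9V024 active columns (WVGA) */

/* Merge one scanline from each sensor: output alternates L0 R0 L1 R1 ... */
static void interleave_line(const uint8_t *left, const uint8_t *right,
                            uint8_t *merged)
{
    for (int x = 0; x < WIDTH; x++) {
        merged[2 * x]     = left[x];
        merged[2 * x + 1] = right[x];
    }
}

/* Split one merged scanline back into separate left/right lines. */
static void deinterleave_line(const uint8_t *merged,
                              uint8_t *left, uint8_t *right)
{
    for (int x = 0; x < WIDTH; x++) {
        left[x]  = merged[2 * x];
        right[x] = merged[2 * x + 1];
    }
}

int main(void)
{
    uint8_t left[WIDTH], right[WIDTH], merged[2 * WIDTH];
    uint8_t left2[WIDTH], right2[WIDTH];

    for (int x = 0; x < WIDTH; x++) {   /* synthetic test pattern */
        left[x]  = (uint8_t)x;
        right[x] = (uint8_t)(255 - x);
    }
    interleave_line(left, right, merged);
    deinterleave_line(merged, left2, right2);

    printf("round trip %s\n",
           (memcmp(left, left2, WIDTH) == 0 &&
            memcmp(right, right2, WIDTH) == 0) ? "OK" : "FAILED");
    return 0;
}

Because the two pixels of each stereo pair travel side by side through the single camera port, they share the same line and frame timing, which is presumably how pixel-level synchronization survives the shared interface.
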
All comments are moderated to avoid spam and personal attacks.