%0 Journal Article
%J Computer Vision and Image Understanding
%D 2004
%T A hierarchy of cameras for 3D photography
%A Neumann, Jan
%A Fermüller, Cornelia
%A Aloimonos, J.
%K Camera design
%K Multi-view geometry
%K Polydioptric cameras
%K Spatio-temporal image analysis
%K Structure from motion
%X The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or on image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor, or to use image-based rendering approaches, we need to be able to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time-varying plenoptic function to different known and new camera designs, we can establish a hierarchy of cameras based upon the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera, for which the structure from motion problem is non-linear and ill-posed. At the high end is a camera, which we call the full field of view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene, which leads to fast and robust algorithms for 3D photography. In between are multiple-view cameras with a large field of view, which we have built, as well as omni-directional sensors.
%V 96
%P 274-293
%8 2004/12//
%@ 1077-3142
%G eng
%U http://www.sciencedirect.com/science/article/pii/S1077314204000505
%N 3
%R 10.1016/j.cviu.2004.03.013
%0 Conference Paper
%B Ninth IEEE International Conference on Computer Vision, 2003. Proceedings
%D 2003
%T Eye design in the plenoptic space of light rays
%A Neumann, J.
%A Fermüller, Cornelia
%A Aloimonos, J.
%K 3D ego-motion estimation
%K Assembly
%K B-splines
%K Camera design
%K Cameras
%K captured image
%K compound eyes
%K Computer vision
%K data mining
%K eye
%K eye-carrying organism
%K Eyes
%K filter optimization
%K image representation
%K image resolution
%K Information geometry
%K Laboratories
%K light field reconstruction
%K light gathering power
%K light rays
%K mixed spherical-Cartesian coordinate system
%K Motion estimation
%K natural evolution process
%K natural eye designs
%K natural image statistics
%K optical nanotechnology
%K Optical signal processing
%K optimal eye design mathematical criteria
%K Organisms
%K plenoptic image formation
%K plenoptic space
%K plenoptic video geometry
%K sampling operators
%K sensory ecology
%K Signal design
%K Signal processing
%K signal processing framework
%K signal processing tool
%K square-summable sequences
%K visual acuity
%X Natural eye designs are optimized with regard to the tasks the eye-carrying organism has to perform for survival. This optimization has been carried out by the process of natural evolution over many millions of years. Every eye captures a subset of the space of light rays. The information contained in this subset, and the accuracy with which the eye can extract that information, determines an upper limit on how well an organism can perform a given task. In this work we propose a new methodology for camera design. By interpreting eyes as sample patterns in light ray space, we can phrase the problem of eye design in a signal processing framework. This allows us to develop mathematical criteria for optimal eye design, which in turn enable us to build the best eye for a given task without the trial-and-error phase of natural evolution. The principle is evaluated on the task of 3D ego-motion estimation.
%B Ninth IEEE International Conference on Computer Vision, 2003. Proceedings
%I IEEE
%P 1160-1167 vol.2
%8 2003/10/13/16
%@ 0-7695-1950-4
%G eng
%R 10.1109/ICCV.2003.1238623
%0 Conference Paper
%B Third Workshop on Omnidirectional Vision, 2002. Proceedings
%D 2002
%T Eyes from eyes: new cameras for structure from motion
%A Neumann, J.
%A Fermüller, Cornelia
%A Aloimonos, J.
%K Algorithm design and analysis
%K Camera design
%K Cameras
%K Design automation
%K differential stereo
%K Educational institutions
%K ego-motion estimation
%K Eyes
%K Geometrical optics
%K Image processing
%K Layout
%K Motion estimation
%K observed scene
%K Optical films
%K polydioptric camera
%K Retina
%X We investigate the relationship between camera design and the problem of recovering the motion and structure of a scene from video data. The visual information that could possibly be obtained is described by the plenoptic function. A camera can be viewed as a device that captures a subset of this function; that is, it measures some of the light rays in some part of the space. The information contained in this subset determines how difficult it is to solve subsequent interpretation processes. By examining the differential structure of the time-varying plenoptic function, we relate different known and new camera models to the spatio-temporal structure of the observed scene. This allows us to define a hierarchy of camera designs, where the order is determined by the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera, for which the structure from motion problem is non-linear and ill-posed. At the high end is a new camera, which we call the full field of view polydioptric camera, for which the problem is linear and stable. In between are multiple-view cameras with large fields of view, which we have built, as well as catadioptric panoramic sensors and other omni-directional cameras. We develop design suggestions for the polydioptric camera, and based upon this new design we propose a linear algorithm for ego-motion estimation, which in essence combines differential motion estimation with differential stereo.
%B Third Workshop on Omnidirectional Vision, 2002. Proceedings
%I IEEE
%P 19-26
%8 2002///
%@ 0-7695-1629-7
%G eng
%R 10.1109/OMNVIS.2002.1044486