04251 -- Imaging Beyond the Pinhole Camera
From 13.06.04 to 18.06.04, the
Dagstuhl Seminar 04251 "Imaging Beyond the Pin-hole Camera. 12th Seminar on Theoretical Foundations of Computer Vision" was held
in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Vision Sensors and Edge Detection
This book reflects a selection of recent developments in the area of vision sensors and edge detection. It has two sections. The first presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.
Martian orbital photographic study Technical summary report
Objectives and instrumentation for Mars orbital photographic experiment
Pointing, Acquisition, and Tracking Systems for Free-Space Optical Communication Links
Pointing, acquisition, and tracking (PAT) systems are widely applied, from short-range applications (e.g., human motion tracking) to long-haul systems (e.g., missile guidance). This dissertation extends PAT systems into new territory: free-space optical (FSO) communication link alignment, the most important missing ingredient for practical deployment.
A key design feature is the exploitation of geometric invariances intrinsic to the rigid configuration of actuators and sensors. Once the actuator and sensor configuration is determined, the geometric invariance is fixed and can therefore be calibrated in advance. This calibrated invariance then serves as a transformation converting sensor measurements into actuator actions.
The challenge of the FSO alignment problem lies in how to point to a 3D target using only a 2D sensor. Two solutions are proposed: the first exploits the invariance known as the linear homography, embedded in FSO applications that involve a long link length between transceivers or have planar trajectories. The second employs an additional 2D or 1D sensor, which results in invariances known as the trifocal tensor and the radial trifocal tensor, respectively. Because these invariances are derived under the assumption that sensor measurements are free from noise, including the uncertainty resulting from aberrations, a robust calibration algorithm is required to retrieve the optimal invariance from noisy measurements.
The first solution is sufficient for most PAT systems used for FSO alignment, since a long link length is generally the case. Although PAT systems are normally divided into coarse and fine subsystems to meet different requirements, both are proven to be governed by a linear homography. Robust calibration algorithms were developed during this work and verified by simulations. Two prototype systems were built: one serves as a fine pointing subsystem, consisting of a beam steerer and an angular resolver; the other serves as a coarse pointing subsystem, consisting of a rotary gimbal and a camera. The average pointing errors in the two prototypes were less than 170 and 700 microradians, respectively.
PAT systems based on the second solution can point to any target within the intersecting fields of view of both sensors, because the two sensors provide stereo vision to determine the depth of the target, the missing information that a single 2D sensor cannot recover. They are only required when short-distance FSO communication links must be established. Two simulations were conducted to show the robustness of the calibration procedures and the pointing accuracy with respect to random noise.
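The abstract does not spell out the calibration algorithm, but the linear homography it names is a standard 3x3 projective map, and a common way to retrieve it from noisy point correspondences is the Direct Linear Transform (DLT) solved by least squares. A minimal sketch, assuming that flavor of calibration (the function names are illustrative, not the dissertation's):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography from 2D point correspondences via the
    Direct Linear Transform: stack two linear constraints per pair and
    take the least-squares solution from the SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale ambiguity

def apply_homography(H, pt):
    """Map a 2D point through H in homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With noisy measurements, the same SVD solution minimizes the algebraic error over all correspondences, which is why collecting many sensor/actuator pairs during calibration improves robustness.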
Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems
We explore low-cost solutions for efficiently improving the 3D pose estimation problem of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires the use of portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the proposed solutions presented in this thesis.
To achieve this portability goal with a single off-the-shelf camera, we take two approaches. The first, and the most extensively studied here, revolves around an unorthodox camera-mirrors configuration (catadioptrics) achieving a stereo omnidirectional system (SOS). The second relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack because of their limited fields of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is available for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases pose estimation accuracy over the baseline method (using only grayscale or color information), with photometric error minimization at the heart of the "direct" tracking algorithm. This solution has so far been tested on standard monocular cameras, but it could also be applied to an SOS.
We believe the challenges we attempted to solve have not previously been considered with the level of detail needed to successfully perform VO with a single camera, our ultimate goal, in both real-life and simulated scenes.
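The "direct" tracking idea named above is to compare pixel intensities between a reference image and the current image warped by a candidate pose, and to pick the pose minimizing the photometric error; "multichannel" means the residual sums over all image channels rather than grayscale alone. A minimal sketch under simplifying assumptions (a pure integer-pixel translation instead of a full 3D pose, brute-force search instead of Gauss-Newton; function names are illustrative, not the thesis code):

```python
import numpy as np

def photometric_cost(ref, cur, shift):
    """Mean squared photometric residual between a reference image and a
    current image under an integer pixel shift (dx, dy).
    Images have shape (H, W, C); summing over all channels is the idea
    behind multichannel direct tracking."""
    dx, dy = shift
    h, w, _ = ref.shape
    # Compare only the overlap region valid for this shift (no interpolation).
    ref_crop = ref[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    cur_crop = cur[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    r = cur_crop.astype(float) - ref_crop.astype(float)
    return float(np.mean(r ** 2))

def track_translation(ref, cur, search=3):
    """Brute-force search for the shift minimizing the photometric error."""
    best = min((photometric_cost(ref, cur, (dx, dy)), (dx, dy))
               for dx in range(-search, search + 1)
               for dy in range(-search, search + 1))
    return best[1]
```

A real direct VO pipeline replaces the translation with a 6-DoF warp through the camera projection model and minimizes the same residual iteratively, but the cost being minimized has this shape.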
Terahertz Technology and Its Applications
The terahertz frequency range (0.1–10 THz) has been demonstrated to provide many opportunities in prominent research fields such as high-speed communications, biomedicine, sensing, and imaging. This spectral range, lying between electronics and photonics, has historically been known as the "terahertz gap" because of the lack of experimental as well as fabrication technologies. However, many efforts are now being carried out worldwide to improve technology operating in this frequency range. This book highlights some of the work being done within this range of the electromagnetic spectrum. The topics covered include non-destructive testing and terahertz imaging and sensing, among others.
Optical Wireless Communication for Mobile Platforms
The past few decades have witnessed the widespread adoption of wireless
devices such as cellular phones and Wi-Fi-connected laptops, and demand for wireless
communication is expected to continue to increase. Though radio frequency (RF)
communication has traditionally dominated in this application space, recent decades
have seen an increasing interest in the use of optical wireless (OW) communication
to supplement RF communications. In contrast to RF communication technology,
OW systems offer the use of largely unregulated electromagnetic spectrum and large
bandwidths for communication. They also offer the potential to be highly secure
against jamming and eavesdropping. Interest in OW has become especially keen in
light of the maturation of light-emitting diode (LED) technology. This maturation,
and the consequent emerging ubiquity of LED technology in lighting systems, has
motivated the exploration of LEDs for wireless communication purposes in a wide
variety of applications. Recent interest in this field has largely focused on the
potential for indoor local area networks (LANs) to be realized with increasingly
common LED-based lighting systems. We envision the use of LED-based OW to
serve as a supplement to RF technology in communication between mobile platforms,
which may include automobiles, robots, or unmanned aerial vehicles (UAVs). OW
technology may be especially useful in what are known as RF-denied environments,
in which RF communication may be prohibited or undesirable.
The use of OW in these settings presents major challenges. In contrast to
many RF systems, OW systems that operate at ranges beyond a few meters typically
require relatively precise alignment. For example, some laser-based optical wireless
communication systems require alignment precision to within small fractions of a
degree. This level of alignment precision can be difficult to maintain between mobile
platforms. Additionally, the use of OW systems in outdoor settings presents the
challenge of interference from ambient light, which can be much brighter than any
LED transmitter.
This thesis addresses these challenges to the use of LED-based communication
between mobile platforms. We propose and analyze a dual-link LED-based system
that uses one link with a wide transmission beam and relaxed alignment constraints
to support a narrower, precisely aligned, higher-data-rate link. The use of an
optical link with relaxed alignment constraints to support the alignment of a more
precisely aligned link motivates our exploration of a panoramic imaging receiver for
estimating the range and bearing of neighboring nodes. The precision of such a
system is analyzed and an experimental system is realized. Finally, we present an
experimental prototype of a self-aligning LED-based link.
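The panoramic imaging receiver mentioned above estimates the bearing of neighboring nodes from where their light falls on the sensor. A minimal sketch of the geometric idea, assuming an idealized equirectangular panorama with a full 360-degree horizontal sweep (the thesis prototype's actual optics and calibration are not specified here, and the function name is illustrative):

```python
def pixel_to_bearing(x, y, width, height, vfov_deg=60.0):
    """Convert a pixel location in an idealized equirectangular panoramic
    image to an (azimuth, elevation) bearing in degrees.
    Assumes the image spans a full 360 degrees horizontally and vfov_deg
    vertically, centered on the horizon."""
    azimuth = 360.0 * x / width               # 0 degrees at the image's left edge
    elevation = vfov_deg * (0.5 - y / height)  # positive above the horizon
    return azimuth, elevation
```

Under this model, detecting a transmitter's image centroid directly yields the direction to point the narrow, high-data-rate link; range requires extra information (e.g., a second viewpoint or a known beacon geometry), which is part of what the thesis analyzes.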
Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory
This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.
Active and Passive Reconstruction in Computer Vision
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.