Frequency-modulated continuous-wave LiDAR compressive depth-mapping
We present an inexpensive architecture for converting a frequency-modulated
continuous-wave LiDAR system into a compressive-sensing based depth-mapping
camera. Instead of raster scanning to obtain depth-maps, compressive sensing is
used to significantly reduce the number of measurements. Ideally, our approach
requires two difference detectors, but can operate with only one at the cost
of doubling the number of measurements. Due to the large flux entering the
detectors, the signal amplification from heterodyne detection, and the effects
of background subtraction from compressive sensing, the system can obtain
higher signal-to-noise ratios over detector-array based schemes while scanning
a scene faster than is possible through raster scanning. Moreover, we show how
a single total-variation minimization and two fast least-squares minimizations,
instead of a single complex nonlinear minimization, can efficiently recover
high-resolution depth-maps with minimal computational overhead. Furthermore, by
efficiently storing only a subset of the data points from measurements of the
pixel scene, we can easily extract depths by solving only two linear equations
with efficient convex-optimization methods.
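The recovery pipeline described above, where depths fall out of linear systems rather than one large nonlinear solve, can be sketched in miniature (a toy illustration, not the authors' code; the scene size, pattern choice, and use of a full measurement set are assumptions):

```python
import numpy as np

# Hedged sketch of compressive measurement and linear recovery.
rng = np.random.default_rng(0)

n = 16 * 16        # pixels in a toy 16x16 depth map (assumed size)
m = n              # full measurement set so plain least-squares suffices;
                   # the paper's point is that far fewer measurements work
                   # once a total-variation prior regularizes the solve

depth = np.zeros((16, 16))
depth[4:10, 5:12] = 2.5        # a flat object 2.5 m away in an empty scene
x_true = depth.ravel()

A = rng.choice([-1.0, 1.0], size=(m, n))   # random +/-1 sensing patterns
y = A @ x_true                             # bucket-detector style measurements

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x_hat, x_true, atol=1e-6))  # -> True
```

With the full measurement set the linear system is exactly invertible; the compressive regime replaces the plain least-squares step with the TV-regularized convex program the abstract describes.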
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
Helicopter flights with night-vision goggles: Human factors aspects
Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviator's Night-Vision Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. The system consists of light intensifier tubes which amplify low-intensity ambient illumination (star and moon light) and an optical system which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena which are described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.
The Infocus Hard X-ray Telescope: Pixellated CZT Detector/Shield Performance and Flight Results
The CZT detector on the Infocus hard X-ray telescope is a pixellated
solid-state device capable of imaging spectroscopy by measuring the position
and energy of each incoming photon. The detector sits at the focal point of an
8m focal length multilayered grazing incidence X-ray mirror which has
significant effective area between 20 and 40 keV. The detector has an energy
resolution of 4.0 keV at 32 keV, and the Infocus telescope has an angular
resolution of 2.2 arcminutes and a field of view of about 10 arcminutes. Infocus
flew on a balloon mission in July 2001 and observed Cygnus X-1. We present
results from laboratory testing of the detector to measure the uniformity of
response across the detector, to determine the spectral resolution, and to
perform a simple noise decomposition. We also present a hard X-ray spectrum and
image of Cygnus X-1, and measurements of the hard X-ray CZT background obtained
with the SWIN detector on Infocus.
Comment: To appear in the proceedings of the SPIE conference "Astronomical
Telescopes and Instrumentation", #4851-116, Kona, Hawaii, Aug. 22-28, 2002.
12 pages, 9 figures.
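For a sense of scale, the 8 m focal length and 2.2 arcminute angular resolution quoted above can be converted into a focal-plane distance (a back-of-the-envelope sketch; only those two numbers come from the abstract):

```python
import math

# Plate-scale sketch for a focusing hard X-ray telescope.
focal_length_m = 8.0
arcmin_to_rad = math.pi / (180.0 * 60.0)

# Linear size at the focal plane subtended by one arcminute
# (small-angle approximation).
mm_per_arcmin = focal_length_m * arcmin_to_rad * 1000.0
print(round(mm_per_arcmin, 2))    # -> 2.33

# So the quoted 2.2 arcmin angular resolution corresponds to roughly
# this detector-plane scale:
resolution_mm = 2.2 * mm_per_arcmin
print(round(resolution_mm, 2))    # -> 5.12
```

This is why a pixellated focal-plane detector with millimetre-scale pixels is a natural match for an 8 m focal length optic.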
Structured Light-Based 3D Reconstruction System for Plants.
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
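The detection figures quoted above use the standard precision and recall definitions, which can be made concrete with hypothetical counts (only the metric definitions are standard; the counts below are invented for illustration):

```python
# Sketch of the leaf-detection metrics (recall 0.97, precision 0.89).
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical example: 97 correctly detected leaves,
# 12 false detections, 3 missed leaves.
p, r = precision_recall(tp=97, fp=12, fn=3)
print(round(p, 2), round(r, 2))  # -> 0.89 0.97
```

High recall with somewhat lower precision, as reported, means the system rarely misses a leaf but occasionally flags a non-leaf region.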
Overall evaluation of LANDSAT (ERTS) follow on imagery for cartographic application
The author has identified the following significant results. LANDSAT imagery can be operationally applied to the revision of nautical charts. The imagery depicts shallow seas in a form that permits accurate planimetric image mapping of features to 20 meters of depth where the conditions of water clarity and bottom reflection are suitable. LANDSAT data also provide an excellent simulation of the Earth's surface for such applications as aeronautical charting and radar image correlation in aircraft and aircraft simulators. Radiometric enhancement, particularly edge enhancement, a technique only marginally successful with aerial photographs, has proved to be of high value when applied to LANDSAT data.
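The edge enhancement the report singles out can be illustrated with a minimal Laplacian-based sharpening pass (a generic sketch; the kernel and weighting are assumptions, not the report's actual processing):

```python
import numpy as np

# Minimal unsharp-mask style edge enhancement: subtract the discrete
# Laplacian so that intensity steps get overshoot on both sides,
# which reads as a sharper boundary.
def edge_enhance(img: np.ndarray, strength: float = 1.0) -> np.ndarray:
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return img - strength * lap

# A step edge between a dark and a bright region:
img = np.zeros((5, 8))
img[:, 4:] = 10.0
out = edge_enhance(img)
print(out[2, 3], out[2, 4])  # -> -10.0 20.0 (values straddling the edge pushed apart)
```

The dark side of the edge is driven darker and the bright side brighter, the overshoot that makes coastlines and other linear features stand out in enhanced imagery.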
Application of remote sensing in the study of vegetation and soils in Idaho
There are no author-identified significant results in this report.