System analysis and integration studies for a 15-micron horizon radiance measurement experiment
Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect
Recently, the new Kinect One has been released by Microsoft, providing the next
generation of real-time range sensing devices based on the Time-of-Flight (ToF)
principle. Since the first Kinect version used a structured light approach,
one would expect various differences in the characteristics of the range data
delivered by both devices. This paper presents a detailed and in-depth
comparison between both devices. In order to conduct the comparison, we propose
a framework of seven different experimental setups, which is a generic basis
for evaluating range cameras such as Kinect. The experiments have been designed
with the goal of capturing the individual effects of the Kinect devices in as
isolated a manner as possible, and in a way that they can be adapted to any
other range sensing device. The overall goal of this paper is to provide
a solid insight into the pros and cons of either device. Thus, scientists who
are interested in using Kinect range sensing cameras in their specific
application scenario can directly assess the expected, specific benefits and
potential problems of either device.

Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
Silicon Solar Cell Process Development, Fabrication and Analysis, Phase 1
Solar cells from RTR ribbons, EFG (RF and RH) ribbons, dendritic webs, Silso wafers, cast silicon by HEM, silicon on ceramic, and continuous Czochralski ingots were fabricated using a standard process typical of those used currently in the silicon solar cell industry. Back surface field (BSF) processing and other process modifications were included to give preliminary indications of possible improved performance. The parameters measured included open circuit voltage, short circuit current, curve fill factor, and conversion efficiency (all taken under AM0 illumination). Also measured for typical cells were spectral response, dark I-V characteristics, minority carrier diffusion length, and photoresponse by fine light spot scanning. The results were compared to the properties of cells made from conventional single crystalline Czochralski silicon with an emphasis on statistical evaluation. Limited efforts were made to identify growth defects which influence solar cell performance
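The measured parameters relate through two standard photovoltaic formulas: the curve fill factor ties the maximum power point to the product of open circuit voltage and short circuit current, and conversion efficiency divides maximum power by the incident power. A minimal sketch (the numeric values are illustrative, not from the report; AM0 irradiance is taken as roughly 0.1353 W/cm^2):

```python
def fill_factor(v_oc, i_sc, p_max):
    """Curve fill factor: P_max / (V_oc * I_sc)."""
    return p_max / (v_oc * i_sc)

def conversion_efficiency(p_max, area_cm2, irradiance_w_cm2=0.1353):
    """Efficiency under AM0 illumination (~0.1353 W/cm^2 above the atmosphere)."""
    return p_max / (irradiance_w_cm2 * area_cm2)

# Illustrative cell: Voc = 0.60 V, Isc = 0.12 A, Pmax = 0.054 W, area = 4 cm^2
ff = fill_factor(0.60, 0.12, 0.054)       # 0.75
eff = conversion_efficiency(0.054, 4.0)   # roughly 0.10, i.e. ~10%
```

Comparing these two derived figures across growth methods is what the statistical evaluation above amounts to in practice.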
A feasibility study of limb volume measuring systems
Evaluation of the various techniques by which limb volume can be measured indicates that the odometric (electromechanical) method and the reflective scanner (optical) have a high probability of meeting the specifications of the LBNP experiments. Both of these methods provide segmental measurements from which the cross sectional area of the limb can be determined
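Once segmental cross-sectional areas are available, a volume estimate follows by stacking truncated cones between successive measurement planes. A sketch under that assumption (the frustum reduction and even spacing are illustrative, not the LBNP instrument's actual data reduction):

```python
import math

def limb_volume(areas_cm2, spacing_cm):
    """Estimate limb volume from cross-sectional areas measured at evenly
    spaced planes, treating each segment between adjacent planes as a
    truncated cone: V = h/3 * (A1 + A2 + sqrt(A1 * A2))."""
    total = 0.0
    for a1, a2 in zip(areas_cm2, areas_cm2[1:]):
        total += spacing_cm / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))
    return total

# Constant 10 cm^2 cross-section over two 3 cm segments -> a 60 cm^3 cylinder
print(limb_volume([10.0, 10.0, 10.0], 3.0))  # 60.0
```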
3D scanning of cultural heritage with consumer depth cameras
Three dimensional reconstruction of cultural heritage objects is an expensive and time-consuming process. Recent consumer real-time depth acquisition devices, like Microsoft Kinect, allow very fast and simple acquisition of 3D views. However 3D scanning with such devices is a challenging task due to the limited accuracy and reliability of the acquired data. This paper introduces a 3D reconstruction pipeline suited to use consumer depth cameras as hand-held scanners for cultural heritage objects. Several new contributions have been made to achieve this result. They include an ad-hoc filtering scheme that exploits the model of the error on the acquired data and a novel algorithm for the extraction of salient points exploiting both depth and color data. Then the salient points are used within a modified version of the ICP algorithm that exploits both geometry and color distances to precisely align the views even when geometry information is not sufficient to constrain the registration. The proposed method, although applicable to generic scenes, has been tuned to the acquisition of sculptures, and in this setting the experimental results indicate rather good performance
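The core idea of the modified ICP step, pairing points by a distance that mixes geometry and color so that flat but textured regions still constrain registration, can be sketched as follows (the additive weighting and the value of `lam` are illustrative assumptions, not the paper's exact formulation):

```python
import math

def combined_distance(p_xyz, p_rgb, q_xyz, q_rgb, lam=0.1):
    """Correspondence distance mixing Euclidean 3D distance with a color
    term; lam trades off color against geometry."""
    geo = math.dist(p_xyz, q_xyz)
    col = math.dist(p_rgb, q_rgb)
    return geo + lam * col

def closest_point(p_xyz, p_rgb, candidates, lam=0.1):
    """Pick the (xyz, rgb) candidate minimizing the combined distance."""
    return min(candidates,
               key=lambda c: combined_distance(p_xyz, p_rgb, c[0], c[1], lam))
```

On a featureless plane the geometric term is the same for many candidates, so the color term breaks the tie, which is exactly the failure mode plain ICP suffers from.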
An Infrared Camera for Leuschner Observatory and the Berkeley Undergraduate Astronomy Lab
We describe the design, fabrication, and operation of an infrared camera
which is in use at the 30-inch telescope of the Leuschner Observatory. The
camera is based on a Rockwell PICNIC 256 x 256 pixel HgCdTe array, which is
sensitive from 0.9-2.5 micron. The primary purpose of this telescope is for
undergraduate instruction. The cost of the camera has been minimized by using
commercial parts wherever practical. The camera optics are based on a modified
Offner relay which forms a cold pupil where stray thermal radiation from the
telescope is baffled. A cold, six-position filter wheel is driven by a
cryogenic stepper motor, thus avoiding any mechanical feedthroughs. The array
control and readout electronics are based on standard PC cards; the only custom
component is a simple interface card which buffers the clocks and amplifies the
analog signals from the array.

Comment: 13 pages, 17 figures. Submitted to Publications of the Astronomical Society of the Pacific: 2001 Jan 10, Accepted 2001 Jan 1.
Validity and reliability of an inertial sensor for wheelchair court sports performance
The purpose of the current study was to determine the validity and reliability of an inertial sensor for assessing speed specific to athletes competing in the wheelchair court sports (basketball, rugby, and tennis). A wireless inertial sensor was attached to the axle of a sports wheelchair. Over two separate sessions, the sensor was tested across a range of treadmill speeds reflective of the court sports (1.0 to 6.0 m/s). At each test speed, ten 10-second trials were recorded and were compared with the treadmill (criterion). A further session explored the dynamic validity and reliability of the sensor during a sprinting task on a wheelchair ergometer compared with high-speed video (criterion). During session one, the sensor marginally overestimated speed, whereas during session two these speeds were underestimated slightly. However, systematic bias and absolute random errors never exceeded 0.058 m/s and 0.086 m/s, respectively, across both sessions. The sensor was also shown to be a reliable device with coefficients of variation (% CV) never exceeding 0.9 at any speed. During maximal sprinting, the sensor also provided a valid representation of the peak speeds reached (1.6% CV). Slight random errors in timing led to larger random errors in the detection of deceleration values. The results of this investigation have demonstrated that an inertial sensor developed for sports wheelchair applications provided a valid and reliable assessment of the speeds typically experienced by wheelchair athletes. As such, this device will be a valuable monitoring tool for assessing aspects of linear wheelchair performance
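The validity and reliability quantities reported above, systematic bias, absolute random error, and coefficient of variation, follow standard definitions; a sketch of how they might be computed over the repeated trials at one treadmill speed (variable names and trial values are illustrative):

```python
import statistics

def validity_stats(device_speeds, criterion_speed):
    """Systematic bias = mean(device - criterion); absolute random error =
    SD of those differences; %CV = 100 * SD / mean of the device readings."""
    errors = [d - criterion_speed for d in device_speeds]
    bias = statistics.mean(errors)
    random_error = statistics.stdev(errors)
    cv_percent = 100.0 * statistics.stdev(device_speeds) / statistics.mean(device_speeds)
    return bias, random_error, cv_percent

# Ten illustrative 10-second trials at a 2.0 m/s treadmill speed
trials = [2.01, 2.02, 1.99, 2.00, 2.01, 2.00, 1.98, 2.02, 2.00, 1.99]
bias, rand_err, cv = validity_stats(trials, 2.0)
```

A positive bias corresponds to the overestimation seen in session one, a negative one to the underestimation in session two.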
A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles
This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driver towards future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and IR sensors and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, when looking at technological solutions for obstacle detection in extreme weather conditions (rain, snow, fog), and in some specific situations in urban areas (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, appear to be not sophisticated enough to guarantee 100% precision and accuracy, hence substantial further development effort is needed
The NASA SBIR product catalog
The purpose of this catalog is to assist small business firms in making the community aware of products emerging from their efforts in the Small Business Innovation Research (SBIR) program. It contains descriptions of some products that have advanced into Phase 3 and others that are identified as prospective products. Both lists of products in this catalog are based on information supplied by NASA SBIR contractors in responding to an invitation to be represented in this document. Generally, all products suggested by the small firms were included in order to meet the goals of information exchange for SBIR results. Of the 444 SBIR contractors NASA queried, 137 provided information on 219 products. The catalog presents the product information in the technology areas listed in the table of contents. Within each area, the products are listed in alphabetical order by product name and are given identifying numbers. Also included is an alphabetical listing of the companies that have products described. This listing cross-references the product list and provides information on the business activity of each firm. In addition, there are three indexes: one a list of firms by states, one that lists the products according to NASA Centers that managed the SBIR projects, and one that lists the products by the relevant Technical Topics utilized in NASA's annual program solicitation under which each SBIR project was selected
LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor Collision Avoidance System
Collision avoidance is a critical task in many applications, such as ADAS
(advanced driver-assistance systems), industrial automation and robotics. In an
industrial automation setting, certain areas should be off limits to an
automated vehicle for protection of people and high-valued assets. These areas
can be quarantined by mapping (e.g., GPS) or via beacons that delineate a
no-entry area. We propose a delineation method where the industrial vehicle
utilizes a LiDAR (Light Detection and Ranging) and a single color camera to
detect passive beacons and model-predictive control to stop the vehicle from
entering a restricted space. The beacons are standard orange traffic cones with
a highly reflective vertical pole attached. The LiDAR can readily detect these
beacons, but suffers from false positives due to other reflective surfaces such
as worker safety vests. Herein, we put forth a method for reducing false
positive detection from the LiDAR by projecting the beacons in the camera
imagery via a deep learning method and validating the detection using a neural
network-learned projection from the camera to the LiDAR space. Experimental
data collected at Mississippi State University's Center for Advanced Vehicular
Systems (CAVS) shows the effectiveness of the proposed system in keeping the
true detections while mitigating false positives.

Comment: 34 pages.
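The validation step described, projecting LiDAR beacon candidates into the image and keeping only those that land on a camera-detected beacon, can be sketched as follows (the toy projection and pixel tolerance are illustrative stand-ins for the paper's neural-network-learned projection and deep-learning detector):

```python
def validate_detections(lidar_candidates, project, camera_boxes, tol=10.0):
    """Suppress LiDAR false positives: keep a 3D candidate only if its
    projected pixel lands inside (within `tol` px of) some camera-detected
    beacon bounding box given as (x1, y1, x2, y2)."""
    kept = []
    for point in lidar_candidates:
        u, v = project(point)
        if any(x1 - tol <= u <= x2 + tol and y1 - tol <= v <= y2 + tol
               for (x1, y1, x2, y2) in camera_boxes):
            kept.append(point)
    return kept

# Toy pinhole-like projection standing in for the learned mapping
project = lambda p: (320 + 100 * p[0] / p[2], 240 + 100 * p[1] / p[2])
boxes = [(330, 250, 370, 290)]  # one camera-detected cone
cones = validate_detections([(2.0, 1.5, 5.0), (-3.0, 0.0, 5.0)], project, boxes)
```

A reflective safety vest would produce a LiDAR candidate with no supporting camera box, so it is dropped, which is the mechanism the paper uses to keep true detections while cutting false positives.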