Shift-encoded optically multiplexed imaging
In a multiplexed image, multiple fields-of-view (FoVs) are superimposed onto a common focal plane. The attendant gain in sensor FoV provides a new degree of freedom in the design of an imaging system, allowing for performance tradeoffs not available in traditional optical designs. We explore design choices relating to a shift-encoded optically multiplexed imaging system and discuss their performance implications. Unlike in a traditional imaging system, a single multiplexed image has a fundamental ambiguity regarding the location of objects in the image. We present a system that can shift each FoV independently to break this ambiguity and compare it to other potential disambiguation techniques. We then discuss the optical, mechanical, and encoding design choices of a shift-encoding midwave infrared imaging system that multiplexes six 15×15 deg FoVs onto a single one-megapixel focal plane. Using this sensor, we demonstrate a computationally demultiplexed wide-FoV video.
United States. Air Force Office of Scientific Research (FA8721-05-C-0002)
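The shift-encoding idea can be illustrated with a toy one-dimensional simulation (hypothetical values, pure Python; not the paper's optical system): superimposing two FoVs creates the positional ambiguity, and shifting one FoV by a known amount between frames breaks it.

```python
import random

random.seed(0)
W = 16  # width of a toy 1-D "sensor" row

# Two hypothetical fields of view (FoVs) imaged onto one focal plane.
fov_a = [random.random() for _ in range(W)]
fov_b = [random.random() for _ in range(W)]

# A multiplexed frame is the superposition of the FoVs: from a single
# frame the sensor cannot tell which FoV a given feature came from.
frame1 = [a + b for a, b in zip(fov_a, fov_b)]

# Shift-encoding: circularly shift one FoV by a known amount between frames.
shift = 3
fov_a_shifted = fov_a[-shift:] + fov_a[:-shift]
frame2 = [a + b for a, b in zip(fov_a_shifted, fov_b)]

# Differencing the two frames isolates the shifted FoV's contribution,
# breaking the ambiguity (a toy stand-in for the paper's independent
# per-FoV shifts).
diff = [f2 - f1 for f1, f2 in zip(frame1, frame2)]
expected = [s - a for s, a in zip(fov_a_shifted, fov_a)]
assert all(abs(d - e) < 1e-12 for d, e in zip(diff, expected))
```

Features present in the difference image must belong to the shifted FoV; shifting each FoV independently extends the same idea to all six channels.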
The future of space imaging. Report of a community-based study of an advanced camera for the Hubble Space Telescope
The scientific and technical basis for an Advanced Camera (AC) for the Hubble Space Telescope (HST) is discussed. In March 1992, the NASA Program Scientist for HST invited the Space Telescope Science Institute to conduct a community-based study of an AC, which would be installed on a scheduled HST servicing mission in 1999. The study had three phases: a broad community survey of views on candidate science programs and required performance of the AC, an analysis of technical issues relating to its implementation, and a panel of experts to formulate conclusions and prioritize recommendations. From the assessment of the imaging tasks astronomers have proposed for or desired from HST, we believe the most valuable 1999 instrument would be a camera with both near ultraviolet/optical (NUVO) and far ultraviolet (FUV) sensitivity, and with both wide field and high resolution options.
Spectral image utility for target detection applications
In a wide range of applications, images convey useful information about scenes. The “utility” of an image is defined with reference to the specific task that an observer seeks to accomplish, and differs from the “fidelity” of the image, which seeks to capture the ability of the image to represent the true nature of the scene. In remote sensing of the earth, various means of characterizing the utility of satellite and airborne imagery have evolved over the years. Recent advances in the imaging modality of spectral imaging have enabled synoptic views of the earth at many finely sampled wavelengths over a broad spectral band. These advances challenge the ability of traditional earth observation image utility metrics to describe the rich information content of spectral images. Traditional approaches to image utility that are based on overhead panchromatic image interpretability by a human observer are not applicable to spectral imagery, which requires automated processing. This research establishes the context for spectral image utility by reviewing traditional approaches and current methods for describing spectral image utility. It proposes a new approach to assessing and predicting spectral image utility for the specific application of target detection. We develop a novel approach to assessing the utility of any spectral image using the target-implant method. This method is not limited by the requirements of traditional target detection performance assessment, which requires ground truth and an adequate number of target pixels in the scene. The flexibility of this approach is demonstrated by assessing the utility of a wide range of real and simulated spectral imagery over a variety of target detection scenarios. The assessed image utility may be summarized to any desired level of specificity based on the image analysis requirements.
We also present an approach to predicting spectral image utility that derives statistical parameters directly from an image and uses them to model target detection algorithm output. The image-derived predicted utility is directly comparable to the assessed utility, and the accuracy of prediction is shown to improve with statistical models that capture the non-Gaussian behavior of real spectral image target detection algorithm outputs. The sensitivity of the proposed spectral image utility metric to various image chain parameters is examined in detail, revealing characteristics, requirements, and limitations that provide insight into the relative importance of parameters in the image utility. The results of these investigations lead to a better understanding of spectral image information vis-à-vis target detection performance that will hopefully prove useful to the spectral imagery analysis community and represent a step towards quantifying the ability of a spectral image to satisfy information exploitation requirements.
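The target-implant idea can be sketched in a toy example (hypothetical spectra, with a simple spectral-angle score standing in for the detection algorithms discussed above): a target spectrum is linearly mixed into a chosen background pixel, and detection of that pixel is then scored without needing scene ground truth.

```python
import math
import random

random.seed(1)
bands = 12   # number of spectral bands (toy value)
n_pix = 50   # number of pixels in the toy scene

# Toy spectral image: background clutter plus one implanted target pixel.
target = [0.5 + 0.1 * b for b in range(bands)]   # hypothetical target spectrum
pixels = [[random.gauss(1.0, 0.3) for _ in range(bands)] for _ in range(n_pix)]

# Target implant: linearly mix the target spectrum into pixel 0 at a
# chosen fill fraction, as in the assessment method described above.
fill = 0.7
pixels[0] = [(1 - fill) * p + fill * t for p, t in zip(pixels[0], target)]

def cos_sim(u, v):
    """Spectral-angle-style score: cosine similarity to the target."""
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return sum(a * b for a, b in zip(u, v)) / (norm_u * norm_v)

scores = [cos_sim(p, target) for p in pixels]
# Sweeping fill fractions and counting how often the implanted pixel
# scores well yields a utility estimate for the image.
```

The implanted pixel should score above typical background; repeating the implant across locations and fill fractions summarizes utility to whatever level of specificity the analysis requires.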
Unraveling the Thousand Word Picture: An Introduction to Super-Resolution Data Analysis
Super-resolution microscopy provides direct insight into fundamental biological processes occurring at length scales smaller than light’s diffraction limit. The analysis of data at such scales has brought statistical and machine learning methods into the mainstream. Here we provide a survey of data analysis methods, starting from an overview of basic statistical techniques underlying the analysis of super-resolution and, more broadly, imaging data. We subsequently break down the analysis of super-resolution data into four problems: the localization problem, the counting problem, the linking problem, and what we’ve termed the interpretation problem.
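Of the four problems, the linking problem admits the simplest sketch: associate localizations across frames to form trajectories. A greedy nearest-neighbour pass (hypothetical coordinates; not an algorithm from the survey itself) illustrates the idea.

```python
import math

# Localizations detected in two consecutive frames (hypothetical values).
frame1 = [(1.0, 1.0), (5.0, 5.0)]
frame2 = [(5.2, 4.9), (1.1, 0.9)]

# Greedy nearest-neighbour linking: each localization in frame 1 is
# linked to its closest candidate in frame 2.
links = {}
for p in frame1:
    links[p] = min(frame2, key=lambda q: math.dist(p, q))
```

Real linkers additionally handle appearing/disappearing emitters and ambiguous assignments, which is where the statistical machinery surveyed here comes in.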
Application of optical techniques to surveying
This thesis addresses the problem of acquiring spatial data concerning points on the surface of structures such as underground tunnels and sewers. These data can usefully provide knowledge of deformation, shape, area, volume, and position of structures. Such data can be further analysed to give insight into clearances, deterioration, flow rates and in-fill volumes or can be used to give knowledge of the present state of structures and their position.
Few systems address the problem of reliably acquiring this data in a manner that is fast and accurate while remaining flexible, adaptable and robust. This thesis considers a solution to the problem of fast and accurate spatial data acquisition concerning commonly found structures using the technique of optical triangulation with a linear array camera and diode laser light source.
Optical triangulation is a technique that has not fully matured for medium-range measurement, with few systems having been developed and little research material produced. However, the research carried out for this thesis shows that, provided all the factors that contribute to measurement error are understood, a fast, robust and high-accuracy system can be developed.
The development of the optical triangulation technique for use in surveying was addressed through a programme of prototype development, testing, and refinement. Three prototypes were built that demonstrated the reliability, accuracy, speed and robustness of this technique.
The errors associated with a triangulation measuring system applied to surveying are considered in terms of the intrinsic errors, which are common to any triangulation system, and the extrinsic errors, which are particular to the use of such a system in surveying situations.
An automatic calibration bench was constructed for assessment of the triangulation system, using an interferometer to provide high-accuracy measurement of the system's performance. Calibration and interpolation trials were conducted and the results analysed, including an analysis of the subpixel accuracy achieved with the discrete-pixel CCD imagers.
One of the main disadvantages of optical triangulation when applied to the range 0-5 metres is that of non-linearity. A method of correction has been developed and analysed which is believed to be novel and makes a significant improvement to the measuring system.
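The non-linearity follows directly from the triangulation geometry: range varies as the reciprocal of the spot position on the array, so equal pixel steps map to increasingly large range steps. A minimal sketch with hypothetical focal length and baseline values (not the thesis's instrument parameters):

```python
# Laser triangulation: range z = f * b / x, where x is the imaged spot
# position on the linear array, f the focal length and b the
# laser-camera baseline. All values here are hypothetical.
f = 0.05   # focal length, metres
b = 0.20   # baseline, metres

# Equal steps in spot position do NOT give equal steps in range:
for x_mm in (10.0, 5.0, 2.5, 2.0):
    z = f * b / (x_mm / 1000.0)
    print(f"spot at {x_mm:4.1f} mm -> range {z:4.2f} m")
```

Over the 0-5 metre span this reciprocal relationship makes the far end of the range far more sensitive to spot-position error, which is what a non-linearity correction must compensate.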
The conclusion of this research is that an improved system of measurement has been produced which has a number of novel features. Trials show that the measuring system could be developed commercially to provide a solution to measurements of structures within the range of the device and with greater accuracy than comparable equipment designed for the same purpose.
LASER Tech Briefs, September 1993
This edition of LASER Tech Briefs contains a feature on photonics. The other topics include: Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences; and Books and Reports.
Application of advanced technology to space automation
Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly accentuate this statement and should provide further incentive for immediate development of specific automation technology as defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.
Filopodia-independent roles of the actin bundling protein fascin in promoting cell motility
Fascin is an actin bundling protein whose overexpression has in recent years been systematically linked to increased metastasis and poor outcome in cancer patients. It is well established that fascin expression correlates with enhanced cell migration; however, the underlying mechanisms are poorly understood. We combined various methods of high-resolution live cell imaging and computational analysis to investigate the role of fascin in increasing cell motility. We found that fascin promotes collective migration in normal epithelial cells and that this behavior is in agreement with protrusive activities at the single cell level. Traction force measurements indicated that fascin expression level is negatively correlated with traction stress levels and that a cell expressing high levels of fascin protrudes over longer distances than cells with lower levels. Together this led to the hypothesis that fascin distributes cell traction more efficiently, which lowers the load on individual adhesions and actin filaments growing against increasing membrane tension during one protrusion cycle. Measurements of adhesion formation and maturation indicate that fascin expression indeed promotes nascent adhesion formation over a wide area behind the leading edge. In metastatic cells with high fascin expression, we observed decreased invasion upon fascin knockdown. These observations demonstrate a role for fascin in promoting cell motility in normal and neoplastic cells, in part by templating nascent adhesions at the leading edge.
Data fusion for human motion tracking with multimodal sensing
Multimodal sensor fusion is a common approach in the design of many motion tracking systems. It is based on using more than one sensor modality to measure different aspects of a phenomenon and capture more information about it than what would be available otherwise from a single sensor. Multimodal sensor fusion algorithms often leverage the complementary nature of the different modalities to compensate for shortcomings of the individual sensor modalities. This approach is particularly suitable for low-cost and highly miniaturised wearable human motion tracking systems that are expected to perform their function with limited resources at their disposal (energy, processing power, etc.). Opto-inertial motion trackers are some of the most commonly used approaches in this context. These trackers fuse the sensor data from vision and Inertial Motion Unit (IMU) sensors to determine the 3-Dimensional (3-D) pose of the given body part, i.e. its position and orientation. The continuous advances in the State-Of-the-Art (SOA) in camera miniaturisation and efficient point detection algorithms, along with more robust IMUs and increasing processing power in a shrinking form factor, make it increasingly feasible to develop a low-cost, low-power, and highly miniaturised wearable smart sensor human motion tracking system that incorporates these two sensor modalities. In this thesis, a multimodal human motion tracking system is presented that builds on these developments. The proposed system consists of a wearable smart sensor system, referred to as Wearable Platform (WP), which incorporates the two sensor modalities, i.e. monocular camera (optical) and IMU (motion). The WP operates in conjunction with two optical points of reference embedded in the ambient environment to enable positional tracking in that environment.
In addition, a novel multimodal sensor fusion algorithm is proposed which uses the complementary nature of the vision and IMU sensors, in conjunction with the two points of reference in the ambient environment, to determine the 3-D pose of the WP in a computationally efficient way.
To this end, the WP uses a low-resolution camera to track two points of reference, specifically two Infrared (IR) LEDs embedded in the wall. The geometry formed between the WP and the IR LEDs, when complemented by the angular rotation measured by the IMU, simplifies the mathematical formulations involved in computing the 3-D pose, making them compatible with the resource-constrained microprocessors used in such wearable systems. Furthermore, the WP is coupled with the two IR LEDs via a radio link to control their intensity in real time. This enables the novel subpixel point detection algorithm to maintain its highest accuracy, thus increasing the overall precision of the pose detection algorithm. The resulting 3-D pose can be used as an input to a higher-level system for further use.
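Subpixel point detection of a bright feature such as an LED is commonly done with an intensity-weighted centroid over a small window. The sketch below is a generic illustration with hypothetical pixel values, not the thesis's specific algorithm:

```python
# A small image patch around a bright spot (hypothetical intensities).
img = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 1, 0],
    [0, 2, 8, 4, 0],
    [0, 1, 2, 1, 0],
    [0, 0, 0, 0, 0],
]

# Intensity-weighted centroid: locates the spot to a fraction of a
# pixel, even on a low-resolution imager.
total = sum(v for row in img for v in row)
cx = sum(x * v for row in img for x, v in enumerate(row)) / total
cy = sum(y * v for y, row in enumerate(img) for v in row) / total
# The asymmetric spot places cx between pixel centres (subpixel result).
```

Keeping the LED intensity in a good range, as the radio link above allows, avoids saturation and noise floor effects that would bias such a centroid.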
One of the potential uses for the proposed system is in sports applications. For instance, it could be particularly useful for tracking the correctness of executing certain exercises in Strength Training (ST) routines, such as the barbell squat. Thus, it can be used to assist professional ST coaches in remotely tracking the progress of their clients and, most importantly, to ensure a minimum risk of injury through real-time feedback. The modern lifestyle, despite its numerous benefits, has a negative impact on our health due to the increasingly sedentary behaviour it involves. The human body has evolved to be physically active, so these lifestyle changes need to be offset by the addition of regular physical activity to everyday life, of which ST is an important element.
This work describes the following novel contributions:
• A new multimodal sensor fusion algorithm for 3-D pose detection with reduced mathematical complexity for resource-constrained platforms
• A novel system architecture for efficient 3-D pose detection for human motion tracking applications
• A new subpixel point detection algorithm for efficient and precise point detection at reduced camera resolution
• A new reference point estimation algorithm for finding locations of reference points used in validating subpixel point detection algorithms
• A novel proof-of-concept demonstrator prototype that implements the proposed system architecture and multimodal sensor fusion algorithm
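As a rough illustration of the kind of vision/IMU fusion these contributions concern, here is a minimal 1-DoF complementary filter (hypothetical gains and rates; not the thesis's fusion algorithm): the gyro rate is integrated for smooth short-term tracking, while occasional camera-derived angle fixes correct long-term drift.

```python
def fuse(angle, gyro_rate, dt, cam_angle=None, alpha=0.9):
    """One filter step: integrate the IMU rate, then blend in a
    camera-derived angle fix when one is available."""
    angle += gyro_rate * dt                  # IMU integration (drifts)
    if cam_angle is not None:                # vision fix when available
        angle = alpha * angle + (1 - alpha) * cam_angle
    return angle

angle = 0.0       # initial estimate, degrees
true_angle = 10.0  # stationary ground-truth angle, degrees
for step in range(200):
    # Camera runs at a tenth of the IMU rate (hypothetical rates).
    cam = true_angle if step % 10 == 0 else None
    angle = fuse(angle, gyro_rate=0.0, dt=0.01, cam_angle=cam)
# With a stationary target and zero gyro rate, the estimate converges
# toward the camera's 10-degree reading.
```

Full 3-D pose fusion replaces the scalar blend with rotation and translation updates, but the complementary structure, fast inertial prediction corrected by slower optical measurements, is the same.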