Imaging Flash Lidar for Safe Landing on Solar System Bodies and Spacecraft Rendezvous and Docking
NASA has been pursuing flash lidar technology for autonomous, safe landing on solar system bodies and for automated rendezvous and docking. During the final stages of landing, from about 1 kilometer to 500 meters above the ground, the flash lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D terrain map to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16,000-pixel range images with 7-centimeter precision at a 20-Hertz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument and presents the results of recent flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus) built by NASA Johnson Space Center. The flights were conducted at a simulated lunar terrain site, consisting of realistic hazard features and designated landing areas, built at NASA Kennedy Space Center specifically for this demonstration test. This paper also provides an overview of the plan for continued advancement of the flash lidar technology aimed at enhancing its performance to meet both landing and automated rendezvous and docking applications.
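As context for how a flash-lidar range image becomes a 3-D terrain image, here is a minimal Python sketch that casts one ray per pixel and scales it by the measured range. The pinhole-style geometry, field of view, and function names are illustrative assumptions, not the instrument's actual processing.

```python
import numpy as np

def range_image_to_points(ranges, fov_deg):
    """Convert an N x N flash-lidar range image into 3-D points by casting
    one unit ray per pixel through a square field of view (illustrative
    pinhole-style model; parameters are assumptions, not the flight sensor's)."""
    n = ranges.shape[0]
    half = np.radians(fov_deg / 2.0)
    ang = np.linspace(-half, half, n)
    az, el = np.meshgrid(ang, ang)                 # per-pixel azimuth/elevation
    dirs = np.stack([np.cos(el) * np.sin(az),      # unit ray directions
                     np.sin(el),
                     np.cos(el) * np.cos(az)], axis=-1)
    return ranges[..., None] * dirs                # (n, n, 3) points in meters

# 128 x 128 image (~16,000 pixels, as in the paper) of a flat wall 1000 m away
ranges = np.full((128, 128), 1000.0)
pts = range_image_to_points(ranges, fov_deg=1.0)
print(pts.shape)  # (128, 128, 3)
```

Each output point lies at the measured range along its pixel's ray; a hazard-detection stage would then analyze this point cloud for slopes and obstacles.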
Dynamic multifactor hubs interact transiently with sites of active transcription in Drosophila embryos.
The regulation of transcription requires the coordination of numerous activities on DNA, yet how transcription factors mediate these activities remains poorly understood. Here, we use lattice light-sheet microscopy to integrate single-molecule and high-speed 4D imaging in developing Drosophila embryos to study the nuclear organization and interactions of the key transcription factors Zelda and Bicoid. In contrast to previous studies suggesting stable, cooperative binding, we show that both factors interact with DNA with surprisingly high off-rates. We find that both factors form dynamic subnuclear hubs, and that Bicoid binding is enriched within Zelda hubs. Remarkably, these hubs are both short lived and interact only transiently with sites of active Bicoid-dependent transcription. Based on our observations, we hypothesize that, beyond simply forming bridges between DNA and the transcription machinery, transcription factors can organize other proteins into hubs that transiently drive multiple activities at their gene targets. Editorial note: This article has been through an editorial process in which the authors decide how to respond to the issues raised during peer review. The Reviewing Editor's assessment is that all the issues have been addressed (see decision letter).
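The "high off-rates" reported here are typically estimated from single-molecule residence times. As a minimal sketch of that idea only, assuming a single-exponential unbinding model and ignoring real-world complications such as photobleaching and multi-state kinetics (all parameter values below are invented for illustration):

```python
import numpy as np

def estimate_koff(residence_times):
    """Under a single-exponential unbinding model, dwell times are
    distributed Exp(k_off), so k_off = 1 / (mean dwell time)."""
    return 1.0 / np.mean(residence_times)

rng = np.random.default_rng(1)
k_off_true = 2.0                                       # s^-1 (assumed)
dwells = rng.exponential(1.0 / k_off_true, size=20_000)  # simulated dwell times
print(estimate_koff(dwells))                           # close to 2.0 s^-1
```

A high estimated k_off corresponds to short binding events, consistent with the transient interactions described in the abstract.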
Robust model-based analysis of single-particle tracking experiments with Spot-On.
Single-particle tracking (SPT) has become an important method to bridge biochemistry and cell biology since it allows direct observation of protein binding and diffusion dynamics in live cells. However, accurately inferring information from SPT studies is challenging due to biases in both data analysis and experimental design. To address analysis bias, we introduce 'Spot-On', an intuitive web interface. Spot-On implements a kinetic modeling framework that accounts for known biases, including molecules moving out of focus, and robustly infers diffusion constants and subpopulations from pooled single-molecule trajectories. To minimize inherent experimental biases, we implement and validate stroboscopic photo-activation SPT (spaSPT), which minimizes motion-blur bias and tracking errors. We validate Spot-On using experimentally realistic simulations and show that Spot-On outperforms other methods. We then apply Spot-On to spaSPT data from live mammalian cells spanning a wide range of nuclear dynamics and demonstrate that Spot-On consistently and robustly infers subpopulation fractions and diffusion constants.
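Spot-On's actual model fits full jump-length distributions with multiple subpopulations and an out-of-focus correction; as a minimal illustration of the underlying kinetics only, the sketch below estimates a single diffusion constant from simulated 2D jumps using the relation E[r^2] = 4*D*dt. All parameter values are assumed.

```python
import numpy as np

def simulate_jumps(D, dt, n_jumps, rng):
    """2D Brownian jumps: each coordinate step ~ N(0, 2*D*dt)."""
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_jumps, 2))
    return np.linalg.norm(steps, axis=1)   # jump lengths r

def estimate_D(jump_lengths, dt):
    """For free 2D diffusion, E[r^2] = 4*D*dt, so D = <r^2> / (4*dt)."""
    return np.mean(jump_lengths ** 2) / (4 * dt)

rng = np.random.default_rng(0)
dt = 0.01        # frame interval in s (assumed)
D_true = 2.5     # um^2/s (assumed)
r = simulate_jumps(D_true, dt, 50_000, rng)
D_hat = estimate_D(r, dt)
print(f"true D = {D_true}, estimated D = {D_hat:.2f}")
```

Real SPT data mixes bound and freely diffusing subpopulations, which is why Spot-On fits a multi-state model to the full jump-length distribution rather than using this single-population moment estimate.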
Imaging Flash Lidar for Autonomous Safe Landing and Spacecraft Proximity Operation
3-D Imaging flash lidar is recognized as a primary candidate sensor for safe precision landing on solar system bodies (Moon, Mars, Jupiter and Saturn moons, etc.), and for the autonomous rendezvous, proximity operations, and docking/capture necessary for asteroid sample return and redirect missions, spacecraft docking, satellite servicing, and space debris removal. During the final stages of landing, from about 1 km to 500 m above the ground, the flash lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D terrain map to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station from several kilometers distance. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16k-pixel range images with 7 cm precision, at a 20 Hz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument design and capabilities as demonstrated by the closed-loop flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus). Then a plan for continued advancement of the flash lidar technology will be explained. This proposed plan is aimed at the development of a common sensor that, with a modest design adjustment, can meet the needs of both landing and proximity-operation and docking applications.
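One of the hazard features named above, steep slope, can be detected from a 3-D terrain map by fitting a plane to a local patch of points and thresholding its inclination. A minimal sketch under assumed geometry and an assumed safety threshold (the flight system's actual hazard-detection algorithm is not described here):

```python
import numpy as np

def patch_slope_deg(xs, ys, zs):
    """Least-squares plane z = a*x + b*y + c; slope angle = atan(sqrt(a^2+b^2))."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)
    return np.degrees(np.arctan(np.hypot(a, b)))

# Synthetic 10 m x 10 m terrain patches sampled on a grid (assumed geometry)
x, y = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
z_flat = 0.01 * x    # ~0.6 deg incline: safe
z_steep = 0.3 * x    # ~16.7 deg incline: hazardous
safe = patch_slope_deg(x.ravel(), y.ravel(), z_flat.ravel())
steep = patch_slope_deg(x.ravel(), y.ravel(), z_steep.ravel())
SLOPE_LIMIT_DEG = 10.0   # assumed safety threshold
print(safe < SLOPE_LIMIT_DEG, steep < SLOPE_LIMIT_DEG)
```

Sliding this patch fit across the terrain map yields a slope map, from which the flight computer could mask out hazardous regions when selecting a landing site.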
Retrospective Motion Correction in Magnetic Resonance Imaging of the Brain
Magnetic Resonance Imaging (MRI) is a tremendously useful diagnostic imaging modality that provides outstanding soft tissue contrast. However, subject motion is a significant unsolved problem; motion during image acquisition can cause blurring and distortions in the image, limiting its diagnostic utility. Current techniques for addressing head motion include optical tracking, which can be impractical in clinical settings due to challenges associated with camera cross-calibration and marker fixation. Another category of techniques is MRI navigators, which use specially acquired MRI data to track the motion of the head.
This thesis presents two techniques for motion correction in MRI: the first is spherical navigator echoes (SNAVs), which are rapidly acquired k-space navigators. The second is a deep convolutional neural network trained to predict an artefact-free image from motion-corrupted data.
Prior to this thesis, SNAVs had been demonstrated for motion measurement but not motion correction, and they required the acquisition of a 26 s baseline scan during which the subject could not move. In this work, a novel baseline approach is developed where the acquisition is reduced to 2.6 s. Spherical navigators were interleaved into a spoiled gradient echo sequence (SPGR) on a stand-alone MRI system and a turbo-FLASH sequence (tfl) on a hybrid PET/MRI system to enable motion measurement throughout image acquisition. The SNAV motion measurements were then used to retrospectively correct the image data.
While MRI navigator methods, particularly SNAVs that can be acquired very rapidly, are useful for motion correction, they do require pulse sequence modifications. A deep learning technique may be a more general solution. In this thesis, a conditional generative adversarial network (cGAN) is trained to perform motion correction on image data with simulated motion artefacts. We simulate motion in previously acquired brain images and use the image pairs (corrupted + original) to train the cGAN.
MR image data was qualitatively and quantitatively improved following correction using the SNAV motion estimates. This was also true for the simultaneously acquired MR and PET data on the hybrid system. Motion-corrected images were more similar to the no-motion reference images than the uncorrected images were. The deep learning approach was also successful for motion correction. The trained cGAN was evaluated on 5 subjects, and artefact suppression was observed in all images.
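The cGAN training strategy above depends on simulating motion-corrupted images from clean ones. A common way to do this, sketched below under simplifying assumptions (a single abrupt in-plane translation, Cartesian sampling; parameter names are invented and this is not necessarily the thesis's exact simulation), is to apply the Fourier shift theorem to the k-space lines acquired after the motion occurs:

```python
import numpy as np

def corrupt_with_motion(img, shift_px, start_line):
    """Simulate an abrupt in-plane translation partway through a Cartesian
    acquisition: k-space lines acquired after `start_line` receive the phase
    ramp corresponding to shifting the object by `shift_px` pixels
    (Fourier shift theorem). Illustrative model with assumed parameters."""
    k = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]   # cycles/sample
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    ramp = np.exp(-2j * np.pi * (ky * shift_px[0] + kx * shift_px[1]))
    k[start_line:, :] *= ramp[start_line:, :]           # "post-motion" lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                                 # toy phantom
corrupted = corrupt_with_motion(img, shift_px=(3, 0), start_line=40)
# corrupted shows ghosting/blurring along the phase-encode direction
```

Pairs of (corrupted, original) images generated this way provide the supervised training data for an artefact-removal network such as the cGAN described above.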
Investigation of the XCAT phantom as a validation tool in cardiac MRI tracking algorithms.
PURPOSE: To describe our magnetic resonance imaging (MRI) simulated implementation of the 4D digital extended cardio torso (XCAT) phantom to validate our previously developed cardiac tracking techniques. Real-time tracking will play an important role in the non-invasive treatment of atrial fibrillation with MRI-guided radiosurgery. In addition, to show how quantifiable measures of tracking accuracy and patient-specific physiology could influence MRI tracking algorithm design. METHODS: Twenty virtual patients were subjected to simulated MRI scans that closely model the proposed real-world scenario to allow verification of the tracking technique's algorithm. The generated phantoms provide ground-truth motions which were compared to the target motions output from our tracking algorithm. The patient-specific tracking error, ep, was the 3D difference (vector length) between the ground-truth and algorithm trajectories. The tracking errors of two combinations of new tracking algorithm functions that were anticipated to improve tracking accuracy were studied. Additionally, the correlation of key physiological parameters with tracking accuracy was investigated. RESULTS: Our original cardiac tracking algorithm resulted in a mean tracking error of 3.7 ± 0.6 mm over all virtual patients. The two combinations of tracking functions demonstrated comparable mean tracking errors, however, indicating that the optimal tracking algorithm may be patient-specific. CONCLUSIONS: Current and future MRI tracking strategies are likely to benefit from this virtual validation method since no time-resolved 4D ground-truth signal can currently be derived from purely image-based studies.
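The error metric defined in the abstract, the per-frame 3D vector length between ground-truth and algorithm trajectories, can be sketched directly (the toy trajectories below are invented for illustration):

```python
import numpy as np

def mean_tracking_error(ground_truth, tracked):
    """Per-frame tracking error = Euclidean (3D vector-length) distance
    between ground-truth and tracked positions; returns the mean over
    the trajectory, in the same units as the inputs (mm here)."""
    return np.mean(np.linalg.norm(ground_truth - tracked, axis=1))

gt = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 1.0]])  # mm
tr = np.array([[0.0, 0.0, 1.0], [1.0, 2.0, 0.0], [2.0, 0.0, 1.0]])  # mm
print(mean_tracking_error(gt, tr))  # per-frame errors 1, 1, 0 -> mean ~0.667
```

Averaging this quantity over each virtual patient's trajectory gives the patient-specific error ep, and averaging over all twenty patients gives summary figures like the 3.7 ± 0.6 mm reported above.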
Cardiac cine magnetic resonance fingerprinting for combined ejection fraction, T1 and T2 quantification
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/156191/2/nbm4323_am.pdf
http://deepblue.lib.umich.edu/bitstream/2027.42/156191/1/nbm4323.pd