108 research outputs found

    Air Force Institute of Technology Research Report 2016

    This Research Report presents the FY16 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs).

    Anatomy: The Relationship Between Internal and External Visualizations

    This dissertation explored the relationship between internal and external visualizations and the implications of this relationship for comprehending visuospatial anatomical information. External visualizations comprised different computer representations of anatomical structures, including static, animated, non-interactive, interactive, non-stereoscopic, and stereoscopic visualizations. Internal visualizations involved examining participants' ability to apprehend, encode, and manipulate mental representations (i.e., spatial visualization ability, or Vz). Comprehension was measured with a novel spatial anatomy task (SAT) that involved mental manipulation of anatomical structures in three dimensions and in two-dimensional cross-sections. It was hypothesized that performance on the spatial anatomy task would involve a trade-off between the internal and external visualizations available to the learner. Results from experiments 1, 2, and 3 demonstrated that, in the absence of computer visualizations, spatial visualization ability (Vz) was the main contributor to variation in spatial anatomy task performance: subjects with high Vz scored higher, spent less time, and were more accurate than those with low Vz. In the presence of external computer visualizations, variation in task performance was attributed to both Vz and the visuospatial characteristics of the computer visualization. While static representations improved the performance of high- and low-Vz subjects equally, animations particularly benefited high-Vz subjects, whose mean SAT score was significantly higher than that of low-Vz subjects. The addition of interactivity and stereopsis to the displays offered no advantage over non-interactive and non-stereoscopic visualizations: interactive, non-interactive, stereoscopic, and non-stereoscopic visualizations improved the performance of high- and low-Vz subjects equally. It was concluded that comprehension of visuospatial anatomical information involves a trade-off between the perception of external visualizations and the ability to maintain and manipulate internal visualizations. There is a common belief that increasing the educational effectiveness of computer visualizations is merely a question of making them dynamic, interactive, and/or realistic. However, experiments 1, 2, and 3 clearly demonstrate that this is not the case: the benefits of computer visualizations vary according to learner characteristics, particularly spatial visualization ability.

    Data management study. Volume 3 - Lunar/earth data bank study

    Lunar/earth data bank study, with user information requirements and system design concepts.

    Ultraviolet disinfection of schistosome cercariae in water

    Schistosomiasis is a tropical disease that is contracted through skin contact with water containing cercariae, the larvae of the Schistosoma parasite. Providing safe water for contact activities (e.g. laundry, bathing) can help reduce transmission. UV disinfection is a widely used form of water treatment, and its application in low-income settings is now becoming a reality through solar-powered units. This thesis examines the effectiveness of UV LED disinfection against schistosome cercariae for use in endemic regions. A systematic review revealed that low UV fluences (5–14 mJ/cm² at 253.7 nm) were required to achieve a 1-log10 reduction in worm burden in animal hosts, but the direct effect on cercariae had not been quantified. There were insufficient published data to produce UV disinfection guidelines, so experiments were carried out to determine the fluence-response of Schistosoma mansoni cercariae at four peak wavelengths in the germicidal range (253.7 nm, 255 nm, 265 nm, and 285 nm). The morphology and motility of cercariae were studied under the microscope to determine whether they were alive or dead. At the most effective wavelength (265 nm), 247 mJ/cm² was required to achieve a 1-log10 reduction in live cercariae, but this fell to 127 mJ/cm² and 99 mJ/cm² when samples were stored for 1 and 3 hours post-exposure, respectively. These fluences were much higher than those required to achieve the same reduction in worm burden in previous studies, and further research was needed to investigate the potential disinfection mechanisms. Using an immunological assay to detect dimer formation, it was found that DNA was damaged at lower fluences (10–50 mJ/cm², depending on wavelength), but this damage would not be expressed until cercariae penetrate the skin and transform into schistosomula. Further research is required to confirm whether cercariae are non-viable at these fluences; until then, a conservative approach based on the death of cercariae is appropriate. Due to the high fluences required to kill cercariae, UV disinfection alone is unlikely to be an energy- or cost-efficient water treatment method for combating schistosomiasis; however, improvements in efficiency combined with cheaper production costs may make UV LED technology more competitive in the near future.
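    As a rough, illustrative aside on the fluence-response analysis described above: assuming first-order (log-linear) inactivation kinetics, the fluence required for a 1-log10 reduction can be estimated from survival counts as sketched below. The counts, fluences, and the resulting rate constant in this sketch are made-up placeholders, not data from the thesis.

```python
# Illustrative sketch (not from the thesis): estimating the fluence needed for a
# 1-log10 reduction, assuming first-order (log-linear) inactivation kinetics.
# The fluence/survival values below are placeholders, not measured data.
import numpy as np

fluence = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])   # mJ/cm^2
survivors = np.array([100, 82, 63, 48, 35, 25])                # live cercariae counted

log_reduction = np.log10(survivors[0] / survivors)             # log10(N0/N) at each fluence

# Fit log_reduction = k * fluence (least squares, forced through the origin).
k = np.sum(fluence * log_reduction) / np.sum(fluence ** 2)     # per (mJ/cm^2)

fluence_1log = 1.0 / k                                         # fluence for a 1-log10 reduction
print(f"Inactivation rate constant: {k:.4f} per mJ/cm^2")
print(f"Estimated fluence for 1-log10 reduction: {fluence_1log:.0f} mJ/cm^2")
```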

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment in order to accomplish a given task, vision is a highly informative exteroceptive sensory source. When gathering information from the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows either methods that exploit all of the data (dense approaches) or methods that use a reduced set obtained through feature-extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. In parallel, sparse visual data are extracted as geometric primitives in order to implement a visual servoing control scheme that enforces the desired navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to re-arrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant in other contexts as well. In surgical robotics, obtaining reliable estimates of quantities that cannot be measured directly is both important and critical. This manuscript presents a Kalman-based observer that estimates the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robotic platform to extract relevant geometrical information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and to provide ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose inputs are, in principle, arbitrary; there is no mechanism for actively adapting the input trajectories to optimize specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and several related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimate. The approach can be applied to any robotic platform and has been validated on a manipulator arm equipped with a monocular camera.
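    As an illustrative aside (not the observer described in the manuscript): the sketch below shows the basic structure of an extended Kalman filter that tracks a 3D point from its pixel projection in a monocular camera, assuming a pinhole model with known focal length and a simple random-walk motion model. All parameter values are placeholders.

```python
# Minimal illustrative sketch: EKF tracking of a 3D point (e.g. a needle tip)
# from its pixel projection in a monocular camera. Pinhole model, random-walk
# motion model, placeholder noise parameters.
import numpy as np

f = 800.0                      # focal length in pixels (assumed known)
Q = np.eye(3) * 1e-4           # process noise (random-walk motion model)
R = np.eye(2) * 4.0            # measurement noise (pixels^2)

x = np.array([0.0, 0.0, 0.5])  # state: 3D point [X, Y, Z] in the camera frame (metres)
P = np.eye(3) * 0.1            # state covariance

def project(p):
    """Pinhole projection of a 3D point to pixel coordinates (principal point at 0,0)."""
    X, Y, Z = p
    return np.array([f * X / Z, f * Y / Z])

def jacobian(p):
    """Jacobian of the projection with respect to the 3D point."""
    X, Y, Z = p
    return np.array([[f / Z, 0.0, -f * X / Z**2],
                     [0.0, f / Z, -f * Y / Z**2]])

def ekf_step(x, P, z):
    """One predict/update cycle given a pixel measurement z = [u, v]."""
    # Predict: the random-walk model leaves the state unchanged, inflates covariance.
    P_pred = P + Q
    # Update: linearize the projection around the predicted state.
    H = jacobian(x)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - project(x))
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

# Example: feed one (made-up) pixel measurement of the tracked point.
x, P = ekf_step(x, P, np.array([12.0, -3.0]))
print(x)
```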

    A real-time low-cost vision sensor for robotic bin picking

    This thesis presents an integrated vision-sensor approach for bin picking. The vision system that has been devised consists of three major components. The first addresses the implementation of a bifocal range sensor, which estimates depth by measuring the relative blurring between two images captured with different focal settings. A key element in the success of this approach is that it overcomes some of the limitations associated with related implementations, and the experimental results indicate that the sensor's precision is sufficient for a large variety of industrial applications. The second component deals with the implementation of an edge-based segmentation technique, applied in order to detect the boundaries of the objects that define the scene. An important issue for this segmentation technique is minimising the errors in the edge-detected output, an operation carried out by analysing the information associated with the singular edge points. The last component addresses object recognition and pose estimation using the information resulting from the segmentation algorithm. The recognition stage consists of matching the primitives derived from the scene regions, while pose estimation is addressed using an appearance-based approach augmented with a range-data analysis. The developed system is suitable for real-time operation and, in order to demonstrate the validity of the proposed approach, it has been examined on varying real-world scenes.
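    As an illustrative aside on the relative-blur cue behind such a bifocal range sensor (this is not the thesis algorithm): the sketch below compares a local sharpness measure between two registered grayscale images taken with different focal settings; a calibrated model would then map the per-patch ratio to depth.

```python
# Simplified depth-from-defocus cue: per-patch variance of the Laplacian as a
# sharpness measure, compared between two focal settings. Inputs are assumed to
# be registered grayscale float arrays of identical size; data here is synthetic.
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian computed with array shifts (borders left at zero)."""
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return lap

def patch_sharpness(img, patch=32):
    """Variance of the Laplacian over non-overlapping patches (higher = sharper)."""
    h, w = img.shape
    lap = laplacian(img)
    rows, cols = h // patch, w // patch
    out = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = lap[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            out[r, c] = block.var()
    return out

def relative_blur(img_near, img_far, patch=32, eps=1e-9):
    """Per-patch sharpness ratio between the two focal settings."""
    return patch_sharpness(img_near, patch) / (patch_sharpness(img_far, patch) + eps)

# Example with synthetic arrays standing in for the two captures.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))
print(relative_blur(a, b).shape)   # (8, 8) grid of relative-blur values
```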

    Orbiting Rainbows: Optical Manipulation of Aerosols and the Beginnings of Future Space Construction

    Our objective is to investigate the conditions needed to manipulate and maintain the shape of an orbiting cloud of dust-like matter so that it can function as an ultra-lightweight surface with useful and adaptable electromagnetic characteristics, for instance in the optical, RF, or microwave bands. Inspired by the light-scattering and focusing properties of distributed optical assemblies in nature, such as rainbows and aerosols, and by recent laboratory successes in optical trapping and manipulation, we propose a unique combination of space optics and autonomous robotic system technology to enable a new vision of space system architecture, with applications to ultra-lightweight space optics and, ultimately, in-situ space system fabrication. Typically, the cost of an optical system is driven by the size and mass of the primary aperture. The ideal system is a cloud of spatially disordered dust-like objects that can be optically manipulated: it is highly reconfigurable, fault-tolerant, and allows very large aperture sizes at low cost. See Figure 1 for an application scenario of this concept. The solution we propose is to construct an optical system in space in which the nonlinear optical properties of a cloud of micron-sized particles are shaped into a specific surface by light pressure, allowing it to form a very large and lightweight aperture of an optical system and hence reducing overall mass and cost. Other potential advantages offered by the cloud as an optical system include the possible combination of properties (combined transmit/receive), variable focal length, combined refractive and reflective lens designs, and hyperspectral imaging. A cloud of highly reflective micron-sized particles acting coherently in a specific electromagnetic band, just like an aerosol in suspension in the atmosphere, would reflect the Sun's light much like a rainbow; the only difference from an atmospheric or industrial aerosol is the absence of the supporting fluid medium. This new concept is based on recent advances in the physics of optical manipulation of small particles in the laboratory and on the engineering of distributed spacecraft ensembles to shape an orbiting cloud of micron-sized objects. In the same way that optical tweezers have revolutionized micro- and nano-manipulation of objects, our breakthrough concept will enable new large-scale NASA mission applications and develop new technology in the areas of astrophysical imaging systems and remote sensing, because the cloud can operate as an adaptive optical imaging sensor. While establishing the feasibility of constructing a single aperture out of the cloud is the main topic of this work, multiple orbiting aerosol lenses could also combine their power to synthesize a much larger aperture in space, enabling challenging goals such as exoplanet detection. Furthermore, this effort could establish the feasibility of key issues related to material properties, remote manipulation, and the autonomy characteristics of a cloud in orbit. Several kinds of science missions could be enabled by this approach: new astrophysical imaging systems, exoplanet searches, very large apertures offering unprecedented resolution to discern continents and other important features of other planets, hyperspectral imaging, adaptive systems, spectroscopic imaging through a planetary limb, and stable optical systems operating from Lagrange points. Future micro-miniaturization might hold the promise of extending our dust-aperture concept to other, more ambitious smart-dust concepts with additional capabilities.
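    As a back-of-envelope illustration of the light-pressure forces involved (not taken from the report): for a perfectly reflecting particle at 1 AU, the solar radiation-pressure force follows F = 2IA/c. The particle radius and density below are assumed values chosen only for scale.

```python
# Order-of-magnitude estimate of solar radiation pressure on one micron-sized,
# perfectly reflecting particle at 1 AU. Particle radius and density are assumptions.
import math

I = 1361.0            # solar irradiance at 1 AU, W/m^2
c = 2.998e8           # speed of light, m/s
r = 1e-6              # particle radius, m (assumed)
rho = 2500.0          # particle density, kg/m^3 (assumed, silica-like)

A = math.pi * r**2                        # cross-sectional area, m^2
F = 2.0 * I * A / c                       # force for a perfect reflector, N
m = rho * (4.0 / 3.0) * math.pi * r**3    # particle mass, kg
a = F / m                                 # resulting acceleration, m/s^2

print(f"Force: {F:.2e} N, acceleration: {a:.2e} m/s^2")
```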

    Cartography

    Terrestrial space is where natural and social systems interact. Cartography is an essential tool for understanding the complexity of these systems, their interaction, and their evolution, which gives it an important place in the modern world. The book presents contributions from different areas and activities, showing the importance of cartography for perceiving and organizing the territory. Whether learning from the past or understanding the present, cartography is presented as a way of looking at almost all fields of knowledge.