
    Delta Advanced Reusable Transport (DART): An alternative manned spacecraft

    Get PDF
    Although the current U.S. Space Transportation System (STS) has proven successful in many applications, the truth remains that the space shuttle is not as reliable or economical as was once hoped. In fact, the Augustine Commission on the future of the U.S. Space Program has recommended that the space shuttle be used only on missions directly requiring human capabilities on-orbit and that the shuttle program should eventually be phased out. This poses a great dilemma, since the shuttle provides the only current or planned U.S. means for human access to space at the same time that NASA is building toward a permanent manned presence. As a possible solution to this dilemma, it is proposed that the U.S. begin development of an Alternative Manned Spacecraft (AMS). This spacecraft would not only provide follow-on capability for maintaining human space flight, but would also provide redundancy and enhanced capability in the near future. Design requirements for the AMS studied include: (1) capability of launching on one of the current or planned U.S. expendable launch vehicles (baseline McDonnell Douglas Delta II model 7920 expendable booster); (2) application to a wide variety of missions, including autonomous operations, space station support, and access to orbits and inclinations beyond those of the space shuttle; (3) cost low enough to fly regularly in augmentation of space shuttle capabilities; (4) production surge capability to replace the shuttle if events require it; (5) intact abort capability in all flight regimes, since the planned launch vehicles are not man-rated; (6) a technology cut-off date of 1990; and (7) initial operational capability in 1995. In addition, the design of the AMS would take advantage of scientific advances made in the 20 years since the space shuttle was first conceived. These advances are in such technologies as composite materials, propulsion systems, avionics, and hypersonics.

    Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey

    Full text link
    The Internet of Underwater Things (IoUT) is an emerging communication ecosystem developed for connecting underwater objects in maritime and underwater environments. The IoUT technology is intricately linked with intelligent boats and ships, smart shores and oceans, automatic marine transportation, positioning and navigation, underwater exploration, disaster prediction and prevention, as well as with intelligent monitoring and security. The IoUT has an influence at various scales, ranging from a small scientific observatory, to a midsized harbor, to covering global oceanic trade. The network architecture of IoUT is intrinsically heterogeneous and should be sufficiently resilient to operate in harsh environments. This creates major challenges in terms of underwater communications, whilst relying on limited energy resources. Additionally, the volume, velocity, and variety of data produced by sensors, hydrophones, and cameras in IoUT is enormous, giving rise to the concept of Big Marine Data (BMD), which has its own processing challenges. Hence, conventional data processing techniques will falter, and bespoke Machine Learning (ML) solutions have to be employed for automatically learning the specific BMD behavior and features, facilitating knowledge extraction and decision support. The motivation of this paper is to comprehensively survey the IoUT, BMD, and their synthesis. It also aims to explore the nexus of BMD with ML. We set out from underwater data collection and then discuss the family of IoUT data communication techniques, with an emphasis on state-of-the-art research challenges. We then review the suite of ML solutions suitable for BMD handling and analytics. We treat the subject deductively from an educational perspective, critically appraising the material surveyed. Comment: 54 pages, 11 figures, 19 tables, IEEE Communications Surveys & Tutorials, peer-reviewed academic journal.

    GeXSe (Generative Explanatory Sensor System): An Interpretable Deep Generative Model for Human Activity Recognition in Smart Spaces

    Full text link
    We introduce GeXSe (Generative Explanatory Sensor System), a novel framework designed to extract interpretable sensor-based and vision-domain features from non-invasive smart space sensors. We combine these to provide a comprehensive explanation of sensor-activation patterns in activity recognition tasks. This system leverages advanced machine learning architectures, including transformer blocks, Fast Fourier Convolution (FFC), and diffusion models, to provide a more detailed understanding of sensor-based human activity data. A standout feature of GeXSe is our unique Multi-Layer Perceptron (MLP) with linear, ReLU, and normalization layers, specially devised for optimal performance on small datasets. It also yields meaningful activation maps to explain sensor-based activation patterns. The standard approach is based on a CNN model, which our MLP model outperforms. GeXSe offers two types of explanations: sensor-based activation maps and visual-domain explanations using short videos. These methods offer a comprehensive interpretation of the output from non-interpretable sensor data, thereby augmenting the interpretability of our model. Using the Fréchet Inception Distance (FID) for evaluation, it outperforms established methods, improving baseline performance by about 6%. GeXSe also achieves a high F1 score of up to 0.85, demonstrating precision, recall, and noise resistance, marking significant progress in reliable and explainable smart space sensing systems. Comment: 29 pages, 17 figures.
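    The MLP block described above stacks linear, normalization, and ReLU layers. A minimal NumPy sketch of one such block follows; the layer sizes, the placement of normalization before the ReLU, and the layer-norm formulation are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each sample's feature vector to zero mean, unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def mlp_block(x, w, b):
    # One linear -> normalization -> ReLU block.
    z = x @ w + b
    return np.maximum(layer_norm(z), 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))               # 4 samples, 16 sensor features
w1, b1 = rng.normal(size=(16, 32)), np.zeros(32)
h = mlp_block(x, w1, b1)
print(h.shape)                             # (4, 32)
```

Blocks like this can be chained, with the final linear layer's width set to the number of activity classes.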

    Smart Technology for Telerehabilitation: A Smart Device Inertial-sensing Method for Gait Analysis

    Get PDF
    The aim of this work was to develop and validate an iPod Touch (4th generation) as a potential ambulatory monitoring system for clinical and non-clinical gait analysis. This thesis comprises four interrelated studies, the first overviews the current available literature on wearable accelerometry-based technology (AT) able to assess mobility-related functional activities in subjects with neurological conditions in home and community settings. The second study focuses on the detection of time-accurate and robust gait features from a single inertial measurement unit (IMU) on the lower back, establishing a reference framework in the process. The third study presents a simple step length algorithm for straight-line walking and the fourth and final study addresses the accuracy of an iPod’s inertial-sensing capabilities, more specifically, the validity of an inertial-sensing method (integrated in an iPod) to obtain time-accurate vertical lower trunk displacement measures. The systematic review revealed that present research primarily focuses on the development of accurate methods able to identify and distinguish different functional activities. While these are important aims, much of the conducted work remains in laboratory environments, with relatively little research moving from the “bench to the bedside.” This review only identified a few studies that explored AT’s potential outside of laboratory settings, indicating that clinical and real-world research significantly lags behind its engineering counterpart. In addition, AT methods are largely based on machine-learning algorithms that rely on a feature selection process. However, extracted features depend on the signal output being measured, which is seldom described. It is, therefore, difficult to determine the accuracy of AT methods without characterizing gait signals first. 
Furthermore, much variability exists among approaches (including the number of body-fixed sensors and sensor locations) to obtain useful data for analyzing human movement. From an end-user’s perspective, reducing the number of sensors to one instrument attached to a single location on the body would greatly simplify the design and use of the system. With this in mind, the accuracy of formerly identified gait events from a single IMU attached to the lower trunk was explored. The study’s analysis of the trunk’s vertical and anterior-posterior acceleration patterns (and of their integrands) demonstrates that a combination of both signals may provide more nuanced information regarding a person’s gait cycle, ultimately permitting more clinically relevant gait features to be extracted. Going one step further, a modified step length algorithm based on a pendulum model of the swing leg was proposed. By incorporating the trunk’s anterior-posterior displacement, more accurate predictions of mean step length can be made in healthy subjects at self-selected walking speeds. Experimental results indicate that the proposed algorithm estimates step length with errors of less than 3% (mean error of 0.80 ± 2.01 cm). The performance of this algorithm, however, still needs to be verified for those suffering from gait disturbances. Having established a referential framework for the extraction of temporal gait parameters, as well as an algorithm for step length estimation from one instrument attached to the lower trunk, the fourth and final study explored the inertial-sensing capabilities of an iPod Touch. With the help of Dr. Ian Sheret and Oxford Brookes’ spin-off company ‘Wildknowledge’, a smart application for the iPod Touch was developed.
The study results demonstrate that the proposed inertial-sensing method can reliably derive lower trunk vertical displacement (intraclass correlations ranging from .80 to .96) with agreement levels similar to those gathered by a conventional inertial sensor (a small systematic error of 2.2 mm and a typical error of 3 mm). By incorporating the aforementioned methods, an iPod Touch can potentially serve as a novel ambulatory monitoring system capable of assessing gait in clinical and non-clinical environments.
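    Pendulum-model step length estimation of the kind referenced above is commonly based on inverted-pendulum geometry: as the body pivots over the stance leg, the trunk rises by the vertical excursion h, and step length follows from h and the leg length l. A minimal sketch of that classic estimate (the thesis's modification incorporating anterior-posterior displacement is not reproduced here, and the input values are purely illustrative):

```python
import math

def step_length_pendulum(leg_length_m, vertical_disp_m):
    # Inverted-pendulum estimate: the trunk rises by h while pivoting
    # over a stance leg of length l, giving step length 2*sqrt(2*l*h - h^2).
    l, h = leg_length_m, vertical_disp_m
    return 2.0 * math.sqrt(2.0 * l * h - h * h)

# Illustrative values: 0.9 m leg length, 3 cm vertical trunk excursion.
print(round(step_length_pendulum(0.9, 0.03), 3))  # → 0.461
```

Because the estimate depends only on trunk excursion and a fixed leg length, it can be computed from the same single lower-trunk IMU used for the temporal gait features.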

    Building an Understanding of Human Activities in First Person Video using Fuzzy Inference

    Get PDF
    Activities of Daily Living (ADL’s) are the activities that people perform every day in their home as part of their typical routine. The in-home, automated monitoring of ADL’s has broad utility for intelligent systems that enable independent living for the elderly and mentally or physically disabled individuals. With rising interest in electronic health (e-Health) and mobile health (m-Health) technology, opportunities abound for the integration of activity monitoring systems into these newer forms of healthcare. In this dissertation we propose a novel system for describing ADL’s based on video collected from a wearable camera. Most in-home activities are naturally defined by interaction with objects. We leverage these object-centric activity definitions to develop a set of rules for a Fuzzy Inference System (FIS) that uses video features and the identification of objects to identify and classify activities. Further, we demonstrate that the use of an FIS enhances the reliability of the system and provides enhanced explainability and interpretability of results over popular machine-learning classifiers, due to the linguistic nature of fuzzy systems.
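    The object-centric fuzzy-rule idea can be illustrated with a toy example: fuzzify per-object interaction features, fire linguistic rules, and take the strongest conclusion. The membership-function shape, the features (kettle and faucet dwell times, in seconds), and the rules below are hypothetical stand-ins, not the dissertation's actual rule base:

```python
def tri(x, a, b, c):
    # Triangular membership function on [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(kettle_time, faucet_time):
    # Hypothetical object-centric rules:
    #   IF kettle interaction is LONG THEN activity is "making tea"
    #   IF faucet interaction is LONG THEN activity is "washing dishes"
    scores = {
        "making tea": tri(kettle_time, 5, 30, 60),
        "washing dishes": tri(faucet_time, 5, 30, 60),
    }
    return max(scores, key=scores.get)

print(classify(kettle_time=25, faucet_time=3))   # → making tea
```

Because each rule is a readable linguistic statement, the firing strengths themselves explain why an activity was chosen, which is the interpretability advantage the abstract attributes to fuzzy systems.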

    Object detection, distributed cloud computing and parallelization techniques for autonomous driving systems.

    Get PDF
    Autonomous vehicles are increasingly becoming a necessary trend towards building the smart cities of the future. Numerous proposals have been presented in recent years to tackle particular aspects of the working pipeline towards creating a functional end-to-end system, such as object detection, tracking, path planning, and sentiment or intent detection, amongst others. Nevertheless, few efforts have been made to systematically compile all of these systems into a single proposal that also considers the real challenges these systems will face on the road, such as real-time computation, hardware capabilities, etc. This paper reviews the latest techniques towards creating our own end-to-end autonomous vehicle system, considering the state-of-the-art methods on object detection and the possible incorporation of distributed systems and parallelization to deploy these methods. Our findings show that while techniques such as convolutional neural networks, recurrent neural networks, and long short-term memory can effectively handle the initial detection and path planning tasks, more effort is required to implement cloud computing to reduce the computational time that these methods demand. Additionally, we have mapped different strategies to handle the parallelization task, both within and between the networks.

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    Get PDF
    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise model of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degenerate to accuracy levels lower than their design specification. This observation necessitates the derivation of methods to integrate multi-sensor data considering sensor conflict, performance degradation and potential failure during operation. Our work in this dissertation contributes the derivation of a fault-diagnosis framework inspired by information complexity theory to the data fusion literature. We implement the framework as opportunistic sensing intelligence that is able to evolve a belief policy on the sensors within the multi-agent 3D mapping systems to survive and counter concerns of failure in challenging operating conditions. The implementation of the information-theoretic framework, in addition to eliminating failed/non-functional sensors and avoiding catastrophic fusion, is able to minimize uncertainty during autonomous operation by adaptively deciding to fuse or choose believable sensors. 
We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, along with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
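    One simple instance of "deciding to fuse or choose believable sensors" is inverse-variance weighting behind a validity gate: discard sensors whose reported uncertainty exceeds their design specification, then combine the rest with weights proportional to their precision. This is a deliberately simplified sketch, not the dissertation's information-theoretic framework; the threshold and noise figures are illustrative:

```python
def fuse(readings, variances, max_var=1.0):
    # Keep only sensors whose reported variance is within spec,
    # then combine them by inverse-variance weighting.
    kept = [(z, v) for z, v in zip(readings, variances) if v <= max_var]
    if not kept:
        raise ValueError("no believable sensors")
    w = [1.0 / v for _, v in kept]
    est = sum(z * wi for (z, _), wi in zip(kept, w)) / sum(w)
    var = 1.0 / sum(w)          # fused uncertainty shrinks as sensors agree
    return est, var

# Two healthy sensors near 2.0; a third has degenerated far past spec.
est, var = fuse([2.0, 2.2, 9.0], [0.1, 0.2, 5.0], max_var=1.0)
print(round(est, 3))            # → 2.067 (degraded sensor rejected)
```

Gating before fusing is what prevents the catastrophic-fusion case the abstract describes, where a single failed sensor would otherwise drag the combined estimate far from the truth.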