
    The 2016 UK Space Agency Mars Utah Rover Field Investigation (MURFI)

    The 2016 Mars Utah Rover Field Investigation (MURFI) was a Mars rover field trial run by the UK Space Agency in association with the Canadian Space Agency's 2015/2016 Mars Sample Return Analogue Deployment mission. MURFI had over 50 participants from 15 different institutions around the UK and abroad. The objectives of MURFI were to develop experience and leadership within the UK in running future rover field trials; to prepare the UK planetary community for involvement in the European Space Agency/Roscosmos ExoMars 2020 rover mission; and to assess how ExoMars operations may differ from previous rover missions. Hence, the wider MURFI trial included a ten-day (or ten-'sol') ExoMars rover-like simulation. This comprised an operations team and control centre in the UK, and a rover platform in Utah equipped with instruments to emulate the ExoMars rover's remote sensing and analytical suite. The operations team operated in 'blind mode', where the only available data came from the rover instruments, and daily tactical planning was performed under strict time constraints to simulate real communications windows. The designated science goal of the MURFI ExoMars rover-like simulation was to locate in-situ bedrock, at a site suitable for sub-surface core-sampling, in order to detect signs of ancient life. Prior to "landing", the only information available to the operations team was Mars-equivalent satellite remote sensing data, which was used for both geologic and hazard (e.g., slopes, loose soil) characterisation of the area. During each sol of the mission, the operations team sent driving instructions and imaging/analysis targeting commands, which were then enacted by the field team and rover-controllers in Utah. During the ten-sol mission, the rover drove over 100 m and obtained hundreds of images and supporting observations, allowing the operations team to build up geologic hypotheses for the local area and select possible drilling locations. On sol 9, the team obtained a subsurface core sample that was then analyzed by the Raman spectrometer. Following the conclusion of the ExoMars-like component of MURFI, the operations and field teams came together to evaluate the successes and failures of the mission and to discuss lessons learnt for the ExoMars rover and future field trials. Key outcomes relevant to the ExoMars rover included recognition of the importance of field trials for (i) understanding how to operate the ExoMars rover instruments as a suite, (ii) building an operations planning team that can work well together under strict time-limited pressure, (iii) developing new processes and workflows relevant to the ExoMars rover, (iv) understanding the limits and benefits of satellite mapping, and (v) practicing efficient geological interpretation of outcrops and landscapes from rover-based data, by comparing the outcomes of the simulated mission with post-trial, in-situ field observations. In addition, MURFI was perceived by all who participated as a vital learning experience, especially for early- and mid-career members of the team, and also demonstrated the UK's capability to implement a large rover field trial. The lessons learnt from MURFI are therefore relevant both to the ExoMars rover and to future rover field trials.

    Automating Software Development for Mobile Computing Platforms

    Mobile devices such as smartphones and tablets have become ubiquitous in today's computing landscape. These devices have ushered in entirely new populations of users, and mobile operating systems are now outpacing more traditional desktop systems in terms of market share. The applications that run on these mobile devices (often referred to as "apps") have become a primary means of computing for millions of users and, as such, have garnered immense developer interest. These apps allow for unique, personal software experiences through touch-based UIs and a complex assortment of sensors. However, designing and implementing high-quality mobile apps can be a difficult process. This is primarily due to challenges unique to mobile development, including change-prone APIs and platform fragmentation, to name a few. In this dissertation we develop techniques that aid developers in overcoming these challenges by automating and improving current software design and testing practices for mobile apps. More specifically, we first introduce a technique, called Gvt, that improves the quality of graphical user interfaces (GUIs) for mobile apps by automatically detecting instances where a GUI was not implemented to its intended specifications. Gvt does this by constructing hierarchical models of mobile GUIs from metadata associated with both graphical mock-ups (i.e., created by designers using photo-editing software) and running instances of the GUI from the corresponding implementation. Second, we develop an approach that completely automates prototyping of GUIs for mobile apps. This approach, called ReDraw, is able to transform an image of a mobile app GUI into runnable code by detecting discrete GUI components using computer vision techniques, classifying these components into proper functional categories (e.g., button, dropdown menu) using a Convolutional Neural Network (CNN), and assembling these components into realistic code. Finally, we design a novel approach for automated testing of mobile apps, called CrashScope, that explores a given Android app using systematic input generation with the intrinsic goal of triggering crashes. The GUI-based input generation engine is driven by a combination of static and dynamic analyses that create a model of an app's GUI and targets common, empirically derived root causes of crashes in Android apps. We illustrate that the techniques presented in this dissertation represent significant advancements in mobile development processes through a series of empirical investigations, user studies, and industrial case studies that demonstrate the effectiveness of these approaches and the benefit they provide developers.
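
    As a sketch of the Gvt-style design-violation check described above, the fragment below diffs a mock-up GUI hierarchy against a runtime hierarchy and flags components whose bounds drift beyond a tolerance. The Node structure, component ids, and the 10-pixel tolerance are illustrative assumptions, not Gvt's actual data model.

        # Minimal sketch: diff a design mock-up hierarchy against a runtime
        # GUI hierarchy and report components implemented off-specification.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            id: str                       # stable component identifier (assumed)
            bounds: tuple                 # (x, y, width, height) in pixels
            children: list = field(default_factory=list)

        def flatten(node):
            """Yield every component in the hierarchy, depth-first."""
            yield node
            for child in node.children:
                yield from flatten(child)

        def report_violations(mockup, implementation, tol=10):
            """Pair components by id; report bounds differing by > tol pixels."""
            impl = {n.id: n for n in flatten(implementation)}
            violations = []
            for m in flatten(mockup):
                r = impl.get(m.id)
                if r is None:
                    violations.append((m.id, "missing in implementation"))
                elif any(abs(a - b) > tol for a, b in zip(m.bounds, r.bounds)):
                    violations.append((m.id, f"bounds {r.bounds} != design {m.bounds}"))
            return violations

        mock = Node("root", (0, 0, 360, 640), [Node("btn", (20, 580, 320, 48))])
        impl = Node("root", (0, 0, 360, 640), [Node("btn", (20, 560, 320, 48))])
        print(report_violations(mock, impl))  # flags the shifted button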

    Opportunistic Angle of Arrival Estimation in Impaired Scenarios

    This work is focused on the analysis and development of Angle of Arrival (AoA) radio localization methods. The radio positioning system considered is constituted by a radio source and by a receiving array of antennas. The positioning algorithms treated in this work are designed to have a passive and opportunistic approach. The opportunistic attribute implies that the radio localization algorithms are designed to provide the AoA estimation with nearly zero information on the transmitted signals. No training sequences or waveforms custom designed for localization are taken into account. The localization is termed passive since there is no collaboration between the transmitter and the receiver during the localization process. The algorithms treated in this work are therefore designed to eavesdrop on already existing communication signals and to locate their radio source with nearly zero knowledge of the signal and without the collaboration of the transmitting node. First of all, AoA radio localization algorithms are classified in terms of involved signals (narrowband or broadband), antenna array pattern (L-shaped, circular, etc.), signal structure (sinusoidal, training sequences, etc.), Differential Time of Arrival (D-ToA) / Differential Phase of Arrival (D-PoA), and collaborative/non-collaborative operation. Then, the most detrimental effects for radio communications are treated: multipath (MP) channels and impaired hardware. A geometric model for the MP is analysed and implemented to test the robustness of the proposed methods. The effects of MP on the received-signal statistics from the AoA estimation point of view are discussed. The hardware impairments for the most common components are introduced and their effects on the AoA estimation process are analysed. Two novel algorithms that exploit the AoA from signal snapshots acquired sequentially with a time-division approach are presented. The acquired signals are QAM waveforms eavesdropped from a pre-existing communication. The proposed methods, namely Constellation Statistical Pattern IDentification and Overlap (CSP-IDO) and Bidimensional CSP-IDO (BCID), exploit the probability density function (pdf) of the received signals to obtain the D-PoA. Both CSP-IDO and BCID use the statistical pattern of the received signals, exploiting the transmitter's statistical signature. Since the presence of hardware impairments modifies the statistical pattern of the received signals, CSP-IDO and BCID are able to exploit it to improve performance with respect to (w.r.t.) the ideal case. Since the proposed methods can be used with a switched antenna architecture, they are implementable with reduced hardware, unlike synchronous methods such as MUltiple SIgnal Classification (MUSIC) that are not applicable. Then, two iterative AoA estimation algorithms for the dynamic tracking of moving radio sources are implemented. Statistical methods, namely particle filters (PF), are used to implement the iterative tracking of the AoA from D-PoA measures in two different scenarios: automotive and Unmanned Aerial Vehicle (UAV). The AoA tracking of an electric car signalling with an IEEE 802.11p-like standard is implemented using a test-bed and real measurements elaborated with the proposed Particle Swarm Adaptive Scattering (PSAS) algorithm. The tracking of a UAV moving in 3D space is investigated by emulating the UAV trajectory using the proposed Confined Area Random Aerial Trajectory Emulator (CARATE) algorithm.
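
    To make the D-PoA idea concrete, the sketch below estimates the angle of arrival for a two-element array: the differential phase is taken from the cross-correlation of the two snapshots and mapped to an angle through the plane-wave relation sin(theta) = lambda * dphi / (2 * pi * d). The array geometry and synthetic signal are illustrative assumptions; CSP-IDO and BCID additionally exploit the received-signal pdf, which is not shown here.

        # Minimal sketch: differential-phase-of-arrival (D-PoA) AoA estimate.
        import numpy as np

        def aoa_from_snapshots(x0, x1, wavelength, spacing):
            """Estimate AoA (radians) from two synchronized complex snapshots."""
            dphi = np.angle(np.vdot(x0, x1))          # differential phase
            s = wavelength * dphi / (2 * np.pi * spacing)
            return np.arcsin(np.clip(s, -1.0, 1.0))   # clip guards against noise

        # Synthetic check: a unit-power waveform impinging from 20 degrees
        # on a half-wavelength-spaced antenna pair at 2.4 GHz.
        rng = np.random.default_rng(0)
        lam, d, theta = 0.125, 0.0625, np.deg2rad(20)
        sig = np.exp(1j * 2 * np.pi * rng.random(1000))
        x0 = sig + 0.01 * rng.normal(size=1000)
        x1 = sig * np.exp(1j * 2 * np.pi * d * np.sin(theta) / lam) \
             + 0.01 * rng.normal(size=1000)
        print(np.rad2deg(aoa_from_snapshots(x0, x1, lam, d)))  # ~20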

    A Machine Learning Enhanced Scheme for Intelligent Network Management

    Versatile networking services have a huge influence on daily life, while the amount and diversity of services cause high complexity in network systems. The network scale and complexity grow with increasing infrastructure apparatus, networking functions, network slices, and underlying architecture evolution. The conventional way to maintain such a large and complex platform is manual administration, which makes effective and insightful management troublesome. A feasible and promising scheme is to extract insightful information from the large volumes of network data produced. The goal of this thesis is to use learning-based algorithms inspired by the machine learning community to discover valuable knowledge from substantial network data, which directly promotes intelligent management and maintenance. In this thesis, the management and maintenance focus on two schemes: network anomaly detection and root cause localization, and critical traffic resource control and optimization. Firstly, the abundant network data contain informative messages, but their heterogeneity and perplexity make diagnosis challenging. For unstructured logs, abstract and formatted log templates are extracted to regularize log records. An in-depth analysis framework based on heterogeneous data is proposed to detect the occurrence of faults and anomalies. It employs representation learning methods to map unstructured data into numerical features and fuses the extracted features for network anomaly and fault detection. The representation learning makes use of word2vec-based embedding technologies for semantic expression, as sketched below. Next, fault and anomaly detection only unveils the occurrence of events while failing to identify the root causes needed for useful administration, so fault localization opens a gate to narrow down the source of systematic anomalies. The extracted features are formed into an anomaly degree, coupled with an importance-ranking method to highlight the locations of anomalies in network systems. Two ranking modes, instantiated by PageRank and operation errors, jointly highlight latent problem locations. Besides fault and anomaly detection, network traffic engineering deals with network communication and computation resources to optimize the efficiency of data traffic transfer. Especially when network traffic is constrained by communication conditions, a proactive path-planning scheme is helpful for efficient traffic control actions. A learning-based traffic planning algorithm is therefore proposed, based on a sequence-to-sequence model, to discover hidden reasonable paths from abundant traffic history data over a Software Defined Network architecture. Finally, traffic engineering based merely on empirical data is likely to result in stale and sub-optimal solutions, even ending up in worse situations. A resilient mechanism is required to adapt network flows to a dynamic environment based on context. Thus, a reinforcement learning-based scheme is put forward for dynamic data forwarding that considers network resource status, which shows a promising performance improvement. In the end, the proposed anomaly processing framework strengthens analysis and diagnosis for network system administrators through synthesized fault detection and root cause localization. The learning-based traffic engineering supports network flow management via experience data and further shows a promising direction for flexible traffic adjustment in ever-changing environments.
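
    As a sketch of that embedding-and-scoring step, the fragment below maps each log template to the mean of its word vectors (standing in for trained word2vec embeddings) and scores anomaly degree as the distance to a centroid of normal templates. The toy vectors and templates are illustrative assumptions.

        # Minimal sketch: word-embedding features for log anomaly scoring.
        import numpy as np

        word_vecs = {  # stand-in for trained word2vec vectors
            "link": np.array([0.9, 0.1]), "up": np.array([0.8, 0.2]),
            "down": np.array([0.1, 0.9]), "timeout": np.array([0.2, 0.8]),
        }

        def embed(template):
            """Mean word vector of a whitespace-tokenized log template."""
            vecs = [word_vecs[w] for w in template.split() if w in word_vecs]
            return np.mean(vecs, axis=0)

        centroid = np.mean([embed(t) for t in ["link up", "link up"]], axis=0)

        def anomaly_degree(template):
            """Distance to the normal centroid; larger means more anomalous."""
            return float(np.linalg.norm(embed(template) - centroid))

        print(anomaly_degree("link up"))            # ~0.0, normal
        print(anomaly_degree("link down timeout"))  # ~1.0, anomalous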

    Guided Autonomy for Quadcopter Photography

    Photographing small objects with a quadcopter is non-trivial with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human-Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directs the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flies to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better Convolutional Neural Network (CNN) object detection models to assure higher precision in detecting human subjects than currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; 4) learning robust control policies using deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
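
    The visual-servoing component mentioned above can be sketched as a simple proportional controller that converts a detected subject's offset from the image centre into yaw-rate and climb commands; the detector output format, the gains, and the command interface are illustrative assumptions rather than the system's actual API.

        # Minimal sketch: proportional visual servoing toward a detected subject.
        IMG_W, IMG_H = 640, 480
        K_YAW, K_VZ = 0.002, 0.003        # proportional gains (assumed)

        def servo_step(bbox):
            """bbox = (x, y, w, h) of the detected subject, in pixels."""
            cx = bbox[0] + bbox[2] / 2
            cy = bbox[1] + bbox[3] / 2
            yaw_rate = K_YAW * (cx - IMG_W / 2)   # rotate toward the subject
            v_z = -K_VZ * (cy - IMG_H / 2)        # climb/descend to centre it
            return yaw_rate, v_z

        # Subject detected left of and below the image centre.
        print(servo_step((100, 300, 80, 120)))    # (negative yaw, descend)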

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    Marshall Space Flight Center Research and Technology Report 2018

    Many of NASA's missions would not be possible were it not for the investments made in research advancements and technology development efforts. The technologies developed at Marshall Space Flight Center contribute to NASA's strategic array of missions through technology development and accomplishments. The scientists, researchers, and technologists of Marshall Space Flight Center who are working on these enabling technology efforts are facilitating NASA's ability to fulfill the ambitious goals of innovation, exploration, and discovery.

    Power Line Monitoring through Data Integrity Analysis with Q-Learning Based Data Analysis Network

    To handle the big data obtained from electrical, electronic, electro-mechanical, and other equipment linked to the power grid effectively and efficiently, it is important to monitor the equipment continually and gather information on power line integrity. We propose that data-transmission analysis and data collection from tools such as digital power meters can be used to undertake predictive maintenance on power lines without the need for specialized hardware like power line modems and synthetic data streams. Neural network models such as deep learning networks can be used for power line integrity analysis effectively, safely, and reliably. We adopt a Q-learning-based data analysis network for analyzing and monitoring power line integrity. The results of experiments performed over a 32 km long power line under different scenarios are presented. The proposed framework may be useful for monitoring traditional power lines as well as alternative energy source parks and large users such as industrial sites. We discovered that the quantity of data transferred changes based on the problem and the size of the planned data packet. When all phases were absent from all meters, we noted a significant decrease in the amount of data collected from the power line of interest, implying a power outage during the monitoring period. When even one phase is reconnected, we obtain only a portion of the information, and a way to interpret this was necessary. Our Q-network was able to identify and classify 190 simulated complete power outages and 700 single-phase outages. The mean square error (MSE) did not exceed 0.10% of the total number of instances, and the MSE of the smart meters for a complete disturbance was only 0.20%, resulting in an average rate of conceivable error and disturbance cases of 0.12% for the whole operation.
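
    As a loose illustration of the Q-learning element named above (the actual network and state design are not described in this abstract), the sketch below applies the tabular Q-learning update to a toy monitoring decision: states are discretized data-volume levels from a meter, actions are outage labels, and the reward is +1 for a correct label. The states, labels, and training stream are assumptions for illustration only.

        # Minimal sketch: tabular Q-learning for labeling power line states.
        import random
        from collections import defaultdict

        ACTIONS = ["normal", "single_phase_outage", "full_outage"]
        ALPHA, EPS = 0.1, 0.1              # learning rate, exploration rate
        Q = defaultdict(float)             # Q[(state, action)]

        def choose(state):
            if random.random() < EPS:      # epsilon-greedy exploration
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        def update(state, action, reward):
            # One-step episode: no successor state, so the target is the reward.
            Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

        # Toy training stream of (data-volume bucket, true label) pairs.
        stream = [("high", "normal"), ("low", "single_phase_outage"),
                  ("zero", "full_outage")] * 500
        for state, truth in stream:
            a = choose(state)
            update(state, a, 1.0 if a == truth else -1.0)

        print(max(ACTIONS, key=lambda a: Q[("zero", a)]))  # -> full_outage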