317 research outputs found

    SARCASTIC v2.0 - High-performance SAR simulation for next-generation ATR systems

    Synthetic aperture radar has been a mainstay of the remote sensing field for many years, with a wide range of applications across both civilian and military contexts. However, the lack of openly available datasets of comparable size and quality to those available for optical imagery has severely hampered work on open problems such as automatic target recognition, image understanding and inverse modelling. This paper presents a simulation and analysis framework based on the upgraded SARCASTIC v2.0 engine, along with a selection of case studies demonstrating its application to well-known and novel problems. In particular, we demonstrate that SARCASTIC v2.0 is capable of supporting complex phase-dependent processing such as interferometric height extraction whilst maintaining near-real-time performance on complex scenes.
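
    As a rough, hedged illustration of the phase-dependent processing mentioned in the abstract (this is not code from SARCASTIC itself), the Python sketch below converts a flattened interferometric phase difference into terrain height via the standard height-of-ambiguity relation; the function name and the wavelength, slant-range, look-angle and baseline values are assumptions chosen purely for illustration.

        import math

        def insar_height(delta_phi, wavelength, slant_range, look_angle, baseline_perp):
            """Terrain height corresponding to a flattened interferometric phase difference."""
            # Height of ambiguity for a single-pass interferometer (one transmitter, two receivers);
            # a repeat-pass system carries an extra factor of 2 in the denominator.
            height_of_ambiguity = wavelength * slant_range * math.sin(look_angle) / baseline_perp
            # One full 2*pi cycle of phase corresponds to one height of ambiguity.
            return (delta_phi / (2.0 * math.pi)) * height_of_ambiguity

        # Illustrative X-band-like numbers (assumed, not taken from the paper):
        h = insar_height(delta_phi=1.2, wavelength=0.031, slant_range=8000.0,
                         look_angle=math.radians(45.0), baseline_perp=2.0)
        print(f"estimated height: {h:.1f} m")   # about 16.8 m for these assumed parameters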

    Proceedings, MSVSCC 2013

    Proceedings of the 7th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 11, 2013, at VMASC in Suffolk, Virginia.

    Advances in Object and Activity Detection in Remote Sensing Imagery

    The recent revolution in deep learning has enabled considerable progress in the fields of object and activity detection. Visual object detection aims to find objects of target classes with precise localisation in an image and to assign each object instance a corresponding class label. Activity recognition, in turn, aims to determine the actions or activities of an agent or group of agents from sensor or video observation data. Detecting, identifying, tracking, and understanding the behaviour of objects in images and videos taken by various cameras is an important and challenging problem. Together, object and activity recognition in imaging data captured by remote sensing platforms is a highly dynamic and challenging research topic. During the last decade, there has been significant growth in the number of publications in this field. In particular, many researchers have addressed application domains in which objects and their specific behaviours are identified from airborne and spaceborne imagery. This Special Issue includes papers that explore novel and challenging topics in object and activity detection in remote sensing images and videos acquired by diverse platforms.
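
    As a small, hedged illustration of the localisation step described in the abstract (not code from any paper in the Special Issue), the sketch below computes the intersection-over-union between a predicted and a ground-truth bounding box, the usual criterion for deciding whether a detection is correct; the corner-coordinate box format and the 0.5 threshold are conventional assumptions.

        def iou(box_a, box_b):
            """Intersection-over-union of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
            ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
            ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
            area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
            return inter / (area_a + area_b - inter) if inter > 0 else 0.0

        # A detection is usually counted as correct when IoU exceeds a threshold (commonly 0.5)
        # and the predicted class label matches the ground truth.
        print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # about 0.14, so rejected at a 0.5 threshold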

    A survey of free software for the design, analysis, modelling, and simulation of an unmanned aerial vehicle

    The objective of this paper is to analyze free software for the design, analysis, modelling, and simulation of an unmanned aerial vehicle (UAV). Free software is the best choice when production costs must be reduced; nevertheless, the quality of free software may vary. This paper probably does not cover all of the free software available, but it describes or at least mentions the most interesting programs. The first part of the paper summarizes essential knowledge about UAVs, including the fundamentals of flight mechanics and aerodynamics and the structure of a UAV system. The second section explains the modelling and simulation of a UAV in general terms. In the main section, more than 50 free programs for the design, analysis, modelling, and simulation of a UAV are described. Although the selection of free software has been focused on small subsonic UAVs, in some cases the software can also be used for other categories of aircraft, e.g. MAVs and large gliders. Applications of historical importance are also included. Finally, the results of the analysis are evaluated and discussed: a block diagram of the free software is presented, possible connections between the programs are outlined, and future improvements of the free software are suggested. © 2015, CIMNE, Barcelona, Spain. Internal Grant Agency of Tomas Bata University in Zlin [IGA/FAI/2015/001, IGA/FAI/2014/006].

    Synthetic Worlds for Improving Driver Assistance Systems

    The automotive industry is evolving at a rapid pace; new technologies and techniques are being introduced to make the driving experience more pleasant and safer than it was a few decades ago. But as with any new technology and methodology, there will always be new challenges to overcome. Advanced Driver Assistance Systems (ADAS) have attracted a considerable amount of interest in the research community over the past few decades. This research explores in depth how synthetic world simulations can be used to train the next generation of ADAS to detect and alert the driver of possible risks and dangers during autonomous driving sessions. As autonomous driving is still being rolled out, we are far from the point where cars can be truly autonomous in any given environment and scenario, and there are still a fair number of challenges to overcome. A number of semi-autonomous cars, including those from Tesla, BMW and Mercedes, have already been on the road for several years. Yet some of these cars have recently been involved in accidents that could have been avoided had a driver, rather than the autonomous systems, been in control of the vehicle. This raises the question of why these cars of the future are so prone to accidents and what the best way to overcome this problem is. The answer lies in the use of synthetic worlds for designing more efficient ADAS in the least amount of time for the automobile of the future. This thesis explores a number of research areas, starting with the development of an open-source driving simulator that, compared to the state of the art, is cheaper and more efficient to deploy at almost any location. A typical driving simulator can cost between £10,000 and as much as £500,000; our approach brings this cost down to less than £2,000 while providing the same visual fidelity and accuracy as the more expensive simulators on the market. On the hardware side, our simulator consists of only four main components: a CPU case, monitors, a steering wheel with pedals, and webcams. This allows the simulator to be shipped to any location without the need for any complicated setup. Compared to other state-of-the-art simulators such as CARLA, the setup and programming time is quite low: if a perception reaction time (PRT) based setup requires 10 days on state-of-the-art simulators, the same aspect can be programmed on our simulator in as little as 15 minutes, as the simulator is designed from the ground up to record accurate PRT. The simulator is then successfully used to record accurate perception reaction times among 40 subjects under different driving conditions. The results highlight the fact that not all secondary tasks result in higher reaction times. Moreover, the overall reaction time for the hands was recorded at 3.51 seconds, whereas that for the feet was recorded at 2.47 seconds. The study highlights the importance of mental workload during autonomous driving, which is a vastly important aspect of designing ADAS. The novelty of this study resulted in the generation of a new dataset comprising 1.44 million images targeted at driver-vehicle interactions that can be used by researchers and engineers to develop advanced driver assistance systems. The simulator is then further modified to generate high-fidelity weather simulations which, compared to simulators like CARLA, provide more control over cloud formations, giving researchers more variables to test during simulations and image generation.
    The resulting synthetic weather dataset, called the Weather Drive Dataset, is unique and novel in nature, as it is the largest synthetic weather dataset currently available to researchers, comprising 108,333 images with varying weather conditions. Most state-of-the-art datasets contain only non-automotive images or are not synthetic at all. The proposed dataset has been evaluated against the Berkeley DeepDrive dataset, resulting in 74% accuracy. This shows that synthetic datasets are valid for training the next generation of vision-based weather classifiers for autonomous driving. The studies performed will prove vital in progressing Advanced Driver Assistance Systems research in a number of different ways. The experiments take into account the necessary state-of-the-art methods to compare and differentiate between the proposed methodologies. The most efficient approaches and best practices are also explained in detail, which can provide the necessary support to other researchers setting up similar systems to aid in designing synthetic simulations for other research areas.
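
    The following is only a hedged sketch of the train-on-synthetic, evaluate-on-real workflow the abstract describes; the thesis's actual pipeline is not reproduced here, and the directory names, ResNet-18 backbone, and hyper-parameters are placeholder assumptions.

        import torch, torch.nn as nn
        from torch.utils.data import DataLoader
        from torchvision import datasets, models, transforms

        # Placeholder directory names; each subfolder is one weather class (clear, rain, fog, ...).
        tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
        synthetic = datasets.ImageFolder("weather_drive_dataset/train", transform=tfm)  # synthetic images
        real = datasets.ImageFolder("bdd_weather/val", transform=tfm)                   # real-world test set

        model = models.resnet18(weights=None)                # assumed backbone, not the thesis's choice
        model.fc = nn.Linear(model.fc.in_features, len(synthetic.classes))
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()

        model.train()
        for epoch in range(5):                               # train on synthetic data only
            for x, y in DataLoader(synthetic, batch_size=32, shuffle=True):
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

        model.eval()                                         # evaluate on real imagery
        with torch.no_grad():
            correct = sum((model(x).argmax(1) == y).sum().item()
                          for x, y in DataLoader(real, batch_size=32))
        print("real-world accuracy:", correct / len(real))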

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to estimate user emotion with a software-only method.
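
    The sketch below is a minimal, hedged illustration of the kind of fuzzy rule evaluation a FLAME-style emotion model performs; the membership functions, the single 'joy' output, and the rule output levels are assumptions for illustration rather than the paper's implementation.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b on the support [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def joy_intensity(desirability):
            """Map an event's desirability in [-1, 1] to a joy level in [0, 1] via two fuzzy rules."""
            somewhat = tri(desirability, 0.0, 0.5, 1.0)   # rule 1: somewhat desirable -> moderate joy
            very = tri(desirability, 0.5, 1.0, 1.5)       # rule 2: very desirable -> strong joy
            total = somewhat + very
            # Weighted-average (Sugeno-style) defuzzification with assumed output levels 0.4 and 0.9.
            return (0.4 * somewhat + 0.9 * very) / total if total else 0.0

        print(joy_intensity(0.8))  # a strongly desirable game event -> joy of about 0.7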

    Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey

    The Internet of Underwater Things (IoUT) is an emerging communication ecosystem developed for connecting underwater objects in maritime and underwater environments. The IoUT technology is intricately linked with intelligent boats and ships, smart shores and oceans, automatic marine transportation, positioning and navigation, underwater exploration, disaster prediction and prevention, as well as intelligent monitoring and security. The IoUT has an influence at various scales, ranging from a small scientific observatory, to a mid-sized harbor, to global oceanic trade. The network architecture of the IoUT is intrinsically heterogeneous and should be sufficiently resilient to operate in harsh environments. This creates major challenges in terms of underwater communications, whilst relying on limited energy resources. Additionally, the volume, velocity, and variety of data produced by sensors, hydrophones, and cameras in the IoUT are enormous, giving rise to the concept of Big Marine Data (BMD), which has its own processing challenges. Hence, conventional data processing techniques will falter, and bespoke Machine Learning (ML) solutions have to be employed to automatically learn the specific behavior and features of BMD, facilitating knowledge extraction and decision support. The motivation of this paper is to comprehensively survey the IoUT, BMD, and their synthesis. It also explores the nexus of BMD with ML. We set out from underwater data collection and then discuss the family of IoUT data communication techniques with an emphasis on the state-of-the-art research challenges. We then review the suite of ML solutions suitable for BMD handling and analytics. We treat the subject deductively from an educational perspective, critically appraising the material surveyed. Comment: 54 pages, 11 figures, 19 tables, IEEE Communications Surveys & Tutorials, peer-reviewed academic journal.

    A Centralized Energy Management System for Wireless Sensor Networks

    This document presents the Centralized Energy Management System (CEMS), a dynamic fault-tolerant reclustering protocol for wireless sensor networks. CEMS reconfigures a homogeneous network both periodically and in response to critical events (e.g. cluster head death). A global TDMA schedule prevents costly retransmissions due to collision, and a genetic algorithm running on the base station computes cluster assignments in concert with a head selection algorithm. CEMS' performance is compared to the LEACH-C protocol in both normal and failure-prone conditions, with an emphasis on each protocol's ability to recover from unexpected loss of cluster heads.
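
    The following is a toy, hedged sketch of how a base-station genetic algorithm might search for cluster assignments in the spirit of the protocol described above; the distance-based fitness function, the highest-residual-energy head rule, and the GA parameters are assumptions for illustration and are not taken from CEMS.

        import random

        # 40 sensor nodes with random positions and residual energy levels (illustrative only).
        NODES = [(random.uniform(0, 100), random.uniform(0, 100), random.uniform(0.5, 1.0))
                 for _ in range(40)]                              # (x, y, residual energy)
        K, POP, GENS = 4, 30, 60                                  # clusters, population size, generations

        def fitness(assign):
            """Lower is better: total distance from each node to its cluster's head."""
            cost = 0.0
            for c in range(K):
                members = [i for i, a in enumerate(assign) if a == c]
                if not members:
                    return float("inf")                           # penalise empty clusters
                head = max(members, key=lambda i: NODES[i][2])    # assumed head rule: most residual energy
                hx, hy, _ = NODES[head]
                cost += sum(((NODES[i][0] - hx) ** 2 + (NODES[i][1] - hy) ** 2) ** 0.5 for i in members)
            return cost

        pop = [[random.randrange(K) for _ in NODES] for _ in range(POP)]  # one gene per node
        for _ in range(GENS):
            pop.sort(key=fitness)
            parents = pop[:POP // 2]                              # truncation selection
            children = []
            while len(children) < POP - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(NODES))             # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:                         # mutation: reassign one node
                    child[random.randrange(len(NODES))] = random.randrange(K)
                children.append(child)
            pop = parents + children

        best = min(pop, key=fitness)
        print("best clustering cost:", round(fitness(best), 1))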