
    Autonomous Collision Avoidance for Unmanned Aerial Systems

    Unmanned Aerial System (UAS) applications are growing day by day, and this will lead Unmanned Aerial Vehicles (UAVs) in the near future to share the same airspace as manned aircraft. This implies the need for UAS to meet precise safety standards compatible with the operational standards of manned aviation. Among these standards is the need for a Sense And Avoid (S&A) system to support and, when necessary, substitute the pilot in the detection and avoidance of hazardous situations (e.g. mid-air collision, controlled flight into terrain, flight path obstacles, and clouds). This thesis presents the work carried out in the development of an S&A system taking into account collision risk scenarios with multiple moving and fixed threats. The conflict prediction is based on a straight-line projection of the threats' state into the future. The approximations introduced by this approach have the advantage of allowing a high update frequency (1 Hz) of the estimated conflict geometry. This solution allows the algorithm to capture trajectory changes of the threat or ownship. The resolution manoeuvre evaluation is based on an optimisation approach considering step commands applied to the heading and altitude autopilots. The optimisation problem takes into account the UAV performance and aims to keep a predefined minimum separation distance between the UAV and threats during the resolution manoeuvre. The Human-Machine Interface (HMI) of this algorithm is then embedded in a partial Ground Control Station (GCS) mock-up with some original concepts for the indication of the flight condition parameters and of the resolution manoeuvre constraints. Simulations of the S&A algorithm in different critical scenarios are moreover included to show the algorithm's capabilities. Finally, the methodology and results of the tests and interviews with pilots regarding the proposed partial GCS layout are covered
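One way to read the straight-projection step described above is as a closest-point-of-approach (CPA) check: both the ownship and a threat are propagated forward at constant velocity, and a conflict is flagged if their predicted separation ever falls below the minimum. The sketch below is a minimal 2D illustration under that constant-velocity assumption; the function name, the planar simplification, and the clamped look-ahead window are my assumptions, not the thesis's actual implementation.

```python
import math

def predict_conflict(p_own, v_own, p_threat, v_threat, min_sep, horizon):
    """Straight-line conflict prediction: project both states forward at
    constant velocity and test the closest point of approach (CPA)."""
    # Relative position and velocity (threat with respect to ownship).
    dx, dy = p_threat[0] - p_own[0], p_threat[1] - p_own[1]
    dvx, dvy = v_threat[0] - v_own[0], v_threat[1] - v_own[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 < 1e-12:
        t_cpa = 0.0  # no relative motion: separation stays constant
    else:
        # Time minimising |dp + dv*t|, clamped to the look-ahead window.
        t_cpa = -(dx * dvx + dy * dvy) / dv2
        t_cpa = min(max(t_cpa, 0.0), horizon)
    d_cpa = math.hypot(dx + dvx * t_cpa, dy + dvy * t_cpa)
    return d_cpa < min_sep, t_cpa, d_cpa
```

Because the projection is cheap, a check like this can be re-run at a high rate (the 1 Hz update mentioned above), which is how trajectory changes of either aircraft get picked up.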

    Investigating the effect of urgency and modality of pedestrian alert warnings on driver acceptance and performance

    Active safety systems have the potential to reduce the risk to pedestrians by warning the driver and/or taking evasive action to reduce the effects of or avoid a collision. However, current systems are limited in the range of scenarios they can address using primary control interventions, and this arguably places more emphasis in some situations on warning the driver so that they can take appropriate action in response to pedestrian hazards. In a counterbalanced experimental design, we varied urgency (‘when’) based on the time-to-collision (TTC) at which the warning was presented (with associated false-positive alarms, but no false negatives, or ‘misses’), and modality (‘how’) by presenting warnings using audio-only and audio combined with visual alerts presented on a HUD. Results from 24 experienced drivers, who negotiated an urban scenario during twelve 6.0-minute drives in a medium-fidelity driving simulator, showed that all warnings were generally rated ‘positively’ (using recognised subjective ‘acceptance’ scales), although acceptance was lower when warnings were delivered at the shortest (2.0s) TTC. In addition, drivers indicated higher confidence in combined audio and visual warnings in all situations. Performance (based on safety margins associated with critical events) varied significantly between warning onset times, with drivers first fixating their gaze on the hazard, taking their foot off the accelerator, applying their foot on the brake, and ultimately bringing the car to a stop further from the pedestrian when warnings were presented at the longest (5.0s) TTC. In addition, drivers applied the brake further from the pedestrian when combined audio and HUD warnings were provided (compared to audio-only), but only at 5.0s TTC. Overall, the study indicates a greater margin of safety associated with the provision of earlier warnings, with no apparent detriment to acceptance, despite relatively high false alarm rates at longer TTCs. 
The study also indicates that drivers feel more confident with a warning system present, especially when it incorporates auditory and visual elements, even though the visual cue does not necessarily improve hazard localisation or driving performance beyond the advantages offered by auditory alerts alone. Findings are discussed in the context of the design, evaluation and acceptance of active safety systems

    Leveraging Decision Making in Cyber Security Analysis through Data Cleaning

    Security Operations Centers (SOCs) have been built in many institutions for intrusion detection and incident response. A SOC employs various cyber defense technologies to continually monitor and control network traffic. Given the voluminous monitoring data, cyber security analysts need to identify suspicious network activities to detect potential attacks. As the network monitoring data are generated at a rapid speed and contain a lot of noise, analysts are so burdened by tedious and repetitive data triage tasks that they can hardly concentrate on in-depth analysis for further decision making. Therefore, it is critical to employ data cleaning methods in cyber situational awareness. In this paper, we investigate the main characteristics and categories of cyber security data, with a special emphasis on its heterogeneous features. We also discuss how cyber analysts attempt to understand the incoming data through the data analytical process. Based on this understanding, this paper discusses five categories of data cleaning methods for heterogeneous data and addresses the main challenges of applying data cleaning in cyber situational awareness. The goal is to create a dataset that contains accurate information for cyber analysts to work with, and thus to achieve higher levels of data-driven decision making in cyber defense

    Effects of modality, urgency and situation on responses to multimodal warnings for drivers

    Signifying road-related events with warnings can be highly beneficial, especially when immediate attention is needed. This thesis describes how modality, urgency and situation can influence driver responses to multimodal displays used as warnings. These displays utilise all combinations of audio, visual and tactile modalities, reflecting different urgency levels. In this way, a new rich set of cues is designed, conveying information multimodally, to enhance reactions during driving, which is a highly visual task. The importance of the signified events to driving is reflected in the warnings, and safety-critical or non-critical situations are communicated through the cues. Novel warning designs are considered, using both abstract displays, with no semantic association to the signified event, and language-based ones, using speech. These two cue designs are compared to discover their strengths and weaknesses as car alerts. The situations in which the new cues are delivered are varied, by simulating both critical and non-critical events and both manual and autonomous car scenarios. A novel set of guidelines for using multimodal driver displays is finally provided, considering the modalities utilised, the urgency signified, and the situation simulated

    Autonomous surveillance for biosecurity

    The global movement of people and goods has increased the risk of biosecurity threats and their potential to incur large economic, social, and environmental costs. Conventional manual biosecurity surveillance methods are limited by their scalability in space and time. This article focuses on autonomous surveillance systems, comprising sensor networks, robots, and intelligent algorithms, and their applicability to biosecurity threats. We discuss the spatial and temporal attributes of autonomous surveillance technologies and map them to three broad categories of biosecurity threat: (i) vector-borne diseases; (ii) plant pests; and (iii) aquatic pests. Our discussion reveals a broad range of opportunities to serve biosecurity needs through autonomous surveillance.
    Comment: 26 pages, Trends in Biotechnology, 3 March 2015, ISSN 0167-7799, http://dx.doi.org/10.1016/j.tibtech.2015.01.003 (http://www.sciencedirect.com/science/article/pii/S0167779915000190)

    Automatic Driver Drowsiness Detection System

    The proposed system aims to lessen the number of accidents that occur due to drivers’ drowsiness and fatigue, which will in turn increase transportation safety. This has become a common cause of accidents in recent times. Several facial and body gestures are considered signs of drowsiness and fatigue in drivers, including tiredness in the eyes and yawning. These features are an indication that the driver is unfit to drive. The Eye Aspect Ratio (EAR) computes the ratio of distances between the horizontal and vertical eye landmarks, which is required for the detection of drowsiness. For the purpose of yawn detection, a YAWN value is calculated using the distance between the lower lip and the upper lip, and this distance is compared against a threshold value. We have deployed an eSpeak module (a text-to-speech synthesiser), which is used to give appropriate voice alerts when the driver is drowsy or yawning. The proposed system is designed to decrease the rate of accidents and contribute to technology with the goal of preventing fatalities caused by road accidents. Over the past ten years, advances in artificial intelligence and computing technologies have improved driver monitoring systems. Several experimental studies have gathered data on actual driver fatigue using different artificial intelligence systems. In order to dramatically improve these systems' real-time performance, feature combinations are used. An updated evaluation of the driver sleepiness detection technologies put in place during the previous ten years is presented in this research. The paper discusses and displays current systems that track and identify drowsiness using various metrics. Based on the information used, each system can be categorised into one of four groups. Each system in this paper comes with a thorough discussion of the features, classification rules, and datasets it employs.
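The EAR computation mentioned above is commonly expressed over six eye landmarks: the two vertical landmark-pair distances divided by twice the horizontal corner-to-corner distance, so the value drops toward zero as the eye closes. The sketch below illustrates that formula; the landmark ordering (p1/p4 as the horizontal corners) and any drowsiness threshold are assumptions drawn from the standard 68-point face model, not details stated in this abstract.

```python
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6, with p1/p4 the horizontal
    corners and (p2, p6), (p3, p5) the vertical pairs.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)      # eyelid openness
    horizontal = 2.0 * dist(p1, p4)             # eye width (scale factor)
    return vertical / horizontal
```

In a detector, the per-frame EAR would typically be compared against a tuned threshold (often around 0.2 to 0.3) over several consecutive frames before raising a drowsiness alert.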

    Interactive Execution Monitoring of Agent Teams

    There is an increasing need for automated support for humans monitoring the activity of distributed teams of cooperating agents, both human and machine. We characterize the domain-independent challenges posed by this problem, and describe how properties of domains influence the challenges and their solutions. We will concentrate on dynamic, data-rich domains where humans are ultimately responsible for team behavior. Thus, the automated aid should interactively support effective and timely decision making by the human. We present a domain-independent categorization of the types of alerts a plan-based monitoring system might issue to a user, where each type generally requires different monitoring techniques. We describe a monitoring framework for integrating many domain-specific and task-specific monitoring techniques and then using the concept of value of an alert to avoid operator overload. We use this framework to describe an execution monitoring approach we have used to implement Execution Assistants (EAs) in two different dynamic, data-rich, real-world domains to assist a human in monitoring team behavior. One domain (Army small unit operations) has hundreds of mobile, geographically distributed agents, a combination of humans, robots, and vehicles. The other domain (teams of unmanned ground and air vehicles) has a handful of cooperating robots. Both domains involve unpredictable adversaries in the vicinity. Our approach customizes monitoring behavior for each specific task, plan, and situation, as well as for user preferences. Our EAs alert the human controller when reported events threaten plan execution or physically threaten team members. Alerts were generated in a timely manner without inundating the user with too many alerts (less than 10 percent of alerts are unwanted, as judged by domain experts)
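The "value of an alert" idea described above can be pictured as an expected-value filter: an alert is issued only when its expected benefit to the operator outweighs the cost of interrupting them. The following sketch is my illustrative reading of that concept, not the paper's actual monitoring framework; the field names and scoring rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    importance: float   # assumed: cost of the operator missing this event
    probability: float  # assumed: estimated chance the event affects the plan

def filter_alerts(candidates, interruption_cost, threshold=0.0):
    """Issue only alerts whose expected value exceeds the cost of
    interrupting the operator, to avoid overload."""
    issued = []
    for alert in candidates:
        value = alert.importance * alert.probability - interruption_cost
        if value > threshold:
            issued.append(alert)
    return issued
```

Tuning `interruption_cost` per task and situation is one way such a system could trade off timeliness against the unwanted-alert rate the abstract reports.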

    An Ontological Approach to Inform HMI Designs for Minimizing Driver Distractions with ADAS

    ADAS (Advanced Driver Assistance Systems) are in-vehicle systems designed to enhance driving safety, efficiency, and comfort for drivers in the driving process. Recent studies have noted that when the Human Machine Interface (HMI) is not designed properly, an ADAS can cause distraction, which affects its usage and can even lead to safety issues. Current understanding of these issues is limited, owing to the context-dependent nature of such systems. This paper reports the development of a holistic conceptualisation of how drivers interact with ADAS and how such interaction could lead to potential distraction. This is done by taking an ontological approach to contextualise the potential distraction, driving tasks and user interactions centred on the use of ADAS. Example scenarios are also given to demonstrate how the developed ontology can be used to deduce rules for identifying distraction from ADAS and for informing future designs

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers

    License to Supervise: Influence of Driving Automation on Driver Licensing

    Using highly automated vehicles while the driver remains responsible for safe driving places new and demanding requirements on the human operator. This is because the automation creates a gap between the driver's responsibility and the human capability to take responsibility, especially during unexpected or time-critical transitions of control. This gap is not addressed by current driver licensing practices. Based on a literature review, this research collects drivers' requirements to enable safe transitions of control attuned to human capabilities. This knowledge is intended to help system developers and authorities identify the requirements on human operators to (re)take responsibility for safe driving after automation