
    Behaviour-based anomaly detection of cyber-physical attacks on a robotic vehicle

    Security is one of the key challenges in cyber-physical systems because, by their nature, any cyber attack against them can have physical repercussions. This is a critical issue for autonomous vehicles: if their communications or computation are compromised, their mobility means they can cause considerable physical damage. Our aim here is to facilitate the automatic detection of cyber attacks on a robotic vehicle. For this purpose, we have developed a detection mechanism that monitors real-time data from a large number of sources onboard the vehicle, including its sensors, networks and processing. Following a learning phase, in which the vehicle is trained in a non-attack state on what values are considered normal, it is then subjected to a series of different cyber-physical and physical-cyber attacks. We approach the problem as a binary classification task: whether the robot is able to self-detect when and whether it is under attack. Our experimental results show that the approach is promising for most of the attacks the vehicle is subjected to. We further improve its performance by using weights that accentuate the less common anomalies, thus improving the detection of unknown attacks.
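    The abstract gives no implementation details, but its core idea, learning per-feature normal ranges during an attack-free phase, flagging deviations at run time, and weighting rarer anomalies more heavily, can be illustrated with the minimal sketch below. The function names, the margin, and the rarity-based weighting scheme are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def learn_normal_ranges(training_data):
    """Learn per-feature normal bounds from attack-free operation.

    training_data: (n_samples, n_features) array of onboard readings
    (e.g. sensor values, network counters, CPU load).
    Returns per-feature (low, high) bounds with a small margin (assumed).
    """
    lo = training_data.min(axis=0)
    hi = training_data.max(axis=0)
    margin = 0.05 * (hi - lo + 1e-9)
    return lo - margin, hi + margin

def rarity_weights(training_violation_rates):
    """Illustrative weighting: features whose anomalies were rarer during
    training get larger weights, accentuating uncommon deviations."""
    return 1.0 / (np.asarray(training_violation_rates, float) + 1e-3)

def anomaly_score(sample, bounds, weights=None):
    """Binary-classification style score: weighted count of features outside
    their learned normal range. Higher score => more likely under attack."""
    lo, hi = bounds
    violations = (sample < lo) | (sample > hi)
    if weights is None:
        weights = np.ones_like(sample, dtype=float)
    return float(np.dot(weights, violations))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(1000, 5))        # attack-free phase
    bounds = learn_normal_ranges(normal)
    attack_sample = np.array([0.1, 6.0, -0.2, 0.3, 7.5])  # two features far off
    rates = np.array([0.20, 0.01, 0.20, 0.20, 0.01])      # hypothetical historical rates
    print(anomaly_score(attack_sample, bounds, weights=rarity_weights(rates)))
```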

    Experiences and issues for environmental science sensor network deployments

    Sensor network research is a large and growing area of academic effort, examining technological and deployment issues in environmental monitoring. These technologies are used by environmental engineers and scientists to monitor a multiplicity of environments and services and, specific to this paper, the energy and water supplied to the built environment. Although the technology is developed by computer science specialists, its use and deployment are traditionally performed by environmental engineers. This paper examines deployment from the perspectives of environmental engineers and scientists and asks what computer scientists can do to improve the process. The paper uses a case study to demonstrate the agile operation of WSNs within a Cloud Computing infrastructure, and thus the demand-driven, collaboration-intensive paradigm of Digital Ecosystems in Complex Environments.

    Smart Connected Homes: Integrating Sensor, Occupant and BIM data for Building Performance Analysis

    Buildings produce huge volumes of data, such as BIM, sensor, occupant and building maintenance data. These data are spread across multiple disconnected systems in numerous formats, making it difficult to identify performance gaps between building design and use. Better methods for gathering and analysing data can support building managers in managing building performance, and the resulting knowledge can be fed back to designers and contractors to help close the performance gaps. We have developed a platform that integrates BIM, sensor and occupant data to provide actionable advice for building managers. A social housing organisation is acting as a use case for the platform. A methodology for developing the information needs that support data capture across disconnected systems is proposed, and the challenges of bringing data sets together to provide meaningful information to building owners and managers are presented.
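    As a rough illustration of the kind of integration such a platform performs, the sketch below joins sensor readings, occupancy records and BIM space metadata on a shared space identifier and flags spaces where measured conditions diverge from the design intent. The record fields, the temperature setpoint check and the tolerance are invented for illustration and are not the platform's actual data model.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical minimal records standing in for BIM, sensor and occupant data.
@dataclass
class BimSpace:
    space_id: str
    name: str
    design_setpoint_c: float   # design-stage comfort target taken from the BIM model

@dataclass
class SensorReading:
    space_id: str
    temperature_c: float

@dataclass
class OccupancyRecord:
    space_id: str
    occupied: bool

def performance_gaps(spaces, readings, occupancy, tolerance_c=2.0):
    """Flag occupied spaces whose mean measured temperature deviates from
    the BIM design setpoint by more than `tolerance_c` degrees."""
    temps = defaultdict(list)
    for r in readings:
        temps[r.space_id].append(r.temperature_c)
    occupied_ids = {o.space_id for o in occupancy if o.occupied}

    gaps = []
    for s in spaces:
        if s.space_id in occupied_ids and temps[s.space_id]:
            measured = sum(temps[s.space_id]) / len(temps[s.space_id])
            if abs(measured - s.design_setpoint_c) > tolerance_c:
                gaps.append((s.name, s.design_setpoint_c, measured))
    return gaps

if __name__ == "__main__":
    spaces = [BimSpace("A1", "Flat 1 living room", 21.0)]
    readings = [SensorReading("A1", 17.5), SensorReading("A1", 17.9)]
    occupancy = [OccupancyRecord("A1", True)]
    print(performance_gaps(spaces, readings, occupancy))  # underheated occupied space
```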

    Sense and Avoid Characterization of the Independent Configurable Architecture for Reliable Operations of Unmanned Systems

    Independent Configurable Architecture for Reliable Operations of Unmanned Systems (ICAROUS) is a distributed software architecture developed by NASA Langley Research Center to enable safe autonomous UAS operations. ICAROUS consists of a collection of formally verified core algorithms for path planning, traffic avoidance, geofence handling and decision making that interface with an autopilot system through a publisher-subscriber middleware. The ICAROUS Sense and Avoid Characterization (ISAAC) test was designed to evaluate the performance of the onboard Sense and Avoid (SAA) capability to detect potential conflicts with other aircraft and autonomously maneuver to avoid collisions while remaining within the airspace boundaries of the mission. The ISAAC tests evaluated the impact of separation distances and alerting times on SAA performance. A preliminary analysis of the effects of each parameter on key measures of performance is conducted, informing the choice of appropriate parameter values for different small Unmanned Aircraft Systems (sUAS) applications. Furthermore, low-power Automatic Dependent Surveillance-Broadcast (ADS-B) is evaluated for its potential to enable autonomous sUAS-to-sUAS deconfliction and to provide usable warnings for manned aircraft without saturating the frequency spectrum.
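    To make the role of the two test parameters concrete, the sketch below shows a generic pairwise conflict check: project the encounter to its closest point of approach and alert if the predicted miss distance violates a separation threshold within an alerting time window. This is a textbook CPA formulation with placeholder thresholds, not the ICAROUS detection logic or NASA's parameter values.

```python
import numpy as np

def time_of_closest_approach(rel_pos, rel_vel):
    """Time at which two aircraft reach their closest point of approach,
    given relative position and velocity (clamped to the future)."""
    speed_sq = float(np.dot(rel_vel, rel_vel))
    if speed_sq < 1e-9:
        return 0.0
    return max(0.0, -float(np.dot(rel_pos, rel_vel)) / speed_sq)

def predicts_conflict(own_pos, own_vel, intruder_pos, intruder_vel,
                      separation_m=150.0, alert_time_s=30.0):
    """Alert if the predicted miss distance violates the separation distance
    within the alerting time window. The two thresholds play the role of the
    parameters varied in the ISAAC tests; the numbers here are placeholders."""
    rel_pos = np.asarray(intruder_pos, float) - np.asarray(own_pos, float)
    rel_vel = np.asarray(intruder_vel, float) - np.asarray(own_vel, float)
    t_cpa = time_of_closest_approach(rel_pos, rel_vel)
    if t_cpa > alert_time_s:
        return False
    miss_distance = float(np.linalg.norm(rel_pos + rel_vel * t_cpa))
    return miss_distance < separation_m

if __name__ == "__main__":
    # Head-on encounter, 800 m apart, closing at 30 m/s: alert expected.
    print(predicts_conflict([0, 0], [15, 0], [800, 20], [-15, 0]))
```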

    Detecting cyber-physical threats in an autonomous robotic vehicle using Bayesian Networks

    Robotic vehicles, and especially autonomous robotic vehicles, can be attractive targets for attacks that cross the cyber-physical divide, that is, cyber attacks or sensory channel attacks affecting their ability to navigate or complete a mission. Detection of such threats is typically limited to knowledge-based and vehicle-specific methods, which are applicable only to specific known attacks, or to methods that require computation power that is prohibitive for resource-constrained vehicles. Here, we present a method based on Bayesian Networks that can not only tell whether an autonomous vehicle is under attack, but also whether the attack originated from the cyber or the physical domain. We demonstrate the feasibility of the approach on an autonomous robotic vehicle built in accordance with the Generic Vehicle Architecture specification and equipped with a variety of popular communication and sensing technologies. The results of experiments involving command injection, rogue node and magnetic interference attacks show that the approach is promising.
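    The following is a minimal sketch of the kind of inference a Bayesian Network supports here: given noisy indicators from the network stack and the sensing channels, compute a posterior over {no attack, cyber attack, physical attack}. The network structure, indicator names and conditional probabilities are invented for illustration and are not the model from the paper.

```python
# Hypothetical two-indicator model:
#   P(state) * P(net_anomaly | state) * P(sensor_anomaly | state)
PRIOR = {"none": 0.90, "cyber": 0.05, "physical": 0.05}

# P(indicator = True | state); all numbers are illustrative only.
P_NET_ANOMALY = {"none": 0.02, "cyber": 0.85, "physical": 0.10}
P_SENSOR_ANOMALY = {"none": 0.03, "cyber": 0.15, "physical": 0.90}

def posterior(net_anomaly: bool, sensor_anomaly: bool):
    """Exact inference by enumeration over the three hidden states."""
    unnorm = {}
    for state, prior in PRIOR.items():
        p_net = P_NET_ANOMALY[state] if net_anomaly else 1 - P_NET_ANOMALY[state]
        p_sen = P_SENSOR_ANOMALY[state] if sensor_anomaly else 1 - P_SENSOR_ANOMALY[state]
        unnorm[state] = prior * p_net * p_sen
    z = sum(unnorm.values())
    return {state: p / z for state, p in unnorm.items()}

if __name__ == "__main__":
    # Network counters look abnormal while the sensor channels look normal:
    # the posterior mass shifts towards a cyber-domain attack.
    print(posterior(net_anomaly=True, sensor_anomaly=False))
```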

    Proprioceptive Inference for Dual-Arm Grasping of Bulky Objects Using RoboSimian

    This work demonstrates dual-arm lifting of bulky objects based on inferred object properties (center of mass (COM) location, weight and shape) using proprioception (i.e. force-torque measurements). Data-driven Bayesian models describe these quantities, which enables subsequent behaviors to depend on the confidence of the learned models. Experiments were conducted using NASA Jet Propulsion Laboratory's (JPL) RoboSimian to lift a variety of cumbersome objects ranging in mass from 7 kg to 25 kg. The position of a supporting second manipulator was determined using a particle set and heuristics derived from the inferred object properties. For each bulky object, the supporting manipulator decreased the initial manipulator's load and distributed the wrench load more equitably across the two manipulators. Knowledge of the objects came from proprioception alone (i.e. without reliance on vision or other exteroceptive sensors) throughout the experiments.
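    As a worked illustration of what a single force-torque reading can reveal, the sketch below recovers the weight and the horizontal COM offset from a static wrist measurement using the statics relation tau = r x F. It assumes a stationary grasp, gravity as the only load and a z-up sensor frame; it is a textbook calculation, not JPL's Bayesian estimator.

```python
import numpy as np

G = 9.81  # m/s^2

def infer_mass_and_com(force, torque):
    """Static single-grasp estimate from a wrist force-torque reading.

    force, torque: 3-vectors in the sensor frame (z up), with the manipulator
    holding the object still so gravity is the only load (assumed).
    Returns (mass_kg, com_offset_xy): the torque about the sensor origin
    satisfies tau = r x F, which fixes the COM's horizontal offset; the
    vertical offset is unobservable from a single static pose.
    """
    force = np.asarray(force, float)
    torque = np.asarray(torque, float)
    mass = np.linalg.norm(force) / G
    fz = force[2]
    # With a purely vertical load (f_x = f_y = 0):
    #   tau_x = r_y * f_z   and   tau_y = -r_x * f_z
    r_x = -torque[1] / fz
    r_y = torque[0] / fz
    return mass, (r_x, r_y)

if __name__ == "__main__":
    # A 10 kg object whose COM sits 0.2 m ahead of the wrist along x.
    f = np.array([0.0, 0.0, -10.0 * G])           # gravity load on the sensor
    tau = np.cross(np.array([0.2, 0.0, 0.0]), f)  # tau = r x F
    print(infer_mass_and_com(f, tau))             # ~ (10.0, (0.2, 0.0))
```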

    Depth Prompting for Sensor-Agnostic Depth Estimation

    Dense depth maps have been used as a key element of visual perception tasks, and there have been tremendous efforts to enhance depth quality, ranging from optimization-based to learning-based methods. Despite remarkable progress over the years, the applicability of these methods in the real world is limited by systematic measurement biases such as density, sensing pattern and scan range, which are well known to hinder generalization. We observe that learning a joint representation for the input modalities (e.g., images and depth), as most recent methods do, is sensitive to these biases. In this work, we disentangle those modalities to mitigate the biases through prompt engineering. To this end, we design a novel depth prompt module that allows the desirable feature representation according to new depth distributions arising from either sensor types or scene configurations. Our depth prompt can be embedded into foundation models for monocular depth estimation. Through this embedding process, our method frees the pretrained model from the restraint of the depth scan range and provides absolute-scale depth maps. We demonstrate the effectiveness of our method through extensive evaluations. Source code is publicly available at https://github.com/JinhwiPark/DepthPrompting. (Comment: Accepted at CVPR 2024)
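    To convey the general idea of a prompt branch conditioned on sparse depth, the sketch below encodes a sparse depth map and its validity mask and uses the result to modulate frozen backbone features. The layer sizes, the FiLM-style scale-and-shift fusion and the assumption that the depth map is already at feature resolution are illustrative choices, not the paper's architecture; the authors' actual module is in the linked repository.

```python
import torch
import torch.nn as nn

class DepthPromptBlock(nn.Module):
    """Illustrative prompt branch: encode a sparse depth map plus its validity
    mask, then modulate frozen image features with a learned per-pixel scale
    and shift (a simple fusion chosen here; the paper's module may differ)."""

    def __init__(self, feat_channels: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2 * feat_channels, 3, padding=1),
        )

    def forward(self, image_feat, sparse_depth):
        mask = (sparse_depth > 0).float()             # which pixels carry a depth sample
        prompt = self.encoder(torch.cat([sparse_depth, mask], dim=1))
        scale, shift = prompt.chunk(2, dim=1)
        return image_feat * (1 + scale) + shift       # depth-conditioned features

if __name__ == "__main__":
    feat = torch.randn(1, 64, 60, 80)                 # frozen backbone features
    depth = torch.zeros(1, 1, 60, 80)
    depth[:, :, ::8, ::8] = 5.0                       # sparse LiDAR-like samples
    out = DepthPromptBlock(feat_channels=64)(feat, depth)
    print(out.shape)                                  # torch.Size([1, 64, 60, 80])
```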

    On the Development of a Generic Multi-Sensor Fusion Framework for Robust Odometry Estimation

    In this work, we review the design choices and the mathematical and software engineering techniques employed in the development of the ROAMFREE sensor fusion library, a general, open-source framework for pose tracking and sensor parameter self-calibration in mobile robotics. In ROAMFREE, a comprehensive logical sensor library makes it possible to abstract from the actual sensor hardware and processing while preserving model accuracy, thanks to a rich set of calibration parameters such as biases, gains, distortion matrices and geometric placement dimensions. The modular formulation of the sensor fusion problem, which is based on state-of-the-art factor graph inference techniques, can handle an arbitrary number of multi-rate sensors and adapt to virtually any kind of mobile robot platform, such as Ackermann-steering vehicles, quadrotor unmanned aerial vehicles and omnidirectional mobile robots. Different solvers are available to target high-rate online pose tracking as well as offline accurate trajectory smoothing and parameter calibration. The modularity, versatility and out-of-the-box functioning of the resulting framework came at the cost of increased complexity of the software architecture with respect to an ad-hoc implementation of a platform-dependent sensor fusion algorithm, and required careful design of abstraction layers and decoupling interfaces between solvers, state variable representations and sensor error models. However, we show how a high-level, clean C++/Python API, as well as ROS interface nodes, hides the complexity of sensor fusion tasks from the end user, making ROAMFREE an ideal choice for new and existing mobile robot projects.
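    The factor-graph formulation underlying such frameworks can be illustrated with a deliberately tiny example: a chain of 1-D pose variables linked by relative odometry factors and occasionally pinned down by absolute-position (GPS-like) factors, solved as a weighted least-squares problem. This plain NumPy sketch is not ROAMFREE's solver or API; it only shows the kind of objective a factor-graph back end minimizes, and since all factors here are linear a single solve suffices.

```python
import numpy as np

def fuse_odometry_and_gps(odom_deltas, gps_fixes, odom_sigma=0.05, gps_sigma=0.5):
    """States x_0..x_N; odometry factors (x_{k+1} - x_k = delta_k) and sparse
    absolute-position factors (x_k = z), stacked into a weighted linear
    least-squares problem and solved directly."""
    n = len(odom_deltas) + 1
    rows, rhs, weights = [], [], []
    for k, d in enumerate(odom_deltas):            # relative, high-rate sensor
        row = np.zeros(n); row[k] = -1.0; row[k + 1] = 1.0
        rows.append(row); rhs.append(d); weights.append(1.0 / odom_sigma)
    for k, z in gps_fixes.items():                 # sparse absolute sensor
        row = np.zeros(n); row[k] = 1.0
        rows.append(row); rhs.append(z); weights.append(1.0 / gps_sigma)
    A = np.array(rows) * np.array(weights)[:, None]
    b = np.array(rhs) * np.array(weights)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

if __name__ == "__main__":
    deltas = [1.0, 1.1, 0.9, 1.2]                  # drifting odometry steps
    fixes = {0: 0.0, 4: 4.0}                       # absolute fixes at start and end
    print(fuse_odometry_and_gps(deltas, fixes))    # smoothed trajectory estimate
```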