
    Resilience of multi-robot systems to physical masquerade attacks

    The advent of autonomous mobile multi-robot systems has driven innovation in both the industrial and defense sectors. The integration of such systems in safety- and security-critical applications has raised concern over their resilience to attack. In this work, we investigate the security problem of a stealthy adversary masquerading as a properly functioning agent. We show that conventional multi-agent pathfinding solutions are vulnerable to these physical masquerade attacks. Furthermore, we provide a constraint-based formulation of multi-agent pathfinding that yields multi-agent plans that are provably resilient to physical masquerade attacks. This formalization leverages inter-agent observations to facilitate introspective monitoring and guarantee resilience.
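    The abstract's core idea of inter-agent observations enabling introspective monitoring can be illustrated with a minimal sketch. This is an assumed, simplified reading of the approach: grid plans, a Chebyshev observation radius, and the `observed_at_every_step` helper are all hypothetical, not the paper's actual formulation.

    ```python
    # Hypothetical sketch: check a mutual-observation property of multi-agent
    # plans. If every agent is observed by some peer at every timestep, an
    # agent that deviates from its plan (a physical masquerade) would be seen.

    def observed_at_every_step(plans, obs_range=1):
        """Return True if, at each timestep, every agent lies within
        Chebyshev distance `obs_range` of at least one other agent."""
        horizon = len(next(iter(plans.values())))
        for t in range(horizon):
            for a, path in plans.items():
                ax, ay = path[t]
                seen = any(
                    max(abs(ax - bx), abs(ay - by)) <= obs_range
                    for b, other in plans.items() if b != a
                    for (bx, by) in [other[t]]
                )
                if not seen:
                    return False
        return True

    plans = {
        "r1": [(0, 0), (1, 0), (2, 0)],
        "r2": [(0, 1), (1, 1), (2, 1)],  # stays adjacent to r1 throughout
    }
    print(observed_at_every_step(plans))  # True: agents co-observe each step
    ```

    A constraint-based pathfinder in the paper's spirit would presumably enforce a property like this as a side constraint during plan search, rather than checking it after the fact.
    
    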

    Special Session on Industry 4.0

    No abstract available

    Opportunities and challenges posed by disruptive and converging information technologies for Australia's future defence capabilities: A horizon scan

    Introduction: The research project's objective was to conduct a comprehensive horizon scan of Network Centric Warfare (NCW) technologies—specifically, Cyber, IoT/IoBT, AI, and Autonomous Systems. Recognised as pivotal force multipliers, these technologies are critical to reshaping the mission, design, structure, and operations of the Australian Defence Force (ADF), aligning with the Department of Defence (Defence)'s offset strategies and ensuring technological advantage, especially in the Indo-Pacific's competitive landscape.

    Research process: Employing a two-pronged research approach, the study first leveraged scientometric analysis, utilising informetric mapping software (VOSviewer) to evaluate emerging trends and their implications for defence capabilities. This approach facilitated a broader understanding of the interdisciplinary nature of defence technologies, identifying key areas for further exploration. The subsequent survey study, engaging 415 professionals and six experts across STEM, law enforcement, and ICT, aimed to assess the impact, deployment likelihood, and developmental timelines of the identified technologies.

    Findings: Key findings revealed significant overlaps in technology clusters, highlighting 11 specific technologies or trends as potential force multipliers for the ADF. Among these, Cyber and AI technologies were recognised for their immediate potential and urgency, suggesting a prioritisation for development investment. The analysis presented a clear imperative for urgent and prioritised technological investments, specifically in Cyber and AI technologies, followed by IoT/IoBT and autonomous systems technologies. The recommended strategic focus entails enhancing the cyber security of critical infrastructure, optimising network communications, and harnessing smart sensors, among others.

    Implications: To maintain a competitive edge, the ADF and the Australian government must commit to significant investments in these priority technologies. This involves not only advancing the technological frontier but also fostering a flexible, innovation-friendly environment conducive to leveraging non-linear opportunities in technology innovation. Such an approach requires a concerted effort from both public and private sectors to invest resources effectively, ensuring the ADF's adaptability and strategic overmatch in a rapidly changing technological landscape.

    Conclusion: Ultimately, this research illuminates the path forward for the ADF and Defence at large, highlighting the need for strategic investments in emerging technologies. By identifying strategic gaps, potential alliances, and sovereign technologies of high potential, this report serves as a blueprint for enhancing Australia's defence capabilities and securing its strategic interests in the face of global technological shifts.

    Machine learning and blockchain technologies for cybersecurity in connected vehicles

    Future connected and autonomous vehicles (CAVs) must be secured against cyberattacks for their everyday functions on the road so that the safety of passengers and vehicles can be ensured. This article presents a holistic review of cybersecurity attacks on sensors and threats regarding multi-modal sensor fusion. A comprehensive review of cyberattacks on intra-vehicle and inter-vehicle communications is presented afterward. Besides the analysis of conventional cybersecurity threats and countermeasures for CAV systems, a detailed review of modern machine learning, federated learning, and blockchain approaches is also conducted to safeguard CAVs. Machine learning and data mining-aided intrusion detection systems and other countermeasures dealing with these challenges are elaborated at the end of the related section. In the last section, research challenges and future directions are identified.
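    One of the intrusion-detection ideas the article surveys can be sketched very simply: profiling the timing of in-vehicle messages and flagging statistical outliers. This is an illustrative assumption, not the article's method; the timestamps, the `learn_profile`/`is_anomalous` helpers, and the 3-sigma threshold are all hypothetical.

    ```python
    # Illustrative sketch: a frequency-based anomaly check for in-vehicle
    # (e.g. CAN bus) traffic. Learn the normal inter-arrival gap for one
    # message ID, then flag gaps far outside the learned distribution,
    # such as a flood of injected frames arriving far too quickly.
    from statistics import mean, stdev

    def learn_profile(timestamps):
        """Learn the mean and standard deviation of inter-arrival gaps."""
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        return mean(gaps), stdev(gaps)

    def is_anomalous(gap, profile, k=3.0):
        """Flag a gap more than k standard deviations from the mean."""
        mu, sigma = profile
        return abs(gap - mu) > k * sigma

    normal = [0.0, 0.10, 0.21, 0.30, 0.41, 0.50]  # ~100 ms message period
    profile = learn_profile(normal)
    print(is_anomalous(0.10, profile))   # normal gap, not flagged
    print(is_anomalous(0.001, profile))  # injection-style burst, flagged
    ```

    The ML-aided systems the article reviews generalize this idea, replacing the single-feature threshold with learned models over many message features.
    
    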

    Risks, Safety and Security in the Ecosystem of Smart Cities

    We have performed a review of systemic risks in smart cities that depend on intelligent and partly autonomous transport systems. Smart cities include concepts such as smart transportation/use of autonomous transportation systems (i.e., autonomous cars, subways, shipping, drones) and improved management of infrastructure (power and water supply). At the same time, this requires safe and resilient infrastructures and global collaboration. One challenge is some form of risk-based regulation of emergent vulnerabilities. In this paper we focus on emergent vulnerabilities and discuss how mitigation can be organized and structured, across boundaries, based on both emergent and known scenarios. We regard a smart city as a software ecosystem (SEC), defined as a dynamic evolution of systems on top of a common technological platform offering a set of software solutions and services. Software ecosystems are increasingly being used to support critical tasks and operations. As part of our work we have performed a systematic literature review of safety, security, and resilience in software ecosystems over the period 2007–2016. The perspective of software ecosystems has helped to identify and specify patterns of safety, security, and resilience at a relevant level of abstraction. Significant vulnerabilities and poor awareness of safety, security, and resilience have been identified. Key actors that should increase their attention are vendors, regulators, insurance companies, and the research community. There is a need to improve private-public partnerships and the learning loops between computer emergency teams, security information providers (SIP), regulators, and vendors. There is also a need to focus more on safety, security, and resilience, and to establish regulations that place liability for these systems on vendors.

    Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities

    Robotics and Artificial Intelligence (AI) have been inextricably intertwined since their inception. Today, AI-Robotics systems have become an integral part of our daily lives, from robotic vacuum cleaners to semi-autonomous cars. These systems are built upon three fundamental architectural elements: perception, navigation and planning, and control. However, while the integration of AI-Robotics systems has enhanced the quality of our lives, it has also presented a serious problem: these systems are vulnerable to security attacks. The physical components, algorithms, and data that make up AI-Robotics systems can be exploited by malicious actors, potentially leading to dire consequences. Motivated by the need to address the security concerns in AI-Robotics systems, this paper presents a comprehensive survey and taxonomy across three dimensions: attack surfaces, ethical and legal concerns, and Human-Robot Interaction (HRI) security. Our goal is to provide users, developers, and other stakeholders with a holistic understanding of these areas to enhance overall AI-Robotics system security. We begin by surveying potential attack surfaces and providing mitigating defensive strategies. We then delve into ethical issues, such as dependency and psychological impact, as well as the legal concerns regarding accountability for these systems. In addition, emerging trends such as HRI are discussed, considering privacy, integrity, safety, trustworthiness, and explainability concerns. Finally, we present our vision for future research directions in this dynamic and promising field.