Double elevation: Autonomous weapons and the search for an irreducible law of war
What should be the role of law in response to the spread of artificial intelligence in war? Fuelled by both public and private investment, military technology is accelerating towards increasingly autonomous weapons, as well as the merging of humans and machines. Contrary to much of the contemporary debate, this is not a paradigm change; it is the intensification of a central feature in the relationship between technology and war: double elevation, above one's enemy and above oneself. Elevation above one's enemy aspires to spatial, moral, and civilizational distance. Elevation above oneself reflects a belief in rational improvement that sees humanity as the cause of inhumanity and de-humanization as our best chance for humanization. The distance of double elevation is served by the mechanization of judgement. To the extent that judgement is seen as reducible to algorithm, law becomes the handmaiden of mechanization. In response, neither a focus on questions of compatibility nor a call for a 'ban on killer robots' helps in articulating a meaningful role for law. Instead, I argue that we should turn to a long-standing philosophical critique of artificial intelligence, which highlights not the threat of omniscience, but that of impoverished intelligence. Therefore, if there is to be a meaningful role for law in resisting double elevation, it should be law encompassing subjectivity, emotion and imagination, law irreducible to algorithm, a law of war that appreciates situated judgement in the wielding of violence for the collective.
Applications of Cyber Threat Intelligence (CTI) in Financial Institutions and Challenges in Its Adoption
The critical nature of financial infrastructures makes them prime targets for cybercriminal activities, underscoring the need for robust security measures. This research delves into the role of Cyber Threat Intelligence (CTI) in bolstering the security framework of financial entities and identifies key challenges that could hinder its effective implementation. CTI brings a host of advantages to the financial sector, including real-time threat awareness, which enables institutions to proactively counteract cyber-attacks. It significantly aids in the efficiency of incident response teams by providing contextual data about attacks. Moreover, CTI is instrumental in strategic planning by providing insights into emerging threats and can assist institutions in maintaining compliance with regulatory frameworks such as GDPR and CCPA. Additional applications include enhancing fraud detection capabilities through data correlation, assessing and managing vendor risks, and allocating resources to confront the most pressing cyber threats. The adoption of CTI technologies is fraught with challenges. One major issue is data overload, as the vast quantity of information generated can overwhelm institutions and lead to alert fatigue. The issue of interoperability presents another significant challenge; disparate systems within the financial sector often use different data formats, complicating seamless CTI integration. Cost constraints may also inhibit the adoption of advanced CTI tools, particularly for smaller institutions. A lack of specialized skills necessary to interpret CTI data exacerbates the problem. The effectiveness of CTI is contingent on its accuracy, and false positives and negatives can have detrimental impacts. The rapidly evolving nature of cyber threats necessitates real-time updates, another hurdle for effective CTI implementation. 
Furthermore, the sharing of threat intelligence among entities, often competitors, is hampered by mistrust and regulatory complications. This research aims to provide a nuanced understanding of the applicability and limitations of CTI within the financial sector, urging institutions to approach its adoption with a thorough understanding of the associated challenges.
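The alert-prioritization use the abstract describes, correlating external threat-intelligence indicators with internal logs to cut alert fatigue, can be sketched as follows. This is a minimal illustration, not any institution's actual system; all feed entries, log records, and field names are hypothetical, and the IP addresses come from documentation-reserved ranges.

```python
# Hypothetical sketch: correlate external threat-intel indicators (IOCs)
# with internal connection logs so analysts see the riskiest hits first.
# All data below is illustrative, not a real feed or real traffic.

def prioritize_alerts(logs, ioc_feed):
    """Return log entries whose destination IP appears in the threat feed,
    sorted by the feed's reported severity (highest first)."""
    iocs = {entry["ip"]: entry["severity"] for entry in ioc_feed}
    matched = [
        {**log, "severity": iocs[log["dst_ip"]]}
        for log in logs
        if log["dst_ip"] in iocs
    ]
    return sorted(matched, key=lambda e: e["severity"], reverse=True)

ioc_feed = [
    {"ip": "203.0.113.7", "severity": 9},   # e.g. reported C2 server
    {"ip": "198.51.100.4", "severity": 5},  # e.g. suspicious scanner
]
logs = [
    {"dst_ip": "192.0.2.1", "user": "alice"},      # no IOC match: dropped
    {"dst_ip": "203.0.113.7", "user": "bob"},      # matches high-severity IOC
    {"dst_ip": "198.51.100.4", "user": "carol"},   # matches lower-severity IOC
]

alerts = prioritize_alerts(logs, ioc_feed)
for a in alerts:
    print(a["user"], a["dst_ip"], a["severity"])
```

Only matched, severity-ranked events reach the analyst queue, which is the mechanism behind the "reducing alert fatigue" claim.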
A Review of Digital Twins and their Application in Cybersecurity based on Artificial Intelligence
The potential of digital twin technology is yet to be fully realized because of its diversity and the breadth of its untapped applications. Digital twins enable systems' analysis, design, optimization, and evolution to be performed digitally, or in conjunction with a cyber-physical approach, to improve speed, accuracy, and efficiency over traditional engineering methods. Industry 4.0 and the factories of the future continue to benefit from the technology, which provides enhanced efficiency within existing systems. Because the transition to cyber digitization has outpaced the associated information and security standards, cybercriminals have been able to take advantage of the situation: gaining access to the digital twin of a product or service is tantamount to compromising everything the twin represents. Digital twins and artificial intelligence tools interact strongly, and this interaction can be leveraged to improve the cybersecurity of digital twin platforms. This study aims to investigate the role of artificial intelligence in providing cybersecurity for digital twin versions of various industries, as well as the risks associated with these versions. In addition, this research serves as a roadmap for researchers and others interested in cybersecurity and digital security. Comment: 60 pages, 8 figures, 15 tables
Factors Influencing Cybersecurity Risk Among Minority-Owned Small Businesses
Small businesses are increasingly becoming targets of cyberattacks. Minority-owned small businesses may face additional challenges when it comes to cybersecurity, due to factors such as limited resources and a lack of awareness. It is therefore important to understand the specific factors that influence cybersecurity risk among minority-owned small businesses in order to develop effective strategies to protect them from cyber threats. This study aimed to identify those factors. The variables examined were lack of resources, lack of awareness, use of outdated technology, limited training, and targeted attacks. A multiple regression analysis was conducted with a sample of 252 minority-owned small businesses. The results showed that all five variables were statistically significant predictors of cybersecurity risk. These findings suggest that minority-owned small businesses are vulnerable to cybersecurity risks due to a combination of limited resources, lack of awareness, outdated technology, and inadequate training. It is therefore important for small business owners to prioritize cybersecurity and invest in the necessary resources and training to protect their businesses from cyber threats.
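The kind of multiple regression analysis the study reports can be sketched as below. The data here are entirely synthetic (the study's dataset and coding scheme are not reproduced), and the effect sizes are assumptions chosen only to make the ordinary-least-squares mechanics visible; only the sample size of 252 comes from the abstract.

```python
# Synthetic sketch of a five-predictor multiple regression, mirroring the
# study's design (n = 252; predictors: lack of resources, lack of awareness,
# outdated technology, limited training, targeted attacks).
# The coefficients below are assumed for illustration, not the study's results.
import numpy as np

rng = np.random.default_rng(0)
n = 252                                   # sample size from the abstract
X = rng.uniform(0, 10, size=(n, 5))       # synthetic 0-10 predictor scales
true_beta = np.array([0.5, 0.4, 0.3, 0.2, 0.6])   # assumed effect sizes
y = X @ true_beta + 1.0 + rng.normal(0, 0.5, size=n)  # risk score + noise

# Ordinary least squares with an intercept column.
Xd = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
print("intercept:", round(beta_hat[0], 2))
print("slopes:", np.round(beta_hat[1:], 2))
```

With all five slopes positive and distinguishable from zero at this sample size, each predictor would register as a statistically significant contributor to the risk score, which is the shape of result the abstract describes.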
Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems
Large language models (LLMs) have strong capabilities in solving diverse natural language processing tasks. However, the safety and security issues of LLM systems have become the major obstacle to their widespread application. Many studies have extensively investigated risks in LLM systems and developed corresponding mitigation strategies. Leading enterprises such as OpenAI, Google, Meta, and Anthropic have also invested considerable effort in responsible LLMs. There is therefore a growing need to organize the existing studies and establish comprehensive taxonomies for the community. In this paper, we delve into the four essential modules of an LLM system: an input module for receiving prompts, a language model trained on extensive corpora, a toolchain module for development and deployment, and an output module for exporting LLM-generated content. Based on this, we propose a comprehensive taxonomy that systematically analyzes the potential risks associated with each module of an LLM system and discusses the corresponding mitigation strategies. Furthermore, we review prevalent benchmarks, aiming to facilitate the risk assessment of LLM systems. We hope that this paper can help LLM participants embrace a systematic perspective to build their responsible LLM systems.
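The four-module decomposition the abstract describes can be rendered as a small data structure. The module names and roles follow the abstract; the example risk/mitigation pairings are illustrative placeholders, not the paper's full taxonomy.

```python
# Sketch of the four-module LLM-system view from the abstract. One example
# risk and mitigation is attached per module; these pairings are illustrative
# and do not reproduce the paper's complete taxonomy.
llm_system_taxonomy = {
    "input": {
        "role": "receives prompts",
        "example_risk": "prompt injection / jailbreaking",
        "example_mitigation": "input filtering and prompt sanitization",
    },
    "language_model": {
        "role": "model trained on extensive corpora",
        "example_risk": "training-data privacy leakage",
        "example_mitigation": "data deduplication and differential privacy",
    },
    "toolchain": {
        "role": "development and deployment tooling",
        "example_risk": "vulnerable dependencies in the serving stack",
        "example_mitigation": "supply-chain auditing and sandboxing",
    },
    "output": {
        "role": "exports LLM-generated content",
        "example_risk": "harmful or untruthful generations",
        "example_mitigation": "output detection and content moderation",
    },
}

for module, info in llm_system_taxonomy.items():
    print(f"{module}: {info['example_risk']} -> {info['example_mitigation']}")
```

Organizing risks per module, rather than in one flat list, is what lets a taxonomy assign each mitigation to the component that can actually enforce it.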
Attacks on self-driving cars and their countermeasures : a survey
Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS, or connected vehicles. Both forms use data communications between Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I/I2V), and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving-assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the compromise of self-driving cars is perceived as a serious threat. Analyzing the threats and attacks on self-driving cars and ITSs, and the corresponding countermeasures to reduce them, is therefore needed. Some survey papers compiling potential attacks on VANETs, ITSs, and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered the real attacks that have already occurred against self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by manufacturers and governments. This survey includes recent work on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security of self-driving cars. © 2013 IEEE
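One basic countermeasure class for the V2V/V2I links the abstract mentions is message authentication, so that spoofed or tampered messages are rejected before they influence driving decisions. The sketch below uses a shared-key HMAC purely for illustration; real vehicular deployments use certificate-based schemes (e.g. IEEE 1609.2), and the key and message format here are invented.

```python
# Minimal illustration of authenticating a V2V safety message with an HMAC.
# Assumption: a pre-shared key, which real deployments replace with
# certificate-based signatures (e.g. IEEE 1609.2). Message format is invented.
import hmac
import hashlib

SHARED_KEY = b"demo-key-not-for-production"

def sign(message: bytes) -> bytes:
    """Compute an authentication tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)

msg = b"brake_warning;speed=72;lane=2"
tag = sign(msg)

print(verify(msg, tag))                               # authentic: True
print(verify(b"brake_warning;speed=5;lane=2", tag))   # tampered: False
```

An attacker who alters the speed field without the key cannot produce a valid tag, so the forged message is dropped rather than acted on.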
Data management and Data Pipelines: An empirical investigation in the embedded systems domain
Context: Companies are increasingly collecting data from all possible sources to extract insights that help in data-driven decision-making. Increased data volume, variety, and velocity, together with the impact of poor-quality data on the development of data products, are leading companies to look for an improved data management approach that can accelerate the development of high-quality data products. Further, AI is being applied in a growing number of fields and is thus evolving into a horizontal technology. Consequently, AI components are increasingly being integrated into embedded systems along with electronics and software. We refer to these systems as AI-enhanced embedded systems. Given the strong dependence of AI on data, this expansion also creates a new space for applying data management techniques. Objective: The overall goal of this thesis is to empirically identify the data management challenges encountered during the development and maintenance of AI-enhanced embedded systems, propose an improved data management approach, and empirically validate the proposed approach. Method: To achieve this goal, we conducted the research in close collaboration with Software Center companies using a combination of empirical research methods: case studies, literature reviews, and action research. Results and conclusions: This research provides five main results. First, it identifies key data management challenges specific to deep learning models developed at embedded-system companies. Second, it examines practices such as DataOps and data pipelines that help to address data management challenges. We observed that DataOps is the data management practice that best improves data quality and reduces the time to develop data products. The data pipeline is the critical component of DataOps that manages the data life-cycle activities. The study also identifies the potential faults at each step of the data pipeline and the corresponding mitigation strategies. Finally, the data pipeline model was realized in a small prototype pipeline, and the percentage of saved data dumps was calculated through the implementation. Future work: We plan to realize the conceptual data pipeline model so that companies can build customized, robust data pipelines. We also plan to analyze the impact and value of data pipelines in cross-domain AI systems and data applications, and to develop an AI-based fault detection and mitigation system suitable for data pipelines.
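The per-step faults and mitigations the thesis catalogues can be sketched as a single pipeline stage with fault detection and a quarantine-style mitigation, so a faulty record does not force the whole batch to be dumped. The schema, value ranges, and records below are hypothetical, not drawn from the thesis's industrial cases.

```python
# Hypothetical sketch of one data-pipeline step: detect simple per-record
# faults and mitigate by quarantining bad records (with a reason) instead of
# discarding the entire batch. Schema and sensor range are assumptions.

REQUIRED_FIELDS = {"sensor_id", "timestamp", "value"}

def validate(record):
    """Detect two simple fault types: missing fields and out-of-range values."""
    if not REQUIRED_FIELDS <= record.keys():
        return "missing_field"
    if not (-40.0 <= record["value"] <= 125.0):  # assumed valid sensor range
        return "out_of_range"
    return None

def run_step(batch):
    """Mitigation: pass clean records downstream, quarantine faulty ones."""
    clean, quarantined = [], []
    for rec in batch:
        fault = validate(rec)
        if fault:
            quarantined.append({"record": rec, "fault": fault})
        else:
            clean.append(rec)
    return clean, quarantined

batch = [
    {"sensor_id": "s1", "timestamp": 1, "value": 21.5},
    {"sensor_id": "s2", "timestamp": 2, "value": 999.0},  # faulty reading
    {"sensor_id": "s3", "value": 20.0},                   # missing timestamp
]
clean, quarantined = run_step(batch)
print(len(clean), "clean,", len(quarantined), "quarantined")
```

Quarantining with a recorded fault reason is what makes "percentage of saved data" measurable: the two bad records are retained for repair rather than counted as a dumped batch of three.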
Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses
The ongoing deployment of fifth generation (5G) wireless networks constantly reveals limitations of its original concept as a key driver of Internet of Everything (IoE) applications. These 5G challenges are behind worldwide efforts to enable future networks, such as sixth generation (6G) networks, to efficiently support sophisticated applications ranging from autonomous driving to the Metaverse. Edge learning is a new and powerful approach to training models across distributed clients while protecting the privacy of their data. This approach is expected to be embedded within future network infrastructures, including 6G, to solve challenging problems such as resource management and behavior prediction. This survey article provides a holistic review of the most recent research on edge learning vulnerabilities and defenses for 6G-enabled IoT. We summarize the existing surveys on machine learning for 6G IoT security and on machine-learning-associated threats in three learning modes: centralized, federated, and distributed. Then, we provide an overview of the emerging technologies enabling 6G IoT intelligence. Moreover, we provide a holistic survey of existing research on attacks against machine learning and classify threat models into eight categories: backdoor attacks, adversarial examples, combined attacks, poisoning attacks, Sybil attacks, Byzantine attacks, inference attacks, and dropping attacks. In addition, we provide a comprehensive and detailed taxonomy and a side-by-side comparison of state-of-the-art defense methods against edge learning vulnerabilities. Finally, as new attacks and defense technologies emerge, we discuss new research directions and the overall outlook for 6G-enabled IoT.
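One defense class such surveys compare against Byzantine and poisoning attacks is robust aggregation at the server. The sketch below uses coordinate-wise median aggregation, a standard example of the idea rather than any specific scheme from this survey; the client updates are synthetic two-dimensional gradients.

```python
# Sketch of robust aggregation in federated/edge learning: the coordinate-wise
# median tolerates a minority of Byzantine (poisoned) client updates, where a
# plain mean would be dragged far off. Updates below are synthetic.
import statistics

def median_aggregate(client_updates):
    """Aggregate per-coordinate with the median instead of the mean."""
    return [statistics.median(coords) for coords in zip(*client_updates)]

honest = [[0.9, -0.2], [1.0, -0.1], [1.1, -0.3], [1.0, -0.2]]
poisoned = [[100.0, 100.0]]  # one Byzantine client sends an extreme update

agg = median_aggregate(honest + poisoned)
print(agg)  # → [1.0, -0.2], close to the honest updates
```

Averaging the same five updates would yield roughly [20.8, 19.8], so the single malicious client would dominate; the median keeps the aggregate near the honest consensus as long as attackers remain a minority.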