26,736 research outputs found
Machine-assisted Cyber Threat Analysis using Conceptual Knowledge Discovery – Position Paper –
Over recent years, computer networks have evolved into highly dynamic and interconnected environments, involving multiple heterogeneous devices and providing a myriad of services on top of them. This complex landscape has made it extremely difficult for security administrators to remain accurate and effective in protecting their systems against cyber threats. In this paper, we describe our vision and scientific posture on how artificial intelligence techniques and a smart use of security knowledge may assist system administrators in better defending their networks. To that end, we put forward a research roadmap involving three complementary axes, namely, (I) the use of mechanisms based on Formal Concept Analysis (FCA) for managing configuration vulnerabilities, (II) the exploitation of knowledge representation techniques for automated security reasoning, and (III) the design of a cyber threat intelligence mechanism as a Conceptual Knowledge Discovery in Databases (CKDD) process. Then, we describe a machine-assisted process for cyber threat analysis which provides a holistic perspective on how these three research axes are integrated.
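As a minimal illustration of the FCA-based axis (a sketch, not the authors' implementation), the following code enumerates the formal concepts of a tiny, hypothetical configuration context, grouping hosts by the vulnerable settings they share; all host names and attributes are invented for the example:

```python
from itertools import combinations

# Hypothetical formal context: objects are hosts, attributes are
# configuration properties observed on each host.
context = {
    "host1": {"ssh_open", "outdated_openssl"},
    "host2": {"ssh_open", "outdated_openssl", "default_creds"},
    "host3": {"ssh_open"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects possessing all the given attributes."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by all the given objects."""
    if not objs:
        return set(attributes)
    return set.intersection(*(context[o] for o in objs))

def concepts():
    """Enumerate all formal concepts (extent, intent) by brute force:
    close every attribute subset and deduplicate the resulting pairs."""
    seen = []
    for r in range(len(attributes) + 1):
        for attrs in combinations(sorted(attributes), r):
            e = extent(set(attrs))
            pair = (frozenset(e), frozenset(intent(e)))
            if pair not in seen:
                seen.append(pair)
    return seen

for e, i in concepts():
    print(sorted(e), "<->", sorted(i))
```

Each printed concept is a maximal group of hosts sharing a maximal set of configuration attributes, which is the kind of grouping an FCA-based vulnerability-management mechanism could reason over.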
AI Security and Safety: The PRALab Research Experience
We present here the main research topics and activities on the security, safety, and robustness of machine learning models developed at the Pattern Recognition and Applications (PRA) Laboratory of the University of Cagliari. We have made pioneering contributions to this research area, being the first to demonstrate gradient-based attacks for crafting adversarial examples and training-data poisoning attacks. The findings of our research have significantly contributed not only to identifying and characterizing the vulnerabilities of such models in the context of real-world applications but also to the development of more trustworthy artificial intelligence and machine learning models. We are part of the ELSA network of excellence for the development of safe and secure AI-based technologies, funded by the European Union.
Antecedent factors of violation of information security rules
Purpose – This paper aims to investigate the influence of moral disengagement, perceived penalty, negative experiences and turnover intention on the intention to violate the established security rules.
Design/methodology/approach – The method used involves two stages of analysis, using techniques of structural equation modeling and artificial intelligence with neural networks, based on information collected from 318 workers of organizational information systems.
Findings – The model provides a reasonable prediction regarding the intention to violate information security policies (ISP). The results revealed that the relationships of moral disengagement and perceived penalty significantly influence such an intention.
Research limitations/implications – This research presents a multi-analytical approach that strengthens the robustness of the results through the complementarity of the analysis techniques. In addition, it offers scientific evidence of the factors that reinforce the cognitive processes involved in workers' decision-making in security breaches.
Practical implications – The practical recommendation is to improve organizational communication to mitigate information security vulnerabilities in several ways, namely, training actions that simulate daily work routines; exposing the consequences of policy violations; disseminating internal newsletters with examples of inappropriate behavior.
Social implications – Results indicate that information security does not depend on the employees' commitment to the organization; system vulnerabilities can be exploited even by employees committed to their companies.
Originality/value – The study expands the knowledge about the individual factors that make information security in companies vulnerable, and is one of the few studies in the literature that offers an in-depth perspective on which individual antecedent factors affect the violation of ISP.
 
Mining Threat Intelligence about Open-Source Projects and Libraries from Code Repository Issues and Bug Reports
Open-source projects and libraries are widely used in software development, while also bearing multiple security vulnerabilities. This reliance on third-party ecosystems creates a new kind of attack surface for a product in development: an intelligent attacker can compromise a product by exploiting one of the vulnerabilities present in its linked projects and libraries.
In this paper, we mine threat intelligence about open source projects and
libraries from bugs and issues reported on public code repositories. We also
track library and project dependencies for installed software on a client
machine. We represent and store this threat intelligence, along with the software dependencies, in a security knowledge graph. Security analysts and developers can then query the knowledge graph and receive alerts whenever threat intelligence is found about the linked libraries and projects used in their products.
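The query-and-alert workflow described above can be sketched with a toy security knowledge graph stored as labeled edges; the product name, package names, issue IDs, and relation labels below are hypothetical and do not reflect the paper's actual schema:

```python
# Hypothetical security knowledge graph: (subject, relation, object) triples
# linking a product to its dependencies and dependencies to threat reports.
edges = [
    ("my-product", "depends_on", "libfoo"),
    ("my-product", "depends_on", "libbar"),
    ("libfoo", "has_threat_report", "ISSUE-101: buffer overflow"),
]

def neighbors(node, relation):
    """Follow edges of the given relation outward from a node."""
    return [t for s, r, t in edges if s == node and r == relation]

def alerts_for(product):
    """Walk product -> dependencies -> threat reports and collect alerts."""
    found = []
    for dep in neighbors(product, "depends_on"):
        for report in neighbors(dep, "has_threat_report"):
            found.append((dep, report))
    return found

print(alerts_for("my-product"))
```

Here only `libfoo` carries a threat report, so a query for `my-product` surfaces exactly that dependency; a production system would use a real graph store and richer relations, but the traversal pattern is the same.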
The AGI Containment Problem
There is considerable uncertainty about what properties, capabilities and
motivations future AGIs will have. In some plausible scenarios, AGIs may pose
security risks arising from accidents and defects. In order to mitigate these
risks, prudent early AGI research teams will perform significant testing on
their creations before use. Unfortunately, if an AGI has human-level or greater
intelligence, testing itself may not be safe; some natural AGI goal systems
create emergent incentives for AGIs to tamper with their test environments,
make copies of themselves on the internet, or convince developers and operators
to do dangerous things. In this paper, we survey the AGI containment problem: the question of how to build a container in which tests can be conducted safely and reliably, even on AGIs with unknown motivations and capabilities that could be dangerous. We identify requirements for AGI containers, available mechanisms, and weaknesses that need to be addressed.
Autonomic computing architecture for SCADA cyber security
Cognitive computing relates to intelligent computing platforms that are based on the disciplines of artificial intelligence, machine learning, and other innovative technologies. These technologies can be used to design systems that mimic the human brain in learning about their environment and can autonomously predict an impending anomalous situation. IBM first used the term 'Autonomic Computing' in 2001 to combat the looming complexity crisis (Ganek and Corbi, 2003). The concept has been inspired by the human biological autonomic system. An autonomic system is self-healing, self-regulating, self-optimising and self-protecting (Ganek and Corbi, 2003). Therefore, the system should be able to protect itself against both malicious attacks and unintended mistakes by the operator.
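A self-protecting autonomic system of this kind is commonly structured as a monitor-analyze-plan-execute cycle. The following sketch runs that cycle over a SCADA-like sensor feed; the thresholds, readings, and action names are hypothetical and purely illustrative:

```python
# Hypothetical autonomic (MAPE-style) control loop for a SCADA sensor feed.
def monitor(stream):
    """Monitor: pull the next raw reading from the sensor stream."""
    return next(stream)

def analyze(reading, lo=10.0, hi=90.0):
    """Analyze: flag readings outside an illustrative safe band."""
    return reading < lo or reading > hi

def plan(anomalous):
    """Plan: choose a self-protecting response to the analysis result."""
    return "isolate_and_alert" if anomalous else "continue"

def execute(action, log):
    """Execute: here we only record the action; a real system would act."""
    log.append(action)

def control_loop(readings):
    """Run one monitor-analyze-plan-execute pass per reading."""
    log = []
    stream = iter(readings)
    for _ in range(len(readings)):
        execute(plan(analyze(monitor(stream))), log)
    return log

print(control_loop([50.0, 95.0, 42.0]))
# -> ['continue', 'isolate_and_alert', 'continue']
```

The loop responds to the out-of-band reading (95.0) without operator intervention, which is the self-protecting behavior the passage describes; real deployments would add the self-healing, self-regulating, and self-optimising responses alongside it.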