27 research outputs found
Modelling the Influential Factors Embedded in the Proportionality Assessment in Military Operations
The current decade was expected to be a peaceful one. Contemporary conflicts, and in particular ongoing wars, prove the opposite: they show increasing contextual complexity in how belligerents define their goals and execution strategies, building means and methods to gain advantage over their adversaries through the engagement of well-established targets. At the core of the engagement decision lies the principle of proportionality, which directly relates the expected unintended effects on the civilian side to the anticipated intended effects on the military side. While the clusters of effects involved in the proportionality assessment are clear, the process itself is subjective, governed by different dimensions of uncertainty, and rests with military Commanders. It is thus a complex socio-technical process in which different clusters of influential factors (e.g., military, technical, socio-ethical) shape the decisions made. Accordingly, the objective of this research is to capture and cluster these factors, and further to model their influence on the proportionality decision-making process. The resulting decision support system provides military targeting awareness to the agents involved in building, executing, and assessing military operations. To accomplish this aim, a Design Science Research methodological approach is taken to capture and model the influential factors as a socio-technical artefact in the form of a Bayesian Belief Network (BBN) model. The proposed model is evaluated through demonstration on three cases drawn from real military operations incidents and from scenarios in the scientific literature of this research field. The demonstration illustrates and interprets how the identified factors influence proportionality decisions when assessing a target engagement as proportional or disproportional, and considers corresponding measures for strengthening proportionality and reducing disproportionality in military operations.
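As a rough illustration of the kind of query such a BBN model supports, the sketch below assumes three invented binary influence factors and an invented conditional probability table; none of these names or numbers come from the paper, and inference is done by plain enumeration rather than a dedicated BBN library.

```python
# Minimal, hypothetical sketch of a BBN-style proportionality query.
# Factor names, priors, and the CPT are invented for illustration;
# they are not the paper's actual model.
from itertools import product

# Priors over three illustrative binary influence factors (True/False).
priors = {
    "military_necessity": 0.7,
    "technical_precision": 0.8,
    "civilian_presence": 0.4,
}

# Conditional probability table: P(proportional=True | factors),
# keyed by (military_necessity, technical_precision, civilian_presence).
cpt = {
    (True, True, False): 0.95,
    (True, True, True): 0.60,
    (True, False, False): 0.70,
    (True, False, True): 0.25,
    (False, True, False): 0.40,
    (False, True, True): 0.10,
    (False, False, False): 0.20,
    (False, False, True): 0.05,
}

def p_proportional(evidence):
    """P(proportional=True | evidence) by enumeration over the unobserved factors."""
    names = list(priors)
    num = den = 0.0
    for values in product([True, False], repeat=len(names)):
        state = dict(zip(names, values))
        if any(state[k] != v for k, v in evidence.items()):
            continue  # skip configurations inconsistent with the evidence
        w = 1.0
        for k, v in state.items():
            w *= priors[k] if v else 1.0 - priors[k]
        num += w * cpt[tuple(state[n] for n in names)]
        den += w
    return num / den

# Example: civilians observed near the target, precision strike available.
print(p_proportional({"civilian_presence": True, "technical_precision": True}))
```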
Influence Diagrams in Cyber Security: Conceptualization and Potential Applications
In recent years, cyber-attacks on organizations have been increasing, driven especially by the use of emerging technologies and by transformations in how we work. Informed decision-making in cyber security is critical to prevent, detect, respond to, and recover from cyber-attacks effectively and efficiently. Decision Support Systems (DSSs) play a crucial role here, particularly in supporting security analysts, managers, and operators in making informed decisions. Artificial Intelligence (AI)-based techniques such as Bayesian Networks and Decision Trees are used as underlying approaches in such DSSs. Influence Diagrams (IDs), judging by their existing applications in other domains such as medicine, also have the capability to support informed decision-making. However, the full potential of IDs is not yet utilised in cyber security, especially given their explainable nature for different stakeholders and their proven applications elsewhere. Therefore, this research tackles the following research question: "What are potential applications of Influence Diagrams (IDs) in cyber security?". We identified applications of IDs in different domains and then translated them into potential applications for cyber security issues. In the future, this will help both researchers and practitioners develop and implement IDs for cyber security-related problems, which in turn will enhance decision-making, especially given their explainable nature for different stakeholders.
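To make the mechanics concrete, the following is a minimal sketch of how an influence diagram is evaluated: a decision node, a chance node conditioned on it, and a utility node, with the decision chosen by maximal expected utility. The patch-now/defer scenario, probabilities, and utilities are hypothetical and not taken from the paper.

```python
# Minimal, hypothetical influence-diagram evaluation for a cyber security
# decision. All numbers are invented for illustration only.

# Chance node: probability of a successful attack, conditioned on the decision.
p_attack = {"patch_now": 0.05, "defer": 0.30}

# Utility node: payoff depends on the decision and whether an attack occurs.
utility = {
    ("patch_now", True): -100,   # attack despite patching
    ("patch_now", False): -10,   # patching cost, no attack
    ("defer", True): -120,       # unpatched system compromised
    ("defer", False): 0,         # no cost, no attack
}

def expected_utility(decision):
    """Expected utility of a decision, marginalizing over the chance node."""
    p = p_attack[decision]
    return p * utility[(decision, True)] + (1 - p) * utility[(decision, False)]

# The diagram is "solved" by picking the decision with maximal expected utility.
best = max(p_attack, key=expected_utility)
print({d: expected_utility(d) for d in p_attack}, "->", best)
```

The explainability argument made in the abstract shows up here in miniature: the decision recommendation can be traced back to explicit probabilities and utilities rather than an opaque score.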
Modelling Responsible Digital Security Behaviour for Countering Social Media Manipulation
The digital environment, and in particular social media, not only surrounds human identity and its projection of societal functions, e.g., institutional and financial aspects; it also captures individual and collective thoughts regarding past, ongoing, and future concepts, trends, and incidents, whether situated in the physical world, in the digital environment, or in both. These can impact individual and collective consciousness, behaviour, and attitudes towards different dimensions of reality. An initial attempt to define and model responsible digital security behaviour has been made, and ongoing discourses and AI-based solutions for tackling and containing social manipulation mechanisms exist in this domain. Notably, however, dedicated attention to understanding and modelling responsible digital security behaviour in social media for tackling and/or countering social media manipulation, e.g., disinformation and misinformation, is still lacking. To this end, this research aims (i) to capture the factors influencing user behaviour towards tackling and/or countering social media manipulation, (ii) to build a Machine Learning model that assesses a user's responsibility in relation to tackling and/or countering social media manipulation mechanisms, and (iii) to propose a set of socio-technical recommendations for building resilience to such mechanisms. To accomplish these objectives, a Design Science Research methodological approach is taken by designing, developing, and evaluating the proposed model through exemplification. Finally, this research aims to enhance the digital security awareness and resilience to social media manipulation of users and policy decision-makers, so that the digital environment can be managed and extended in a responsible and safe way.
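As a simplified picture of objective (ii), the sketch below trains an off-the-shelf classifier on synthetic stand-in data. The feature names, the data, and the model choice (logistic regression) are assumptions made for illustration; they are not the paper's actual model or dataset.

```python
# Minimal, hypothetical sketch of a classifier assessing user responsibility.
# Features and labels are synthetic stand-ins, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features per user: [verifies_sources, reports_content,
# reshares_unchecked, media_literacy_score]; label 1 = responsible behaviour.
X = np.array([
    [1, 1, 0, 0.9],
    [1, 0, 0, 0.7],
    [0, 0, 1, 0.2],
    [0, 1, 1, 0.4],
    [1, 1, 0, 0.8],
    [0, 0, 1, 0.1],
])
y = np.array([1, 1, 0, 0, 1, 0])

clf = LogisticRegression().fit(X, y)

# Probability that a new user's behaviour counts as responsible.
print(clf.predict_proba([[1, 0, 1, 0.5]])[0, 1])
```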
On Explainable AI Solutions for Targeting in Cyber Military Operations
Nowadays, it is hard to recall a domain, system, or problem that does not use, embed, or could not be tackled through AI. From the early stages of its development, AI techniques and technologies were successfully implemented by military forces for different purposes in distinct military operations. Since cyberspace is the most recently recognized operational battlefield, it also offers a direct virtual setting for implementing AI solutions for military operations conducted inside or through it. However, the planning and conducting of AI-based cyber military operations are still in early stages of development. Both practitioner and academic dedication is therefore required, since the impact of their use could have significant consequences; this requires that the output of such intelligent solutions is explainable both to the engineers developing them and to their users, e.g., military decision makers. Hence, this article starts by discussing the meaning of explainable AI in the context of targeting in military cyber operations, continues by analyzing the challenges of embedding AI solutions (e.g., intelligent cyber weapons) in different targeting phases, and structures these challenges in corresponding taxonomies packaged in a design framework. It does so by traversing the targeting process with a focus on target development, capability analysis, and target engagement. Moreover, this research argues that, especially in operations carried out in silence and at incredible speed, it is of major importance that the military forces involved are not only aware of the decisions taken by the embedded intelligent systems, but also able to interpret the results obtained from the AI solutions in a proper, effective, and efficient way. From there, this research draws possible technological and human-oriented methods that facilitate the successful implementation of XAI solutions for targeting in military cyber operations.
Trustworthy Human-Autonomy Teaming for Proportionality Assessment in Military Operations
Over the past decades, rapid technological advancements have resulted in the integration of autonomous systems and AI across various societal domains. An emerging paradigm in this realm is human-autonomy teaming, which merges human effort and intelligence with the efficiency and performance of autonomous systems, collaborating towards common goals and leveraging the strengths of both. Building such systems should be done in a safe, responsible, and reliable manner, thus in a trustworthy way. While efforts to develop such systems exist in the military domain, they mainly tackle the technical dimension involved in tasks like reconnaissance and target engagement, and less so other dimensions like the ethical and legal ones, with a focus on the effects possibly produced. The intended effects contributing to achieving military objectives represent military advantage, and the unintended effects on civilians and civilian objects represent collateral damage. Bringing and assessing these types of effects in a single instance is done through the proportionality assessment, which is a pillar of conducting military operations. Nevertheless, no previous efforts have been dedicated to building human-autonomy teaming systems for proportionality assessment in military operations in a trustworthy way. Hence, the aim of this research is to define this concept and propose a corresponding design framework, with the intention of contributing to building safe, responsible, and reliable systems in the military domain. To achieve this goal, the Design Science Research methodology is followed in a Value Sensitive Design approach, based on extensive study of relevant literature and field experience.
On the Road to Designing Responsible AI Systems in Military Cyber Operations
Military cyber operations increasingly integrate or rely, to some degree, on AI-based systems in one or more phases in which stakeholders are involved. Although the planning and execution of such operations are complex and well-thought-out processes that take place in silence and with high velocity, their implications and consequences can be experienced not only by the targeted entities, but also by collateral friendly, non-friendly, or neutral ones. This calls for a broader military-technical and socio-ethical approach when building, conducting, and assessing military cyber operations, to make sure that the aspects and factors considered and the choices and decisions made in these phases are fair, transparent, and accountable to the stakeholders involved in these processes, to those impacted by their actions, and to society at large. This resonates with issues currently tackled in the area of Responsible AI, an emerging and critical research area in the AI field that is scarcely present in the ongoing discourses, research, and applications of the military cyber domain. On this matter, this research aims to define and analyse Responsible AI in the context of military cyber operations, with the intention of bringing important aspects to the attention of both the academic and practitioner communities involved in building and/or conducting such operations. It does so by taking a transdisciplinary approach with concrete examples captured in different phases of the operations' life cycle. Accordingly, a definition is advanced, the components and entities involved in building responsible intelligent systems are analysed, and challenges, solutions, and future research lines are discussed. This should allow the agents involved to understand what should be done and what they are allowed to do, and further to propose and build corresponding strategies, programs, and solutions, e.g., education, modelling, and simulation, for properly tackling, building, and applying responsible intelligent systems in the military cyber domain.
Design Framework for VR Games in the Police Domain
The escalation in the number of conflicts poses various implications and consequences for society, demanding a comprehensive understanding from a law enforcement perspective. The surge in conflicts introduces significant challenges to public safety, necessitating adaptive strategies for both conflict and crime prevention and resolution. Accordingly, effective policing must adapt to the evolving dynamics, employing a nuanced approach that addresses the root causes of conflicts and enhances community resilience, so as to avoid or mitigate conflicts and their broader societal impact. A pivotal role in this direction is played by ongoing technological advancements, which enable capabilities such as enhanced situational awareness and the analysis of patterns to anticipate potential hotspots and activities, supporting both proactive and reactive measures. In particular, advancements in Virtual Reality (VR) present a valuable avenue for addressing policing challenges, especially in relation to conflict resolution, mitigation, and prevention. VR simulations built for training purposes offer a controlled yet realistic environment in which officers can practice, e.g., de-escalation strategies and decision-making under stress. These gaming simulations provide a platform for exposing officers to diverse scenarios, fostering adaptability, encouraging communication and collaboration, and enhancing their ability to handle conflicts with precision and self-control. At the same time, VR systems can be developed for societal awareness and educational purposes, fostering better understanding and communication between law enforcement and the public. While a broad range of governmental, practitioner, and academic efforts aim at building and incorporating VR solutions for various purposes in the police domain, specific areas that involve other societal forces and bystanders need more attention. The aim of this article is therefore to capture existing design and evaluation principles as well as lessons learned from existing VR solutions in the police domain. This is done by proposing a design framework for responsible VR gaming solutions in the police domain, following the Design Science Research methodology in a transdisciplinary approach.
Tackling uncertainty through probabilistic modelling of proportionality in military operations
Just as every neuron in a biological neural network can be seen as a reinforcement learning agent, i.e., a component of a larger, advanced structure that is de facto a model of its own, the two main components forming the principle of proportionality in military operations can be seen as, and indeed are, two distinct entities and models. These are collateral damage, depicting the unintentional effects affecting civilians and civilian objects, and military advantage, symbolizing the intentional effects contributing to achieving the military objectives defined for the operation conducted. Both are complex processes that rely on available information, project forward in time to the moment of target engagement through estimation, and depend strongly on common-sense reasoning and decision making. Consequently, these two components, and the resulting proportionality decision, are processes surrounded by various sources and types of uncertainty. However, existing academic and practitioner efforts to understand the meaning, dimensions, and implications of the proportionality principle take military-legal and ethical lenses, and less often technical ones. Accordingly, this research calls for a move from the existing vision of interpreting proportionality in a possibilistic way to a probabilistic one. Hence, this research aims to build two probabilistic Machine Learning models based on Bayesian Belief Networks for assessing proportionality in military operations. The first model embeds a binary classification approach, assessing whether an engagement is proportional or disproportional; the second extends this perspective, based on previous research, to perform multi-class classification assessing degrees of proportionality. To accomplish this objective, this research follows the Design Science Research methodology and conducts an extensive literature review for building and demonstrating the proposed models. Finally, this research intends to contribute to designing and developing explainable and responsible intelligent solutions that support the human military targeting decision-making processes involved in building and conducting military operations.
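The step from the binary to the multi-class model can be pictured as replacing a two-state target node with a node over degrees of proportionality. The sketch below is a hypothetical illustration: the three degree classes, the priors, and the likelihoods are invented, and the binary decision is recovered by collapsing or thresholding the classes.

```python
# Minimal, hypothetical sketch of a multi-class degree-of-proportionality
# posterior. Class names, priors, and likelihoods are invented.

# Evidence: discretized estimates of collateral damage (CD) and military
# advantage (MA), each in {"low", "medium", "high"}.
classes = ["proportional", "borderline", "disproportional"]
prior = {"proportional": 0.4, "borderline": 0.3, "disproportional": 0.3}

# P(CD, MA | class): illustrative likelihoods for a few observed combinations.
likelihood = {
    ("high", "low"): {"proportional": 0.05, "borderline": 0.25, "disproportional": 0.70},
    ("low", "high"): {"proportional": 0.80, "borderline": 0.15, "disproportional": 0.05},
    ("medium", "medium"): {"proportional": 0.30, "borderline": 0.50, "disproportional": 0.20},
}

def posterior(cd, ma):
    """Bayes rule over degree-of-proportionality classes given (CD, MA) evidence."""
    unnorm = {c: prior[c] * likelihood[(cd, ma)][c] for c in classes}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# High collateral damage, low military advantage: the binary verdict falls
# out by collapsing "borderline" into one of the two outer classes.
post = posterior("high", "low")
print(post, "->", max(post, key=post.get))
```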
Responsible Digital Security Behaviour: Definition and Assessment Model
The digital landscape is transforming remarkably and growing exponentially, tackling important societal challenges and needs. In the modern age, futuristic digital concepts are ideated and developed. These digital developments create a diverse palette of opportunities for organizations and their members, such as decision makers and financial personnel. Simultaneously, they also introduce different factors that influence users' behaviour related to digital security. However, no method exists to determine whether users' behaviour can be considered responsible or not, and, in case this behaviour is irresponsible, how it can be managed effectively to avoid negative consequences. To the best of our knowledge, no attempt has been made to investigate this thus far. This research therefore aims to: (i) introduce the notion of "responsible digital security behaviour", (ii) identify different factors influencing this behaviour, (iii) design a Bayesian Network model that classifies responsible/irresponsible digital security behaviour considering these factors, and (iv) draw recommendations for improving users' responsible digital security behaviour. To address these aims, an extensive literature review is conducted through technical, ethical, and social lenses in a Design Science Research approach for defining, building, and exemplifying the model. The results contribute to increasing digital security awareness and to empowering, in a responsible way, users' behaviours and the decision processes involved in developing and adopting new standards, methodologies, and tools in the modern digital era.
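As an illustration of aim (iii), the sketch below gives a naive-Bayes reading of a Bayesian Network classifier over behaviour factors: the class node is a parent of the observed factors, and the posterior over classes is computed from the observations. The factor names, priors, and conditional probabilities are invented for illustration and are not the paper's model.

```python
# Minimal, hypothetical naive-Bayes sketch of a responsible/irresponsible
# digital security behaviour classifier. All numbers are invented.

prior = {"responsible": 0.6, "irresponsible": 0.4}

# P(factor=True | class) for three illustrative binary behaviour factors.
cond = {
    "uses_mfa": {"responsible": 0.9, "irresponsible": 0.3},
    "reuses_passwords": {"responsible": 0.2, "irresponsible": 0.8},
    "security_training": {"responsible": 0.7, "irresponsible": 0.4},
}

def classify(observations):
    """Posterior over {responsible, irresponsible} given observed factors."""
    score = dict(prior)
    for factor, value in observations.items():
        for cls in score:
            p = cond[factor][cls]
            score[cls] *= p if value else 1.0 - p
    z = sum(score.values())
    return {cls: s / z for cls, s in score.items()}

# Example: a user without MFA who reuses passwords.
print(classify({"uses_mfa": False, "reuses_passwords": True}))
```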