268 research outputs found
Artificial Intelligence and International Conflict in Cyberspace
This edited volume explores how artificial intelligence (AI) is transforming international conflict in cyberspace. Over the past three decades, cyberspace has developed into a crucial frontier of, and issue in, international conflict. However, scholarly work on the relationship between AI and conflict in cyberspace has been produced along somewhat rigid disciplinary boundaries and an even more rigid sociotechnical divide, wherein technical and social scholarship are seldom brought into conversation. This is the first volume to address these themes through a comprehensive and cross-disciplinary approach. With the intent of exploring the question ‘what is at stake with the use of automation in international conflict in cyberspace through AI?’, the chapters in the volume focus on three broad themes: (1) technical and operational, (2) strategic and geopolitical, and (3) normative and legal. These also constitute the three parts in which the chapters of this volume are organised, although these thematic sections should not be considered an analytical or disciplinary demarcation.
A Formal Engineering Approach for Interweaving Functional and Security Requirements of RESTful Web APIs
RESTful Web API adoption has become ubiquitous, with REST APIs proliferating in almost all domains as modern web applications embrace the micro-service architecture. This vibrant and expanding adoption of APIs has caused an increasing amount of data to be funneled through systems that require proper access management to ensure that web assets are secured. A RESTful API provides data using the HTTP protocol over the network, interacting with databases and other services, and must preserve its security properties. Currently, practitioners face two major challenges in developing high-quality, secure RESTful APIs. First, REST is not a protocol. Instead, it is a set of guidelines that define how web resources can be designed and accessed over HTTP endpoints: the guidelines stipulate how related resources should be structured using hierarchical URIs, and how specific well-defined actions on those resources should be represented using different HTTP verbs. Whereas security has always been critical in the design of RESTful APIs, there are no clear formal models utilizing a secure-by-design approach that interweaves both the functional and security requirements. The other challenge is how to effectively utilize a model-driven approach for constructing precise requirements and design specifications, so that the security of a RESTful API is treated as a concern that cuts across functionality rather than as individual isolated operations. This thesis proposes a novel technique that encourages a model-driven approach to specifying and verifying an API's functional and security requirements with the practical formal method SOFL (Structured Object-Oriented Formal Language). Our proposed approach provides a generic six-step model-driven method for designing security-aware APIs by utilizing the concepts of domain models, domain primitives, the Ecore metamodel and SOFL. The first step involves generating a flat file of the API's resource listings. In this step, we extract resource definitions from input RESTful API documentation written in RAML using an existing RAML parser. The output of this step is a flat file representing the API resources as defined in the RAML input file. This step is fully automated. The second step involves the automatic construction of an API resource graph that works as a blueprint for creating the target API domain model. The input for this step is the flat file generated in step 1 and the output is a directed graph (digraph) of API resources. We leverage an algorithm we created that takes a list of lists of API resource nodes and the defined API root resource node as input, and constructs a digraph highlighting all the API resources as output. In step 3, we use the generated digraph as a guide to manually define the API's initial domain model as the target output, with an aggregate root corresponding to the root node of the input digraph and the rest of the nodes corresponding to domain model entities. In essence, the digraph generated in step 2 is a barebones representation of the target domain model; what is missing in the domain model at this stage is the distinction between containment and reference relationships between entities. The resulting domain model describes the entire ecosystem of the modeled API in the form of the Domain-Driven Design concepts of aggregates, aggregate root, entities, entity relationships, value objects and aggregate boundaries.
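As a rough illustration of step 2, the following Python sketch builds a resource digraph (as an adjacency map) from hierarchical URI paths. The function name, path examples and representation are our own illustrative assumptions, not the thesis's actual algorithm or data structures.

```python
# Minimal sketch, under our own assumptions, of constructing an API
# resource digraph from hierarchical RAML-style resource paths.
from collections import defaultdict

def build_resource_digraph(resource_paths, root="/"):
    """Construct a digraph of API resources from hierarchical URIs."""
    graph = defaultdict(set)
    for path in resource_paths:
        parent = root
        for segment in [s for s in path.strip("/").split("/") if s]:
            node = f"{parent.rstrip('/')}/{segment}"
            graph[parent].add(node)  # edge: parent resource contains child
            parent = node
    return graph

# Example: bookings nested under salons, as in a salon-booking API.
g = build_resource_digraph(["/salons", "/salons/{salonId}/bookings"])
# g["/"] == {"/salons"}, g["/salons"] == {"/salons/{salonId}"}, ...
```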
The fourth step, which takes our newly defined domain model as input, involves a threat modeling process using Attack Defense Trees (ADTrees) to identify potential security vulnerabilities in our API domain model and their countermeasures. Countermeasures that can enforce secure constructs on the attributes and behavior of their associated domain entities are modeled as domain primitives. Domain primitives are distilled versions of value objects with proper invariants. These invariants enforce security constraints on the behavior of their associated entities in our API domain model. The output of this step is a complete, refined domain model with the additional security invariants from the threat modeling process defined as domain primitives. This fourth step achieves our first interweaving of functional and security requirements, in an implicit manner. The fifth step involves creating an Ecore metamodel that describes the structure of our API domain model. In this step, we rely on the refined domain model as input and create, as output, an Ecore metamodel to which the refined domain model corresponds. Specifically, this step encompasses structural modeling of the target RESTful API. The structural model describes the possible resource types, their attributes and relations, as well as their interfaces and representations. The sixth and final step involves behavioral modeling. The input for this step is the Ecore metamodel from step 5 and the output is a formal, security-aware RESTful API specification in the SOFL language. Our goal here is to define RESTful API behaviors that consist of actions corresponding to their respective HTTP verbs, i.e., GET, POST, PUT, DELETE and PATCH. For example, a CreateAction creates a new resource, an UpdateAction provides the capability to change the values of attributes, and a ReturnAction allows for response definition, including the representation and all metadata. To achieve behavioral modeling, we transform our API methods into SOFL processes, taking advantage of the expressive nature of SOFL processes to define the modeled API behaviors. We achieve the interweaving of functional and security requirements by injecting boolean formulas into the post-conditions of SOFL processes. To verify whether the interwoven functional and security requirements implement all expected functions correctly and satisfy the desired security constraints, we can optionally perform specification testing. Since implicit specifications do not indicate algorithms for implementation but are rather expressed with predicate expressions involving pre- and post-conditions, we can substitute all the variables involved in a process with concrete values of their types and evaluate the results in the form of the truth values true or false. When conducting specification testing, we apply the SOFL process animation technique to obtain the set of concrete values of output variables for each process functional scenario. We analyse test results by comparing the evaluation results with an analysis criterion, a predicate expression representing the properties to be verified. If the evaluation results are consistent with the predicate expression, the analysis shows consistency between the process specification and its associated requirement. We generate the test cases for both input and output variables based on the user requirements.
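To make the notion of a domain primitive concrete, here is a minimal, hypothetical sketch in the spirit of step 4: a distilled value object whose constructor enforces its security invariant, so no invalid instance can ever enter the domain model. The class name and validation rule are our illustrative assumptions, not taken from the thesis.

```python
# Hypothetical domain primitive: the invariant is enforced at
# construction time, so the rest of the domain model never sees an
# invalid value. Name and rule are illustrative assumptions.
class BookingReference:
    def __init__(self, value: str):
        # Invariant: exactly 10 alphanumeric characters, rejecting
        # malformed or injection-style input up front.
        if len(value) != 10 or not value.isalnum():
            raise ValueError("invalid booking reference")
        self.value = value

ref = BookingReference("AB12345678")   # valid
# BookingReference("1 OR 1=1 --")      # would raise ValueError
```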
The test cases generated are usually based on test targets, which are predicate expressions such as the pre- and post-conditions of a process. When testing for conformance of a process specification to its associated service operation, we only need to observe the execution results of the process by providing concrete input values to all of its functional scenarios and analyzing their defining conditions relative to user requirements. We present an empirical case study validating the practicality and usability of our model-driven formal engineering approach by applying it to the development of a Salon Booking System. A total of 32 services covering the functionality provided by the Salon Booking System API were developed. We defined process specifications for the API services together with their respective security requirements. The security requirements were injected in the threat modeling and behavioral modeling phases of our approach. We tested for the interweaving of functional and security requirements in the specifications generated by our approach by conducting tests relative to the original RAML specifications. Failed tests were exhibited in cases where an injected security measure, such as a required object-level access control check, was not respected. Our generated SOFL specification correctly rejects such cases by returning an appropriate error message, while the original RAML specification incorrectly accepts such requests because it is not aware of the measure. We further demonstrate a technique for generating SOFL specifications from a domain model via model-to-text transformation. The model-to-text transformation technique semi-automates the generation of the SOFL formal specification in step 6 of our proposed approach. The technique allows for the isolation of the dynamic and static sections of the generated specifications, which gives our technique the capability of preserving the static sections of the target specifications while updating the dynamic sections in response to changes in the underlying domain model representing the RESTful API under design. Specifically, our contribution is the provision of a systematic model-driven formal engineering approach for the design and development of secure RESTful web APIs. The proposed approach offers a six-step methodology covering both structural and behavioral modeling of APIs with a focus on security. The most distinguished merit of the model-to-text transformation is its use of the API's domain model, as well as the metamodel to which the domain model corresponds, as the foundation for generating formal SOFL specifications that represent the API's functional and security requirements.
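The following sketch illustrates the flavour of this specification testing: concrete values are substituted into a process's pre- and post-condition predicates and evaluated to true or false, with the injected security clause (object-level access control) conjoined in the post-condition. The predicates and test cases are invented stand-ins; the thesis works with SOFL processes and animation, not Python.

```python
# Illustrative specification testing: evaluate hypothetical pre/post
# predicates against concrete values. All names are stand-ins.
def pre_create_booking(slot_free: bool) -> bool:
    # Pre-condition of a hypothetical CreateAction.
    return slot_free

def post_create_booking(created: bool, caller_owns_resource: bool) -> bool:
    # Functional clause conjoined with an injected security clause,
    # mirroring the boolean formulas injected into post-conditions.
    return created and caller_owns_resource

test_cases = [
    {"slot_free": True, "created": True, "caller_owns_resource": True},
    {"slot_free": True, "created": True, "caller_owns_resource": False},
]
for case in test_cases:
    consistent = (pre_create_booking(case["slot_free"])
                  and post_create_booking(case["created"],
                                          case["caller_owns_resource"]))
    print(case, "->", "consistent" if consistent else "violates specification")
```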
Architectural Alignment of Access Control Requirements Extracted from Business Processes
Business processes and information systems evolve constantly and affect each other in non-trivial ways. Aligning security requirements between both is a challenging task. This work presents an automated approach to extract access control requirements from business processes with the purpose of transforming them into a) access permissions for role-based access control and b) architectural data flow constraints to identify violations of access control in enterprise application architectures
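As a rough sketch of the transformation idea (not the work's actual implementation), the following fragment maps process tasks, with their lanes and data objects, to candidate role-based access permissions; all names are hypothetical.

```python
# Each process task, with its lane (role) and the data object it
# touches, yields a candidate RBAC permission. Names are hypothetical.
from typing import List, NamedTuple, Tuple

class Permission(NamedTuple):
    role: str      # who performs the task (process lane)
    action: str    # operation the task performs
    resource: str  # data object the task touches

def permissions_from_process(tasks: List[Tuple[str, str, str]]) -> List[Permission]:
    """Map (lane, action, data_object) triples from a business-process
    model to role-based access permissions."""
    return [Permission(role, action, obj) for role, action, obj in tasks]

perms = permissions_from_process([
    ("Clerk", "read", "CustomerRecord"),
    ("Manager", "approve", "LoanApplication"),
])
# The same triples could then be checked as data-flow constraints
# against the enterprise application architecture.
```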
A Design Science Research Approach to Architecting and Developing Information Systems for Collaborative Manufacturing: A Case for Human-Robot Collaboration
Information generated from the conceptualization, design, manufacturing, and use of a product has immense potential to transform both the business and manufacturing processes of the manufacturing enterprise. The digital transformation at the heart of the fourth industrial revolution has acknowledged this, with a special emphasis on weaving a thread of this information to support functions and systems throughout the life cycle of the product, in what is known as a digital thread framework. This dissertation aims to develop and use one such framework in the context of human-robot collaborative assembly. The overarching problem that the framework aims to solve can be attributed to the abstract qualities of adaptability and flexibility. The human-robot collaboration (HRC) systems of today are built predominantly as static systems and ignore the intuitive role of humans by having their roles in collaborative tasks pre-defined. Furthermore, their ability to switch between products during product changeovers is also limited. This is especially problematic in the current era of product variety, stemming from the customised requirements of customers.
To this end, this dissertation employs the design science research methodology to design, develop, and deploy three main artefacts in a human-robot work cell in a laboratory setting. The first is the digital thread framework, which integrates the product design environment, using state-of-the-art knowledge-based engineering systems, as an agent of a multi-agent system, providing the collaborating human and robot agents with access to product design models at run time. The second is a constituent mixed-reality model that provides an interface to the foregoing framework for the human operator engaged in collaborative assembly. The third is a supporting information model that the agents use as their knowledge base to adaptively fulfil the goals of collaborative assembly.
Together, these developed artefacts were employed in case studies involving a real diesel engine assembly, during which they were observed to provide utility and to support the adaptability for which the framework was designed. The identification of bounding boxes as a scalable information construct that approximates the part geometry of sub-assembly components demonstrates the utility of the developed artefacts for spatially augmenting projections of the intentions of collaborating agents.
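For illustration only, here is a minimal sketch of the kind of bounding-box computation implied above, assuming parts are given as 3-D vertex sets; the data and function are our assumptions, not the dissertation's.

```python
# Illustrative only: an axis-aligned bounding box computed from a
# part's 3-D vertices (units assumed to be metres).
def bounding_box(vertices):
    """Return (min_corner, max_corner) enclosing all 3-D vertices."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# A hypothetical sub-assembly part sampled at four vertices.
box = bounding_box([(0.0, 0.0, 0.0), (0.2, 0.1, 0.0),
                    (0.2, 0.1, 0.3), (0.0, 0.05, 0.3)])
# box == ((0.0, 0.0, 0.0), (0.2, 0.1, 0.3))
```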
In summary, this dissertation contributes an approach towards realising intelligent and adaptive robotics through information flows and modelling in the context of human-robot collaboration. The lack of intelligently adaptable HRC systems reported by industry in part motivated the work undertaken in this dissertation. As future products and production systems become more complex, information systems are expected to assume greater responsibility to compensate for the inherent limits of human working memory and to enable the transition towards human-centred manufacturing, the current likes of which are labelled Operator 4.0 and Industry 5.0. Thus, the expectation is that information systems research, such as this dissertation, can help take significant strides in this direction.
Consistent View-Based Management of Variability in Space and Time
Developing variable systems faces many challenges. Dependencies between interrelated artifacts within a product variant, such as code or diagrams, across product variants and across their revisions quickly lead to inconsistencies during evolution. This work provides a unification of common concepts and operations for variability management, identifies variability-related inconsistencies and presents an approach for view-based consistency preservation of variable systems
Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning
Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction.

Information dashboards offer a software solution to analyze large volumes of data visually, to identify patterns and relations, and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools.

This Ph.D. Thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others.

Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards.

In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and has been the backbone of the subsequent generative pipeline of these tools.

The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and as a framework to conceptualize and instantiate information dashboards in different domains.

In terms of the practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. Programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system.

Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is also a result in itself. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
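As an illustration of the model-driven idea (a sketch under our own assumptions, not the thesis's actual meta-model or generative pipeline), the following fragment defines a toy dashboard model and a model-to-text step that emits an HTML skeleton from an instance.

```python
# Toy dashboard "meta-model" (plain dataclasses standing in for the
# real one) plus a model-to-text transformation. All names and the
# output format are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Visualization:
    kind: str         # e.g. "bar", "line"
    data_source: str  # where the component pulls its data from

@dataclass
class Dashboard:
    title: str
    components: List[Visualization] = field(default_factory=list)

def to_html(model: Dashboard) -> str:
    """Model-to-text step: render a model instance as static markup."""
    body = "\n".join(
        f'  <div class="chart" data-kind="{c.kind}" data-src="{c.data_source}"></div>'
        for c in model.components
    )
    return f"<h1>{model.title}</h1>\n{body}"

# Instantiating the model for a specific context, then generating code.
print(to_html(Dashboard("Ph.D. Programme KPIs",
                        [Visualization("bar", "theses.csv")])))
```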
Through a Model, Darkly: An Investigation of Modellers’ Conceptualisation of Uncertainty in Climate and Energy Systems Modelling and an Application to Epidemiology
Policy responses to climate change require the use of complex computer models to understand the physical dynamics driving change, to evaluate its impacts, and to assess the efficacy and costs of different mitigation and adaptation options. These models are often built by large teams of dedicated researchers. All modelling requires assumptions, approximations and analytic conveniences to be employed. No model is without uncertainty.
Authors have attempted to understand these uncertainties over the years and have developed detailed typologies to deal with them. However, it remains unknown how modellers themselves conceptualise the uncertainty inherent in their work.
The core of this thesis involves the interviews of 38 modellers from climate science, energy systems modelling and integrated assessment to understand how they conceptualise the uncertainty in their work. This study finds that there is diversity in how uncertainty is understood and that various concepts from the literature are selectively employed to organise uncertainties.
Uncertainty analysis is conceived as consisting of different phases in the model development process. The interplay between the complexity of the model and the capacities of modellers to manipulate these models shapes the ways in which uncertainty can be conceptualised. How we can attempt to wrangle with uncertainty in the present is determined by the path-dependent decisions made in the past; decisions that are influenced by a variety of factors within the context of the model’s creation.
Furthermore, this thesis examines the application of these concepts to another field, epidemiology, to examine their generalisability in other contexts.
This thesis concludes that in a situation such as climate change, where the nature of the problem changes in a dynamic way, emphasis should be placed on reducing the grip of these path dependencies and the resource costs of adapting models to face new challenges and answer new policy questions
Internet of Things Applications - From Research and Innovation to Market Deployment
The book aims to provide a broad overview of various topics of the Internet of Things, from research, innovation and development priorities to enabling technologies, nanoelectronics, cyber-physical systems, architecture, interoperability and industrial applications. It is intended to be a standalone book in a series that covers the Internet of Things activities of the IERC (Internet of Things European Research Cluster), from technology to international cooperation and the global "state of play". The book builds on the ideas put forward by the European Research Cluster on the Internet of Things Strategic Research Agenda and presents global views and state-of-the-art results on the challenges facing the research, development and deployment of IoT at the global level.

The Internet of Things is creating a revolutionary new paradigm, with opportunities in every industry, from Health Care, Pharmaceuticals, Food and Beverage, Agriculture, Computers, Electronics, Telecommunications, Automotive, Aeronautics, Transportation, Energy and Retail, to apply the massive potential of the IoT to achieving real-world solutions. The beneficiaries will also include semiconductor companies, device and product companies, infrastructure software companies, application software companies, consulting companies, and telecommunication and cloud service providers. IoT will create new annual revenues for these stakeholders, and potentially create substantial market-share shakeups due to increased technology competition. The IoT will fuel technology innovation by creating the means for machines to communicate many different types of information with one another, while contributing to the increased value of information created by the number of interconnections among things and the transformation of the processed information into knowledge shared across the Internet of Everything.

The success of IoT depends strongly on enabling technology development, market acceptance and standardization, which provides interoperability, compatibility, reliability, and effective operations on a global scale. The connected devices are part of ecosystems connecting people, processes, data, and things which communicate in the cloud using increased storage and computing power, pushing for the standardization of communication and metadata. In this context, security, privacy, safety and trust have to be addressed by product manufacturers throughout the life cycle of their products, from design to the support processes. The IoT developments address the whole IoT spectrum, from devices at the edge to cloud and datacentres on the backend and everything in between, through ecosystems created by industry, research and application stakeholders that enable real-world use cases, accelerating the Internet of Things and establishing open interoperability standards and common architectures for IoT solutions. Enabling technologies such as nanoelectronics, sensors/actuators, cyber-physical systems, intelligent device management, smart gateways, telematics, smart network infrastructure, cloud computing and software technologies will create new products, new services and new interfaces by creating smart environments and smart spaces, with applications ranging from smart cities, smart transport, buildings, energy and grid, to smart health and life.
Technical topics discussed in the book include:
• Introduction
• Internet of Things Strategic Research and Innovation Agenda
• Internet of Things in the industrial context: time for deployment
• Integration of heterogeneous smart objects, applications and services
• Evolution from device to semantic and business interoperability
• Software-defined and virtualized network resources
• Innovation through interoperability and standardisation when everything is connected anytime at anyplace
• Dynamic, context-aware, scalable and trust-based IoT security and privacy framework
• Federated cloud service management and the Internet of Things
• Internet of Things applications