676 research outputs found

    Circular Production and Maintenance of Automotive Parts: An Internet of Things (IoT) Data Framework and Practice Review

    The adoption of the Circular Economy paradigm by industry leads to increased responsibility of manufacturing to ensure a holistic awareness of the environmental impact of its operations. In mitigating negative effects on the environment, current maintenance practice must be considered for its potential contribution to a more sustainable lifecycle for the manufacturing operation, its products and related services. Focusing on the matching of digital technologies to maintenance practice in the automotive sector, this paper outlines a framework for organisations pursuing the integration of environmentally aware solutions in their production systems. This research sets out an agenda and framework for digital maintenance practice within the Circular Economy and the utilisation of Industry 4.0 technologies for this purpose.
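
    As a rough illustration of the kind of sensor data such an IoT framework might standardise, the following sketch defines a telemetry record for a monitored automotive part and maps it to a circular maintenance decision. All field names, thresholds and decision labels are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class PartTelemetry:
    """One IoT reading for a monitored automotive part (illustrative fields)."""
    part_id: str
    operating_hours: float
    vibration_rms: float   # mm/s, e.g. from an accelerometer
    temperature_c: float

def circular_action(reading: PartTelemetry,
                    max_hours: float = 8000.0,
                    max_vibration: float = 4.5) -> str:
    """Map a reading to a circular-economy maintenance decision (assumed thresholds)."""
    if reading.vibration_rms > max_vibration:
        return "repair"           # degradation detected: intervene early, keep the part in use
    if reading.operating_hours > max_hours:
        return "remanufacture"    # first use cycle over: recover the part instead of discarding it
    return "keep-in-service"

print(circular_action(PartTelemetry("shaft-042", 8500.0, 2.1, 70.0)))  # -> remanufacture
```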

    OM-2017: Proceedings of the Twelfth International Workshop on Ontology Matching

    Ontology matching is a key interoperability enabler for the semantic web, as well as a useful tactic in some classical data integration tasks dealing with the semantic heterogeneity problem. It takes ontologies as input and determines as output an alignment, that is, a set of correspondences between the semantically related entities of those ontologies. These correspondences can be used for various tasks, such as ontology merging, data translation, query answering or navigation on the web of data. Thus, matching ontologies enables the knowledge and data expressed with the matched ontologies to interoperate.
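
    To make the notion of an alignment concrete, here is a minimal sketch that matches two toy ontologies, given only as entity labels, by string similarity, and emits correspondences as (entity1, entity2, relation, confidence) tuples. Real matchers evaluated at OM exploit far richer lexical, structural and semantic evidence; the ontologies and the threshold below are invented.

```python
from difflib import SequenceMatcher

# Toy "ontologies": bare entity labels (real ontologies carry classes,
# properties, axioms and instances).
onto1 = ["Author", "Paper", "Conference"]
onto2 = ["Writer", "Article", "Conference Event"]

def align(o1, o2, threshold=0.5):
    """Return an alignment: (entity1, entity2, relation, confidence) tuples."""
    alignment = []
    for e1 in o1:
        for e2 in o2:
            confidence = SequenceMatcher(None, e1.lower(), e2.lower()).ratio()
            if confidence >= threshold:
                alignment.append((e1, e2, "=", round(confidence, 2)))
    return alignment

for correspondence in align(onto1, onto2):
    print(correspondence)   # e.g. ('Conference', 'Conference Event', '=', 0.77)
```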

    A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities

    The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is identifying a remediation scheme specific to in-house, organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. This means one of the biggest challenges in vulnerability management relates to prioritization. Given that so few vulnerabilities are the focus of real-world attacks, a practical remediation strategy is to identify the vulnerabilities most likely to be exploited and focus remediation efforts on those first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible, public data sources to provide personalized, automated recommendations that an organization can use to prioritize its vulnerability management strategy offers significant improvements over what is currently achieved using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using the Normalized Discounted Cumulative Gain (nDCG). Our results show an average improvement of 71.5% to 91.3% in identifying vulnerabilities likely to be targeted and exploited by cyber threat actors. In terms of return on investment, patching according to our policies yielded savings of 23.3% to 25.5% in annualized unit costs. Our results demonstrate the efficiency of creating knowledge graphs to link large data sets, facilitate semantic queries, and create data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
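
    For reference, nDCG rewards rankings that place the vulnerabilities most likely to be exploited near the top. Below is a minimal sketch with made-up binary relevance labels (1 = later exploited, 0 = not); the paper's actual relevance model is richer than this.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """DCG normalised by the DCG of the ideal (descending) ordering."""
    ideal = sorted(ranked_relevances, reverse=True)
    return dcg(ranked_relevances) / dcg(ideal)

# Five CVEs ranked two ways; labels mark which were actually exploited.
print(ndcg([0, 1, 0, 1, 0]))  # CVSS-style ordering     -> ~0.65
print(ndcg([1, 1, 0, 0, 0]))  # threat-centric ordering -> 1.0 (ideal)
```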

    Semantic hierarchies for extracting, modeling, and connecting compliance requirements in information security control standards

    Companies and government organizations are increasingly compelled, if not required by law, to ensure that their information systems comply with various federal and industry regulatory standards, such as the NIST Special Publication on Security Controls for Federal Information Systems (NIST SP-800-53) or the Common Criteria (ISO 15408-2). Such organizations operate business- or mission-critical systems where a lack of or lapse in security protections translates to serious confidentiality, integrity, and availability risks that, if exploited, could result in information disclosure, loss of money, or, at worst, loss of life. To mitigate these risks and ensure that their information systems meet regulatory standards, organizations must be able to (a) contextualize regulatory documents in a way that extracts the relevant technical implications for their systems, (b) formally represent their systems and demonstrate that they meet the extracted requirements following an accreditation process, and (c) ensure that all third-party systems, which may exist outside of the information system enclave as web or cloud services, also implement appropriate security measures consistent with organizational expectations. This paper introduces a step-wise process, based on semantic hierarchies, that systematically extracts relevant security requirements from control standards to build a certification baseline for organizations to use in conjunction with formal methods and service agreements for accreditation. The approach is demonstrated through a case study of all audit-related controls in the SP-800-53, ISO 15408-2, and related documents. Accuracy, applicability, consistency, and efficacy of the approach were evaluated using controlled qualitative and quantitative methods in two separate studies.
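
    As a toy illustration of a semantic hierarchy, the sketch below arranges two audit controls under a common parent and flattens the tree into (path, requirement) pairs that could seed a certification baseline. The control IDs follow the SP 800-53 naming scheme, but the requirement phrasing is paraphrased for illustration, not quoted from the standard.

```python
# A toy semantic hierarchy for audit-related controls.
hierarchy = {
    "Audit": {
        "AU-2 Event Logging": [
            "identify the event types the system is capable of logging",
        ],
        "AU-3 Content of Audit Records": [
            "record what type of event occurred and when it occurred",
            "record the source and the outcome of the event",
        ],
    }
}

def baseline(node, path=()):
    """Flatten the hierarchy into (path, requirement) pairs for accreditation."""
    if isinstance(node, list):                    # leaf: extracted requirements
        for requirement in node:
            yield (" > ".join(path), requirement)
    else:                                         # interior node: recurse
        for name, child in node.items():
            yield from baseline(child, path + (name,))

for path, requirement in baseline(hierarchy):
    print(f"{path}: {requirement}")
```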

    Framework for collaborative knowledge management in organizations

    Nowadays, organizations are pushed to speed up the rate of industrial transformation towards high-value products and services. The capability to respond agilely to new market demands has become a strategic pillar for innovation, and knowledge management can support organizations in achieving that goal. However, current knowledge management approaches tend to be overly complex or too academic, with interfaces that are difficult to manage, even more so when cooperative handling is required. In an ideal framework, both tacit and explicit knowledge management should be addressed to achieve knowledge handling with precise and semantically meaningful definitions. Moreover, with the increase of Internet usage, the amount of available information has exploded, driving progress in mechanisms that retrieve useful knowledge from the huge number of existing information sources. However, the same knowledge representation of a thing can mean different things to different people and applications. Contributing in this direction, this thesis proposes a framework capable of gathering the knowledge held by domain experts and domain sources through a knowledge management system and transforming it into explicit ontologies. This makes it possible to build tools with advanced reasoning capacities that support enterprise decision-making processes. The author also addresses the problem of knowledge transfer within and among organizations, through a module (part of the proposed framework) for establishing a domain lexicon, whose purpose is to represent and unify the understanding of the semantics used in the domain.
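
    As a minimal sketch of what such a lexicon module could produce, the snippet below encodes a single domain concept with unified terminology as an explicit ontology, using rdflib and SKOS; the namespace and the terms are invented.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical namespace for the shared domain lexicon.
LEX = Namespace("http://example.org/lexicon#")

g = Graph()
concept = LEX["WorkOrder"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("work order", lang="en")))
# Variant terms used by different people and applications map to one concept.
g.add((concept, SKOS.altLabel, Literal("job ticket", lang="en")))
g.add((concept, SKOS.altLabel, Literal("service request", lang="en")))

print(g.serialize(format="turtle"))
```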

    Policy-Driven Governance in Cloud Service Ecosystems

    Cloud application development platforms facilitate new models of software co-development and forge environments best characterised as cloud service ecosystems. The value of those ecosystems increases exponentially with the addition of more users and third-party services. Growth, however, breeds complexity and puts reliability at risk, requiring all stakeholders to exercise control over changes in the ecosystem that may affect them. This is a challenge of governance. From the viewpoint of the ecosystem coordinator, governance is about preventing negative ripple effects from new software added to the platform. From the viewpoint of third-party developers and end-users, governance is about ensuring that the cloud services they consume or deliver comply with requirements on a continuous basis. To facilitate different forms of governance in a cloud service ecosystem, we need governance support systems that achieve separation of concerns between the roles of policy provider, governed resource provider and policy evaluator. This calls for better modularisation of the governance support system architecture, decoupling governance policies from policy evaluation engines and governed resources. It also calls for an improved approach to policy engineering, with increased automation and efficient exchange of governance policies and related data between ecosystem partners. The thesis supported by this research is that governance support systems satisfying such requirements are both feasible and useful to develop through a framework that integrates Semantic Web technologies and Linked Data principles. The PROBE framework presented in this dissertation comprises four components: (1) a governance ontology serving as a shared ecosystem vocabulary for policies and resources; (2) a method for the definition of governance policies; (3) a method for sharing descriptions of governed resources between ecosystem partners; (4) a method for evaluating governance policies against descriptions of governed ecosystem resources. The feasibility and usefulness of PROBE are demonstrated with the help of an industrial case study on cloud service ecosystem governance.
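
    PROBE's actual ontology and policy language are not reproduced here, but the decoupling it argues for can be sketched in a few lines: the policy is plain data (a SPARQL ASK query) that any engine can evaluate against a Linked Data description of a governed resource. The vocabulary below is invented for illustration.

```python
from rdflib import Graph

# Linked Data description of a governed cloud service (invented vocabulary).
resource_description = """
@prefix gov: <http://example.org/governance#> .
gov:billingService gov:hasVersion "2.1" ;
                   gov:passedSecurityReview true .
"""

# The governance policy is data, decoupled from the evaluation engine,
# so it can be exchanged between ecosystem partners.
policy = """
PREFIX gov: <http://example.org/governance#>
ASK { ?service gov:passedSecurityReview true . }
"""

g = Graph()
g.parse(data=resource_description, format="turtle")
print("compliant" if g.query(policy).askAnswer else "non-compliant")
```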

    Risk Analysis for Smart Cities Urban Planners: Safety and Security in Public Spaces

    Christopher Alexander, in his famous writings "The Timeless Way of Building" and "A Pattern Language", defined a formal language for the description of a city. Alexander developed a generative grammar able to formally describe complex and articulated concepts of architecture and urban planning, defining a common language that would facilitate both the participation of ordinary citizens and the collaboration between professionals in architectural and urban planning. In this research, a similar approach has been applied to let two domains communicate although they are very distant in terms of lexicon, methodologies and objectives. These domains are urban planning, urban design and architecture, seen as the first domain both in terms of time and in terms of completeness of vision, and the world of engineering, made up of innumerable disciplines. In practice, there is a domain that defines the requirements and the overall vision (the first) and a domain (the second) that implements them with real infrastructures and systems. To put these two worlds seamlessly into communication, allowing the concepts of the first to be translated into those of the second, Christopher Alexander's idea has been followed by defining a common language. By applying Essence, the formal descriptive theory of software engineering, with its customisation rules, to the concept of a Smart City, a common language to completely trace the requirements at all levels has been defined. Since the focus was on risk analysis for safety and security in public spaces, existing risk models were considered, evidencing a further gap within the engineering world itself: depending on the area being considered, risk management models have different and siloed approaches which ignore the interactions of one type of risk with the others. To allow effective communication between the two domains and within the engineering domain, a unified risk analysis framework has been developed. Then a framework (an ontology) capable of describing all the elements of a Smart City has been developed and combined with the common language to trace the requirements. Following the philosophy of the Vienna Circle, a creative process called Aufbau has been defined to allow the generation of a detailed description of the Smart City, at any level, using the common language and the ontology defined above. The risk analysis methodology was then applied to the city model produced by Aufbau. The research developed tools to apply these results to the entire life cycle of the Smart City. With these tools, it is possible to understand to what extent a given architectural, urban planning or urban design requirement is operational at a given moment. In this way, the narration can accurately describe how well the initial requirements set by architects, planners and urban designers and, above all, the values required by stakeholders are satisfied, at any time. The impact of this research on urban planning is the ability to create a single model between the two worlds, leaving everyone free to express creativity and expertise in the appropriate forms but, at the same time, allowing both to fill the communication gap that exists today. This new way of planning requires adequate IT tools and takes the form, on the engineering side, of a harmonisation of techniques already in use and greater clarity of objectives.
    On the side of architecture, urban planning and urban design, it is instead a powerful decision support tool, in both the planning and operational phases. This decision support tool for urban planning, based on the research results, is the starting point for the development of a meta-heuristic process using an evolutionary approach. Consequently, risk management, from architecture/urban planning/urban design up to engineering, in any phase of the Smart City's life cycle, is seen as an "organism" that evolves.

    A Framework for Requirements Decomposition, SLA Management and Dynamic System Reconfiguration

    To meet user requirements, systems can be built from Commercial-Off-The-Shelf (COTS) components, potentially from different vendors. However, the gap between the requirements, which refer to the overall system, and the components from which the system is built can be large. To close this gap, the requirements must be decomposed to a level where they can be mapped to components. When the designed system is deployed and ready for operation, its services are sold and provided to customers. One important goal for service providers is to optimize system resource utilization while ensuring the quality of service expressed in the Service Level Agreements (SLAs). For this purpose, the system can be reconfigured dynamically according to the current workload, satisfying the SLAs while using only the necessary resources. To manage the reconfiguration of the system at runtime, a set of previously defined patterns called elasticity rules can be used. Elasticity rules specify the actions that need to be taken to reconfigure the system. An elasticity rule is generally invoked by a trigger, which is generated in reaction to a monitoring event. In this thesis, we propose a model-driven management framework which aims at user requirements satisfaction, SLA compliance management, and enabling dynamic reconfiguration by reusing the design information at runtime. An approach had previously been developed to automatically derive a valid configuration starting from low-level requirements called service configurations. However, service configurations are far from the requirements a user would express. To generate a system configuration from user requirements and alleviate the designer's work, we generate service configurations by decomposing functional user requirements to the level where components can be selected and put together to satisfy the user requirements. We integrated our service configuration generator with the previous configuration generator. In our framework, we reuse the information acquired from system configuration and dimensioning to generate elasticity rules offline. We propose a model-driven approach to check the compliance of SLAs and generate triggers for invoking applicable elasticity rules when system reconfiguration is required. For handling multiple triggers generated at the same time, we propose a solution to automatically correlate the actions of the invoked elasticity rules, when required. The framework consists of a number of metamodels and a set of model transformations. We use the Unified Modeling Language (UML) and its profiling mechanism to describe all the artifacts in the proposed framework. We implement the profiles using the Eclipse Modeling Framework (EMF) and Papyrus. To implement the processes, we use the Atlas Transformation Language (ATL). We also use the APIs of the Object Constraint Language (OCL) in the Eclipse environment to develop a tool for checking constraints and generating triggers.
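
    The elasticity rules in the thesis are defined in UML models and handled through ATL transformations; purely as a conceptual sketch, the trigger/rule mechanic can be expressed in a few lines of Python. The metric names, thresholds and actions below are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    """Generated in reaction to a monitoring event (illustrative fields)."""
    metric: str
    value: float

@dataclass
class ElasticityRule:
    """A predefined pattern: a condition on triggers plus a reconfiguration action."""
    condition: Callable[[Trigger], bool]
    action: Callable[[], None]

rules = [
    ElasticityRule(
        condition=lambda t: t.metric == "response_time_ms" and t.value > 200,
        action=lambda: print("scale out: add one service instance"),
    ),
    ElasticityRule(
        condition=lambda t: t.metric == "cpu_load" and t.value < 0.2,
        action=lambda: print("scale in: remove one service instance"),
    ),
]

def on_monitoring_event(trigger: Trigger) -> None:
    """Invoke every elasticity rule applicable to the trigger."""
    for rule in rules:
        if rule.condition(trigger):
            rule.action()

on_monitoring_event(Trigger("response_time_ms", 250.0))  # SLA threshold breached
```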

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support, and the challenges arising from the transformation.
