
    Game Theory Models for the Verification of the Collective Behaviour of Autonomous Cars

    The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss multi-agent models and verification results for the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. Our conjecture is that intention-aware adaptation, with a constraint on simultaneous decision making, has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis for formally proving this conjecture. Comment: In Proceedings FVAV 2017, arXiv:1709.0212
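    The paper's claim that non-cooperative adaptation cannot guarantee optimal traffic echoes a classic result from selfish routing. The sketch below is a minimal Python illustration of a Pigou-style two-route congestion game; the network, latency functions, and grid search are textbook assumptions chosen for illustration, not the paper's online routing game model.

```python
# Pigou-style routing game: a unit mass of drivers chooses between
# route A (constant latency 1) and route B (latency equal to the
# fraction x_b of drivers using it).

def avg_latency(x_b: float) -> float:
    """Average latency when a fraction x_b of all drivers takes route B."""
    return (1 - x_b) * 1.0 + x_b * x_b  # route A costs 1, route B costs x_b

# Non-cooperative equilibrium: each driver prefers B whenever its latency
# x_b < 1, so all drivers end up on B (x_b = 1).
selfish = avg_latency(1.0)

# Social optimum: minimize average latency over the split x_b in [0, 1].
optimum = min(avg_latency(i / 1000) for i in range(1001))

print(f"selfish equilibrium avg latency: {selfish:.3f}")  # 1.000
print(f"social optimum avg latency:      {optimum:.3f}")  # 0.750
```

    The 4/3 gap between the two outcomes (the price of anarchy) is exactly the kind of suboptimality the authors argue purely non-cooperative adaptation cannot rule out.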

    Self-adaptive video encoder: comparison of multiple adaptation strategies made simple

    This paper presents an adaptive video encoder that can be used to compare the behavior of different adaptation strategies that use multiple actuators to steer the encoder towards a global goal composed of multiple conflicting objectives. A video camera produces frames that the encoder manipulates with the objective of meeting a size requirement so the stream fits a given communication channel. A second objective is to maintain a given similarity index between the manipulated frames and the original ones. To achieve the goal, the software can change three parameters: the quality of the encoding, the noise reduction filter radius, and the sharpening filter radius. In most cases the objectives - small encoded size and high quality - conflict, since a larger frame would have a higher similarity index to its original counterpart. This makes the problem difficult from the control perspective and makes the case study appealing for comparing different adaptation strategies.
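    A minimal sketch of one adaptation strategy for this setting: a single integral controller driving the encoding quality toward a target frame size, with the two filter radii held fixed. The toy encoder model, target size, and gain below are invented for illustration and are not the paper's encoder or its controllers.

```python
# Integral controller steering one actuator (encoding quality) toward a
# target encoded-frame size. encode() is a stand-in for a real encoder:
# higher quality yields larger frames and a higher similarity index.

TARGET_SIZE = 40_000   # bytes available on the channel per frame (assumed)
GAIN = 0.00001         # integral gain, hand-tuned for this toy model

def encode(quality: float) -> tuple[float, float]:
    """Toy encoder: returns (encoded_size_bytes, similarity_index)."""
    quality = max(0.0, min(1.0, quality))
    size = 5_000 + 95_000 * quality   # frame size grows with quality
    similarity = 0.5 + 0.5 * quality  # SSIM-like index in [0.5, 1.0]
    return size, similarity

quality = 0.9  # start at high quality: frames too large for the channel
for _ in range(20):
    size, _sim = encode(quality)
    error = TARGET_SIZE - size             # positive: room to raise quality
    quality = max(0.0, min(1.0, quality + GAIN * error))

size, sim = encode(quality)
print(f"quality={quality:.2f}, size={size:.0f} B, similarity={sim:.2f}")
```

    Because size and similarity both grow with quality, the controller settles where the size constraint binds; richer strategies coordinate all three actuators instead of one.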

    Autonomic Parallelism and Thread Mapping Control on Software Transactional Memory

    Parallel programs need to manage the trade-off between the time spent in synchronization and computation. This trade-off is significantly affected by the number of active threads: high parallelism may decrease computing time while increasing synchronization cost. Furthermore, thread locality on different cores may also impact program performance, as memory access time can vary from one core to another due to the complexity of the underlying memory architecture. Therefore, the performance of a program can be improved by adjusting the number of active threads as well as the mapping of its threads to physical cores. However, there is no universal rule for deciding the parallelism and the thread locality of a program offline, and offline tuning is error-prone. In this paper, we dynamically manage parallelism and thread locality. We address multi-threading problems via Software Transactional Memory (STM). STM has emerged as a promising technique, which bypasses locks, to address synchronization issues through transactions. Autonomic computing offers designers a framework of methods and techniques to build autonomic systems with well-mastered behaviours. Its key idea is to implement feedback control loops to design safe, efficient, and predictable controllers, which enable monitoring and adjusting controlled systems dynamically while keeping overhead low. We propose to design a feedback control loop to automate thread management at runtime and diminish program execution time.
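    A minimal sketch of such a feedback loop, assuming a hill-climbing controller that periodically samples commit throughput and adjusts the number of active threads; the throughput probe is a stand-in for querying a real STM runtime, and the workload model is invented for illustration.

```python
# Hill-climbing feedback loop for thread-count control on an STM workload.
# measure_throughput() stands in for sampling the commit rate over one
# monitoring window; a real controller would query the STM runtime.

import random

def measure_throughput(threads: int) -> float:
    """Toy workload: throughput peaks near 8 threads, after which
    synchronization cost (transaction aborts and retries) dominates."""
    base = threads / (1 + (threads / 8.0) ** 2)
    return base * random.uniform(0.95, 1.05)  # measurement noise

threads, step = 2, 1
prev = measure_throughput(threads)
for _ in range(30):
    threads = max(1, threads + step)
    current = measure_throughput(threads)
    if current < prev:   # the last move hurt throughput,
        step = -step     # so reverse the search direction
    prev = current

print(f"settled around {threads} active threads")
```

    Thread-to-core mapping can be handled the same way, with the controller additionally choosing among a small set of placement policies per monitoring window.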

    Digital Twin Fidelity Requirements Model for Manufacturing

    The Digital Twin (DT), including its sub-categories Digital Model (DM) and Digital Shadow (DS), is a promising concept in the context of Smart Manufacturing and Industry 4.0. With the ongoing maturation of its fundamental technologies like Simulation, Internet of Things (IoT), Cyber-Physical Systems (CPS), Artificial Intelligence (AI) and Big Data, the DT has experienced a substantial increase in scholarly publications and industrial applications. In academia, the DT is considered an ultra-realistic, high-fidelity virtual model of a physical entity, mirroring all of its properties as accurately as possible. Furthermore, the DT is capable of altering this physical entity based on virtual modifications. Fidelity here refers to the number of parameters, their accuracy, and their level of abstraction. In practice, it is questionable whether the highest fidelity is required to achieve the desired benefits. A literature analysis of 77 recent DT application articles reveals that there is currently no structured method that supports scholars and practitioners in elaborating appropriate fidelity levels. Hence, this article proposes the Digital Twin Fidelity Requirements Model (DT-FRM) as a possible solution. It has been developed using concepts from the Design Science Research methodology. Starting from an initial problem definition, DT-FRM guides users through problem breakdown: identifying problem-centric dependent target variables (1), deriving (2) and prioritizing (3) the underlying independent variables, and defining the required fidelity level for each variable (4). This way, DT-FRM enables its users to efficiently solve their initial problem while minimizing DT implementation and recurring costs. It is shown that assessing the appropriate level of DT fidelity is crucial to realize benefits and reduce implementation complexity in manufacturing.
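    The four DT-FRM steps can be made concrete as a small requirements data structure. The sketch below is an illustrative Python encoding; all field names and the manufacturing example are hypothetical, not taken from the article.

```python
# Illustrative encoding of the four DT-FRM steps:
# (1) dependent target variables, (2) underlying independent variables,
# (3) their priorities, (4) a required fidelity level per variable.

from dataclasses import dataclass, field

@dataclass
class IndependentVariable:
    name: str
    priority: int   # step (3): 1 = highest priority
    fidelity: str   # step (4): sampling rate, accuracy, abstraction level

@dataclass
class TargetVariable:
    name: str  # step (1)
    drivers: list[IndependentVariable] = field(default_factory=list)  # step (2)

# Hypothetical example: reduce the scrap rate on a milling line.
scrap_rate = TargetVariable(
    name="scrap rate",
    drivers=[
        IndependentVariable("spindle vibration", priority=1,
                            fidelity="1 kHz sampling, +/-0.1 mm/s"),
        IndependentVariable("coolant temperature", priority=2,
                            fidelity="0.1 Hz sampling, +/-0.5 C"),
    ],
)

# Only prioritized drivers are mirrored at high fidelity; the rest can
# stay at a coarse abstraction level, cutting DT implementation cost.
for v in sorted(scrap_rate.drivers, key=lambda d: d.priority):
    print(f"{v.priority}. {v.name}: required fidelity {v.fidelity}")
```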

    Digital twin—The dream and the reality

    Digital twins (DTs) are under active research and development in the research community, in industry, and in the digital engineering solution business. The roots of the DT concept are almost two decades old, but fast progress in enabling technologies, especially in data analytics, artificial intelligence, and the Internet of Things, has accelerated the evolution of DTs during the last five years. The growing interest, increasing development activity, and expanding business opportunities around the concept are also feeding the hype in the media. Consequently, this has led to the scattering and even misuse of the concept and its definition. In this article, we discuss different applications of DTs and the kinds of solutions that exist for them. We analyze some of the most cited definitions of DT in the scientific literature and discuss their interpretation through a hypothetical case example. Furthermore, we discuss different life cycle aspects of DTs and potential risks that may arise. To further concretize the concept, we introduce ten reported case examples of implemented DTs from the scientific literature and analyze their features. Finally, we discuss future development directions of DTs and the aspects that will affect these trends.

    Barriers to the adoption of digital twin in the construction industry : a literature review

    Digital twin (DT) has gained significant recognition among researchers due to its potential across industries. With the prime goal of solving numerous challenges confronting the construction industry (CI), DT in recent years has witnessed several applications in the CI. Hence, researchers have been advocating for DT adoption to tackle the challenges of the CI. Notwithstanding, a distinguishable set of barriers that oppose the adoption of DT in the CI has not been determined. Therefore, this paper identifies the barriers and incorporates them into a classified framework to enhance the roadmap for adopting DT in the CI. This research conducts an extensive review of the literature and analyses the barriers whilst integrating the science mapping technique. Using Scopus, ScienceDirect, and Web of Science databases, 154 related bibliographic records were identified and analysed using science mapping, while 40 carefully selected relevant publications were systematically reviewed. From the review, the top five barriers identified include low level of knowledge, low level of technology acceptance, lack of clear DT value propositions, project complexities, and the static nature of building data. The results show that the UK, China, the USA, and Germany are the countries spearheading DT adoption in the CI, while only a small number of institutions from Australia, the UK, Algeria, and Greece have established institutional collaborations for DT research. A conceptual framework was developed on the basis of 30 identified barriers to support the DT adoption roadmap. The main categories of the framework comprise stakeholder-oriented, industry-related, construction-enterprise-related, and technology-related barriers. The identified barriers and the framework will guide and broaden the knowledge of DT, which is critical for successful adoption in the construction industry.

    Distributed Tracing for Troubleshooting of Native Cloud Applications via Rule-Induction Systems

    Diagnosing IT issues is a challenging problem for large-scale distributed cloud environments due to complex and non-deterministic interrelations between the system components. Modern monitoring tools rely on AI-empowered data analytics for detection, root cause analysis, and rapid resolution of performance degradation. However, the successful adoption of AI solutions is anchored in trust: system administrators will not blindly follow recommendations without sufficient interpretability of the solutions. Explainable AI is gaining popularity by enabling improved confidence and trust in intelligent solutions. For many industrial applications, explainable models with moderate accuracy are preferable to highly precise black-box ones. This paper shows the benefits of rule-induction classification methods, particularly RIPPER, for the root cause analysis of performance degradations. RIPPER reveals the causes of problems as a set of rules that system administrators can use in remediation processes. Native cloud applications are based on the microservices architecture to consume the benefits of distributed computing. Monitoring such applications can be accomplished via distributed tracing, which inspects the passage of requests through different microservices. We discuss the application of rule-learning approaches to traces of traffic passing through a malfunctioning microservice to explain the problem. Experiments performed on datasets from cloud environments demonstrated the applicability of such approaches and revealed their benefits.
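    A minimal sketch of rule induction over trace-derived features, assuming the third-party wittgenstein package (a Python RIPPER implementation) and an invented per-trace feature table; the column names, data, and the example rule are hypothetical, not the paper's dataset or results.

```python
# RIPPER rule induction over trace-derived features, using the third-party
# `wittgenstein` package (pip install wittgenstein pandas).
# Each row summarizes one request trace through the microservices.

import pandas as pd
import wittgenstein as lw

traces = pd.DataFrame({
    "svc_auth_latency_ms": [12, 15, 210, 230, 11, 14, 250, 13],
    "svc_cart_latency_ms": [30, 28, 33, 31, 29, 27, 35, 30],
    "retry_count":         [0, 0, 3, 4, 0, 1, 5, 0],
    "degraded":            ["no", "no", "yes", "yes", "no", "no", "yes", "no"],
})

clf = lw.RIPPER()
clf.fit(traces, class_feat="degraded", pos_class="yes")

# The learned ruleset is human-readable, e.g. something along the lines of
#   [[retry_count=3-5]]
# which an administrator can map directly onto a remediation action.
print(clf.ruleset_)
```

    The point of the exercise is the output format: unlike a black-box score, the induced rules name the misbehaving service and the conditions under which degradation occurs.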