
    Design and Validation of Cyber-Physical Systems Through Co-Simulation: The Voronoi Tessellation Use Case

    This paper reports on the use of co-simulation techniques to build prototypes of co-operative autonomous robotic cyber-physical systems. Designing such systems involves a mission-specific planner algorithm, a control algorithm to drive each agent performing its task, and a plant model to simulate the agent dynamics. An application aimed at positioning a swarm of unmanned aerial vehicles (drones) in a bounded area, exploiting a Voronoi tessellation algorithm developed in this work, is taken as a case study. The paper shows how co-simulation allows testing the complex system at the design phase using models created with different languages and tools. The paper then reports on how the adopted co-simulation platform enables control parameter calibration by exploiting design space exploration technology. The INTO-CPS co-simulation platform, compliant with the Functional Mock-up Interface standard for exchanging dynamic simulation models written in various languages, was used in this work. The different software modules were written in Modelica, C, and Python. In particular, the latter was used to implement an original variant of the Voronoi algorithm that tessellates a convex polygonal region by means of dummy points added at appropriate positions outside the bounding polygon. A key contribution of this case study is that it demonstrates how an accurate simulation of a cooperative drone swarm requires modeling the physical plant together with the high-level coordination algorithm. The coupling of co-simulation and design space exploration has been shown to support control parameter calibration that optimizes the energy consumption and the convergence time of the drone swarm to its target positions. From a practical point of view, this makes it possible to test the ability of the swarm to self-deploy in space in order to achieve optimal detection coverage and to allow the unmanned aerial vehicles in a swarm to coordinate with each other.
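    The dummy-point idea can be sketched as follows: a minimal Python example, assuming the common trick of mirroring each site across every edge of the convex bounding polygon so that the Voronoi cells of the real sites become closed and clipped to the polygon. The placement rule and the use of scipy are assumptions for illustration, not necessarily the paper's exact variant.

# Hypothetical sketch: bounding a Voronoi tessellation to a convex polygon
# by adding dummy points outside the boundary (here: each site mirrored
# across every polygon edge). Illustrative only; not the paper's exact rule.
import numpy as np
from scipy.spatial import Voronoi

def mirror_across_edge(p, a, b):
    """Reflect point p across the line through edge (a, b)."""
    d = b - a
    d = d / np.linalg.norm(d)
    proj = a + np.dot(p - a, d) * d
    return 2 * proj - p

def bounded_voronoi(sites, polygon):
    """sites: (n, 2) drone positions; polygon: (m, 2) convex vertices (CCW)."""
    dummies = []
    for p in sites:
        for i in range(len(polygon)):
            a, b = polygon[i], polygon[(i + 1) % len(polygon)]
            dummies.append(mirror_across_edge(p, a, b))
    all_pts = np.vstack([sites] + [np.array(dummies)])
    # Cells of the original sites are now finite and clipped to the polygon,
    # because each site's own mirrors place a bisector on every edge line.
    return Voronoi(all_pts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    drones = rng.random((6, 2))
    vor = bounded_voronoi(drones, square)
    for i in range(len(drones)):
        region = vor.regions[vor.point_region[i]]
        print(i, [tuple(np.round(vor.vertices[v], 3)) for v in region])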

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing-systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques. Comment: Under review at ACM Computing Surveys.
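    As a flavour of what a software-level approximation looks like, here is a minimal sketch of one classic technique surveyed in this area, loop perforation: skipping a fraction of loop iterations to trade accuracy for time and energy. The workload (mean of a large array) and the perforation rate are illustrative choices, not taken from the survey.

# Loop perforation sketch: visit only 1 out of every `skip_factor` elements.
import numpy as np

def exact_mean(xs):
    total = 0.0
    for x in xs:                                # full effort: every element
        total += x
    return total / len(xs)

def perforated_mean(xs, skip_factor=4):
    total, count = 0.0, 0
    for i in range(0, len(xs), skip_factor):    # perforated loop
        total += xs[i]
        count += 1
    return total / count

if __name__ == "__main__":
    data = np.random.default_rng(1).normal(10.0, 2.0, 1_000_000)
    e, a = exact_mean(data), perforated_mean(data, skip_factor=8)
    print(f"exact={e:.4f} approx={a:.4f} rel.err={abs(a - e) / e:.2%}")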

    Eunomia: Enabling User-specified Fine-Grained Search in Symbolically Executing WebAssembly Binaries

    Although existing techniques have proposed automated approaches to alleviate the path explosion problem of symbolic execution, users still need to optimize symbolic execution by carefully applying various searching strategies. As existing approaches mainly support only coarse-grained global searching strategies, they cannot efficiently traverse complex code structures. In this paper, we propose Eunomia, a symbolic execution technique that allows users to specify local domain knowledge to enable fine-grained search. In Eunomia, we design an expressive DSL, Aes, that lets users precisely assign local searching strategies to different parts of the target program. To further optimize local searching strategies, we design an interval-based algorithm that automatically isolates the context of variables for different local searching strategies, avoiding conflicts between local searching strategies over the same variable. We implement Eunomia as a symbolic execution platform targeting WebAssembly, which enables us to analyze applications that are written in various languages (such as C and Go) and compiled to WebAssembly. To the best of our knowledge, Eunomia is the first symbolic execution engine that supports the full features of the WebAssembly runtime. We evaluate Eunomia with a dedicated microbenchmark suite for symbolic execution and six real-world applications. Our evaluation shows that Eunomia accelerates bug detection in real-world applications by up to three orders of magnitude. According to the results of a comprehensive user study, users can significantly improve the efficiency and effectiveness of symbolic execution by writing a simple and intuitive Aes script. Besides verifying six known real-world bugs, Eunomia also detected two new zero-day bugs in a popular open-source project, Collections-C. Comment: Accepted by the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) 2023.
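    To make the notion of "fine-grained, per-region search" concrete, here is a toy Python sketch of a symbolic-execution scheduler that ranks pending states with a user-supplied priority per code region instead of one global heuristic. The region map, priority values, and State class are invented for illustration; this is not the Aes DSL or Eunomia's internals.

# Toy per-region scheduling of symbolic states (hypothetical, not Eunomia's API).
import heapq
import itertools

# User-specified local strategies: code region (function name) -> priority.
# Lower value = explored first (e.g., prefer the parser before other code).
LOCAL_STRATEGIES = {"parse_header": 0, "decode_body": 1}
DEFAULT_PRIORITY = 5

class State:
    def __init__(self, region, depth):
        self.region = region   # code region the state is currently in
        self.depth = depth     # path depth, used as a tie-breaker

def priority(state):
    return (LOCAL_STRATEGIES.get(state.region, DEFAULT_PRIORITY), state.depth)

def schedule(states):
    """Yield states in the order this toy engine would explore them."""
    counter = itertools.count()                      # stable tie-breaking
    heap = [(priority(s), next(counter), s) for s in states]
    heapq.heapify(heap)
    while heap:
        _, _, s = heapq.heappop(heap)
        yield s

if __name__ == "__main__":
    pending = [State("main", 3), State("decode_body", 7), State("parse_header", 9)]
    print([s.region for s in schedule(pending)])
    # -> ['parse_header', 'decode_body', 'main']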

    2P-BFT-Log: 2-Phase Single-Author Append-Only Log for Adversarial Environments

    Replicated append-only logs sequentially order messages from the same author such that their ordering can eventually be recovered even with out-of-order and unreliable dissemination of individual messages. They are widely used for implementing replicated services in both clouds and peer-to-peer environments because they provide simple and efficient incremental reconciliation. However, existing designs of replicated append-only logs assume that replicas faithfully maintain the sequential properties of logs, and they do not provide eventual consistency when malicious participants fork their logs by disseminating different messages to different replicas for the same index, which may result in a partitioning of replicas according to which branch was replicated first. In this paper, we present 2P-BFT-Log, a two-phase replicated append-only log that provides eventual consistency in the presence of forks from malicious participants, such that all correct replicas will eventually agree either on the most recent message of a valid log (first phase) or on the earliest point at which a fork occurred together with an irrefutable proof that it happened (second phase). We provide definitions, algorithms, and proofs of the key properties of the design, and explain one way to implement the design on top of Git, an eventually consistent replicated database originally designed for distributed version control. Our design enables correct replicas to faithfully implement the happens-before relationship first introduced by Lamport, which underpins most existing distributed algorithms, with eventual detection of forks from malicious participants so that the latter can be excluded from further progress. This opens the door to adapting existing distributed algorithms to a cheaper detect-and-repair paradigm, rather than the more common and expensive systematic prevention of incorrect behaviour. Comment: Fixed 'two-phase' typo.
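    The core fork-evidence idea can be illustrated with a small sketch: if one author publishes two different messages for the same log index, the conflicting pair itself is irrefutable proof of equivocation. Real designs rely on digital signatures; plain SHA-256 digests stand in for them here, and none of this mirrors 2P-BFT-Log's actual data structures or phases.

# Toy fork detection for a single-author append-only log (illustrative only).
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    author: str
    index: int
    payload: bytes

    def digest(self) -> str:
        h = hashlib.sha256()
        h.update(f"{self.author}|{self.index}|".encode() + self.payload)
        return h.hexdigest()

def find_fork(entries):
    """Return a pair of conflicting entries (proof of a fork), or None."""
    seen = {}
    for e in entries:
        key = (e.author, e.index)
        if key in seen and seen[key].digest() != e.digest():
            return seen[key], e      # the conflicting pair is the proof
        seen.setdefault(key, e)
    return None

if __name__ == "__main__":
    log = [Entry("alice", 1, b"a"), Entry("alice", 2, b"b"), Entry("alice", 2, b"B")]
    print("fork proof:", find_fork(log))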

    Semantics-based privacy by design for Internet of Things applications

    As Internet of Things (IoT) technologies become more widespread in everyday life, privacy issues are becoming more prominent. The aim of this research is to develop a personal assistant that can answer software engineers’ questions about Privacy by Design (PbD) practices during the design phase of IoT system development. Semantic web technologies are used to model the knowledge underlying PbD measures, their intersections with privacy patterns, IoT system requirements, and the privacy patterns that should be applied across IoT systems. This is achieved through the PARROT ontology, developed from a set of representative IoT use cases relevant to software developers. The work was supported by gathering Competency Questions (CQs) through a series of workshops, resulting in 81 curated CQs. These CQs were then recorded as SPARQL queries, and the developed ontology was evaluated against the Common Pitfalls model with the help of the Protégé HermiT Reasoner and the Ontology Pitfall Scanner (OOPS!), as well as by external experts. The ontology was also assessed in a user study, which found that the PARROT ontology can answer up to 58% of privacy-related questions from software engineers.
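    As an illustration of how a competency question can be recorded as a SPARQL query and answered over an ontology, here is a minimal Python sketch using rdflib. The namespace, file name, and class/property names (PrivacyRequirement, addressedBy, PrivacyPattern) are hypothetical placeholders; the real PARROT vocabulary is not given in the abstract.

# Hypothetical CQ ("Which privacy patterns address a given requirement?") as SPARQL.
from rdflib import Graph

CQ_QUERY = """
PREFIX parrot: <http://example.org/parrot#>
SELECT ?pattern WHERE {
    ?req a parrot:PrivacyRequirement ;
         parrot:addressedBy ?pattern .
    ?pattern a parrot:PrivacyPattern .
}
"""

def answer_cq(ontology_path: str):
    g = Graph()
    g.parse(ontology_path)       # e.g. a local copy of the ontology in Turtle
    return [str(row.pattern) for row in g.query(CQ_QUERY)]

if __name__ == "__main__":
    for iri in answer_cq("parrot.ttl"):
        print(iri)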

    Context-aware Knowledge-based Systems: A Literature Review

    Context-aware systems, a subcategory of intelligent systems, are concerned with suggesting products/services relevant to users' situations as smart services. One key element for improving the quality of smart services is to organize and manipulate contextual data in an appropriate manner so as to facilitate knowledge generation from these data. In this light, a knowledge-based approach can be used as a key component in context-aware systems. Context awareness and knowledge-based systems have, in fact, been gaining prominence in their respective domains for decades. However, few studies have focused on how to reconcile the two fields to maximize the benefits of each. For this reason, the objective of this paper is to present a literature review of how context-aware systems, with a focus on the knowledge-based approach, have recently been conceptualized, in order to promote further research in this area. Finally, the implications and current challenges of the study are discussed.

    Node assembly for waste level measurement: embrace the smart city

    Municipal Solid Waste Management Systems (MSWMS) worldwide are currently facing pressure due to the rapid growth of the population in cities. One of the biggest challenges in such systems is the inefficient expenditure of time and fuel in waste collection. In this regard, the cities/municipalities in charge of an MSWMS could take advantage of information and communication technologies to improve the overall quality of their infrastructure. One particular strategy that has been explored, and is showing interesting results, is using a Wireless Sensor Network (WSN) to monitor waste levels in real time and help decision-making regarding the need for collection. The WSN is equipped with sensing devices that should be carefully chosen considering the real scenario in which they will work. Therefore, in this work, three sets of sensors were studied to evaluate which is the best to be used in the future WSN assembled in Bragança, Portugal. The sets tested were HC-SR04 (S1), HC-SR04 + DHT11 (S2), and US-100 (S3). The tests considered in this work covered air temperature and several distances. In the first test, the performance of each set in measuring a fixed target (a metal and a plastic box) was evaluated under different temperatures (1.7-37 °C). From these results, the two best sets were further used to assess distance measurement at a fixed temperature. This test revealed low errors in measuring the distances of interest in this work, ranging from 0.18% to 1.27%. This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/05757/2020, UIDB/00690/2020, LA/P/0045/2020, UIDB/50020/2020, and UIDP/50020/2020. Adriano Silva was supported by FCT-MIT Portugal PhD grant SFRH/BD/151346/2021, and Thadeu Brito was supported by FCT PhD grant SFRH/BD/08598/2020. Jose L. Diaz de Tuesta acknowledges the financial support of the Atracción de Talento programme of the Comunidad de Madrid (Spain) through the individual research grant 2020-T2/AMB-19836.
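    The reason a DHT11 is paired with the HC-SR04 in set S2 is that ultrasonic ranging depends on the speed of sound, which varies with air temperature. The sketch below uses the standard approximation c ≈ 331.4 + 0.606·T m/s to compensate the echo round-trip time; the constants are the textbook values, not the paper's own calibration, and GPIO/pin handling is omitted.

# Temperature-compensated ultrasonic distance estimate (illustrative sketch).

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in air (m/s) at temperature temp_c (°C)."""
    return 331.4 + 0.606 * temp_c

def distance_cm(echo_time_s: float, temp_c: float) -> float:
    """One-way distance to the target from the HC-SR04 echo round-trip time."""
    return speed_of_sound(temp_c) * echo_time_s / 2 * 100  # metres -> cm

if __name__ == "__main__":
    echo = 0.00292  # ~2.92 ms round trip, roughly half a metre away
    for t in (1.7, 20.0, 37.0):
        print(f"T={t:5.1f} °C -> {distance_cm(echo, t):6.2f} cm")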

    Responsible Composition and Optimization of Integration Processes under Correctness Preserving Guarantees

    Enterprise Application Integration deals with the problem of connecting heterogeneous applications and is the centerpiece of current on-premise, cloud, and device integration scenarios. For integration scenarios, structurally correct composition of patterns into processes and improvement of integration processes are crucial. In order to achieve this, we formalize compositions of integration patterns based on their characteristics and describe optimization strategies that help to reduce model complexity and improve process execution efficiency using design-time techniques. Using the formalism of timed DB-nets - a refinement of Petri nets - we model integration logic features such as control and data flow, transactional data storage, compensation and exception handling, and time aspects that are present in recurring solutions as separate integration patterns. We then propose a realization of the optimization strategies using graph rewriting and prove that the optimizations we consider preserve both structural and functional correctness. We evaluate the improvements on a real-world catalog of pattern compositions containing over 900 integration processes, and illustrate the correctness properties in case studies based on two of these processes. Comment: 37 pages.
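    The "optimization as rewriting" idea can be pictured with a toy Python sketch: a rewrite rule that fuses two adjacent message-translator ("map") steps into one node, shrinking the process model while preserving its input/output behaviour. This only illustrates the general shape of such rules; the paper works on timed DB-nets and a much richer, formally verified rule catalog.

# Toy rewrite rule on a linear integration process: map(f) ; map(g) -> map(g o f).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Node:
    kind: str                      # e.g. "map", "filter", "enrich"
    fn: Callable

def fuse_adjacent_maps(process: List[Node]) -> List[Node]:
    out: List[Node] = []
    for node in process:
        if out and out[-1].kind == "map" and node.kind == "map":
            f, g = out[-1].fn, node.fn
            out[-1] = Node("map", lambda msg, f=f, g=g: g(f(msg)))  # fuse
        else:
            out.append(node)
    return out

if __name__ == "__main__":
    pipeline = [
        Node("map", lambda m: {**m, "currency": "EUR"}),
        Node("map", lambda m: {**m, "total": m["net"] * 1.19}),
        Node("filter", lambda m: m["total"] > 0),
    ]
    optimized = fuse_adjacent_maps(pipeline)
    print(len(pipeline), "->", len(optimized), "nodes")  # 3 -> 2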

    Workrs: Fault Tolerant Horizontal Computation Offloading

    The broad development and usage of edge devices have highlighted the importance of creating resilient and computationally capable environments. When working with edge devices, these desiderata are usually achieved through replication and offloading. This paper reports on the design and implementation of Workrs, a fault-tolerant service that enables the offloading of jobs from devices with limited computational power. We propose a solution that allows users to upload jobs through a web service; the jobs are then executed on edge nodes within the system. The solution is designed to be fault tolerant and scalable, with no single point of failure and the ability to accommodate growth if the service is expanded. The use of Docker checkpointing on the worker machines ensures that jobs can be resumed in the event of a fault. We provide a mathematical approach to optimize the number of checkpoints created along a computation, given that we can forecast the time needed to execute a job. We present experiments that indicate in which scenarios checkpointing benefits job execution. The results are based on a working prototype and show clear benefits of checkpointing and restore when job completion time grows relative to the forecast fault rate. The code of Workrs is released as open source and is available at \url{https://github.com/orgs/P7-workrs/repositories}. This paper is an extended version of \cite{edge2023paper}. Comment: Extended version of a paper accepted at IEEE Edge 2023.
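    For intuition on checkpoint placement when the job length can be forecast, here is a back-of-the-envelope sketch using Young's classical approximation for the optimal checkpoint interval, sqrt(2 · C · MTBF). This is a stand-in for illustration; the paper derives its own optimization, which is not reproduced in the abstract, and the numbers below are invented.

# Checkpoint planning sketch based on Young's approximation (illustrative only).
import math

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Approximately optimal time between checkpoints (seconds)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def plan_checkpoints(job_length_s: float, checkpoint_cost_s: float, mtbf_s: float):
    """Return the number of checkpoints and their approximate positions."""
    interval = young_interval(checkpoint_cost_s, mtbf_s)
    n = max(0, math.ceil(job_length_s / interval) - 1)  # none if the job is short
    return n, [round((i + 1) * interval, 1) for i in range(n)]

if __name__ == "__main__":
    # e.g. a 2-hour job, 30 s to write a Docker checkpoint, one fault per 6 h
    n, positions = plan_checkpoints(7200, 30, 6 * 3600)
    print(f"{n} checkpoints at t = {positions} s")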

    Cloud computing: developing a cost estimation model for customers

    Cloud computing is an essential part of the digital transformation journey. It offers many benefits to organisations, including the advantages of scalability and agility. Cloud customers see cloud computing as a moving train that every organisation needs to catch, which means that adoption decisions are made quickly in order to keep up with the new trend. Such quick decisions have led to many disappointments for cloud customers and have called the cost of the cloud into question. This is also because there is a lack of criteria or guidelines to help cloud customers get a complete picture of what is required of them before they move to the cloud. From another perspective, as new technologies force changes to organizational structures and business processes, it is important to understand how cloud computing changes IT and non-IT departments and how this can be translated into costs. Accordingly, this research uses the total cost of ownership approach and transaction cost theory to develop a customer-centric model for estimating the cost of cloud computing. The research methodology followed the Design Science Research approach: expert interviews were used to develop the model, which was then validated using four case studies. The model, named Sunny, identifies many costs that need to be estimated, which will help make the cloud-based digital transformation journey less cloudy. These costs include Meta Services, Continuous Contract Management, Monitoring, and ITSM Adjustment. From an academic perspective, this research highlights the management effort required for cloud computing and how misleading the rapid provisioning potential of cloud resources can be. From a business perspective, a proper estimation of these costs would help customers make informed decisions and vendors make realistic promises.
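    A minimal sketch of a customer-side, TCO-style estimate in the spirit of the abstract: the raw provider bill plus the management cost categories the Sunny model highlights (Meta Services, Continuous Contract Management, Monitoring, ITSM Adjustment). All figures are invented placeholders, not values from the thesis.

# Customer-side yearly TCO sketch with the abstract's cost categories (illustrative).

def yearly_cloud_tco(provider_bill: float, management_costs: dict) -> float:
    """Provider charges plus customer-side management effort, per year."""
    return provider_bill + sum(management_costs.values())

if __name__ == "__main__":
    management = {
        "meta_services": 12_000.0,                 # onboarding, landing zone, IAM
        "continuous_contract_management": 8_000.0,
        "monitoring": 15_000.0,
        "itsm_adjustment": 10_000.0,
    }
    total = yearly_cloud_tco(provider_bill=90_000.0, management_costs=management)
    print(f"estimated yearly TCO: {total:,.0f} EUR")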
    • 

    corecore