
    Liveness of Randomised Parameterised Systems under Arbitrary Schedulers (Technical Report)

    We consider the problem of verifying liveness for systems with a finite, but unbounded, number of processes, commonly known as parameterised systems. Typical examples of such systems include distributed protocols (e.g. for the dining philosophers problem). Unlike the case of verifying safety, proving liveness is still considered extremely challenging, especially in the presence of randomness in the system. In this paper we consider liveness under arbitrary (including unfair) schedulers, which is often considered a desirable property in the literature on self-stabilising systems. We introduce an automatic method of proving liveness for randomised parameterised systems under arbitrary schedulers. Viewing liveness as a two-player reachability game (between Scheduler and Process), our method is a CEGAR approach that synthesises a progress relation for Process that can be symbolically represented as a finite-state automaton. The method is incremental and exploits both Angluin-style L*-learning and SAT solvers. Our experiments show that our algorithm is able to prove liveness automatically for well-known randomised distributed protocols, including the Lehmann-Rabin Randomised Dining Philosophers Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon Protocol). To the best of our knowledge, this is the first fully automatic method that can prove liveness for randomised protocols. (Comment: full version of the CAV'16 paper.)
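
    To make the two-player view concrete, below is a minimal Python sketch of the classical backward attractor computation for a reachability game between Scheduler and Process, on a small explicit graph with hypothetical states and moves. It is not the paper's L*/SAT-based synthesis of a symbolic progress relation, only the underlying game notion.

```python
# Backward attractor for a two-player reachability game. Process wins from
# a state if it can force reaching the target set no matter how Scheduler
# resolves its choices. States and edges below are hypothetical.

def attractor(states, edges, owner, target):
    """Return the set of states from which Process can force `target`.

    owner[s] is "P" (Process picks the successor) or "S" (Scheduler picks).
    Process wins from s if:
      - owner is "P" and SOME successor is winning, or
      - owner is "S" and ALL successors are winning.
    """
    win = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            succs = edges.get(s, [])
            if not succs:
                continue
            if owner[s] == "P" and any(t in win for t in succs):
                win.add(s)
                changed = True
            elif owner[s] == "S" and all(t in win for t in succs):
                win.add(s)
                changed = True
    return win

# Hypothetical 4-state game: Process must reach state 3 from state 0.
states = [0, 1, 2, 3]
edges = {0: [1, 2], 1: [3], 2: [0, 3]}
owner = {0: "S", 1: "P", 2: "P", 3: "P"}
print(attractor(states, edges, owner, {3}))  # {0, 1, 2, 3}: Process wins everywhere
```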

    Java Challenge Software Project

    Programming contests are a means of exploiting the problem-solving capabilities of developers, and they provide a forum for displaying extraordinary programming skills. The Java Challenge (JC) Software Project is the saga of creating an automated, secure and responsive programming contest system for deployment on the Internet, and of collecting information about programming practices, habits, and trends in coding in such a restricted environment. The methodology followed to design, implement, and evaluate such a system uses new technologies such as the WWW, mail filtering and sandboxing techniques. The current implementation runs the Java Challenge on a Solaris 2.6 platform under specified regulations. The scripts are developed in Perl. The security features of JDK 1.2 have been researched and successfully implemented. The mode of entry acceptance is electronic mail in a specified format. Standard Unix features have been used for data archiving and information redirection. The JC software is an application package that conducts programming contests in an automated manner, provides a secure environment for evaluation, and does web listing updates automatically.
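
    For illustration, here is a minimal Python sketch of the core judging step such a system automates: running an untrusted submission under a time limit and comparing its output against the expected answer. The actual JC system used Perl scripts, e-mail based entry acceptance, and JDK 1.2 sandboxing; the command, limits, and verdict names below are hypothetical.

```python
import subprocess

def judge(cmd, stdin_text, expected, timeout_s=5):
    """Run one test case against a contestant's program and return a verdict."""
    try:
        result = subprocess.run(
            cmd,                  # e.g. ["java", "Solution"] (hypothetical)
            input=stdin_text,
            capture_output=True,
            text=True,
            timeout=timeout_s,    # kill runaway submissions
        )
    except subprocess.TimeoutExpired:
        return "TIME_LIMIT_EXCEEDED"
    if result.returncode != 0:
        return "RUNTIME_ERROR"
    if result.stdout.strip() == expected.strip():
        return "ACCEPTED"
    return "WRONG_ANSWER"

# Hypothetical use: print(judge(["java", "Solution"], "1 2\n", "3"))
```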

    Using the Go Programming Language in Practice

    When developing software today, we still use old tools and ideas. Maybe it is time to start from scratch and try tools and languages that are more in line with how we actually want to develop software. The Go Programming Language was created at Google by a rather famous trio: Rob Pike, Ken Thompson and Robert Griesemer. Before introducing Go, the company suffered from a development process that did not scale well due to slow builds, uncontrolled dependencies, hard-to-read code, poor documentation and so on. Go sets out to provide a solution for these issues. The purpose of this master's thesis was to review the current state of the language. This is not only a study of the language itself but an investigation of the whole software development process using Go. The study was carried out from an embedded-development perspective, which includes an investigation of compilers and cross-compilation. We found that Go is exciting, fun to use, and in many cases fulfils what is promised. However, we think the tools need some more time to mature.
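
    As a concrete illustration of the cross-compilation workflow the thesis investigates: the Go toolchain selects the target platform through the standard GOOS and GOARCH environment variables. Below is a minimal Python sketch that drives such a cross-build; the package path and target are hypothetical.

```python
import os
import subprocess

def cross_build(pkg_dir, goos="linux", goarch="arm"):
    """Build all Go packages under pkg_dir for the given target platform."""
    env = dict(os.environ, GOOS=goos, GOARCH=goarch)  # standard Go cross-compile knobs
    subprocess.run(["go", "build", "./..."], cwd=pkg_dir, env=env, check=True)

# Hypothetical use, targeting an embedded ARM Linux board:
# cross_build("/path/to/project", goos="linux", goarch="arm")
```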

    Resilient scalable internet routing and embedding algorithms


    Quality of Service in the IoT: The Fog Layer

    The Internet of Things (IoT) can be defined as a combination of technological push and human pull; this push-and-pull effect results in ever more connectivity among objects and humans in their near surrounding environments [1]. With the recent growth of the IoT, the risk of real-time failures has increased as well. Failures are often traced to specific points of vulnerability in the system; narrowing down to the root causes identifies the points of failure and leads to the measures required to overcome them. This creates the need for IoT systems to have a proper Quality of Service (QoS) architecture, and QoS is thus becoming a crucial issue with the democratization of the IoT. QoS is the description or measurement of the overall performance of a service, such as a telephony or computer network or a cloud computing service, particularly the performance seen by the users of the network. In this study, we propose methods for enforcing QoS in IoT platforms. We highlight the challenges and recurrent issues faced by IoT platforms, which inspired us to build a generic, easy-to-set-up tool that overcomes these challenges by enforcing QoS across IoT platforms. The main focus of this study is to enable QoS features in the fog layer of the IoT architecture. Existing platforms and systems enabling QoS features in the fog layer are also highlighted. Finally, we validate our proposed model by implementing it on our AMI-LAB platform.
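
    As one concrete example of a QoS mechanism a fog node can enforce between IoT devices and the cloud, consider priority scheduling with per-message latency budgets. The Python sketch below is a generic illustration under assumed message fields and policies; it is not the thesis's tool or the AMI-LAB platform's API.

```python
import heapq
import time

class FogQosQueue:
    """Dispatch messages by priority; drop those past their latency budget."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal priorities stay FIFO

    def submit(self, payload, priority, latency_budget_s):
        deadline = time.monotonic() + latency_budget_s
        heapq.heappush(self._heap, (priority, self._seq, deadline, payload))
        self._seq += 1

    def next_message(self):
        while self._heap:
            priority, _, deadline, payload = heapq.heappop(self._heap)
            if time.monotonic() <= deadline:
                return payload
            # Past its budget: dropping is one possible QoS policy;
            # degrading or batching are alternatives.
        return None

# Hypothetical traffic mix: an urgent alert and a bulk telemetry record.
q = FogQosQueue()
q.submit("fall-detection alert", priority=0, latency_budget_s=0.5)
q.submit("hourly temperature log", priority=9, latency_budget_s=3600)
print(q.next_message())  # "fall-detection alert" (lower number = higher priority)
```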

    7. GI/ITG KuVS Fachgespräch Drahtlose Sensornetze

    This proceedings volume collects the contributions to the Fachgespräch Drahtlose Sensornetze (expert discussion on wireless sensor networks) 2008. The aim of this Fachgespräch is to give researchers in this field the opportunity for an informal exchange; participants from industrial research are always welcome as well, and are again taking part this year. The Fachgespräch is a deliberately informal event of the GI/ITG special interest group "Kommunikation und Verteilte Systeme" (www.kuvs.de). It is expressly not yet another conference, with its large overhead and the expectation of presenting finished and preferably "watertight" results; rather, it explicitly also serves to discuss with newcomers still searching for their topic, and to find out where the challenges for future research actually lie. The Fachgespräch Drahtlose Sensornetze 2008 takes place in Berlin, on the premises of Freie Universität Berlin, in cooperation with ScatterWeb GmbH. This, too, is a novelty, and it shows that the Fachgespräch is clearly more than just a pleasant get-together under a common motto. For organising the venue and the evening event, thanks are due to the two members of the organising committee, Kirsten Terfloth and Georg Wittenburg, but also to Stefanie Bahe, who took on the editorial preparation of the proceedings, to many other members of the AG Technische Informatik at FU Berlin, and of course to its head, Prof. Jochen Schiller.

    A Distributed System for Robot Manipulator Control, NSF Grant ECS-11879 Fourth Report

    This is the fourth annual report, covering our last year's work under the current grant. This work was directed to the development of a distributed computer architecture to function as a force and motion server to a robot system. In the course of this work we developed a compliant contact sensor to provide for transitions between position and force control; developed an end-effector capable of securing a stable grasp on an object, together with a theory of grasping; developed and built a controller which minimizes control delays; explored parallel kinematics algorithms for the controller; developed a consistent approach to the definition of motion both in joint coordinates and in Cartesian coordinates; and developed a symbolic simplification software package to generate the dynamics equations of a manipulator such that the calculations may be split between background and foreground.
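
    To illustrate the symbolic step, the SymPy sketch below derives the dynamics equation of a single-link arm (a pendulum) from its Lagrangian. The report's package handled full manipulators; the background/foreground split indicated in the comments is only illustrative.

```python
import sympy as sp

t = sp.symbols("t")
m, l, g, tau = sp.symbols("m l g tau")   # mass, link length, gravity, joint torque
theta = sp.Function("theta")(t)          # joint angle

# Lagrangian L = T - V for a point mass at the end of a massless link.
T = sp.Rational(1, 2) * m * l**2 * sp.diff(theta, t)**2
V = -m * g * l * sp.cos(theta)
L = T - V

# Euler-Lagrange equation: d/dt(dL/dq') - dL/dq = tau.
eq = sp.diff(sp.diff(L, sp.diff(theta, t)), t) - sp.diff(L, theta) - tau
print(sp.simplify(eq))  # m*l**2*theta'' + g*l*m*sin(theta) - tau

# "Background" terms (configuration-independent coefficients such as m*l**2)
# could be precomputed offline, while "foreground" terms (state-dependent,
# e.g. sin(theta)) are evaluated in the real-time control loop.
```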

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in the solution of today's infinitely complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and magnetic-media storage risk factors, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

    Elastic techniques to handle dynamism in real-time data processing systems

    Real-time data processing is a crucial component of cloud computing today. It is widely adopted to provide an up-to-date view of data for social networks, cloud management, web applications, and edge and IoT infrastructures. Real-time processing frameworks are designed for time-sensitive tasks such as event detection, real-time data analysis, and prediction. Compared to handling offline, batched data, real-time data processing applications tend to be long-running and are prone to performance issues caused by many unpredictable environmental variables, including (but not limited to) job specification, user expectation, and available resources. To cope with this challenge, it is crucial for system designers to improve frameworks' ability to adjust their resource usage to adapt to changing environmental variables, a capability we define as system elasticity. This thesis investigates how elastic resource provisioning helps cloud systems today process real-time data while maintaining predictable performance under workload influence in an automated manner. We explore new algorithms, framework design, and efficient system implementation to achieve this goal. At the same time, distributed systems today need to continuously handle various application specifications, hardware configurations, and workload characteristics. Maintaining stable performance requires systems to explicitly plan resource allocation when an application starts and to tailor allocation dynamically at run time. In this thesis, we show how system elasticity can help systems provide tunable performance under the dynamism of many environmental variables without compromising resource efficiency. Specifically, this thesis focuses on the following two aspects:
    i) Elasticity-aware scheduling: Real-time data processing systems today are often designed in a resource- and workload-agnostic fashion. As a result, most users are unable to perform resource planning before launching an application or to adjust resource allocation (both within and across application boundaries) intelligently during the run. The first part of this thesis (Stela [1], Henge [2], Getafix [3]) explores efficient mechanisms to conduct performance analysis while also enabling elasticity-aware scheduling in today's cloud frameworks.
    ii) Resource-efficient cloud stack: The second line of work in this thesis aims to improve underlying cloud stacks to support self-adaptive, highly efficient resource provisioning. Today's cloud systems enforce full isolation, which prevents fine-grained resource sharing among applications over time. This work (Cameo [4], Dirigo) builds real-time data processing systems for emerging cloud infrastructures that achieve high resource utilization through fine-grained resource sharing.
    Given that the market for real-time data analysis is expected to grow at an annual rate of 28.2% and reach 35.5 billion by 2024 [5], improving system elasticity can significantly reduce deployment cost and increase resource utilization. Our work improves the performance of real-time data analytics applications within resource constraints. We highlight some of the improvements below (a toy sketch of such a scaling decision follows this list):
    i) Stela explores elastic techniques for single-tenant, on-demand dataflow scale-out and scale-in operations. It improves post-scale throughput by 45-120% during on-demand scale-out and by 2-5× during on-demand scale-in.
    ii) Henge develops a mechanism to map an application's performance onto a unified scale of resource needs. It reduces resource consumption by 40-60% while maintaining the same level of SLO achievement throughout the cluster.
    iii) Getafix implements a strategy to analyze workloads dynamically and adaptively decide how many replicas to generate and where to place them. It achieves comparable query latency (both average and tail) while delivering 1.45-2.15× memory savings.
    iv) Cameo proposes a scheduler that supports data-driven, fine-grained operator execution guided by user expectations. It improves cluster utilization by 6× and reduces performance violations by 72% while packing more jobs into a shared cluster.
    v) Dirigo performs fully decentralized, function-state-aware, global message scheduling for stateful functions. It reduces tail latency by 60% compared to a local scheduling approach, and reduces remote state accesses by 19× compared to a scheduling approach that is unaware of function states.
    These works can potentially lead to profound cost savings for both cloud providers and end users.
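
    To make the elasticity idea concrete, the Python sketch below shows a toy scale-out decision loop: watch per-operator congestion in a dataflow and greedily grant spare executors to the most congested operator. It is not Stela's or Henge's actual algorithm; the metrics, threshold, and operator names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class OperatorStats:
    name: str
    input_rate: float       # tuples/s arriving at the operator
    processing_rate: float  # tuples/s the current executors can handle
    executors: int

def rebalance(operators, spare_executors, congestion_threshold=1.2):
    """Greedily assign spare executors to the most congested operator."""
    while spare_executors > 0:
        # Congestion = arrival pressure relative to processing capacity.
        worst = max(operators, key=lambda o: o.input_rate / o.processing_rate)
        if worst.input_rate / worst.processing_rate < congestion_threshold:
            break  # no operator is congested enough to justify scaling out
        per_executor = worst.processing_rate / worst.executors
        worst.executors += 1
        worst.processing_rate += per_executor  # assume linear scaling
        spare_executors -= 1
    return operators

ops = [
    OperatorStats("parse", input_rate=900, processing_rate=1000, executors=2),
    OperatorStats("join",  input_rate=800, processing_rate=500,  executors=2),
]
for op in rebalance(ops, spare_executors=2):
    print(op.name, op.executors)  # "join" scales from 2 to 3; "parse" stays at 2
```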