
    Making the Distribution Subsystem Secure

    This report presents how the Distribution Subsystem (DSS) is made secure. A set of security threats to a shared-data programming system is identified. The report presents the extensions necessary to the DSS to cope with the identified threats by maintaining referential security: a reference to a shared data structure cannot be forged or guessed; only by proper delegation can a thread acquire access to data originating at remote processes. Referential security is a requirement for secure distributed applications. By programmatically restricting access to distributed data to trusted nodes, a distributed application can be made secure. However, for this to hold, referential security must be supported at the level of the implementation.
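    The unforgeable-reference idea can be sketched in a few lines (a toy illustration with hypothetical names, not the DSS API): mint each reference as a cryptographically random token, so a process can obtain access only by being handed the token through explicit delegation.

    ```python
    import secrets

    class ReferenceTable:
        """Maps unguessable tokens to shared data; only delegation reveals a token."""
        def __init__(self):
            self._entries = {}

        def export(self, obj):
            # Mint a 128-bit random token: guessing or forging it is infeasible.
            token = secrets.token_hex(16)
            self._entries[token] = obj
            return token  # hand this only to trusted peers (delegation)

        def resolve(self, token):
            # A forged or guessed token almost surely has no entry here.
            return self._entries[token]

    table = ReferenceTable()
    ref = table.export({"shared": "state"})
    assert table.resolve(ref) == {"shared": "state"}
    ```

    The same pattern underlies capability-style security generally: possession of the reference is the access right, so restricting who receives tokens restricts who can reach the data.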

    Practical Experiences in using Model-Driven Engineering to Develop Trustworthy Computing Systems

    In this paper, we describe how Motorola has deployed model-driven engineering in product development, in particular for the development of trustworthy and highly reliable telecommunications systems, and outline the benefits obtained. Model-driven engineering has dramatically increased both the quality and the reliability of software developed in our organization, as well as the productivity of our software engineers. Our experience demonstrates that model-driven engineering significantly improves the development process for trustworthy computing systems.

    A Framework for Resource Efficient Profiling of Spatial Model Performance

    2022 Summer. Includes bibliographical references. We design models to understand phenomena, make predictions, and/or inform decision-making. This study targets models that encapsulate spatially evolving phenomena. Given a model M, our objective is to identify how well the model predicts across all geospatial extents. A modeler may expect these validations to occur at varying spatial resolutions (e.g., states, counties, towns, census tracts). Assessing a model with all available ground-truth data is infeasible due to the data volumes involved. We propose a framework to assess the performance of models at scale over diverse spatial data collections. Our methodology ensures orchestration of validation workloads while reducing memory strain, alleviating contention, enabling concurrency, and ensuring high throughput. We introduce the notion of a validation budget that represents an upper bound on the total number of observations that are used to assess the performance of models across spatial extents. The validation budget attempts to capture the distribution characteristics of observations and is informed by multiple sampling strategies. Our design allows us to decouple the validation from the underlying model-fitting libraries to interoperate with models designed using different libraries and analytical engines; our advanced research prototype currently supports Scikit-learn, PyTorch, and TensorFlow. We have conducted extensive benchmarks that demonstrate the suitability of our methodology.
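    The validation-budget idea can be illustrated with a minimal sketch (hypothetical names and a single proportional strategy; the framework's actual sampling strategies are more elaborate): cap the total observations used for validation, allocate per-extent quotas in proportion to how many observations each spatial extent holds, then sample within each extent.

    ```python
    import random

    def allocate_budget(counts, budget):
        """Split a global validation budget across spatial extents
        proportionally to each extent's observation count."""
        total = sum(counts.values())
        return {extent: max(1, round(budget * n / total))
                for extent, n in counts.items()}

    def sample_observations(observations, quota, seed=0):
        # Uniform sampling within an extent; other strategies could
        # instead stratify by time or by spatial sub-regions.
        rng = random.Random(seed)
        return rng.sample(observations, min(quota, len(observations)))

    counts = {"county_a": 10_000, "county_b": 2_500, "county_c": 500}
    quotas = allocate_budget(counts, budget=1_000)
    # Larger extents receive proportionally larger validation quotas.
    assert quotas["county_a"] > quotas["county_b"] > quotas["county_c"]
    ```

    Decoupling the budget allocation from any particular model-fitting library is what lets the same validation workload run against Scikit-learn, PyTorch, or TensorFlow models.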

    LTS and Linked Data: a position paper

    "LTS and Linked Data: a position paper" outlines motivations for adopting linked data techniques for describing and managing our collections, and seeks to articulate a specific role for Library Technical Services (LTS) within this enterprise

    Programming Language interoperability in cross-platform software development

    Recent years have witnessed the rising popularity of software that is constructed by combining modules written in different programming languages. While the coexistence of multiple programming languages within the same codebase can bring benefits such as reusability and the ability to exploit the unique strengths of each language, this architecture adds significant complexity to the development and maintenance of such systems. This thesis proposes an approach to alleviate the pain of language interoperability in those systems by automating the generation of binding code between different languages. The proposed method uses metadata extracted from an Interface Description Language (IDL) to systematically generate the Application Programming Interface (API) in each involved language. As a result, code written in one language can seamlessly interact with code developed in others. The experimental results showed that the developed code generator improved the stability, scalability, and modularity of multi-language software systems.
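    The metadata-driven generation step can be sketched as follows (a simplified, hypothetical IDL schema and emitters, not the thesis's actual generator): parse the interface description once into metadata, then emit a binding stub per target language from that single source of truth.

    ```python
    # Hypothetical, simplified IDL metadata: one interface with typed methods.
    idl = {
        "interface": "Calculator",
        "methods": [{"name": "add", "args": [("a", "int"), ("b", "int")], "ret": "int"}],
    }

    def emit_python(meta):
        # Generates a proxy class; `_invoke` stands in for whatever
        # cross-language transport the generated code would call.
        lines = [f"class {meta['interface']}Proxy:"]
        for m in meta["methods"]:
            args = ", ".join(name for name, _ in m["args"])
            lines.append(f"    def {m['name']}(self, {args}):")
            lines.append(f"        return _invoke('{meta['interface']}.{m['name']}', {args})")
        return "\n".join(lines)

    def emit_c_header(meta):
        # Generates matching C prototypes from the same metadata.
        protos = []
        for m in meta["methods"]:
            args = ", ".join(f"{t} {n}" for n, t in m["args"])
            protos.append(f"{m['ret']} {meta['interface']}_{m['name']}({args});")
        return "\n".join(protos)

    print(emit_c_header(idl))  # int Calculator_add(int a, int b);
    ```

    Because both emitters consume the same metadata, the generated APIs stay consistent across languages by construction, which is the source of the stability and modularity gains the abstract describes.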

    Doctor of Philosophy

    dissertation: A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact, which can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable the deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing complete execution of the entire operating system with an overhead of several percent, on a realistic workload, and with minimal installation costs. To enable an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that enables an analysis algorithm to be programmed against the state of the recorded system through familiar terms of source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
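    The record/replay principle the dissertation builds on can be sketched in miniature (a toy illustration, not the whole-OS replay engine): during recording, log every nondeterministic input; during replay, substitute the log for the real sources, so re-execution is identical and can be analyzed offline as many times as needed.

    ```python
    import random

    class Recorder:
        """Log nondeterministic inputs during the original run."""
        def __init__(self):
            self.log = []
        def rand(self):
            v = random.random()
            self.log.append(v)   # capture the value the system actually saw
            return v

    class Replayer:
        """Feed recorded values back, making re-execution deterministic."""
        def __init__(self, log):
            self._it = iter(log)
        def rand(self):
            return next(self._it)

    def workload(env):
        # Any computation over nondeterministic inputs.
        return sum(env.rand() for _ in range(5))

    rec = Recorder()
    original = workload(rec)
    replayed = workload(Replayer(rec.log))
    assert original == replayed  # identical re-execution from the log
    ```

    In a real replay machine the logged inputs are interrupts, DMA, timestamps, and device reads rather than random numbers, but the invariant is the same: replaying the log reproduces the execution exactly, which is what allows arbitrarily heavy analysis to run after the fact.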

    Ur/Web: A Simple Model for Programming the Web

    The World Wide Web has evolved gradually from a document delivery platform to an architecture for distributed programming. This largely unplanned evolution is apparent in the set of interconnected languages and protocols that any Web application must manage. This paper presents Ur/Web, a domain-specific, statically typed functional programming language with a much simpler model for programming modern Web applications. Ur/Web's model is unified, where programs in a single programming language are compiled to other "Web standards" languages as needed; modular, supporting novel kinds of encapsulation of Web-specific state; and exposes simple concurrency, where programmers can reason about distributed, multithreaded applications via a mix of transactions and cooperative preemption. We give a tutorial introduction to the main features of Ur/Web, formalize the basic programming model with operational semantics, and discuss the language implementation and the production Web applications that use it. National Science Foundation (U.S.) (Grant CCF-1217501)
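    The "handler as transaction" flavor of Ur/Web's concurrency model can be loosely illustrated outside the language (a Python analogy with hypothetical names, not Ur/Web itself): each request handler runs against an isolated snapshot of the state and commits only if no concurrent handler committed in the meantime, otherwise it is transparently re-run.

    ```python
    import copy

    class Store:
        """Toy transactional store: optimistic concurrency with a version check."""
        def __init__(self):
            self.state, self.version = {}, 0

        def run(self, handler):
            while True:  # retry until the commit succeeds
                snap_version = self.version
                snapshot = copy.deepcopy(self.state)
                result = handler(snapshot)        # handler sees an isolated snapshot
                if self.version == snap_version:  # nobody committed in between
                    self.state, self.version = snapshot, self.version + 1
                    return result
                # else: state moved under us; transparently re-run the handler

    store = Store()
    store.run(lambda s: s.update(hits=s.get("hits", 0) + 1))
    ```

    The point of the analogy is the reasoning benefit the abstract claims: each handler can be understood as if it ran alone, because interleavings that would violate atomicity are rolled back and retried rather than observed.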

    Production Engineering and Management

    It is our pleasure to introduce the 8th edition of the International Conference on Production Engineering and Management (PEM), an event that is the result of the joint effort of the OWL University of Applied Sciences and the University of Trieste. The conference has been established as an annual meeting under the Double Degree Master Program “Production Engineering and Management” by the two partner universities. This year the conference is hosted at the university campus in Lemgo, Germany. The main goal of the conference is to offer students, researchers and professionals in Germany, Italy and abroad an opportunity to meet and exchange information, discuss experiences, specific practices and technical solutions for planning, design, and management of manufacturing and service systems and processes. As always, the conference is a platform aimed at presenting research projects, introducing young academics to the tradition of symposiums and promoting the exchange of ideas between industry and academia. This year’s special focus is on Supply Chain Design and Management in the context of Industry 4.0, which are currently major topics of discussion among experts and professionals. In fact, the features and problems of Industry 4.0 have been widely discussed in the last editions of the PEM conference, in which sustainability and efficiency also emerged as key factors. With the further study and development of Direct Digital Manufacturing technologies in connection with new Management Practices and Supply Chain Designs, the 8th edition of the PEM conference aims to offer new and interesting scientific contributions. The conference program includes 25 speeches organized in seven sessions. Two are specifically dedicated to “Direct Digital Manufacturing in the context of Industry 4.0”. The other sessions cover areas of great interest and importance to the participants of the conference, related to the main focus: “Supply Chain Design and Management”, “Industrial Engineering and Lean Management”, “Wood Processing Technologies and Furniture Production”, and “Management Practices and Methodologies”. The proceedings of the conference include the articles submitted and accepted after a careful double-blind refereeing process.

    Design and Analysis of Evergreen Virtually Clustered Automation Platform
