73 research outputs found

    A portable real-time operating system for embedded platforms

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2004. Includes bibliographical references (leaves: 55). Text in English; abstract in Turkish and English. ix, 74 leaves.
    In today's world, from TV sets to washing machines to cars, almost every electronic device is controlled by an embedded system. These systems handle many tasks simultaneously, and using an operating system lets this be done in a more standardized fashion. The purpose of this thesis is to design and write a portable real-time operating system for embedded systems that can be compiled together with any application using an ANSI C compiler. The main goal is to make it small enough to fit the smallest microcontrollers; further goals are high flexibility, good modularity, and high readability and maintainability of the source code.

    Towards the Correctness of Software Behavior in UML: A Model Checking Approach Based on Slicing

    Embedded systems are systems that have ongoing interactions with their environments, accepting requests and producing responses. Such systems are increasingly used in applications where failure is unacceptable: traffic control systems, avionics, automobiles, etc. Correct and highly dependable construction of such systems is therefore particularly important, and challenging. A very promising and increasingly attractive way of achieving this goal is formal verification. A formal verification method consists of three major components: a model for describing the behavior of the system, a specification language for embodying the correctness requirements, and an analysis method for verifying the behavior against those requirements. This Ph.D. thesis addresses the correctness of the behavioral design of embedded systems, using model checking as the verification technology. More precisely, we present a UML-based verification method that checks whether the conditions on the evolution of the embedded system are met by the model. Unfortunately, model checking is limited to medium-sized systems because of its high space requirements. To overcome this problem, the thesis integrates a slicing (reduction) technique.
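
    A minimal, illustrative sketch of the analysis component described above (explicit-state model checking of a safety requirement by reachability search) is given below. The transition system, state names, and property are invented for this example and are not taken from the thesis; the slicing idea corresponds to deleting, before the search, any part of the model that cannot affect the property.

```python
from collections import deque

# Illustrative behavioral model: each state maps to its successor states.
TRANSITIONS = {
    "red":       {"red_amber"},
    "red_amber": {"green"},
    "green":     {"amber"},
    "amber":     {"red"},
    "off":       set(),          # present in the model but unreachable here
}

def violates(state):
    """Correctness requirement as a state predicate: the light is never 'off'."""
    return state == "off"

def check_safety(initial):
    """Breadth-first reachability search: True iff no reachable state violates
    the requirement (the core of explicit-state model checking)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if violates(state):
            return False
        for succ in TRANSITIONS.get(state, set()):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return True

print(check_safety("red"))   # True: "off" cannot be reached from "red"
```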

    Metascheduling and Heuristic Co-Allocation Strategies in Distributed Computing

    In this paper, we address problems of efficient computing in distributed systems with non-dedicated resources, including utility grids. Because resources are not dedicated, global job flows from external users compete with the resource owners' local tasks. This competition for resource reservation between independent users and between local and global job flows substantially complicates scheduling and the provision of the required quality of service. The metascheduling concept justified in this work combines job-flow dispatching and application-level scheduling methods for parallel jobs with resource sharing and consumption policies that are established in virtual organizations and based on economic principles. We introduce heuristic slot selection and co-allocation strategies for parallel jobs. They are formalized by the given criteria and implemented by algorithms of linear complexity in the number of available slots.
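
    As a rough, hypothetical illustration of what a linear-complexity slot selection can look like (the paper's actual slot model also carries resource performance, cost, and time constraints, all omitted here): a single left-to-right scan over the list of available slots finds the cheapest run of n consecutive slots for a job that needs n of them.

```python
def cheapest_window(slot_costs, n):
    """Return (start_index, total_cost) of the cheapest run of n consecutive
    slots, using one pass over the list, i.e. linear in the number of slots.
    slot_costs is a hypothetical per-slot price list."""
    if n <= 0 or n > len(slot_costs):
        return None
    window = sum(slot_costs[:n])            # cost of the first window
    best_start, best_cost = 0, window
    for i in range(n, len(slot_costs)):
        window += slot_costs[i] - slot_costs[i - n]   # slide the window by one slot
        if window < best_cost:
            best_start, best_cost = i - n + 1, window
    return best_start, best_cost

print(cheapest_window([4.0, 2.5, 2.5, 3.0, 1.0, 1.5], n=3))   # (3, 5.5)
```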

    Implementing and Testing the APEX I/O Scheduler in Linux

    This thesis tests an implementation of the APEX I/O scheduler to see how it compares with modern schedulers and whether it better serves mixed-media workloads. APEX is a scheduling framework that aims to provide deterministic guarantees on storage service to applications. The implementation is done in Linux, a modern open-source operating system kernel that includes a loadable scheduler framework. The implementation compares favorably with the existing Linux schedulers, despite problems arising from the assumptions that mixed-media scheduler designs make about modern operating system environments.
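
    The abstract does not spell out APEX's internal policy, so the sketch below is only a generic illustration of the kind of guarantee a deterministic mixed-media scheduler can aim for: each guaranteed stream receives a fixed per-round budget of requests, and leftover capacity goes to best-effort traffic. All class names, budgets, and queues here are assumptions made for this example, not APEX's design.

```python
from collections import defaultdict, deque

class BudgetedDispatcher:
    """Toy mixed-media dispatcher (not APEX itself): guaranteed streams are
    served up to a per-round request budget before best-effort requests."""

    def __init__(self, budgets):
        self.budgets = dict(budgets)                  # stream name -> requests per round
        self.queues = defaultdict(deque)

    def submit(self, stream, request):
        self.queues[stream].append(request)

    def next_round(self, capacity):
        """Pick up to `capacity` requests for one scheduling round."""
        dispatched = []
        for stream, budget in self.budgets.items():   # guaranteed streams first
            q = self.queues[stream]
            for _ in range(min(budget, len(q), capacity - len(dispatched))):
                dispatched.append(q.popleft())
        best_effort = self.queues["best_effort"]      # remaining capacity: best effort
        while best_effort and len(dispatched) < capacity:
            dispatched.append(best_effort.popleft())
        return dispatched

d = BudgetedDispatcher({"video": 2})
for r in ["v1", "v2", "v3"]:
    d.submit("video", r)
for r in ["b1", "b2"]:
    d.submit("best_effort", r)
print(d.next_round(capacity=4))   # ['v1', 'v2', 'b1', 'b2']
```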

    Index to 1984 NASA Tech Briefs, volume 9, numbers 1-4

    Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index to the 1984 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.

    Incorporating traffic patterns to improve delivery performance

    Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2010. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 63-64). By Melody J. Dickinson and Jillian Leifer.
    Traffic, construction, and other road hazards affect the on-time performance of companies that operate delivery fleets. This study examines how incorporating traffic patterns into vehicle route development compares with standard, deterministic methods. We seek to understand how using historical data improves both planning and overall delivery efficiency. Our analysis contrasts manifests developed by an industry-standard routing software tool with projections that use traffic data, benchmarking both against actual routes run by drivers. In addition to evaluating the differences between route planning tools, we explore why those differences exist, including how uncertainty is incorporated. The evidence suggests that incorporating traffic patterns into vehicle routing does produce improved solutions; however, the delivery process needs to be evaluated holistically. Our recommendations address the various steps for creating and executing a route. Operational considerations, the potential for improving customer service, and areas for further exploration are discussed. This thesis was conducted with sponsorship from a leading consumer products company and in coordination with the CarTel mobile sensing data project at the Massachusetts Institute of Technology (MIT).
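
    A toy illustration of the gap between deterministic and traffic-aware planning (segment names, distances, and speeds are invented; the study itself works with manifests from routing software and CarTel data): the same two-segment route is costed once with a single static speed per segment and once with an hour-dependent historical speed.

```python
# Hypothetical historical speeds (km/h) per road segment, keyed by departure hour.
HISTORICAL_SPEED = {
    "segment_a": {7: 25.0, 10: 55.0},    # congested at 7 am, free-flowing at 10 am
    "segment_b": {7: 40.0, 10: 50.0},
}
STATIC_SPEED = {"segment_a": 55.0, "segment_b": 50.0}   # deterministic planner's assumption
SEGMENT_KM = {"segment_a": 20.0, "segment_b": 15.0}

def route_minutes(segments, speed_of):
    """Travel time in minutes, given a function mapping a segment to a speed in km/h."""
    return sum(60.0 * SEGMENT_KM[s] / speed_of(s) for s in segments)

route = ["segment_a", "segment_b"]
deterministic = route_minutes(route, lambda s: STATIC_SPEED[s])
rush_hour = route_minutes(route, lambda s: HISTORICAL_SPEED[s][7])
print(round(deterministic, 1), round(rush_hour, 1))   # 39.8 vs. 70.5 minutes for the same route
```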

    Performance of Computer Systems; Proceedings of the 4th International Symposium on Modelling and Performance Evaluation of Computer Systems, Vienna, Austria, February 6-8, 1979

    These proceedings are a collection of contributions on computer system performance, selected by the usual refereeing process from papers submitted to the symposium, together with a few invited papers representing significant novel contributions made during the last year. They reflect the thrust and vitality of the subject as well as its capacity to identify important basic problems and major application areas. The main methodological problems appear in the underlying queueing-theoretic aspects, in the deterministic analysis of waiting-time phenomena, in workload characterization and representation, in the algorithmic aspects of model processing, and in the analysis of measurement data. Major application areas are computer architectures, databases, computer networks, and capacity planning. The international importance of the area of computer system performance was well reflected at the symposium by participants from 19 countries. The mixture of participants was also evident in the institutions they represented: 35% from universities, 25% from governmental research organizations, 30% from industry, and 10% from non-research government bodies. This shows that the area is reaching a stage of maturity where it can contribute directly to progress on practical problems.

    Developing Real-Time GPU-Sharing Platforms for Artificial-Intelligence Applications

    In modern autonomous systems such as self-driving cars, sustained safe operation requires running complex software at rates possible only with the help of specialized computational accelerators. Graphics processing units (GPUs) remain a foremost example of such accelerators, due to their relative ease of use and the proficiency with which they accelerate the neural-network computations underlying modern computer-vision and artificial-intelligence algorithms. This means that ensuring GPU processing completes in a timely manner is essential, but doing so is not necessarily simple, especially when a single GPU is concurrently shared by many applications. Existing real-time research includes several techniques for improving the timing characteristics of shared-GPU workloads, each with varying tradeoffs and practical limitations. In the world of timing correctness, however, one problem stands above all others: the lack of detailed information about how GPU hardware and software behave. GPU manufacturers are usually willing to publish documentation sufficient for producing logically correct software, or guidance on tuning software to achieve "real-fast," high-throughput performance, but they neglect to provide the details needed to establish temporal predictability. Techniques for improving the reliability of GPU software's temporal performance are only as good as the information on which they are based, forcing researchers to spend inordinate amounts of time learning foundational facts about existing hardware, facts that chip manufacturers must know but are unwilling to publish. This is both a continual inconvenience in established GPU research and a high barrier to entry for newcomers. This dissertation addresses the "information problem" hindering real-time GPU research in several ways. First, it fights back against the monoculture that has arisen with respect to platform choice: virtually all prior real-time GPU research is developed for and evaluated on GPUs manufactured by NVIDIA, but this dissertation provides details about an alternative platform, AMD GPUs. Second, the dissertation works toward a model with which GPU performance can be predicted or controlled. To this end, it uses a series of experiments to discern the policy that governs the queuing behavior of concurrent GPU-sharing processes on both NVIDIA and AMD GPUs. Finally, the dissertation addresses the novel problems that the changing landscape of GPU applications poses for safety-critical systems. In particular, the advent of neural-network-based artificial intelligence has catapulted GPU usage into safety-critical domains that are prepared for neither the complexity of the new software nor the fact that it cannot guarantee logical correctness. The lack of logical guarantees is unlikely to be "solved" in the near future, motivating a focus on increased throughput: higher throughput increases the probability of producing a correct result within a fixed amount of time, yet GPU-management efforts typically focus on worst-case performance, often at the expense of throughput. The dissertation's final chapter therefore evaluates how effectively existing GPU-management techniques manage neural-network applications, from both throughput and worst-case perspectives. Doctor of Philosophy dissertation.
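
    The queuing-behavior experiments are only summarized above, so the sketch below merely illustrates the methodology in plain Python rather than on real GPU hardware: candidate queuing policies predict different kernel completion orders for the same pending work from two processes, and comparing such predictions against measured timestamps is how the actual policy can be inferred. Process and kernel names are invented for this example.

```python
from collections import deque

def completion_order(policy, kernels_by_process):
    """Kernel completion order on one simulated GPU that runs a single kernel
    at a time, with all listed kernels already pending (a simplification).

    policy "sequential":  one process's pending kernels all run before the next's.
    policy "round_robin": the GPU alternates between the processes' queues."""
    queues = {p: deque(ks) for p, ks in kernels_by_process.items()}
    order = []
    if policy == "sequential":
        for q in queues.values():
            order.extend(q)
    elif policy == "round_robin":
        while any(queues.values()):
            for q in queues.values():
                if q:
                    order.append(q.popleft())
    return order

pending = {"proc_A": ["A1", "A2", "A3"], "proc_B": ["B1", "B2"]}
print(completion_order("sequential", pending))    # ['A1', 'A2', 'A3', 'B1', 'B2']
print(completion_order("round_robin", pending))   # ['A1', 'B1', 'A2', 'B2', 'A3']
```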

    Discrete Event Simulations

    Considered by many authors as a technique for modelling stochastic, dynamic, and discretely evolving systems, discrete event simulation (DES) has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely different areas, this book reflects many different points of view about it: each author describes how DES is understood and applied within their own context of work, providing an extensive picture of what DES is. It can be said that the name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods, and applications to a wide range of sectors and problem areas, which have been categorized into five groups. Beyond this variety of perspectives on DES, one further thing is worth remarking about this book: its richness in real data and in analyses based on real data. While many academic areas lack application cases, roughly half of the chapters in this book deal with real problems or are at least based on real data. The editor therefore firmly believes that this book will be of interest to both beginners and practitioners in the area of DES.
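
    For readers new to the technique, the core of a discrete-event simulation is nothing more than a simulation clock and a time-ordered event list; the minimal single-server queue below (parameter names and the example scenario are illustrative, not taken from the book) shows the pattern.

```python
import heapq
import random

def single_server_queue(arrival_rate, service_rate, n_customers, seed=1):
    """Minimal discrete-event simulation of a FIFO single-server queue.
    The event list is a heap of (time, kind) pairs processed in time order;
    returns the average time a customer spends in the system."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]
    arrivals, queue_len, busy = [], 0, False
    served, total_time_in_system = 0, 0.0
    while served < n_customers:
        clock, kind = heapq.heappop(events)
        if kind == "arrival":
            arrivals.append(clock)
            heapq.heappush(events, (clock + rng.expovariate(arrival_rate), "arrival"))
            if busy:
                queue_len += 1      # join the queue
            else:
                busy = True         # start service immediately
                heapq.heappush(events, (clock + rng.expovariate(service_rate), "departure"))
        else:                       # departure: customers leave in arrival order
            served += 1
            total_time_in_system += clock - arrivals[served - 1]
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(events, (clock + rng.expovariate(service_rate), "departure"))
            else:
                busy = False
    return total_time_in_system / served

# With exponential arrivals and service, the long-run mean time in system is
# 1 / (service_rate - arrival_rate) = 5.0 here; the estimate should come out close.
print(single_server_queue(arrival_rate=0.8, service_rate=1.0, n_customers=50000))
```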

    Optimizing Virtual Machine I/O Performance in Virtualized Cloud by Differentiated-frequency Scheduling and Functionality Offloading

    Many enterprises are increasingly moving their applications to private cloud environments or public cloud platforms. A key technology driving cloud computing is virtualization, which can serve multiple VMs on one physical machine, providing better management flexibility and significant savings in operational costs. One important consequence of virtualized hosts in the cloud, however, is the negative impact virtualization has on the I/O performance of the applications running in the VMs.
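
    The abstract stops before describing either technique named in the title, so the following is only a guess at the flavor of "differentiated-frequency scheduling": a toy schedule in which VMs declared I/O-intensive are polled for pending work more often than CPU-bound ones. All VM names, frequencies, and the polling model are invented for illustration.

```python
def build_polling_schedule(vm_frequencies, rounds):
    """Toy differentiated-frequency schedule: a VM with frequency f in (0, 1]
    is polled roughly every 1/f scheduling rounds."""
    schedule = []
    for r in range(rounds):
        polled = [name for name, f in vm_frequencies.items()
                  if r % max(1, round(1 / f)) == 0]
        schedule.append(polled)
    return schedule

vms = {"io_heavy_vm": 1.0, "cpu_bound_vm": 0.25}    # hypothetical I/O intensities
for r, polled in enumerate(build_polling_schedule(vms, rounds=4)):
    print(r, polled)
# 0 ['io_heavy_vm', 'cpu_bound_vm']
# 1 ['io_heavy_vm']
# 2 ['io_heavy_vm']
# 3 ['io_heavy_vm']
```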