
    Space experiment "Kontur-2": Applied methods and obtained results

    Space experiment "Kontur-2" aboard the International Space Station is focused on the transfer of information between station and on-ground robot. Station's resources are limited, including communication ones. That is why for the space experiment “Kontur-2” it was decided to use the methods of priority traffic management. New access control mechanisms based on these methods are researched. The usage of the priority traffic processing methods allows using more efficiently the bandwidth of receiving and transmitting equipment onboard the International Space Station through the application of randomized push-out mechanism. The paper considers methods applied for traffic management and access control during international space experiment “Kontur-2” performed aboard the ISS. The obtained results are also presented

    Stability of queueing-inventory systems with different priorities

    We study a production-inventory system with two customer classes of different priorities, admitted to the system under a flexible admission control scheme. Inventory is managed according to a base stock policy, and arriving demand that finds the inventory depleted is lost (lost sales). We analyse the global balance equations of the associated Markov process and derive structural properties of the steady-state distribution that provide insight into the equilibrium behaviour of the system. We derive a sufficient condition for ergodicity using the Foster-Lyapunov stability criterion, and for a special case we show that the condition is also necessary.
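
    For reference, the Foster-Lyapunov criterion for a continuous-time Markov chain with transition rates q(x, y) is stated below in its standard generic form; the paper's specific Lyapunov function V and constants are not reproduced here:

```latex
% Generic Foster-Lyapunov drift criterion: the chain is ergodic if there
% exist a function V >= 0, constants eps > 0 and b < infinity, and a
% finite set K such that V has negative drift outside K.
\sum_{y \neq x} q(x,y)\,\bigl(V(y) - V(x)\bigr)
  \le -\varepsilon + b\,\mathbf{1}_{K}(x)
  \qquad \text{for all states } x .
```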

    Design and Development of Carrier Assignment and Packet Scheduling in LTE-A and Wi-Fi

    The highly competitive environment in today's wireless and cellular network industries is pushing system management to seek better and more advanced techniques for keeping masses of data, system complexity, and deadline constraints under control at lower cost and higher efficiency. Management has therefore received significant attention from researchers aiming to increase the efficiency of resource usage and provide high-quality services. Two cornerstones of management in wireless and cellular networks are carrier assignment and packet scheduling, so this work focuses on the analysis and development of carrier assignment and packet scheduling methods in multi-band Wi-Fi and LTE-A networks. First, several existing carrier assignment methods, developed around different strategies in LTE and LTE-A, are analyzed. Second, a new technique is developed to improve the efficiency of carrier assignment methods for LTE and LTE-A. Third, a novel carrier assignment method is proposed that takes the behavior of mobile users in LTE and LTE-A into account. Then, a novel architecture with a packet scheduling scheme, similar to that of LTE-A, is proposed for next-generation mobile routers in multi-band Wi-Fi environments. Finally, the scheme is improved with energy awareness. Results show that the developed methods improve system performance in comparison to existing methods. The proposed methods and related analysis should help network engineers and service providers build next-generation carrier assignment and packet scheduling methods that satisfy users in LTE, LTE-A, and Wi-Fi networks.
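
    As a concrete reference point for the packet-scheduling side, the sketch below implements proportional-fair scheduling, a textbook baseline in LTE/LTE-A studies; it is not the thesis's proposed method, and the EWMA smoothing constant `beta` is an assumed choice. Each resource block goes to the user with the best ratio of instantaneous to smoothed average rate, trading peak throughput for fairness:

```python
def proportional_fair_schedule(instant_rates, avg_rates, beta=0.1):
    """Assign each resource block to the user maximizing
    instantaneous rate / smoothed average rate, then update the
    averages (EWMA). `instant_rates[u][rb]` is user u's achievable
    rate on resource block rb; `avg_rates` is mutated in place."""
    n_users = len(instant_rates)
    n_blocks = len(instant_rates[0])
    allocation = {}
    for rb in range(n_blocks):
        allocation[rb] = max(
            range(n_users),
            key=lambda u: instant_rates[u][rb] / max(avg_rates[u], 1e-9),
        )
    # EWMA update of each user's long-term average throughput.
    for u in range(n_users):
        served = sum(instant_rates[u][rb] for rb, w in allocation.items() if w == u)
        avg_rates[u] = (1 - beta) * avg_rates[u] + beta * served
    return allocation

# Example: 3 users, 4 resource blocks, flat initial averages.
rates = [[1.0, 2.0, 0.5, 1.5], [2.0, 0.5, 1.0, 1.0], [0.8, 0.8, 2.5, 0.3]]
avg = [1.0, 1.0, 1.0]
print(proportional_fair_schedule(rates, avg))
```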

    Improving the Performance of User-level Runtime Systems for Concurrent Applications

    Concurrency is an essential part of many modern large-scale software systems. Applications must handle millions of simultaneous requests from millions of connected devices. Handling such a large number of concurrent requests requires runtime systems that efficiently manage concurrency and communication among tasks in an application across multiple cores. Existing low-level programming techniques provide scalable solutions with low overhead, but require non-linear control flow. Alternative approaches to concurrent programming, such as Erlang and Go, support linear control flow by mapping multiple user-level execution entities across multiple kernel threads (M:N threading). However, these systems provide comprehensive execution environments that make it difficult to assess the performance impact of user-level runtimes in isolation. This thesis presents a nimble M:N user-level threading runtime that closes this conceptual gap and provides a software infrastructure to precisely study the performance impact of user-level threading. Multiple design alternatives are presented and evaluated for the scheduling, I/O multiplexing, and synchronization components of the runtime. The performance of the runtime is evaluated in comparison to event-driven software, system-level threading, and other user-level threading runtimes. An experimental evaluation is conducted using benchmark programs, as well as the popular Memcached application. The user-level runtime supports high levels of concurrency without sacrificing application performance. In addition, the user-level scheduling problem is studied in the context of an existing actor runtime that maps multiple actors to multiple kernel-level threads. In particular, two locality-aware work-stealing schedulers are proposed and evaluated. It is shown that locality-aware scheduling can significantly improve the performance of a class of applications with a high level of concurrency. In general, the performance and resource utilization of large-scale concurrent applications depend on the level of concurrency that can be expressed by the programming model. This fundamental effect is studied by refining and customizing existing concurrency models.
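
    The locality-aware work-stealing idea can be sketched at the data-structure level: each worker owns a deque, pops its own tasks LIFO to exploit cache locality, and an idle worker steals FIFO from a victim, preferring victims in its own locality domain. The two-level victim order and the `node` attribute are illustrative assumptions, not the exact policy evaluated in the thesis:

```python
import random
from collections import deque

class Worker:
    def __init__(self, wid, node):
        self.wid = wid
        self.node = node             # NUMA node / locality domain (assumed)
        self.tasks = deque()

    def pop_local(self):
        # LIFO pop keeps recently pushed, cache-hot tasks on this worker.
        return self.tasks.pop() if self.tasks else None

def steal(thief, workers):
    """Locality-aware victim selection: try same-node victims first,
    then fall back to remote ones; steal from the FIFO (cold) end."""
    local = [w for w in workers if w is not thief and w.node == thief.node]
    remote = [w for w in workers if w is not thief and w.node != thief.node]
    for group in (local, remote):
        random.shuffle(group)
        for victim in group:
            if victim.tasks:
                return victim.tasks.popleft()   # oldest task: likely cold anyway
    return None

# Example: two NUMA nodes, four workers, tasks seeded on worker 0.
workers = [Worker(i, node=i // 2) for i in range(4)]
workers[0].tasks.extend(range(8))
print(steal(workers[3], workers))   # remote steal: returns task 0
```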

    Performance modelling of replication protocols

    This thesis is concerned with the performance modelling of data replication protocols. Data replication is used to provide fault tolerance and to improve the performance of a distributed system. Replication not only needs extra storage but also carries an extra cost when performing an update. It is not always clear which algorithm will give the best performance in a given scenario, how many copies should be maintained, or where these copies should be located to yield the best performance. The consistency requirements also change with the application. One has to choose these parameters to maximize reliability and speed and minimize cost. A study showing the effect of changes in different parameters on the performance of these protocols would be helpful in making these decisions. With the use of data replication techniques in wide-area systems where hundreds or even thousands of sites may be involved, it has become important to evaluate the performance of the schemes maintaining copies of data. This thesis evaluates the performance of replication protocols that provide different levels of data consistency, ranging from strong to weak; protocols that try to integrate strong and weak consistency are also examined. Queueing theory techniques are used to evaluate the performance of these protocols. The performance measures of interest are the response times of read and write jobs, evaluated both when replicas are reliable and when they are subject to random breakdowns and repairs.
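
    To give a flavour of this style of evaluation, here is a deliberately simple read-one/write-all model in which each of the n replicas is an independent M/M/1 queue; the formula is the textbook M/M/1 mean sojourn time W = 1/(mu - lambda), not one of the thesis's actual models:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean sojourn time of an M/M/1 queue, W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable")
    return 1.0 / (service_rate - arrival_rate)

def replica_response_times(read_rate, write_rate, service_rate, n_replicas):
    """Read-one/write-all toy model: reads are spread evenly over the
    replicas, writes hit every replica, each replica is an M/M/1 queue."""
    per_replica_load = read_rate / n_replicas + write_rate
    w = mm1_response_time(per_replica_load, service_rate)
    return {"read": w, "write": w}   # a write commit actually waits for the
                                     # slowest replica; ignored in this sketch

# Example: 60 reads/s and 5 writes/s over 3 replicas, each serving 40 jobs/s.
print(replica_response_times(60.0, 5.0, 40.0, 3))
```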

    Developing Real-Time GPU-Sharing Platforms for Artificial-Intelligence Applications

    In modern autonomous systems such as self-driving cars, sustained safe operation requires running complex software at rates possible only with the help of specialized computational accelerators. Graphics processing units (GPUs) remain a foremost example of such accelerators, due to their relative ease of use and the proficiency with which they can accelerate the neural-network computations underlying modern computer-vision and artificial-intelligence algorithms. This means that ensuring GPU processing completes in a timely manner is essential, but doing so is not necessarily simple, especially when a single GPU is concurrently shared by many applications. Existing real-time research includes several techniques for improving timing characteristics of shared-GPU workloads, each with varying tradeoffs and practical limitations. In the world of timing correctness, however, one problem stands above all others: the lack of detailed information about how GPU hardware and software behave. GPU manufacturers are usually willing to publish documentation sufficient for producing logically correct software, or guidance on tuning software to achieve "real-fast," high-throughput performance, but the same manufacturers neglect to provide the details needed to establish temporal predictability. Techniques for improving the reliability of GPU software's temporal performance are only as good as the information upon which they are based, incentivising researchers to spend inordinate amounts of time learning foundational facts about existing hardware, facts that chip manufacturers must know but are not willing to publish. This is both a continual inconvenience in established GPU research and a high barrier to entry for newcomers. This dissertation addresses the "information problem" hindering real-time GPU research in several ways. First, it seeks to fight back against the monoculture that has arisen with respect to platform choice. Virtually all prior real-time GPU research is developed for and evaluated using GPUs manufactured by NVIDIA, but this dissertation provides details about an alternate platform: AMD GPUs. Second, this dissertation works towards establishing a model with which GPU performance can be predicted or controlled. To this end, it uses a series of experiments to discern the policy that governs the queuing behavior of concurrent GPU-sharing processes, on both NVIDIA and AMD GPUs. Finally, this dissertation addresses the novel problems for safety-critical systems caused by the changing landscape of the applications that run on GPUs. In particular, the advent of neural-network-based artificial intelligence has catapulted GPU usage into safety-critical domains that are not prepared for the complexity of the new software or for the fact that it cannot guarantee logical correctness. The lack of logical guarantees is unlikely to be "solved" in the near future, motivating a focus on increased throughput. Higher throughput increases the probability of producing a correct result within a fixed amount of time, but GPU-management efforts typically focus on worst-case performance, often at the expense of throughput. This dissertation's final chapter therefore evaluates existing GPU-management techniques' efficacy at managing neural-network applications, from both a throughput and a worst-case perspective.
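
    One way to probe queueing behaviour in the spirit described above is to enqueue identical kernel sequences on two CUDA streams and compare their timed intervals: heavy overlap suggests concurrent execution or time-slicing, while disjoint intervals suggest FIFO serialization. A rough sketch using PyTorch, assuming a CUDA-capable device; the dissertation's experiments use purpose-built benchmarks, not this:

```python
import torch

assert torch.cuda.is_available(), "needs a CUDA-capable device"

def launch(stream, size=4096, reps=20):
    """Enqueue `reps` chained matmuls on `stream`; return timing events."""
    a = torch.randn(size, size, device="cuda")
    b = torch.randn(size, size, device="cuda")
    stream.wait_stream(torch.cuda.current_stream())  # a, b ready on `stream`
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.cuda.stream(stream):
        start.record()
        for _ in range(reps):
            a = a @ b
        end.record()
    return start, end

s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
ev1, ev2 = launch(s1), launch(s2)    # enqueue on both streams before waiting
torch.cuda.synchronize()
# Compare per-stream elapsed times against an isolated run to get a crude
# signal about how the GPU arbitrates between the two submissions.
print("stream 1: %.1f ms" % ev1[0].elapsed_time(ev1[1]))
print("stream 2: %.1f ms" % ev2[0].elapsed_time(ev2[1]))
```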

    Discrete Event Simulations

    Considered by many authors a technique for modelling stochastic, dynamic, and discretely evolving systems, discrete event simulation (DES) has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely different areas, this book reflects many different points of view about it: each author describes how DES is understood and applied within their context of work, providing an extensive understanding of what DES is. The name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods, and applications across a wide range of sectors and problem areas, categorised into five groups. Beyond this variety of perspectives, one additional thing stands out about this book: its richness in actual data and in analyses based on actual data. While application cases are lacking in most academic areas, roughly half of the chapters in this book deal with actual problems or are at least based on actual data. The editor firmly believes that this book will be interesting for both beginners and practitioners in the area of DES.
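
    Whatever the application area, DES engines share the same algorithmic core: a simulation clock, a future-event list ordered by timestamp, and a loop that pops the earliest event and runs its handler. A minimal generic engine in Python (not taken from any particular chapter of the book):

```python
import heapq
import itertools
import random

class Simulator:
    """Minimal discrete-event engine: a clock plus a time-ordered
    future-event list; handlers may schedule further events."""

    def __init__(self):
        self.now = 0.0
        self._events = []                 # heap of (time, seq, handler)
        self._seq = itertools.count()     # tie-breaker for equal timestamps

    def schedule(self, delay, handler):
        heapq.heappush(self._events, (self.now + delay, next(self._seq), handler))

    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.now, _, handler = heapq.heappop(self._events)
            handler()

# Example: Poisson arrivals with mean interarrival time 2, counted to t = 10.
sim, arrivals = Simulator(), []

def arrive():
    arrivals.append(sim.now)
    sim.schedule(random.expovariate(0.5), arrive)

sim.schedule(random.expovariate(0.5), arrive)
sim.run(until=10.0)
print(f"{len(arrivals)} arrivals by t = {sim.now:.2f}")
```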