
    Towards Scalable Design of Future Wireless Networks

    Full text link
    Wireless operators face an ever-growing challenge to meet the throughput and processing requirements of the billions of devices coming online. In current wireless networks, such as LTE and WiFi, these requirements are addressed by provisioning more resources: spectrum, transmitters, and baseband processors. However, this simple add-on approach to scaling system performance is expensive and often results in resource underutilization. How, then, can these networks scale their throughput and operational efficiency? To answer this question, this thesis explores several potential designs: utilizing unlicensed spectrum to augment the bandwidth of a licensed network; coordinating transmitters to increase system throughput; and finally, centralizing wireless processing to reduce computing costs. First, we propose a solution that allows LTE, a licensed wireless standard, to co-exist with WiFi in the unlicensed spectrum. The proposed solution bridges the incompatibility between the fixed access of LTE and the random access of WiFi through channel reservation. It achieves fair LTE-WiFi co-existence despite the transmission gaps and unequal frame durations. Second, we consider a system where different MIMO transmitters coordinate to transmit data to multiple users. We present an adaptive design of the channel feedback protocol that mitigates interference resulting from imperfect channel information. Finally, we consider a Cloud-RAN architecture where a datacenter or a cloud resource processes wireless frames. We introduce a tree-based design for real-time transport of baseband samples and provide its end-to-end schedulability and capacity analysis. We also present a processing framework that combines real-time scheduling with fine-grained parallelism. The framework reduces processing times by migrating parallelizable tasks to idle compute resources, and thus decreases processing deadline misses at no additional cost. We implement and evaluate the above solutions using software-radio platforms and off-the-shelf radios, and confirm their applicability in real-world settings.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133358/1/gkchai_1.pd
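
    The framework itself is not detailed in the abstract; as a rough illustration of the final idea (migrating parallelizable work to idle compute resources so a frame meets its processing deadline), the following Python sketch splits a stand-in decoding task across idle workers. All names, the cost model, and the 1 ms budget are hypothetical, and threads stand in for separate compute resources.

    ```python
    # Minimal sketch: split a parallelizable task across idle workers when
    # serial execution would miss the frame's processing deadline.
    from concurrent.futures import ThreadPoolExecutor
    import time

    DEADLINE = 0.001  # hypothetical 1 ms processing budget per frame

    def decode_chunk(chunk):
        # stand-in for a parallelizable baseband task (e.g. decoding);
        # the simulated cost scales with the amount of data in the chunk
        time.sleep(len(chunk) * 2e-6)
        return sum(chunk) & 0xFF

    def process_frame(frame, pool, idle_workers):
        start = time.monotonic()
        if idle_workers > 1:
            # migrate parallelizable work to the idle workers
            chunks = [frame[i::idle_workers] for i in range(idle_workers)]
            results = list(pool.map(decode_chunk, chunks))
        else:
            results = [decode_chunk(frame)]
        elapsed = time.monotonic() - start
        return results, elapsed <= DEADLINE  # True if the deadline was met

    with ThreadPoolExecutor(max_workers=4) as pool:
        frame = list(range(1024))
        _, met_serial = process_frame(frame, pool, idle_workers=1)
        _, met_parallel = process_frame(frame, pool, idle_workers=4)
        print("serial meets deadline:", met_serial)      # typically False
        print("parallel meets deadline:", met_parallel)  # typically True
    ```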

    Abstractions for software architecture and tools to support them

    Full text link

    Computer Aided Verification

    Get PDF
    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers, presented together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers are organized in the following topical sections. Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems and runtime techniques; dynamical, hybrid, and reactive systems. Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Mining Event Traces from Real-time Systems for Anomaly Detection

    Get PDF
    Real-time systems are a significant class of applications, poised to grow even further as autonomous vehicles and the Internet of Things (IoT) become a reality. The computation and communication tasks of the underlying embedded systems must comply with strict timing and safety requirements, as undetected defects in these systems may lead to catastrophic failures. The runtime behavior of these systems is prone to uncertainties arising from dynamic workloads and extra-functional conditions that affect both the software and hardware over the course of their deployment, e.g., unscheduled firmware updates, communication channel saturation, power-saving mode switches, or external malicious attacks. Operation in such unpredictable environments prevents the detection of anomalous behavior using traditional formal modeling and analysis techniques, as they generally consider worst-case analysis and tend to be overly conservative. To overcome these limitations, and primarily motivated by the increasing availability of traces generated from real-time embedded systems, this thesis presents TRACMIN (Trace Mining using Arrival Curves), an anomaly detection approach that empirically constructs arrival curves from event traces to capture the recurrent behavior and intrinsic features of a given real-time system. The thesis uses TRACMIN to fill the gap between formal analysis techniques for real-time systems and trace mining approaches that lack expressive, human-readable, and scalable methods. The thesis presents definitions, metrics, and tools that employ statistical learning techniques to cluster and classify traces generated from different modes of normal operation versus anomalous traces. Experiments with multiple datasets from deployed real-time embedded systems facing performance degradation and hardware misconfiguration anomalies demonstrate the feasibility and viability of our approaches on timestamped event traces generated from an industrial real-time operating system. Acknowledging the high computational expense of constructing empirical arrival curves, the thesis provides a rapid algorithm that achieves the desired scalability on lengthy traces, paving the way for adoption in research and industry. Finally, the thesis presents a robustness analysis for the arrival-curve models by employing theories of demand-bound functions from the scheduling domain. The analysis provides bounds on how much disruption a real-time system modeled with our approach can tolerate before being declared anomalous, which is crucial for specification and certification purposes. In conclusion, TRACMIN combines empirical and theoretical methods to provide a concrete anomaly detection framework that uses robust models of arrival curves, scalably constructed from event traces, to detect anomalies that affect the recurrent behavior of a real-time system.
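
    As a concrete illustration of the core construction (not the thesis' TRACMIN implementation), the Python sketch below computes empirical arrival curves from a sorted list of event timestamps: for each window width, the maximum and minimum number of events observed in any window of that width. Anchoring windows at event arrivals is exact for the upper curve but only an approximation for the lower one, which real constructions handle more carefully.

    ```python
    # Empirical arrival curves from a timestamped event trace (a sketch).
    import bisect

    def empirical_arrival_curves(timestamps, window_widths):
        curves = {}
        for width in window_widths:
            upper, lower = 0, len(timestamps)
            for i, t in enumerate(timestamps):
                # count events in the half-open window [t, t + width)
                j = bisect.bisect_left(timestamps, t + width, lo=i)
                count = j - i
                upper = max(upper, count)
                lower = min(lower, count)  # approximate: event-anchored only
            curves[width] = (upper, lower)
        return curves

    trace = [0.0, 0.1, 0.15, 0.4, 0.55, 0.58, 0.9]  # seconds
    print(empirical_arrival_curves(trace, [0.1, 0.25, 0.5]))
    ```

    A trace whose curves fall outside the envelope learned from normal operation would then be flagged as anomalous.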

    Functional Programming for Embedded Systems

    Get PDF
    Embedded systems application development has traditionally been carried out in low-level, machine-oriented programming languages like C or Assembler, which can result in unsafe, error-prone, and difficult-to-maintain code. Functional programming, with features such as higher-order functions, algebraic data types, polymorphism, strong static typing, and automatic memory management, appears to be an ideal candidate to address the issues plaguing low-level languages for embedded systems. However, embedded systems usually run on heavily memory-constrained devices, with memory on the order of hundreds of kilobytes, and applications running on such devices embody the general characteristics of being (i) I/O-bound, (ii) concurrent, and (iii) timing-aware. Popular functional language compilers and runtimes either do not fare well with such scarce memory resources or do not provide high-level abstractions that address all three listed characteristics. This work attempts to address this gap by investigating and proposing high-level abstractions specialised for I/O-bound, concurrent, and timing-aware embedded-systems programs. We implement the proposed abstractions on eagerly-evaluated, statically-typed functional languages running natively on microcontrollers. Our contributions are divided into two parts. Part 1 presents a functional reactive programming language, Hailstorm, that tracks side effects like I/O in its type system using a feature called resource types. Hailstorm's programming model is illustrated on the GRiSP microcontroller board. Part 2 comprises two papers that describe the design and implementation of Synchron, a runtime API that provides a uniform message-passing framework for the handling of software messages as well as hardware interrupts. Additionally, the Synchron API supports a novel timing operator to capture the notion of time common in embedded applications. The Synchron API is implemented as a virtual machine, SynchronVM, which runs on the NRF52 and STM32 microcontroller boards. We present programming examples that illustrate the concurrency, I/O, and timing capabilities of the VM and provide various benchmarks on the response time, memory, and power usage of SynchronVM.
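
    Synchron's actual API is given in the papers, not in this abstract; the Python sketch below is only a loose analogue of its two central ideas: a single channel abstraction carrying both software messages and hardware interrupts, and a timing operator that releases a message at a specified instant. All names are hypothetical.

    ```python
    # Uniform message passing plus a timing operator (illustrative sketch).
    import queue, threading, time

    class Channel:
        def __init__(self):
            self._q = queue.Queue()
        def send(self, msg):
            self._q.put(msg)
        def send_at(self, msg, when):
            # timing operator: deliver msg at absolute time `when`
            delay = max(0.0, when - time.monotonic())
            threading.Timer(delay, self._q.put, args=(msg,)).start()
        def recv(self):
            return self._q.get()

    events = Channel()

    # a simulated hardware interrupt and a timed software message
    # arrive on the same kind of channel
    threading.Timer(0.05, events.send, args=(("irq", "button0"),)).start()
    events.send_at(("tick", 1), when=time.monotonic() + 0.1)

    for _ in range(2):
        print(events.recv())
    ```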

    Proceedings of Monterey Workshop 2001: Engineering Automation for Software Intensive System Integration

    Get PDF
    The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office, and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisory board and sponsors for their vision of a principled engineering solution for software and for their tireless, years-long effort in supporting a series of workshops to bring everyone together. This workshop is the 8th in a series of international workshops. The workshop was held at the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the series has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops have focused on themes including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software", and "Modeling Software System Structures in a fastly moving scenario".
    Sponsors: Office of Naval Research; Air Force Office of Scientific Research; Army Research Office; Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited.

    Rigorous System Design

    Get PDF
    The monograph advocates rigorous system design as a coherent and accountable model-based process leading from requirements to correct implementations. It presents the current state of the art in system design, discusses its limitations, and identifies possible avenues for overcoming them. A rigorous system design flow is defined as a formal, accountable, and iterative process composed of steps, and based on four principles: (1) separation of concerns; (2) component-based construction; (3) semantic coherency; and (4) correctness-by-construction. The combined application of these principles allows the definition of a methodology that clearly identifies where human intervention and ingenuity are needed to resolve design choices, as well as activities that can be supported by tools to automate tedious and error-prone tasks. An implementable system model is progressively derived by source-to-source automated transformations in a single host component-based language rooted in well-defined semantics. Using a single modeling language throughout the design flow enforces semantic coherency. Correct-by-construction techniques allow well-known limitations of a posteriori verification to be overcome and ensure accountability: it is possible to explain, at each design step, which requirements are satisfied and which may not be. The presented view of rigorous system design has been amply implemented in the BIP (Behavior, Interaction, Priority) component framework and substantiated by numerous experimental results showing both its relevance and feasibility. The monograph concludes with a discussion advocating a system-centric vision for computing, identifying possible links with other disciplines, and emphasizing the centrality of system design.
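
    As a toy illustration of the Behavior-Interaction-Priority idea (not the BIP framework's actual language or semantics), the Python sketch below models components exposing ready ports, interactions as sets of ports that must synchronize, and a priority relation used to choose among simultaneously enabled interactions. All names are invented for the example.

    ```python
    # BIP-style coordination in miniature: behavior (ready ports),
    # interactions (port sets that synchronize), and priorities.
    class Component:
        def __init__(self, name, enabled_ports):
            self.name = name
            self.enabled_ports = set(enabled_ports)  # ports ready to fire

    def enabled(interaction, components):
        # an interaction is enabled when every port in it is ready
        ready = {(c.name, p) for c in components for p in c.enabled_ports}
        return interaction <= ready

    def step(interactions, priority, components):
        # fire an enabled interaction that is maximal w.r.t. priority
        live = [i for i in interactions if enabled(i, components)]
        maximal = [i for i in live
                   if not any((i, j) in priority for j in live)]
        return maximal[0] if maximal else None

    # two producers and one consumer; rendezvous on matching ports
    c1 = Component("p1", ["out"])
    c2 = Component("p2", ["out"])
    c3 = Component("cons", ["in"])
    i1 = frozenset({("p1", "out"), ("cons", "in")})
    i2 = frozenset({("p2", "out"), ("cons", "in")})
    priority = {(i1, i2)}  # i2 takes priority over i1
    print(step([i1, i2], priority, [c1, c2, c3]))  # fires i2
    ```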

    Maximal and minimal dynamic Petri net slicing

    Full text link
    Context: Petri net slicing is a technique to reduce the size of a Petri net so as to ease the analysis or understanding of the original net. Objective: We present two new Petri net slicing algorithms that isolate those places and transitions of a Petri net (the slice) that may contribute tokens to one or more given places (the slicing criterion). Method: The two proposed algorithms are formalized. The completeness of the first algorithm and the minimality of the second are formally proven. Both algorithms, together with three other state-of-the-art algorithms, have been implemented and integrated into a single tool, enabling a fair empirical evaluation. Results: Besides the two new Petri net slicing algorithms, a public, free, and open-source implementation of five algorithms is reported. The results of an empirical evaluation of the new algorithms and the slices they produce are also presented. Conclusions: The first algorithm collects all places and transitions that may influence (in any computation) the slicing criterion, while the second algorithm collects a minimum set of places and transitions that may influence (in some computation) the slicing criterion. The net computed by the first algorithm can therefore reproduce any computation that contributes tokens to any place of interest. In contrast, the second algorithm loses this possibility but often produces a much smaller subnet (which can still reproduce some computations that contribute tokens to some places of interest). The first algorithm is proven complete, and the second one minimal.
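
    The paper's algorithms are formalized in the full text; the Python sketch below shows only the general shape of a backward token-flow closure, in the spirit of the first (complete) algorithm: starting from the criterion places, it repeatedly adds transitions that output into the slice, together with their input places, until a fixpoint. The net encoding is an assumption made for the example.

    ```python
    # Backward slicing closure over a Petri net given as pre/post maps
    # from transitions to sets of places (a sketch, not the paper's tool).
    def backward_slice(pre, post, criterion):
        places = set(criterion)   # slicing criterion: places of interest
        transitions = set()
        changed = True
        while changed:
            changed = False
            for t in pre:
                # t may contribute tokens to the slice if it outputs into it
                if t not in transitions and post[t] & places:
                    transitions.add(t)
                    places |= pre[t]  # its input places join the slice
                    changed = True
        return places, transitions

    # tiny net: t1: {p1} -> {p2}, t2: {p2} -> {p3}, t3: {p4} -> {p4}
    pre  = {"t1": {"p1"}, "t2": {"p2"}, "t3": {"p4"}}
    post = {"t1": {"p2"}, "t2": {"p3"}, "t3": {"p4"}}
    print(backward_slice(pre, post, criterion={"p3"}))
    # -> places {p1, p2, p3}, transitions {t1, t2}; t3 and p4 are sliced away
    ```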