
    Low-complexity medium access control protocols for QoS support in third-generation radio access networks

    One approach to maximizing the efficiency of medium access control (MAC) on the uplink in a future wideband code-division multiple-access (WCDMA)-based third-generation radio access network, and hence maximizing spectral efficiency, is to employ a low-complexity distributed scheduling control approach. Maximizing spectral efficiency in third-generation radio access networks is complicated by the need to provide bandwidth-on-demand to diverse services with diverse quality of service (QoS) requirements in an interference-limited environment. However, the ability to exploit the full potential of resource allocation algorithms in such networks has been limited by the absence of a metric that captures the two-dimensional radio resource requirement, in terms of power and bandwidth, in an environment where different users may have different signal-to-interference ratio requirements. This paper presents a novel resource metric as a solution to this fundamental problem. A novel deadline-driven backoff procedure is also presented as the backoff scheme of the proposed distributed scheduling MAC protocols, enabling efficient support of services with QoS-imposed delay constraints without the need for centralized scheduling. The main conclusions are that low-complexity distributed scheduling control strategies using overload avoidance or overload detection can be designed around the proposed resource metric to give near-optimal performance, and thus maintain high spectral efficiency in third-generation radio access networks, and, importantly, that overload detection is superior to overload avoidance.
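    The paper's own metric is not reproduced in the abstract, but the classic WCDMA uplink load factor gives a feel for how a single scalar can combine a user's bandwidth demand (bit rate) and power demand (Eb/N0 target). Whether this matches the proposed metric is an assumption; the rates and targets below are illustrative only.

```python
def uplink_load(rate_bps, eb_no_linear, chip_rate=3.84e6):
    """Per-user uplink load factor from standard WCDMA analysis:
    folds the user's bit rate and its Eb/N0 (SIR) target into one
    scalar share of cell capacity.  Shown only to illustrate a
    two-dimensional (power x bandwidth) resource metric; the paper's
    own metric may differ."""
    return 1.0 / (1.0 + chip_rate / (rate_bps * eb_no_linear))

# A 12.2 kb/s voice user at Eb/N0 = 5 dB vs. a 384 kb/s data user at 2 dB.
voice = uplink_load(12200, 10 ** (5 / 10))
data = uplink_load(384000, 10 ** (2 / 10))
print(f"voice load {voice:.3f}, data load {data:.3f}, total {voice + data:.3f}")
```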

    Model based code generation for distributed embedded systems

    Embedded systems are becoming increasingly complex and more distributed. Cost and quality requirements necessitate reuse of the functional software components for multiple deployment architectures. An important step is the allocation of software components to hardware, during which the differences between the hardware and application software architectures must be reconciled. In this paper we discuss an architecture-driven approach involving model-based techniques to resolve these differences and integrate hardware and software components. The system architecture serves as the underpinning from which distributed real-time components can be generated. Generation of various embedded system architectures from the same functional architecture is discussed. The approach leverages the following technologies: IME (Integrated Modeling Environment), the SAE AADL (Architecture Analysis and Design Language), and Ocarina. The approach is illustrated using the electronic throttle control system as a case study.
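    As a rough sketch of the allocation step described above, the snippet below binds functional components to processing nodes and checks that every component is placed and no node is over-committed. The component names, CPU budgets, and dict-based model are invented for illustration and do not reflect the IME/AADL/Ocarina representation.

```python
# Hypothetical software components with CPU demand (% of a node's capacity).
components = {"throttle_ctrl": 20, "sensor_fusion": 35, "diagnostics": 10}
# Hypothetical hardware nodes with available CPU capacity (%).
nodes = {"ecu_a": 50, "ecu_b": 50}

# Candidate allocation of software to hardware.
allocation = {"throttle_ctrl": "ecu_a", "sensor_fusion": "ecu_b", "diagnostics": "ecu_a"}

def check_allocation(components, nodes, allocation):
    """Verify every component is bound and no node exceeds its budget."""
    used = {n: 0 for n in nodes}
    for comp, load in components.items():
        used[allocation[comp]] += load  # KeyError here means an unbound component
    return all(used[n] <= nodes[n] for n in nodes), used

ok, used = check_allocation(components, nodes, allocation)
print(ok, used)  # True {'ecu_a': 30, 'ecu_b': 35}
```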

    Adaptive Electricity Scheduling in Microgrids

    The microgrid (MG) is a promising component for future smart grid (SG) deployment. Balancing the supply and demand of electric energy is one of the most important requirements of MG management. In this paper, we present a novel framework for smart energy management based on the concept of quality-of-service in electricity (QoSE). Specifically, resident electricity demand is classified into basic usage and quality usage. Basic usage is always guaranteed by the MG, while quality usage is controlled based on the MG state. The microgrid control center (MGCC) aims to minimize the MG operation cost and maintain the outage probability of quality usage, i.e., the QoSE, below a target value by scheduling electricity among renewable energy resources, energy storage systems, and the macrogrid. The problem is formulated as a constrained stochastic programming problem. The Lyapunov optimization technique is then applied to derive an adaptive electricity scheduling algorithm by introducing QoSE virtual queues and energy storage virtual queues. The proposed algorithm is an online algorithm, since it requires no statistics or future knowledge of the electricity supply, demand, and price processes. We derive several "hard" performance bounds for the proposed algorithm and evaluate its performance with trace-driven simulations. The simulation results demonstrate the efficacy of the proposed electricity scheduling algorithm. (Comment: 12 pages, extended technical report.)
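    A minimal sketch of the virtual-queue bookkeeping that Lyapunov optimization uses to enforce a time-average constraint such as the QoSE outage target is shown below. The variable names and inputs to the update rule are generic illustrations, not the paper's exact queue definitions.

```python
def update_qose_queue(q, outage_indicator, target_outage):
    """Q(t+1) = max(Q(t) + arrival - service, 0): the virtual queue grows
    when a quality-usage request is curtailed (an outage) and drains at
    the target rate, so keeping Q stable keeps the time-average outage
    fraction below the target."""
    return max(q + outage_indicator - target_outage, 0.0)

q = 0.0
outages = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]  # 20% observed outage rate
for o in outages:
    q = update_qose_queue(q, o, target_outage=0.1)
print(f"virtual queue backlog: {q:.1f}")  # positive backlog signals a violation
```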

    The Essential Role and the Continuous Evolution of Modulation Techniques for Voltage-Source Inverters in the Past, Present, and Future Power Electronics

    The cost reduction of power-electronic devices, the increase in their reliability, efficiency, and power capability, and lower development times, together with more demanding application requirements, have driven the development of several new inverter topologies recently introduced in industry, particularly medium-voltage converters. New, more complex inverter topologies and new application fields come with additional control challenges, such as voltage imbalances, power-quality issues, higher efficiency needs, and fault-tolerant operation, which necessarily require the parallel development of modulation schemes. Consequently, there have been significant recent advances in the field of modulation of dc/ac converters, a field that for the last several decades was dominated almost exclusively by classic pulse-width modulation (PWM) methods. This paper aims to collect and discuss the latest developments in this exciting technology, to provide insight into where the state of the art stands today, and to analyze the trends and challenges driving its future.
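    For readers unfamiliar with the classic baseline the paper starts from, the sketch below shows textbook carrier-based sinusoidal PWM for a single inverter leg: the switch state is the sign of the difference between a sinusoidal reference and a triangular carrier. The modulation index, frequencies, and sampling rate are generic textbook choices, not tied to any topology discussed in the paper.

```python
import math

def triangle(t, f_carrier):
    """Unit triangular carrier in [-1, 1]."""
    x = (t * f_carrier) % 1.0
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

def spwm_leg_state(t, m=0.8, f_ref=50.0, f_carrier=2000.0):
    """Switch state of one leg: on while the reference exceeds the carrier."""
    ref = m * math.sin(2 * math.pi * f_ref * t)
    return 1 if ref >= triangle(t, f_carrier) else 0

# Over one 50 Hz fundamental period the duty ratio tracks the sine reference,
# so the mean switch state is ~0.5 (zero-average sine around a 0.5 offset).
samples = [spwm_leg_state(n / 100000) for n in range(2000)]  # 20 ms window
print(f"mean duty over one cycle: {sum(samples) / len(samples):.2f}")
```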

    SLA Management in Intent-Driven Service Management Systems: A Taxonomy and Future Directions

    Traditionally, network and system administrators have been responsible for designing, configuring, and resolving Internet service requests. Human-driven system configuration and management are proving unsatisfactory given the recent interest in time-sensitive applications with stringent quality-of-service (QoS) requirements. Aiming to transition from traditional human-driven to zero-touch service management in networks and computing, intent-driven service management (IDSM) has been proposed in response to these stringent requirements. In IDSM, users express their service requirements declaratively as intents. IDSM, with the help of closed control-loop operations, performs configurations and deployments autonomously to meet service request requirements. The result is faster deployment of Internet services and a reduction in configuration errors caused by manual operations, which in turn reduces service-level agreement (SLA) violations. As IDSM systems are in the early stages of development, they require attention from industry as well as academia. In an attempt to fill the gaps in current research, we conducted a systematic literature review of SLA management in IDSM systems. As an outcome, we have identified four IDSM intent management activities and proposed a taxonomy for each activity. An analysis of all studies, together with future research directions, is presented in the conclusions. (Comment: Extended version of the preprint submitted to ACM Computing Surveys (CSUR).)
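    A hypothetical sketch of the closed control loop behind intent-driven SLA management follows: declare an intent, observe the delivered QoS, and reconfigure when the measurement drifts from the target. The intent schema, metric names, and scaling action are invented for illustration and are not drawn from any surveyed system.

```python
# A declarative intent: the user states the outcome, not the configuration.
intent = {"service": "video", "latency_ms_max": 20}

def control_loop(intent, measured_latency_ms, replicas):
    """One observe -> compare -> reconfigure iteration of the closed loop."""
    if measured_latency_ms > intent["latency_ms_max"]:
        return replicas + 1  # scale out to restore the SLA
    if measured_latency_ms < 0.5 * intent["latency_ms_max"] and replicas > 1:
        return replicas - 1  # scale in to cut cost once there is headroom
    return replicas

replicas = 2
for latency in [18, 25, 31, 19, 8]:  # simulated per-interval measurements
    replicas = control_loop(intent, latency, replicas)
    print(f"{latency} ms -> {replicas} replicas")
```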

    Test-driven development of embedded control systems: application in an automotive collision prevention system

    With test-driven development (TDD), new code is not written until an automated test has failed, and duplications of functions, tests, or simply code fragments are always removed. TDD can lead to a better design and a higher quality of the developed system, but to date it has mainly been applied to the development of traditional software systems such as payroll applications. This thesis describes the novel application of TDD to the development of embedded control systems, using an automotive safety system for preventing collisions as an example. The basic prerequisite for test-driven development is the availability of an automated testing framework, as tests are executed very often. Such testing frameworks have been developed for nearly all programming languages, but not for the graphical, signal-driven language Simulink. Simulink is commonly used in the automotive industry and can be considered state-of-the-art for the design and development of embedded control systems in the automotive, aerospace, and other industries. The thesis therefore introduces a novel automated testing framework for Simulink. This framework forms the basis for the test-driven development process by integrating the analysis, design, and testing of embedded control systems into this process. The thesis then shows the application of TDD to a collision prevention system. The system architecture is derived from the requirements of the system, and four software components are identified, representing problem areas typical for the realisation of control systems, i.e. logical combinations, experimental problems, mathematical algorithms, and control theory. For each of these problems, a concept to systematically derive test cases from the requirements is presented. Moreover, two conventional approaches to designing the controller are introduced and compared in terms of their stability and performance. The effectiveness of the collision prevention system is assessed in trials on a driving simulator. These trials show that the system leads to a significant reduction of the accident rate for rear-end collisions. In addition, experiments with prototype vehicles on test tracks and in field tests are presented to verify the system's functional requirements within a system testing approach. Finally, the new test-driven development process for embedded control systems is evaluated in comparison to traditional development processes.
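    To make the TDD rhythm concrete, here is a hedged illustration transposed from Simulink to plain Python: write the test first, watch it fail, then implement just enough logic to pass. The time-to-collision rule and its threshold are invented stand-ins for the thesis's actual components.

```python
import unittest

def collision_warning(distance_m, closing_speed_mps, ttc_threshold_s=2.0):
    """Warn when time-to-collision (distance / closing speed) falls
    below the threshold; an opening gap is never a collision course."""
    if closing_speed_mps <= 0:
        return False
    return distance_m / closing_speed_mps < ttc_threshold_s

class TestCollisionWarning(unittest.TestCase):
    # In TDD these tests exist (and fail) before collision_warning does.
    def test_warns_when_ttc_short(self):
        self.assertTrue(collision_warning(10.0, 10.0))  # TTC = 1 s < 2 s

    def test_silent_when_gap_opens(self):
        self.assertFalse(collision_warning(10.0, -5.0))

if __name__ == "__main__":
    unittest.main()
```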

    Active Sampling-based Binary Verification of Dynamical Systems

    Nonlinear, adaptive, or otherwise complex control techniques are increasingly relied upon to ensure the safety of systems operating in uncertain environments. However, the nonlinearity of the resulting closed-loop system complicates verification that the system does in fact satisfy its requirements at all possible operating conditions. While analytical proof-based techniques and finite abstractions can be used to provably verify the closed-loop system's response at different operating conditions, they often produce conservative approximations due to restrictive assumptions and are difficult to construct in many applications. In contrast, popular statistical verification techniques relax these restrictions and instead rely upon simulations to construct statistical or probabilistic guarantees. This work presents a data-driven statistical verification procedure that constructs statistical learning models from simulated training data to separate the set of possible perturbations into "safe" and "unsafe" subsets. Binary evaluations of closed-loop requirement satisfaction at various realizations of the uncertainties are obtained through temporal logic robustness metrics, which are then used to construct predictive models of requirement satisfaction over the full set of possible uncertainties. As the accuracy of these predictive statistical models is inherently coupled to the quality of the training data, an active learning algorithm selects additional sample points so as to maximize the expected change in the data-driven model and thus, indirectly, minimize the prediction error. Various case studies demonstrate the closed-loop verification procedure and highlight improvements in prediction error over existing analytical and statistical verification techniques. (Comment: 23 pages.)
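    A toy sketch of the active-sampling idea follows: label a few simulated operating points as safe or unsafe, fit a simple predictor, and repeatedly query the pool point where the prediction is most uncertain. The one-dimensional uncertainty space, the hidden boundary at 0.62, and the nearest-neighbour predictor are all illustrative assumptions, not the paper's models or acquisition function.

```python
def simulate(x):
    """Stand-in for a closed-loop simulation returning a binary
    requirement-satisfaction verdict (True = 'safe')."""
    return x < 0.62  # hidden true safety boundary

def predict(x, labeled):
    """Fraction of 'safe' votes among the two nearest labeled samples;
    0.5 means the surrogate model is maximally uncertain at x."""
    nearest = sorted(labeled, key=lambda p: abs(p[0] - x))[:2]
    return sum(1.0 for _, safe in nearest if safe) / 2

labeled = [(0.0, simulate(0.0)), (1.0, simulate(1.0))]
pool = [i / 100 for i in range(1, 100)]

for _ in range(10):
    # Acquisition rule: query where the prediction is closest to 0.5.
    x = min(pool, key=lambda x: abs(predict(x, labeled) - 0.5))
    labeled.append((x, simulate(x)))
    pool.remove(x)

# Queried points concentrate around the true boundary near 0.62.
print(sorted(round(x, 2) for x, _ in labeled))
```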

    Requirement- and cost-driven product development process

    This paper presents an approach which enables cost- and requirement-driven control of the design process. It is based on the concept of Property-Driven Development (PDD) [WeWD-03]. The approach integrates well-established tools such as Target Costing and Value Analysis, as well as methods of design for requirements. In the authors' approach, the product development process is controlled by an ongoing target/actual ('Soll/Ist') comparison between target properties and the state of properties currently achieved. For each property, a quality rating from the customer's point of view is assigned depending on its fulfilment. The aim of the product development process is the maximisation of the sum of these quality ratings. This aim can be realised based on the PDD approach, because it supports the engineer/designer by explicitly representing the interdependencies between the properties (that have to be optimised) and the characteristics that influence these properties.
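    A minimal sketch of the ongoing target/actual ('Soll/Ist') comparison is given below: each property receives a customer quality rating from its degree of fulfilment, and the design is steered to maximise the weighted rating sum. The property names, weights, and linear rating curve are assumptions for illustration, not the paper's rating scheme.

```python
targets = {"mass_kg": 12.0, "cost_eur": 300.0}  # 'Soll' (target properties)
actuals = {"mass_kg": 13.5, "cost_eur": 280.0}  # 'Ist' (currently achieved)
weights = {"mass_kg": 0.4, "cost_eur": 0.6}     # customer importance

def quality_rating(target, actual):
    """1.0 when the target is met or beaten, degrading linearly with
    overshoot (lower-is-better properties assumed)."""
    return min(1.0, target / actual)

# The development process aims to maximise this weighted sum.
total = sum(w * quality_rating(targets[p], actuals[p]) for p, w in weights.items())
print(f"weighted quality score: {total:.2f}")
```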

    Managing services quality through admission control and active monitoring

    We propose a lightweight traffic admission control (AC) scheme based on on-line monitoring which ensures the quality of multimedia services both intra-domain and end-to-end. The AC strategy is distributed and service-oriented, and it allows QoS and SLS control without adding complexity to the network core. For each service class, AC decisions are driven by rate-based SLS control rules and QoS parameter control rules, defined and parameterized according to each service's characteristics. These rules are essentially based on systematic on-line measurements of relevant QoS and performance parameters; thus, from a practical perspective, we discuss and evaluate methodologies and mechanisms for parameter estimation. The AC criterion is evaluated with regard to its ability to ensure service commitments while achieving high network utilization. The results show that the proposed model provides a good compromise between simplicity, service-level guarantees, and network usage, even for services with strict QoS requirements.
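    The sketch below shows a per-class admission decision of the general kind described: a new flow is admitted only if the measured class rate plus the requested rate stays within the SLS rate share and recent QoS measurements respect the class thresholds. The field names and limits are illustrative assumptions, not the paper's parameterisation.

```python
# Hypothetical SLS for one service class.
sls = {"rate_limit_mbps": 40.0, "delay_ms_max": 30.0, "loss_max": 0.01}

def admit(measured_rate_mbps, requested_mbps, measured_delay_ms, measured_loss):
    """Rate-based SLS rule AND measured-QoS rule must both hold."""
    rate_ok = measured_rate_mbps + requested_mbps <= sls["rate_limit_mbps"]
    qos_ok = (measured_delay_ms <= sls["delay_ms_max"]
              and measured_loss <= sls["loss_max"])
    return rate_ok and qos_ok

print(admit(34.0, 4.0, 22.5, 0.003))  # True: fits within the SLS share
print(admit(38.5, 4.0, 22.5, 0.003))  # False: would exceed the rate rule
```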