    Quality-aware model-driven service engineering

    Service engineering and service-oriented architecture, as an integration and platform technology, are a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence the quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.
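
    To make the model-level focus concrete, here is a minimal sketch, using a hypothetical notation of my own (not the paper's), of how an architecture model can carry quality annotations that are analysable before any service is implemented: each black-box service declares a latency bound, and a sequential composition is budgeted from those declarations.

    # Illustrative sketch only: the ServiceModel class and the latency figures
    # are invented assumptions, not the paper's modelling notation.
    from dataclasses import dataclass

    @dataclass
    class ServiceModel:
        name: str
        latency_ms: float      # declared (not measured) performance annotation

    def composed_latency(pipeline):
        """Model-level latency estimate for a sequential composition of black-box services."""
        return sum(s.latency_ms for s in pipeline)

    # Hypothetical composition annotated at design time.
    pipeline = [
        ServiceModel("authenticate", 15.0),
        ServiceModel("lookup_order", 40.0),
        ServiceModel("render_invoice", 25.0),
    ]
    print(composed_latency(pipeline))   # 80.0 ms budget for the whole composition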

    Computer Architectures to Close the Loop in Real-time Optimization

    © 2015 IEEE. Many modern control, automation, signal processing and machine learning applications rely on solving a sequence of optimization problems, which are updated with measurements of a real system that evolves in time. The solutions of each of these optimization problems are then used to make decisions, which may be followed by changing some parameters of the physical system, thereby resulting in a feedback loop between the computing and the physical system. Real-time optimization is not the same as fast optimization, because the computation is affected by an uncertain system that evolves in time. The suitability of a design should therefore not be judged from the optimality of a single optimization problem, but based on the evolution of the entire cyber-physical system. The algorithms and hardware used for solving a single optimization problem in the office might therefore be far from ideal when solving a sequence of real-time optimization problems. Instead of there being a single, optimal design, one has to trade off a number of objectives, including performance, robustness, energy usage, size and cost. We therefore provide here a tutorial introduction to some of the questions and implementation issues that arise in real-time optimization applications. We concentrate on some of the decisions that have to be made when designing the computing architecture and algorithm, and argue that the choice of one informs the other.
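
    The loop the abstract describes can be illustrated with a minimal sketch, under assumptions of my own (a toy quadratic cost, a fictitious plant, and a fixed iteration budget), showing why the per-period computational budget matters more than solving any single problem to optimality.

    # Illustrative real-time optimization loop: measure, warm-start, run a few
    # solver iterations within the deadline, act. The cost, plant and step size
    # are invented for illustration; none of this is taken from the paper.
    import numpy as np

    def solve_step(x_prev, y_meas, iters=5, step=0.1):
        """A few projected-gradient iterations on an illustrative cost
        0.5*||x||^2 + ||x - y_meas||^2 subject to |x_i| <= 1."""
        x = x_prev.copy()                    # warm start from the previous solution
        for _ in range(iters):               # fixed budget, not "solve to optimality"
            grad = x + 2.0 * (x - y_meas)    # gradient of the illustrative cost
            x = np.clip(x - step * grad, -1.0, 1.0)
        return x

    rng = np.random.default_rng(0)
    state = np.zeros(3)                      # fictitious physical state
    x = np.zeros(3)                          # decision variable

    for k in range(20):                      # feedback loop: measure -> optimize -> act
        y = state + 0.01 * rng.standard_normal(3)   # noisy measurement of the system
        x = solve_step(x, y)                         # inexact solve within the deadline
        state = 0.9 * state + 0.1 * x                # the plant evolves under the decision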

    Overlay networks for smart grids


    Exploiting programmable architectures for WiFi/ZigBee inter-technology cooperation

    The increasing complexity of wireless standards has shown that protocols cannot be designed once for all possible deployments, especially when unpredictable and mutating interference situations are present due to the coexistence of heterogeneous technologies. As such, flexibility and (re)programmability of wireless devices is crucial in the emerging scenarios of technology proliferation and unpredictable interference conditions. In this paper, we focus on the possibility of improving the coexistence performance of WiFi and ZigBee networks by exploiting novel programmable architectures of wireless devices able to support run-time modifications of medium access operations. Unlike software-defined radio (SDR) platforms, in which every function is programmed from scratch, our programmable architectures are based on a clear decoupling between elementary commands (hard-coded into the devices) and programmable protocol logic (injected into the devices) according to which the execution of commands is scheduled. Our contribution is twofold: first, we designed and implemented a cross-technology time division multiple access (TDMA) scheme devised to provide a global synchronization signal and allocate alternating channel intervals to WiFi and ZigBee programmable nodes; second, we used the OMF control framework to define an interference detection and adaptation strategy that in principle could work in independent and autonomous networks. Experimental results demonstrate the benefits of the envisioned solution.
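
    The cross-technology TDMA idea can be sketched in a few lines; the slot lengths and function names below are assumptions for illustration, not the values or interfaces used in the paper.

    # Illustrative sketch: a shared superframe split into alternating WiFi and
    # ZigBee intervals, with every node checking a globally synchronized clock
    # to decide whether its technology currently owns the medium.
    WIFI_SLOT_MS = 40        # hypothetical interval reserved for WiFi traffic
    ZIGBEE_SLOT_MS = 20      # hypothetical interval reserved for ZigBee traffic
    SUPERFRAME_MS = WIFI_SLOT_MS + ZIGBEE_SLOT_MS

    def owner(t_sync_ms):
        """Return which technology owns the medium at synchronized time t_sync_ms."""
        offset = t_sync_ms % SUPERFRAME_MS
        return "wifi" if offset < WIFI_SLOT_MS else "zigbee"

    def may_transmit(technology, t_sync_ms):
        """A node defers (suspends its MAC activity) while the other technology owns the slot."""
        return owner(t_sync_ms) == technology

    # Example: a ZigBee node polls the schedule before transmitting.
    for t in (0, 25, 45, 70):
        print(t, owner(t), may_transmit("zigbee", t))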

    Safety-related challenges and opportunities for GPUs in the automotive domain

    GPUs have been shown to cover the computing performance needs of autonomous driving (AD) systems. However, since the GPUs used for AD build on designs for the mainstream market, they may lack fundamental properties for correct operation under automotive safety regulations. In this paper, we analyze some of the main challenges in hardware and software design to embrace GPUs as the reference computing solution for AD, with emphasis on ISO 26262 functional safety requirements. The authors would like to thank Guillem Bernat from Rapita Systems for his technical feedback on this work. The research leading to this work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773). This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Carles Hernández is jointly funded by the Spanish Ministry of Economy and Competitiveness and FEDER funds through grant TIN2014-60404-JIN.

    Design for validation: An approach to systems validation

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and the system life cycle) are provided, and it is shown how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

    Quality of service assurance for the next generation Internet

    Provisioning for multimedia applications has been of increasing interest among researchers and Internet Service Providers. Through the migration from resource-based to service-driven networks, it has become evident that the Internet model should be enhanced to provide support for a variety of differentiated services that match application and customer requirements, rather than staying limited to the flat best-effort service that is currently provided. In this paper, we describe and critically appraise the major achievements of the efforts to introduce Quality of Service (QoS) assurance and provisioning within the Internet model. We then propose a research path for the creation of a network services management architecture, through which we can move towards a QoS-enabled network environment offering support for a variety of different services, based on traffic characteristics and user expectations.
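
    As a rough illustration of what differentiated treatment (as opposed to flat best effort) amounts to, the sketch below uses per-class token-bucket policing; the class names and rates are my own assumptions, not part of the proposed architecture.

    # Illustrative sketch: packets are classified into a few service classes and
    # each class is policed by its own token bucket, so different traffic
    # characteristics receive different forwarding treatment.
    import time

    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0       # refill rate in bytes per second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True                  # conforming: forward with class treatment
            return False                     # non-conforming: drop or remark

    # Hypothetical per-class policers, loosely in the spirit of DiffServ classes.
    classes = {
        "expedited": TokenBucket(rate_bps=2_000_000, burst_bytes=10_000),     # e.g. voice
        "assured": TokenBucket(rate_bps=10_000_000, burst_bytes=50_000),      # e.g. video
        "best_effort": TokenBucket(rate_bps=50_000_000, burst_bytes=200_000),
    }

    def admit(service_class, packet_bytes):
        return classes[service_class].allow(packet_bytes)

    print(admit("expedited", 1200))   # True while within the class profile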