
    Reconfigurable Mobile Multimedia Systems

    This paper discusses reconfigurability issues in low-power hand-held multimedia systems, with particular emphasis on energy conservation. We argue that a radically new approach is needed to meet the processing-power and energy-consumption requirements of future mobile applications. A reconfigurable system architecture, combined with a QoS-driven operating system, is introduced that can deal with the inherent dynamics of a mobile system. We present preliminary results of our studies on reconfiguration in hand-held mobile computers: reconfigurable media streams, reconfigurable processing modules, and function migration.

    The Design of a System Architecture for Mobile Multimedia Computers

    This chapter discusses the system architecture of a portable computer, called the Mobile Digital Companion, which provides support for handling multimedia applications energy-efficiently. Because battery life is limited and battery weight strongly influences the size and weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and exploits locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy-reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous, autonomous, programmable modules, each providing an energy-efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.

    A Survey of Techniques For Improving Energy Efficiency in Embedded Computing Systems

    Recent technological advances have greatly improved the performance and features of embedded systems. With the number of mobile devices alone now approaching the population of the Earth, embedded systems have truly become ubiquitous. These trends, however, have also made the task of managing their power consumption extremely challenging. In recent years, several techniques have been proposed to address this issue. In this paper, we survey techniques for managing the power consumption of embedded systems. We discuss the need for power management and classify the techniques along several important parameters to highlight their similarities and differences. This paper is intended to help researchers and application developers gain insight into the working of power-management techniques and design even more efficient high-performance embedded systems of tomorrow.
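
    The trade-offs such surveys classify can be made concrete with a little arithmetic. The sketch below compares two widely discussed policies, DVFS (run slower at a lower voltage) and race-to-idle (run fast, then sleep), under the textbook cubic model P_dyn ∝ f³; all numbers and power-model parameters are illustrative assumptions, not figures from the paper.

```python
# Illustrative energy comparison of two common power-management policies:
# DVFS versus race-to-idle. Uses the textbook model P_dyn ~ f^3
# (since P ~ C*V^2*f and V scales roughly with f). All constants below
# are hypothetical, chosen only to make the comparison concrete.

def dvfs_energy(cycles, f, p_static=0.1):
    """Energy to execute `cycles` at frequency f (normalized units)."""
    t = cycles / f               # execution time at the scaled frequency
    p_dyn = f ** 3               # dynamic power under the cubic model
    return (p_dyn + p_static) * t

def race_to_idle_energy(cycles, f_max, deadline, p_static=0.1, p_sleep=0.01):
    """Run at f_max, then sleep in a low-power state until the deadline."""
    t_busy = cycles / f_max
    t_sleep = deadline - t_busy
    return (f_max ** 3 + p_static) * t_busy + p_sleep * t_sleep

cycles, deadline = 1.0, 2.0
e_slow = dvfs_energy(cycles, f=0.5)          # f = 0.5 just meets the deadline
e_race = race_to_idle_energy(cycles, f_max=1.0, deadline=deadline)
```

    Under these assumed parameters the DVFS run that just meets the deadline uses less energy than racing to idle; the conclusion can flip when static power dominates, which is exactly the kind of parameter along which such techniques are classified.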

    A Real-Time Service-Oriented Architecture for Industrial Automation

    Industrial automation platforms are experiencing a paradigm shift. New technologies are making their way into the area, including embedded real-time systems, standard local area networks like Ethernet, Wi-Fi and ZigBee, IP-based communication protocols, standard service-oriented architectures (SOAs) and Web services. An automation system will be composed of flexible autonomous components with plug-and-play functionality, self-configuration and diagnostics, and autonomic local control that communicate through standard networking technologies. However, the introduction of these new technologies raises important problems that need to be properly solved, one of these being the need to support real-time and quality-of-service (QoS) guarantees for real-time applications. This paper describes a SOA enhanced with real-time capabilities for industrial automation. The proposed architecture allows for negotiation of the QoS requested by clients from Web services, and provides temporal encapsulation of individual activities. This way, it is possible to perform an a priori analysis of the temporal behavior of each service, and to avoid unwanted interference among them. After describing the architecture, experimental results gathered on a real implementation of the framework (which leverages a soft real-time scheduler for the Linux kernel) are presented, showing the effectiveness of the proposed solution. The experiments were performed on simple case studies designed in the context of industrial automation applications.
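
    The QoS negotiation described above can be illustrated with a minimal admission test of the kind used by reservation-based schedulers such as the Constant Bandwidth Server: each client requests a budget Q every period P, and the request is admitted only if total utilization stays below a bound. The class and numbers below are a hypothetical sketch, not the paper's actual API.

```python
# Minimal sketch of utilization-based admission control for QoS
# negotiation. Each reservation is a (budget, period) pair; a request is
# admitted iff the total utilization sum(Q_i / P_i) stays within u_max.
# Hypothetical interface for illustration only.

class AdmissionController:
    def __init__(self, u_max=1.0):
        self.u_max = u_max
        self.reservations = []   # list of admitted (budget, period) pairs

    def request(self, budget, period):
        """Admit the reservation iff total utilization stays <= u_max."""
        u_new = budget / period
        u_cur = sum(q / p for q, p in self.reservations)
        if u_cur + u_new <= self.u_max:
            self.reservations.append((budget, period))
            return True
        return False             # client must renegotiate a smaller budget

ac = AdmissionController(u_max=0.9)
ac.request(2, 10)    # utilization 0.2 -> admitted
ac.request(5, 10)    # total 0.7     -> admitted
ac.request(3, 10)    # would reach 1.0 > 0.9 -> rejected
```

    Rejecting the third request is what makes the a priori temporal analysis possible: admitted services cannot overload the processor, so their worst-case interference is bounded.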

    Intelligent control for scalable video processing

    In this thesis we study a problem related to cost-effective video processing in software by consumer electronics devices, such as digital TVs. Video processing is the task of transforming an input video signal into an output video signal, for example to improve the quality of the signal. This transformation is described by a video algorithm. At a high level, video processing can be seen as the task of processing a sequence of still pictures, called frames. Video processing in consumer electronics devices is subject to strict time constraints. In general, the successively processed frames are needed periodically in time. If a frame is not processed in time, a quality reduction of the output signal may be perceived. Video processing in software is often characterized by highly fluctuating, content-dependent processing times of frames, and there is often a considerable gap between the worst-case and average-case processing times. In general, assigning processing time to a software video processing task based on its worst-case needs is not cost-effective. We therefore consider a software video processing task that has been assigned insufficient processing time to process the most compute-intensive frames in time. As a result, a severe quality reduction of the output signal may occur. To optimize the quality of the output signal, given the limited amount of processing time available to the task, we do the following. First, we use a technique called asynchronous processing, which allows the task to make more effective use of the available processing time by working ahead. Second, we make use of scalable video algorithms. A scalable video algorithm can process frames at different quality levels: the higher the applied quality level for a frame, the higher the resulting picture quality, but also the more processing time needed.
Due to the combination of asynchronous processing and scalable processing, a larger fraction of the frames can be processed in time, although sometimes at a lower picture quality. The problem we consider is to select the quality level for each frame. The objective we try to optimize reflects the user-perceived quality, and is given by a combination of the number of frames that are not processed in time, the quality levels applied for the processed frames, and the changes in the applied quality level between successive frames. The video signal to be processed is not known in advance, which means that we have to make a quality-level decision for each frame without knowing the processing time this will result in, and without knowing the complexity of the subsequent frames. As a first solution approach we modeled this problem as a Markov decision process. The input of the model is given by the budgeted processing time for the task, and statistics on the processing times of frames at the different quality levels. Solving the Markov decision process results in a Markov strategy that can be used to select a quality level for each frame to be processed, based on the amount of time available for processing until the deadline of the frame. Our first solution approach works well if the processing times of successive frames are independent. In practice, however, the processing times of successive frames can be highly correlated, because successive frames are often very similar. Our second solution approach, which can be seen as an extension of the first, takes care of the dependencies in the processing times of successive frames. The idea is that we introduce a measure for the complexity of successively processed frames, based on structural fluctuations in the processing times of the frames. Before processing, we solve the Markov decision process several times, for different values of the complexity measure.
During video processing we regularly determine the complexity measure for the frames that have just been processed, and based on this measure we dynamically adapt the Markov policy that is applied to select the quality level for the next frame. The Markov strategies that we use are computed from processing-time statistics of a particular collection of video sequences. Hence, these statistics can differ from the statistics of the video sequence that is actually processed. Therefore we also worked out a third solution approach, in which we use a learning algorithm to select the quality levels for frames. The algorithm starts with hardly any processing-time statistics and has to learn these statistics from run-time experience. Basically, the learning algorithm implicitly solves the Markov decision process at run time, making use of the increasing amount of information that becomes available. The algorithm also takes care of dependencies in the processing times of successive frames, using the same complexity measure as in our second solution approach. From computer simulations we learned that our second and third solution approaches perform close to a theoretical upper bound, determined by a reference strategy that selects the quality levels for frames based on complete knowledge of the processing times of all frames to be processed. Although our solutions are successful in computer simulations, they still have to be tested in a real system.

    Real-time scheduling for media processing using conditionally guaranteed budgets

    In this thesis we address a scheduling problem that originates in the cost-effective processing of multiple media streams in software in consumer devices, such as digital televisions. In recent years there have been trends from analog to digital systems, and from the processing of digital signals by dedicated, application-specific hardware to processing in software. Powerful programmable processors are used for the software processing of digital media. To compete with existing solutions, it is essential that this programmable hardware be used very cost-effectively. In addition, the existing properties of these consumer devices, such as robustness, stability, and predictability, must be preserved when software is used. Furthermore, a consumer device must be able to process multiple media streams simultaneously. This challenge was taken up within the research laboratories of Philips in the so-called Video-Quality-of-Service programme, and the work described in this thesis originated within that programme. The approach chosen within that programme is based on scalable algorithms for media processing, budgets for those algorithms, and software that adapts the settings of those algorithms and the sizes of the budgets while the media are being processed. For cost-effective use of the programmable processors, the budgets are tightly dimensioned. This thesis gives a comprehensive description of that approach, and of a model of a device that demonstrates its feasibility. We then show that the approach leads to a problem when multiple streams with different relative relevances to the user of the device are processed simultaneously.
To solve this problem we propose the new concept of conditionally guaranteed budgets, and we describe how that concept can be realized. The techniques for analyzing the scheduling problem for budgets are based on existing techniques for worst-case analysis of periodic real-time tasks. We extend those existing techniques with techniques for best-case analysis, so that we can analyze devices that use this new type of budget.
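
    The worst-case analysis for periodic real-time tasks that this budget analysis builds on is the classic fixed-point response-time recurrence R_i = C_i + Σ_j ⌈R_i / T_j⌉ · C_j over the higher-priority tasks j. The sketch below implements that standard recurrence, not the thesis's best-case extension; the task set is hypothetical.

```python
import math

# Classic worst-case response-time analysis for fixed-priority periodic
# tasks: iterate R = C_i + sum over higher-priority tasks j of
# ceil(R / T_j) * C_j until a fixed point is reached.

def response_time(tasks, i):
    """Worst-case response time of tasks[i]; tasks is sorted by priority,
    highest first, as (C, T) pairs with deadline = period.
    Returns None if the deadline cannot be guaranteed."""
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        interference = sum(math.ceil(R / T_j) * C_j
                           for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next > T_i:
            return None          # task i is not schedulable
        if R_next == R:
            return R             # fixed point: worst-case response time
        R = R_next

tasks = [(1, 4), (2, 6), (3, 12)]   # (C, T) in rate-monotonic order
rts = [response_time(tasks, i) for i in range(len(tasks))]
```

    For this task set the analysis yields response times of 1, 3, and 10, all within the respective periods, so every budget replenishment is guaranteed in time.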

    Wireless multimedia sensor network technology: a survey

    A Wireless Multimedia Sensor Network (WMSN) is composed of small embedded video motes capable of extracting information from the surrounding environment, processing it locally, and then wirelessly transmitting it to a parent node or sink. Each mote comprises a video sensor, a digital signal processing unit, and a digital radio interface. In this paper we survey existing WMSN hardware and communication-protocol-layer technologies for fulfilling the objectives of WMSNs. We also list the various technical challenges posed by this technology while discussing the communication-protocol-layer technologies. Sensor networking capabilities are urgently required for some of our most important scientific and societal problems, such as understanding the international carbon budget, monitoring water resources, monitoring vehicle emissions, and safeguarding public health. This is a daunting research challenge, requiring distributed sensor systems that operate in complex environments while providing assurance of reliable and accurate sensing.

    An accurate analysis for guaranteed performance of multiprocessor streaming applications

    For more than a decade, consumer electronics devices have been available for entertainment, educational, or telecommunication tasks based on multimedia streaming applications, i.e., applications that process streams of audio and video samples in digital form. Multimedia capabilities are expected to become more and more commonplace in portable devices. This leads to challenges with respect to cost efficiency and quality. This thesis contributes models and analysis techniques for improving the cost efficiency, and therefore also the quality, of multimedia devices. Portable consumer electronics devices should feature flexible functionality on the one hand and low power consumption on the other. These two requirements are conflicting. Therefore, we focus on a class of hardware that represents a good trade-off between them, namely domain-specific multiprocessor systems-on-chip (MP-SoC). Our research contributes to dynamic (i.e., run-time) optimization of MP-SoC system metrics. The central question in this area is how to ensure that real-time constraints are satisfied while a metric of interest, such as perceived multimedia quality or power consumption, is optimized. In these cases, we speak of quality-of-service (QoS) and power management, respectively. In this thesis, we pursue real-time constraint satisfaction that is guaranteed by the system by construction and proven mainly by analytical reasoning. That approach is often taken in real-time systems to ensure reliable performance. The performance analysis therefore has to be conservative, i.e., it has to use pessimistic assumptions about the unknown conditions that can negatively influence system performance. We adopt this hypothesis as the foundation of this work. The subject of this thesis is therefore the analysis of guaranteed performance for multimedia applications running on multiprocessors.
It is very important to note that our conservative approach is essentially different from considering only the worst-case state of the system. Unlike the worst-case approach, our approach is dynamic, i.e., it makes use of run-time characteristics of the input data and the environment of the application. The main purpose of our performance analysis method is to guide run-time optimization. Typically, a resource or quality manager predicts the execution time, i.e., the time it takes the system to process a certain number of input data samples. When the execution times get smaller, due to the dependency of the execution time on the input data, the manager can switch the control parameter for the metric of interest such that the metric improves but the system gets slower; for power optimization, that means switching to a low-power mode. If execution times grow, the manager can set parameters so that the system gets faster; for QoS management, for example, the application can be switched to a different quality mode with some degradation in perceived quality. The real-time constraints are then never violated, and the metrics of interest are kept as good as possible. Unfortunately, maintaining system metrics such as power and quality at an optimal level conflicts with our main requirement of providing performance guarantees, because for the latter one has to give up some quality or power consumption. Therefore, the performance analysis approach developed in this thesis is not only conservative but also accurate, so that the optimization of the metric of interest does not suffer too much from conservatism. This is not trivial to realize when two factors are combined: parallel execution on multiple processors and dynamic variation of the data-dependent execution delays. We achieve the goal of conservative and accurate performance estimation for an important class of multiprocessor platforms and multimedia applications.
Our performance analysis technique is realizable in practice in QoS or power management setups. We consider a generic MP-SoC platform that runs a dynamic set of applications, each application possibly using multiple processors. We assume that the applications are independent, although it is possible to relax this requirement in the future. To support real-time constraints, we require that the platform can provide guaranteed computation, communication and memory budgets for applications. Following important trends in system-on-chip communication, we support both global buses and networks-on-chip. We represent every application as a homogeneous synchronous dataflow (HSDF) graph, where the application tasks are modeled as graph nodes, called actors. We allow dynamic data-dependent actor execution delays, which makes HSDF graphs very useful for expressing modern streaming applications. Our reason to consider HSDF graphs is that they provide a good basic foundation for analytical performance estimation. In this setup, this thesis provides three major contributions: 1. Given an application mapped to an MP-SoC platform, given the performance guarantees for the individual computation units (the processors) and the communication unit (the network-on-chip), and given constant actor execution delays, we derive the throughput and the execution time of the system as a whole. 2. Given a mapped application and platform performance guarantees as in the previous item, we extend our approach for constant actor execution delays to dynamic data-dependent actor delays. 3. We propose a global implementation trajectory that starts from the application specification and goes through design-time and run-time phases. It uses an extension of the HSDF model of computation to reflect the design decisions made along the trajectory.
We present our model and trajectory not only to put the first two contributions into the right context, but also to present our vision on the different parts of the trajectory, to make a complete and consistent story. Our first contribution uses the idea of so-called IPC (inter-processor communication) graphs known from the literature, whereby a single model of computation (i.e., HSDF graphs) is used to model not only the computation units, but also the communication unit (the global bus or the network-on-chip) and the FIFO (first-in-first-out) buffers that form a 'glue' between the computation and communication units. We were the first to propose HSDF graph structures for modeling bounded FIFO buffers and guaranteed-throughput network connections for the network-on-chip communication in MP-SoCs. As a result, our HSDF models enable the formalization of the on-chip FIFO buffer capacity minimization problem under a throughput constraint as a graph-theoretic problem. Using HSDF graphs to formalize that problem helps to find the performance bottlenecks in a given solution and to improve that solution. To demonstrate this, we use the JPEG decoder application case study. Also, we show that, assuming constant actor delays (worst-case for the given JPEG image), we can predict execution times of JPEG decoding on two processors with an accuracy of 21%. Our second contribution is based on an extension of the scenario approach. This approach is based on the observation that the dynamic behavior of an application is typically composed of a limited number of sub-behaviors, i.e., scenarios, that have similar resource requirements, i.e., similar actor execution delays in the context of this thesis. The previous work on scenarios treats only single-processor applications or multiprocessor applications that do not exploit all the flexibility of the HSDF model of computation.
We develop new scenario-based techniques in the context of HSDF graphs to derive the timing overlap between different scenarios, which is very important for achieving good accuracy for general HSDF graphs executing on multiprocessors. We exploit this idea in an application case study, the MPEG-4 arbitrarily-shaped video decoder, and demonstrate execution time prediction with an average accuracy of 11%. To the best of our knowledge, for the given setup, no other existing performance analysis technique can provide comparable accuracy together with performance guarantees.
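
    The throughput guarantee of contribution 1 rests on a standard property of HSDF graphs with constant actor delays: the steady-state period equals the maximum cycle ratio, i.e., the maximum over cycles of the sum of actor delays divided by the number of initial tokens on the cycle. The brute-force enumeration below illustrates this on an invented two-actor graph; practical tools use algorithms such as Howard's or Karp-style methods rather than exhaustive search.

```python
from fractions import Fraction

# Sketch: maximum cycle ratio (MCR) of a tiny HSDF graph by brute-force
# enumeration of simple cycles. Guaranteed throughput is then 1 / MCR.
# Graph and delays are invented for illustration. A cycle with zero
# tokens would mean deadlock; such cycles are simply skipped here.

def max_cycle_ratio(delay, edges):
    """delay: {actor: execution delay}; edges: {(u, v): initial tokens}.
    Returns the maximum over simple cycles of delay-sum / token-sum."""
    best = Fraction(0)

    def dfs(start, node, visited, d_sum, t_sum):
        nonlocal best
        for (u, v), tok in edges.items():
            if u != node:
                continue
            if v == start and t_sum + tok > 0:      # cycle closed
                best = max(best, Fraction(d_sum, t_sum + tok))
            elif v not in visited:
                dfs(start, v, visited | {v}, d_sum + delay[v], t_sum + tok)

    for a in delay:
        dfs(a, a, {a}, delay[a], 0)
    return best

# Two-actor pipeline with feedback: A -> B (no tokens), B -> A (1 token)
delay = {"A": 2, "B": 3}
edges = {("A", "B"): 0, ("B", "A"): 1}
mcr = max_cycle_ratio(delay, edges)   # (2 + 3) / 1
throughput = 1 / mcr                  # iterations per time unit
```

    On this example the single cycle yields a period of 5 time units, so one graph iteration is guaranteed every 5 units; the FIFO-capacity minimization mentioned above corresponds to choosing the token counts on such cycles as small as the throughput constraint allows.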