
    Real-time scheduling for media processing using conditionally guaranteed budgets

    In this thesis we address a scheduling problem that originates in the cost-effective processing of various media in software on consumer devices, such as digital televisions. In recent years there have been trends from analog to digital systems, and from processing digital signals with dedicated, application-specific hardware to processing them in software. Processing digital media in software relies on powerful programmable processors. To compete with existing solutions, this programmable hardware must be used very cost-effectively. In addition, the established properties of these consumer devices, such as robustness, stability, and predictability, must be preserved when software is used. Furthermore, a consumer device must be able to process multiple media streams simultaneously. This challenge was taken up within the research laboratories of Philips in the so-called Video-Quality-of-Service programme, and the work described in this thesis originated within that programme. The approach chosen in that programme is based on scalable media-processing algorithms, budgets for those algorithms, and software that adjusts the settings of the algorithms and the sizes of the budgets while the media are being processed. To use the programmable processors cost-effectively, the budgets are dimensioned tightly. This thesis gives an extensive description of that approach, and of a model of a device that demonstrates its feasibility. We then show that the approach leads to a problem when multiple streams with different relative relevances to the user of the device are processed simultaneously. To solve this problem, we propose the new concept of conditionally guaranteed budgets, and we describe how that concept can be realized. The techniques for analyzing the scheduling problem for budgets are based on existing techniques for worst-case analysis of periodic real-time tasks. We extend those existing techniques with techniques for best-case analysis, so that we can analyze devices that use this new type of budget.
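    The abstract refers to worst-case analysis for periodic real-time tasks. As a point of reference only (not the thesis's conditionally-guaranteed-budget extension), the sketch below shows the classical fixed-priority response-time recurrence R_i = C_i + sum_{j<i} ceil(R_i / T_j) * C_j; the task set is made up for illustration.

```python
import math

def worst_case_response_time(tasks, i):
    """Classical response-time recurrence for fixed-priority periodic tasks.

    tasks: list of (C, T) pairs (worst-case execution time, period), sorted by
    decreasing priority. Returns the worst-case response time of task i, or
    None if the recurrence exceeds the task's period (deadline can be missed).
    """
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        # Interference from all higher-priority tasks released during R.
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:
            return R
        if R_next > T_i:
            return None
        R = R_next

# Example: three periodic tasks (C, T), highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]
print([worst_case_response_time(tasks, i) for i in range(len(tasks))])  # [1, 3, 10]
```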

    Intelligent control for scalable video processing

    In this thesis we study a problem related to cost-effective video processing in software by consumer electronics devices, such as digital TVs. Video processing is the task of transforming an input video signal into an output video signal, for example to improve the quality of the signal. This transformation is described by a video algorithm. At a high level, video processing can be seen as the task of processing a sequence of still pictures, called frames. Video processing in consumer electronic devices is subject to strict time constraints. In general, the successively processed frames are needed periodically in time. If a frame is not processed in time, a quality reduction of the output signal may be perceived. Video processing in software is often characterized by highly fluctuating, content-dependent processing times of frames. There is often a considerable gap between the worst-case and average-case processing times of frames. In general, assigning processing time to a software video processing task based on its worst-case needs is not cost-effective. We consider a software video processing task that has been assigned insufficient processing time to process the most compute-intensive frames in time. As a result, a severe quality reduction of the output signal may occur. To optimize the quality of the output signal, given the limited amount of processing time that is available to the task, we do the following. First we use a technique called asynchronous processing, which allows the task to make more effective use of the available processing time by working ahead. Second, we make use of scalable video algorithms. A scalable video algorithm can process frames at different quality levels. The higher the applied quality level for a frame, the higher the resulting picture quality, but also the more processing time is needed. Due to the combination of asynchronous processing and scalable processing, a larger fraction of the frames can be processed in time, although sometimes at the cost of a lower picture quality. The problem we consider is to select the quality level for each frame. The objective that we try to optimize reflects the user-perceived quality, and is given by a combination of the number of frames that are not processed in time, the quality levels applied for the processed frames, and the changes in the applied quality level between successive frames. The video signal to be processed is not known in advance, which means that we have to make a quality-level decision for each frame without knowing how much processing time it will require, and without knowing the complexity of the subsequent frames. As a first solution approach we modeled this problem as a Markov decision process. The input of the model is given by the budgeted processing time for the task, and statistics on the processing times of frames at the different quality levels. Solving the Markov decision process results in a Markov strategy that can be used to select a quality level for each frame to be processed, based on the amount of time that is available for processing until the deadline of the frame. Our first solution approach works well if the processing times of successive frames are independent. In practice, however, the processing times of successive frames can be highly correlated, because successive frames are often very similar. Our second solution approach, which can be seen as an extension of our first approach, takes care of the dependencies in the processing times of successive frames.
The idea is that we introduce a measure for the complexity of successively processed frames, based on structural fluctuations in the processing times of the frames. Before processing, we solve the Markov decision process several times, for different values of the complexity measure. During video processing we regularly determine the complexity measure for the frames that have just been processed, and based on this measure we dynamically adapt the Markov policy that is applied to select the quality level for the next frame. The Markov strategies that we use are computed based on processing-time statistics of a particular collection of video sequences. Hence, these statistics can differ from the statistics of the video sequence that is processed. Therefore we also worked out a third solution approach in which we use a learning algorithm to select the quality levels for frames. The algorithm starts with hardly any processing-time statistics, and has to learn these statistics from run-time experience. Basically, the learning algorithm implicitly solves the Markov decision process at run time, making use of the increasing amount of information that becomes available. The algorithm also takes care of dependencies in the processing times of successive frames, using the same complexity measure as in our second solution approach. From computer simulations we learned that our second and third solution approaches perform close to a theoretical upper bound, determined by a reference strategy that selects the quality levels for frames based on complete knowledge of the processing times of all frames to be processed. Although our solutions are successful in computer simulations, they still have to be tested in a real system.
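    The quality-level selection described above can be made concrete with a small finite-horizon Markov decision process. The sketch below is illustrative only: the budget, slack range, processing-time distributions, and rewards are invented, and the thesis's actual model additionally handles work-ahead and penalties for quality-level changes.

```python
# Minimal sketch of an MDP-based quality-level policy, with invented numbers.
P = 4            # processing budget gained per frame period (time units)
S_MAX = 8        # maximum slack the task is allowed to accumulate
N = 50           # horizon in frames
MISS_PENALTY = 10.0

# Per quality level: picture-quality reward and processing-time distribution.
REWARD = {0: 1.0, 1: 2.0}
TIME_PMF = {0: {2: 0.7, 3: 0.3},           # low quality: cheap frames
            1: {3: 0.5, 4: 0.3, 6: 0.2}}   # high quality: occasionally expensive

def solve():
    """Finite-horizon value iteration over the slack available before a deadline."""
    V = [0.0] * (S_MAX + 1)
    policy = [0] * (S_MAX + 1)
    for _ in range(N):
        V_new = [0.0] * (S_MAX + 1)
        for s in range(S_MAX + 1):
            best_q, best_val = 0, float("-inf")
            for q, pmf in TIME_PMF.items():
                val = 0.0
                for t, p in pmf.items():
                    if t <= s:                       # frame finishes in time
                        s_next = min(s - t + P, S_MAX)
                        val += p * (REWARD[q] + V[s_next])
                    else:                            # deadline miss: abort frame
                        val += p * (-MISS_PENALTY + V[min(P, S_MAX)])
                if val > best_val:
                    best_q, best_val = q, val
            V_new[s], policy[s] = best_val, best_q
        V = V_new
    return policy

# Quality level to select for each possible amount of slack (0 .. S_MAX).
print(solve())
```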

    Bulgarian economy in the second quarter of 2003


    The Bulgarian economy - April 2005


    An accurate analysis for guaranteed performance of multiprocessor streaming applications

    Already for more than a decade, consumer electronic devices have been available for entertainment, educational, or telecommunication tasks based on multimedia streaming applications, i.e., applications that process streams of audio and video samples in digital form. Multimedia capabilities are expected to become more and more commonplace in portable devices. This leads to challenges with respect to cost efficiency and quality. This thesis contributes models and analysis techniques for improving the cost efficiency, and therefore also the quality, of multimedia devices. Portable consumer electronic devices should feature flexible functionality on the one hand and low power consumption on the other hand. These two requirements conflict. Therefore, we focus on a class of hardware that represents a good trade-off between those two requirements, namely on domain-specific multiprocessor systems-on-chip (MP-SoC). Our research work contributes to dynamic (i.e., run-time) optimization of MP-SoC system metrics. The central question in this area is how to ensure that real-time constraints are satisfied and that the metric of interest, such as perceived multimedia quality or power consumption, is optimized. In these cases, we speak of quality-of-service (QoS) and power management, respectively. In this thesis, we pursue real-time constraint satisfaction that is guaranteed by the system by construction and proven mainly based on analytical reasoning. That approach is often taken in real-time systems to ensure reliable performance. Therefore the performance analysis has to be conservative, i.e., it has to use pessimistic assumptions about the unknown conditions that can negatively influence the system performance. We adopt this hypothesis as the foundation of this work. Therefore, the subject of this thesis is the analysis of guaranteed performance for multimedia applications running on multiprocessors. It is very important to note that our conservative approach is essentially different from considering only the worst-case state of the system. Unlike the worst-case approach, our approach is dynamic, i.e., it makes use of run-time characteristics of the input data and the environment of the application. The main purpose of our performance analysis method is to guide the run-time optimization. Typically, a resource or quality manager predicts the execution time, i.e., the time it takes the system to process a certain number of input data samples. When the execution times get smaller, due to the dependency of the execution time on the input data, the manager can adjust the control parameter for the metric of interest such that the metric improves, even though the system gets slower. For power optimization, that means switching to a low-power mode. If execution times grow, the manager can set parameters so that the system gets faster. For QoS management, for example, the application can be switched to a different quality mode with some degradation in perceived quality. The real-time constraints are then never violated and the metrics of interest are kept as good as possible. Unfortunately, keeping system metrics such as power and quality at their optimal level conflicts with our main requirement, i.e., providing performance guarantees, because guarantees can only be given by sacrificing some quality or accepting a higher power consumption. Therefore, the performance analysis approach developed in this thesis is not only conservative, but also accurate, so that the optimization of the metric of interest does not suffer too much from conservativity.
This is not trivial to realize when two factors are combined: parallel execution on multiple processors and dynamic variation of the data-dependent execution delays. We achieve the goal of conservative and accurate performance estimation for an important class of multiprocessor platforms and multimedia applications. Our performance analysis technique is realizable in practice in QoS or power management setups. We consider a generic MP-SoC platform that runs a dynamic set of applications, each application possibly using multiple processors. We assume that the applications are independent, although it is possible to relax this requirement in the future. To support real-time constraints, we require that the platform can provide guaranteed computation, communication and memory budgets for applications. Following important trends in system-on-chip communication, we support both global buses and networks-on-chip. We represent every application as a homogeneous synchronous dataflow (HSDF) graph, where the application tasks are modeled as graph nodes, called actors. We allow dynamic data-dependent actor execution delays, which makes HSDF graphs very useful to express modern streaming applications. Our reason to consider HSDF graphs is that they provide a good basic foundation for analytical performance estimation. In this setup, this thesis provides three major contributions: 1. Given an application mapped to an MP-SoC platform, given the performance guarantees for the individual computation units (the processors) and the communication unit (the network-on-chip), and given constant actor execution delays, we derive the throughput and the execution time of the system as a whole. 2. Given a mapped application and platform performance guarantees as in the previous item, we extend our approach for constant actor execution delays to dynamic data-dependent actor delays. 3. We propose a global implementation trajectory that starts from the application specification and goes through design-time and run-time phases. It uses an extension of the HSDF model of computation to reflect the design decisions made along the trajectory. We present our model and trajectory not only to put the first two contributions into the right context, but also to present our vision on different parts of the trajectory, to make a complete and consistent story. Our first contribution uses the idea of so-called IPC (inter-processor communication) graphs known from the literature, whereby a single model of computation (i.e., the HSDF graph) is used to model not only the computation units, but also the communication unit (the global bus or the network-on-chip) and the FIFO (first-in-first-out) buffers that form a ‘glue’ between the computation and communication units. We were the first to propose HSDF graph structures for modeling bounded FIFO buffers and guaranteed-throughput network connections for the network-on-chip communication in MP-SoCs. As a result, our HSDF models enable the formalization of the on-chip FIFO buffer capacity minimization problem under a throughput constraint as a graph-theoretic problem. Using HSDF graphs to formalize that problem helps to find the performance bottlenecks in a given solution to this problem and to improve this solution. To demonstrate this, we use the JPEG decoder application case study. Also, we show that, assuming constant actor delays (the worst case for the given JPEG image), we can predict execution times of JPEG decoding on two processors with an accuracy of 21%.
Our second contribution is based on an extension of the scenario approach. This approach is based on the observation that the dynamic behavior of an application is typically composed of a limited number of sub-behaviors, i.e., scenarios, that have similar resource requirements, i.e., similar actor execution delays in the context of this thesis. The previous work on scenarios treats only single-processor applications or multiprocessor applications that do not exploit all the flexibility of the HSDF model of computation. We develop new scenario-based techniques in the context of HSDF graphs, to derive the timing overlap between different scenarios, which is very important to achieve good accuracy for general HSDF graphs executing on multiprocessors. We exploit this idea in an application case study, the MPEG-4 arbitrarily-shaped video decoder, and demonstrate execution-time prediction with an average accuracy of 11%. To the best of our knowledge, for the given setup, no other existing performance analysis technique can provide comparable accuracy and, at the same time, performance guarantees.
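    For the first contribution above, the throughput bound of an HSDF graph with constant actor delays can be illustrated with a maximum-cycle-ratio computation: the largest cycle delay per initial token bounds the iteration period, and throughput is its reciprocal. The tiny producer/consumer graph with a two-place FIFO below is an invented example, not a model from the thesis, and the brute-force cycle enumeration is only suitable for very small graphs.

```python
from fractions import Fraction

# Constant actor delays and edges (src, dst, initial_tokens) of a tiny HSDF graph:
# a producer and a consumer connected through a two-place FIFO, modelled by a
# forward data edge and a backward edge carrying two tokens (the free places).
DELAY = {"prod": 3, "cons": 2}
EDGES = [("prod", "cons", 0),
         ("cons", "prod", 2)]

def max_cycle_ratio(delay, edges):
    """Brute-force maximum of (cycle delay / cycle tokens) over all simple cycles."""
    adj = {}
    for u, v, tok in edges:
        adj.setdefault(u, []).append((v, tok))
    best = None

    def dfs(start, node, visited, d_sum, t_sum):
        nonlocal best
        for nxt, tok in adj.get(node, []):
            if nxt == start:
                tokens = t_sum + tok
                if tokens:                      # a cycle without tokens deadlocks
                    ratio = Fraction(d_sum + delay[start], tokens)
                    if best is None or ratio > best:
                        best = ratio
            elif nxt not in visited:
                dfs(start, nxt, visited | {nxt}, d_sum + delay[nxt], t_sum + tok)

    for n in delay:
        dfs(n, n, {n}, 0, 0)
    return best

mcr = max_cycle_ratio(DELAY, EDGES)
print("iteration period bound:", mcr, "-> throughput bound:", 1 / mcr)
```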

    The impact of macroeconomic leading indicators on inventory management

    Forecasting tactical sales is important for long-term decisions such as procurement and for informing lower-level inventory management decisions. Macroeconomic indicators have been shown to improve forecast accuracy at the tactical level, as these indicators can provide early warnings of changing markets, while at the same time tactical sales are sufficiently aggregated to facilitate the identification of useful leading indicators. Past research has shown that we can achieve significant gains by incorporating such information. However, at the lower levels at which inventory decisions are taken, this is often not feasible due to the level of noise in the data. To take advantage of macroeconomic leading indicators at this level we need to translate the tactical forecasts into operational-level ones. In this research we investigate how to best assimilate top-level forecasts that incorporate such exogenous information with bottom-level (Stock Keeping Unit level) extrapolative forecasts. The aim is to demonstrate whether incorporating these variables has a positive impact on bottom-level planning and, eventually, inventory levels. We construct appropriate hierarchies of sales and use that structure to reconcile the forecasts, and in turn the different available information, across levels. We are interested in both the point forecasts and the prediction intervals, as the latter inform safety stock decisions. The contribution of this research is therefore twofold: we investigate the usefulness of macroeconomic leading indicators for SKU-level forecasts, and we examine alternative ways to estimate the variance of hierarchically reconciled forecasts. We provide evidence using a real case study.
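    One simple way to combine an indicator-informed top-level forecast with SKU-level extrapolations is OLS structural reconciliation (Hyndman et al.): project the generally incoherent stack of base forecasts onto the set of forecasts that add up across the hierarchy. This sketch is purely illustrative and is not the method evaluated in the study; the two-SKU hierarchy and the numbers are invented.

```python
import numpy as np

# Hierarchy: total = SKU_A + SKU_B. The summing matrix S maps the bottom-level
# series to every series in the hierarchy.
S = np.array([[1, 1],    # total
              [1, 0],    # SKU_A
              [0, 1]])   # SKU_B

# Base forecasts: the top level comes from a model using macroeconomic leading
# indicators, the bottom level from simple extrapolation; they need not add up.
y_hat = np.array([120.0,   # total (indicator-informed)
                   55.0,   # SKU_A (extrapolative)
                   50.0])  # SKU_B (extrapolative)

# OLS reconciliation: y_tilde = S (S'S)^{-1} S' y_hat gives coherent forecasts.
G = np.linalg.inv(S.T @ S) @ S.T
y_tilde = S @ G @ y_hat
print(y_tilde)   # reconciled [total, SKU_A, SKU_B]; SKU forecasts now sum to the total
```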

    Integrated Online Media Management Systems For Media Centers: A Model For Selection And Effective Use

    The researcher proposed to advise in the selection of an Integrated Online Library System (IOLS) for use in the 103 school media centers in the Palm Beach County Schools, Florida. This was accomplished by evaluating the two finalists among the vendors who answered the district's Request for Proposal (RFP). Of the five vendors who responded to the RFP, CLSI and SIRSI were selected as the systems most likely to meet the needs of the media centers of the school district. An overview and definition of IOLS was first discussed. This overview was then related to the needs of the school district as presented in the RFP. Selection criteria were then designed from previous research on the subject to help find the ideal system. The history and development of the Integrated Online Library System was important in seeing where the systems originated, in contrast to the systems from the Eighties to the present time. The literature also revealed IOLS principles of operation. The Request for Proposal reflected the needs assessment discussed over several years of committee meetings of representatives from various schools. The committees explored IOLS automated options and compared these options. The RFP outlined the system requirements. Thoughts on staff attitudes while planning for a system were also considered. Each system was evaluated against the criteria outlined in the RFP. The background and capabilities of both systems were explored. This exploration took place through benchmark tests, on-site demonstrations where the systems were in use daily, conferences with the vendors, and reading literature reviews on both systems. Evaluation guidelines and criteria were found in library resources. The functions required, terminal access requirements, the process for data conversion, vendor background and reliability, and the cost were covered in these library resources. The results of this study culminated in the official recommendation that the SIRSI system be purchased by the district's school board. It was the expectation of the author of this document to see the purchase of the recommended system by the school board and have it implemented in all the schools in the district within a three-year period following the submission of the recommendation.

    Management for Bachelors

    The textbook contains an educational module that covers the content of the main compulsory disciplines for specialist training in the field 6.030601 “Management” of the knowledge branch 03.06 “Management and administration” at the “Bachelor” educational and qualification level. In content, the disciplines fully conform to the curricula approved by the scientific and methodological commission on management and agree with the logical and structural scheme of the educational process. The textbook covers almost all aspects of bachelor training. The chapters contain questions for self-assessment and lists of recommended literature. In writing the chapters, the results of fundamental and applied research on the evaluation, forecasting, and management of the economic potential of complex industrial systems were used.

    The travel broker : concepts for a new business proposal

    Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 1983. Microfiche copy available in Archives and Rotch. Bibliography: leaves 57-60. By Kenneth Lin. M.C.P.