Towards Computational Efficiency of Next Generation Multimedia Systems
To address the throughput demands of complex applications (such as multimedia), a next-generation system designer needs to co-design and co-optimize the hardware and software layers. Hardware/software knobs must be tuned in synergy to increase throughput efficiency. This thesis provides such algorithmic and architectural solutions while considering new technology challenges (power caps and memory aging). The goal is to maximize throughput efficiency under timing and hardware constraints.
End user video quality prediction and coding parameters selection at the encoder for robust HEVC video transmission
Along with the rapid increase in the availability of high-quality video formats such as HD (High Definition), UHD (Ultra HD) and HDR (High Dynamic Range), a huge demand for data rates during their transmission has become inevitable. Consequently, the role of video compression techniques has become crucially important in mitigating data rate requirements. Even though the latest video codec, HEVC (High Efficiency Video Coding), has succeeded in significantly reducing the data rate compared to its immediate predecessor H.264/AVC (Advanced Video Coding), HEVC-coded videos have in the meantime become even more vulnerable to network impairments. Therefore, it is equally important to assess consumers' perceived quality degradation prior to transmitting HEVC-coded videos over an error-prone network, and to include error-resilient features so as to minimize the adverse effects of those impairments. To this end, this paper proposes a probabilistic model which accurately predicts the overall distortion of the decoded video at the encoder, followed by an accurate QP-λ relationship which can be used in the RDO (Rate Distortion Optimization) process. During the derivation of the probabilistic model, the impacts of the motion vectors, the pixels in the reference frames, and the clipping operations are accounted for; consequently, the model is capable of keeping the prediction error as low as 3.11%, whereas the state-of-the-art methods cannot reach below 20.08% under identical conditions. Furthermore, the enhanced RDO process has resulted in a 21.41%-43.59% improvement in BD-rate compared to state-of-the-art error-resilient algorithms.
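To make the QP-λ relationship concrete, here is a minimal sketch of the baseline mapping used in HEVC reference-software-style RDO, where the Lagrange multiplier grows exponentially with the quantization parameter (λ ≈ 0.85 · 2^((QP−12)/3)) and mode decisions minimize the cost J = D + λR. This reproduces only the conventional baseline form, not the enhanced, distortion-aware mapping derived in the paper; the constant 0.85 and the function names are illustrative.

```python
def qp_to_lambda(qp: int) -> float:
    """Baseline HEVC-style QP-to-lambda mapping:
    lambda ~= 0.85 * 2^((QP - 12) / 3).
    (Sketch only; the paper derives an enhanced relationship.)"""
    return 0.85 * 2.0 ** ((qp - 12) / 3.0)

def rd_cost(distortion: float, rate_bits: float, qp: int) -> float:
    """Lagrangian rate-distortion cost J = D + lambda * R, which RDO
    minimizes when choosing among candidate coding modes."""
    return distortion + qp_to_lambda(qp) * rate_bits
```

An encoder would evaluate rd_cost for each candidate mode and pick the minimum; a better distortion predictor (as proposed in the paper) directly improves the D term fed into this cost.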
CROSS-STACK PREDICTIVE CONTROL FRAMEWORK FOR MULTICORE REAL-TIME APPLICATIONS
Many next-generation applications in entertainment, human-computer interaction, infrastructure, security, and medical systems are computationally intensive, always-on, and have soft real-time (SRT) requirements. While failure to meet deadlines is not catastrophic in SRT systems, missing deadlines can result in an unacceptable degradation in quality of service (QoS). To ensure acceptable QoS under dynamically changing operating conditions such as changes in workload, energy availability, and thermal constraints, systems are typically designed for worst-case conditions. Unfortunately, such over-designing of systems increases costs and overall power consumption.
In this dissertation we formulate real-time task execution as a Multiple-Input, Single-Output (MISO) optimal control problem involving tracking a desired system utilization set point with control inputs drawn from across the computing stack. We assume that an arbitrary number of SRT tasks may join and leave the system at arbitrary times. The tasks are scheduled on multiple cores by a dynamic-priority multiprocessor scheduling algorithm. We use a model predictive controller (MPC) to realize optimal control. MPCs are easy to tune and can handle multiple control variables as well as constraints on both the dependent and independent variables. We experimentally demonstrate the operation of our controller on a video encoder application and a computer vision application executing on a dual-socket quad-core Xeon processor with a total of 8 processing cores. We establish that the use of DVFS and application quality as control variables enables operation at a lower power operating point while meeting real-time constraints, as compared to non-cross-stack control approaches. We also evaluate the role of scheduling algorithms in the control of homogeneous and heterogeneous workloads. Additionally, we propose a novel adaptive control technique for time-varying workloads.
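The cross-stack idea of trading off DVFS frequency and application quality against a utilization set point can be sketched as a one-step receding-horizon controller. Everything here is hypothetical: the frequency levels, quality levels, and the linear plant model are made-up stand-ins, and a real MPC would optimize over a multi-step horizon with a model identified online rather than enumerating a toy input space.

```python
from itertools import product

# Hypothetical discrete knobs: DVFS frequency levels (GHz) and
# application quality levels (e.g. encoder preset indices).
FREQ_LEVELS = [1.0, 1.6, 2.4]
QUALITY_LEVELS = [0, 1, 2]  # 0 = lowest quality / cheapest

def predict_utilization(freq: float, quality: int, base_load: float) -> float:
    """Toy linear plant model: utilization falls with frequency and
    rises with quality. A real controller identifies this online."""
    return base_load * (1 + 0.3 * quality) / freq

def mpc_step(set_point: float, base_load: float, power_weight: float = 0.05):
    """One receding-horizon step: exhaustively evaluate the discrete
    input space and pick the (freq, quality) pair minimizing squared
    tracking error plus a power penalty (~ freq^3, a common DVFS model)."""
    best, best_cost = None, float("inf")
    for f, q in product(FREQ_LEVELS, QUALITY_LEVELS):
        err = predict_utilization(f, q, base_load) - set_point
        cost = err ** 2 + power_weight * f ** 3
        if cost < best_cost:
            best, best_cost = (f, q), cost
    return best
```

The power_weight term is what lets the controller settle at a lower-power operating point instead of always running at maximum frequency, mirroring the dissertation's observation about cross-stack control.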
Understanding and designing for control in camera operation
Cinematographers often use supportive tools to craft desired camera moves. Recent technological advances have added new tools to the palette, such as gimbals, drones, or robots. The combination of motor-driven actuation, computer vision, and machine learning in such systems has also rendered new interaction techniques possible. In particular, a content-based interaction style was introduced in addition to the established axis-based style. On the one hand, content-based co-creation between humans and automated systems made it easier to reach high-level goals. On the other hand, however, the increased use of automation also introduced negative side effects.
Creatives usually want to feel in control while executing the camera motion and, in the end, to feel like the authors of the recorded shots. While automation can assist experts or enable novices, it unfortunately also takes desired control away from operators. Thus, if we want to support cinematographers with new tools and interaction techniques, the following question arises: How should we design interfaces for camera motion control that, despite being increasingly automated, provide cinematographers with an experience of control?
Camera control has been studied for decades, especially in virtual environments. Applying content-based interaction to physical environments opens up new design opportunities but also faces less-researched, domain-specific challenges. To suit the needs of cinematographers, designs must be crafted with care: in particular, they must adapt to the constraints of recordings on location, which makes an interplay with established practices essential. Previous work has mainly focused on a technology-centered understanding of camera travel, which consequently influenced the design of camera control systems. In contrast, this thesis contributes to an understanding of the motives of cinematographers and how they operate on set, and provides a user-centered foundation informing cinematography-specific research and design.
The contribution of this thesis is threefold: First, we present ethnographic studies on expert users and their shooting practices on location. These studies highlight the challenges of introducing automation into a creative task (assistance vs. feeling in control). Second, we report on a domain-specific prototyping toolkit for in-situ deployment. The toolkit provides open-source software for low-cost replication, enabling the exploration of design alternatives. To better inform design decisions, we further introduce an evaluation framework for estimating the resulting quality and sense of control. By extending established methodologies with a recent neuroscientific technique, it provides data on explicit as well as implicit levels and is designed to be applicable to other domains of HCI. Third, we present evaluations of designs based on our toolkit and framework. We explored a dynamic interplay of manual control with various degrees of automation, and examined different content-based interaction styles. Here, occlusion caused by graphical elements was identified and addressed by exploring visual reduction strategies and mid-air gestures. Our studies demonstrate that high degrees of quality and sense of control are achievable with our tools, which also support creativity and established practices.