37 research outputs found
Instruction-Level Power Dissipation in the Intel XScale Embedded Microprocessor
We present an instruction-level power dissipation model of the Intel XScale® microprocessor. The XScale implements the ARM™ ISA, but uses an aggressive microarchitecture and a SIMD Wireless MMX™ co-processor to speed up execution of multimedia workloads in the embedded domain.
Instruction-level power modelling was first proposed by Tiwari et al. in 1994. Adaptations of this model have been found to be applicable to simple ARM processors. Research also shows that instructions can be clustered into groups with similar energy characteristics. We adapt these methodologies to the significantly more complex XScale processor. We characterize the processor in terms of the energy costs of opcode execution, operand values, pipeline stalls, and so on, through accurate measurements on hardware. This instruction-based (rather than microarchitectural) approach allows us to build a high-speed power-accurate simulator that runs at MIPS-range speeds while achieving accuracy better than 5%. The processor core accounts for only a portion of overall power consumption, and we move beyond the core to explore the issues involved in building a SystemC simulation framework that models power dissipation of complete systems quickly, flexibly and accurately.
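To make the instruction-level approach concrete, the sketch below estimates energy as the sum of per-opcode base costs, inter-instruction (circuit-state) overheads, and a stall penalty, in the spirit of the Tiwari-style model the abstract builds on. The opcode names, cost values, and function are illustrative assumptions, not the paper's measured XScale figures.

```python
# Minimal sketch of a Tiwari-style instruction-level energy estimator.
# All base costs, pair overheads, and the stall penalty are assumed
# placeholder values, not measured XScale characterization data.

BASE_COST_NJ = {          # per-instruction base energy in nanojoules (assumed)
    "add": 0.48, "mul": 0.65, "ldr": 0.92, "str": 0.88, "b": 0.41,
}
OVERHEAD_NJ = {           # circuit-state overhead for adjacent opcode pairs (assumed)
    ("ldr", "mul"): 0.11, ("add", "ldr"): 0.07,
}
STALL_COST_NJ = 0.30      # energy charged per pipeline stall cycle (assumed)

def estimate_energy(trace, stall_cycles=0):
    """Estimate energy (nJ) for a list of executed opcodes plus stall cycles."""
    energy = sum(BASE_COST_NJ[op] for op in trace)
    energy += sum(OVERHEAD_NJ.get(pair, 0.0) for pair in zip(trace, trace[1:]))
    energy += STALL_COST_NJ * stall_cycles
    return energy

print(estimate_energy(["add", "ldr", "mul", "str"], stall_cycles=2))
```

Because the model only needs instruction counts, adjacency pairs, and stall counts from a trace, an estimator of this form can run at simulation speeds far higher than a cycle-accurate microarchitectural power model.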
Performance characterization and optimization of mobile augmented reality on handheld platforms
Abstract — The introduction of low power general purpose processors (like the Intel® Atom™ processor) expands the capability of handheld and mobile internet devices (MIDs) to include compelling visual computing applications. One rapidly emerging visual computing usage model is known as mobile augmented reality (MAR). In the MAR usage model, the user is able to point the handheld camera at an object (like a wine bottle) or a set of objects (like an outdoor scene of buildings or monuments) and the device automatically recognizes and displays information regarding the object(s). Achieving this on the handheld requires significant compute processing, resulting in a response time on the order of several seconds. In this paper, we analyze a MAR workload and identify the primary hotspot functions that incur a large fraction of the overall response time. We also present a detailed architectural characterization of the hotspot functions in terms of CPI, MPI, etc. We then implement and analyze the benefits of several software optimizations: (a) vectorization, (b) multi-threading, (c) cache conflict avoidance and (d) miscellaneous code optimizations that reduce the number of computations. We show that a 3X performance improvement in execution time can be achieved by implementing these optimizations. Overall, we believe our analysis provides a detailed understanding of the processing for a new domain of visual computing workloads (i.e. MAR) running on low power handheld compute platforms.
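As a small illustration of the vectorization optimization listed above, the sketch below contrasts a scalar loop with a batched NumPy evaluation for a hypothetical MAR-style hotspot (nearest-neighbour matching of feature descriptors). The function names, descriptor dimensions, and data are assumptions for illustration, not the paper's actual workload code.

```python
# Illustrative vectorization of a hypothetical descriptor-matching hotspot:
# a scalar per-element loop versus one batched NumPy distance computation.
import numpy as np

def match_scalar(query, database):
    """Naive loop: squared-distance nearest neighbour for one descriptor."""
    best_idx, best_dist = -1, float("inf")
    for i, ref in enumerate(database):
        dist = sum((q - r) ** 2 for q, r in zip(query, ref))
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx

def match_vectorized(query, database):
    """Same computation expressed as a single vectorized distance evaluation."""
    dists = np.sum((database - query) ** 2, axis=1)
    return int(np.argmin(dists))

db = np.random.rand(1000, 64).astype(np.float32)   # hypothetical descriptor set
q = np.random.rand(64).astype(np.float32)           # hypothetical query descriptor
assert match_scalar(q, db) == match_vectorized(q, db)
```

The vectorized form exposes the arithmetic to SIMD units and removes per-iteration interpreter overhead, which is the same kind of restructuring the paper applies to its hotspot functions.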
Signal Processing for Joint Source-Channel Coding of Digital Images
127 p. Thesis (Ph.D.), University of Illinois at Urbana-Champaign, 2000. This thesis addresses the problems of signal processing for image communication and restoration. Significant attention is devoted to developing novel stochastic models for images, investigating the information-theoretic performance bounds for them, and designing efficient learning and inference methods for the proposed models. Unlike the commonly accepted approach, in which the design of communication systems is performed by first compressing the data into a binary representation and then channel coding it to recover from transmission errors, this thesis advocates a joint source-channel coding solution to the problem. The joint approach potentially leads to significant performance gains in emerging multiuser communication scenarios like digital audio and video broadcast (DAB and DVB) and multicast over wireless and wire-line networks, multimedia communication in heterogeneous environments, and situations with uncertainty and fluctuations in the data source or channel parameters, as is typical in wireless mobile communication. On the other hand, the joint source-channel coding approach is more complex than the separation-based approach, and it calls for new efficient frameworks that expose most of the gains of the joint design at a reasonable complexity. Two such frameworks are proposed in this thesis. The first joint source-channel coding approach is based on optimal combining of analog and digital signal processing methods in situations when image data is communicated over time-varying channels. The second framework proposes a computationally efficient way of combining source and channel coding tasks using iterative methods from learning theory. Both frameworks are based on accurate stochastic modeling methods and show promising performance in experiments with real images. Novel stochastic modeling techniques are also applied in this thesis to the problem of image denoising, leading to state-of-the-art performance in the field.