9 research outputs found

    Reconfigurable Video Coding on multicore: an overview of its main objectives

    The current monolithic and lengthy scheme behind the standardization and design of new video coding standards is becoming inadequate for the dynamic and changing needs of the video coding community. This scheme and its specification formalism do not expose the clear commonalities between different codecs, either at the level of the specification or at the level of the implementation. This problem is one of the main reasons for the typically long interval between the time a new idea is validated and the time it is implemented in consumer products as part of a worldwide standard. The analysis of this problem gave rise to a new standard initiative within the International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC) Moving Picture Experts Group (MPEG) committee, namely Reconfigurable Video Coding (RVC). The main idea is to develop a video coding standard that overcomes many shortcomings of the current standardization and specification process by updating and progressively extending a modular library of components. As the name implies, flexibility and reconfigurability are new attractive features of the RVC standard. Besides allowing the definition of new codec algorithms, these features, together with the dataflow-based specification formalism, open the way to video coding standards that expressly target implementations on platforms with multiple cores. This article provides an overview of the main objectives of the new RVC standard, with an emphasis on the features that enable efficient implementation on multicore platforms. A brief introduction to the methodologies that map RVC codec specifications efficiently onto multicore platforms is accompanied by an example of the breakthroughs expected in the design and deployment of multimedia services on multicore platforms.
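
    As a concrete illustration of the dataflow formalism mentioned above, the sketch below models a toy decoder as a network of actors that exchange tokens only through FIFO queues, so each actor can be scheduled on its own core. It is a minimal Python analogy of the RVC approach, not the standard itself: RVC specifies functional units in the CAL actor language, and all actor and function names here are illustrative.

        from queue import Queue
        from threading import Thread

        def actor(process, inbox, outbox):
            """Run one functional unit: consume tokens, apply its rule, emit tokens."""
            for token in iter(inbox.get, None):   # None marks end of stream
                outbox.put(process(token))
            outbox.put(None)

        # Hypothetical functional units of a toy two-stage "decoder" pipeline.
        parse = lambda b: b ^ 0xFF                # stand-in for entropy decoding
        inverse = lambda c: c >> 1                # stand-in for inverse transform

        q_in, q_mid, q_out = Queue(), Queue(), Queue()

        # The actors share no state, so each stage can run on its own core.
        Thread(target=actor, args=(parse, q_in, q_mid)).start()
        Thread(target=actor, args=(inverse, q_mid, q_out)).start()

        for b in (0x12, 0x34, 0x56):
            q_in.put(b)
        q_in.put(None)
        print(list(iter(q_out.get, None)))        # -> [118, 101, 84]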

    Multi-standard reconfigurable motion estimation processor for hybrid video codecs


    Algorithm/Architecture Co-Exploration of Visual Computing: Overview and Future Perspectives

    Concurrently exploring both algorithmic and architectural optimizations is a new design paradigm. This survey paper addresses the latest research and future perspectives on the simultaneous development of video coding, processing, and computing algorithms with emerging platforms that have multiple cores and reconfigurable architectures. As the algorithms in forthcoming visual systems become increasingly complex, many applications must offer different profiles with different levels of performance. Hence, with the expectation that the visual experience will keep improving, it is critical that advanced platforms provide higher performance, better flexibility, and lower power consumption. To achieve these goals, algorithm and architecture co-design is essential for characterizing algorithmic complexity and using it to optimize the targeted architecture. This paper shows that the seamless weaving of the development of previously autonomous visual computing algorithms with that of multicore and reconfigurable architectures will inevitably become the leading trend in the future of video technology.
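
    As a minimal illustration of the co-exploration idea, the sketch below characterizes an algorithm's intrinsic complexity (operation count and memory traffic) and checks it against a candidate platform's budget. All figures are illustrative assumptions, not data from the paper.

        def filter_complexity(width, height, taps):
            """Per-frame cost of a separable 2-D filter."""
            macs = width * height * taps * 2          # horizontal + vertical passes
            reads = width * height * (taps + 1) * 2   # pixel + coefficient fetches
            return macs, reads

        FPS = 30
        macs, reads = filter_complexity(1920, 1080, taps=8)
        print(f"required: {macs * FPS / 1e9:.1f} GMAC/s, "
              f"{reads * 4 * FPS / 1e9:.1f} GB/s")

        # A hypothetical multicore target: 4 cores at 2 GMAC/s each, 10 GB/s memory.
        fits = macs * FPS / 1e9 <= 4 * 2.0 and reads * 4 * FPS / 1e9 <= 10.0
        print("fits platform budget:", fits)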

    A Cost Shared Quantization Algorithm and its Implementation for Multi-Standard Video CODECS

    The current trend of digital convergence creates the need for a video encoder and decoder system (codec, for short) that supports multiple video standards on a single platform. In a modern video codec, quantization is a key unit used for video compression. In this thesis, a generalized quantization algorithm and its hardware implementation are presented to compute quantized coefficients for six different video codecs, including the emerging High Efficiency Video Coding (HEVC) standard. HEVC, the successor to H.264/MPEG-4 AVC, aims to substantially improve coding efficiency compared to the AVC High Profile. The thesis presents a high-performance, circuit-shared architecture that can perform the quantization operation for HEVC, H.264/AVC, AVS, VC-1, MPEG-2/4, and Motion JPEG (MJPEG). Since HEVC was still at the drafting stage, the architecture was designed so that any final changes could be accommodated. The proposed quantizer architecture is completely division-free, as the division operation is replaced by multiplication, shift, and addition operations. The design was implemented on an FPGA and later synthesized in 0.18 µm CMOS technology. The results show that the proposed design satisfies the requirements of all the codecs, with a maximum decoding capability of 60 fps for 1080p HD video at 187.3 MHz on a Xilinx Virtex-4 LX60 FPGA. The scheme is also suitable for low-cost implementation in modern multi-codec systems.
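
    The division-free idea described above can be sketched as follows: the nominal division by the quantization step is replaced by a multiplication with a precomputed scaled reciprocal, an offset addition, and a right shift. The constants in this sketch are illustrative, not the thesis's (or any standard's) exact tables.

        SHIFT = 16

        def make_multiplier(qstep):
            """Precompute the scaled reciprocal 2^SHIFT / qstep once per step size."""
            return (1 << SHIFT) // qstep

        def quantize(coef, mult, offset):
            """level = sign(coef) * ((|coef| * mult + offset) >> SHIFT); no division."""
            level = (abs(coef) * mult + offset) >> SHIFT
            return -level if coef < 0 else level

        mult = make_multiplier(qstep=20)
        rounding = 1 << (SHIFT - 1)               # round-to-nearest offset
        for c in (-187, -10, 9, 205):
            print(c, "->", quantize(c, mult, rounding))   # e.g. 205 -> 10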

    Prediction of Quality of Experience for Video Streaming Using Raw QoS Parameters

    Along with the rapid growth in consumer adoption of modern portable devices, video streaming is expected to dominate a large share of global Internet traffic in the near future. Today, user experience is becoming a reliable indicator for video service providers and telecommunication operators of overall end-to-end system functioning. Hence there is a profound need for efficient Quality of Experience (QoE) monitoring and prediction. QoE is a subjective metric that deals with user perception and can vary with user expectation and context. However, available QoE measurement techniques that adopt a full-reference method are impractical for real-time transmission, since they require the original video sequence to be available at the receiver's end. QoE prediction, in turn, requires a firm understanding of those Quality of Service (QoS) factors that are most influential on QoE. The main aim of this thesis is the development of novel, efficient models for non-intrusive video quality prediction, and the demonstration of their application in QoE-enabled optimisation schemes for video delivery. In this thesis, the correlation between QoS and QoE is utilised to objectively estimate QoE. For this, both objective and subjective methods were used to create datasets that represent the correlation between QoS parameters and measured QoE. Firstly, the impact of selected QoS parameters, from both the encoding and network levels, on video QoE is investigated, and the obtained QoS/QoE correlation is backed by thorough statistical analysis. Secondly, two novel hybrid no-reference models for predicting video quality are developed, using fuzzy logic inference systems (FIS) as a learning-based technique. Finally, two applications of the developed FIS prediction model are demonstrated to show how QoE can be used to optimise video delivery.
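
    As a rough illustration of the prediction approach, the sketch below maps two raw QoS parameters (packet loss and bitrate) to a MOS-like QoE score with a tiny Mamdani-style fuzzy inference system. The membership functions, rule base, and parameter ranges are illustrative assumptions, not the models developed in the thesis.

        def tri(x, a, b, c):
            """Triangular membership function on [a, c] with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def predict_mos(loss_pct, bitrate_mbps):
            low_loss  = tri(loss_pct, -1, 0, 2)
            high_loss = tri(loss_pct, 1, 5, 100)
            low_br    = tri(bitrate_mbps, -1, 0.5, 2)
            high_br   = tri(bitrate_mbps, 1, 4, 50)
            # Rule base (min for AND); consequents are illustrative MOS centroids.
            rules = [
                (min(low_loss,  high_br), 4.5),   # clean network, high bitrate
                (min(low_loss,  low_br),  3.0),   # clean network, low bitrate
                (min(high_loss, high_br), 2.5),   # lossy network, high bitrate
                (min(high_loss, low_br),  1.5),   # lossy network, low bitrate
            ]
            num = sum(w * mos for w, mos in rules)
            den = sum(w for w, _ in rules)
            return num / den if den else 3.0      # weighted-average defuzzification

        print(predict_mos(loss_pct=0.5, bitrate_mbps=6))   # ~4.5: good experience
        print(predict_mos(loss_pct=8.0, bitrate_mbps=0.8)) # ~1.5: poor experience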

    Portable Waveform Development for Software Defined Radios

    This work focuses on the question: "How can we build waveforms that can be moved from one platform to another?" To answer it, an approach based on the Model Driven Architecture (MDA) was evaluated. Furthermore, a proof of concept is given by porting a TETRA waveform from a USRP platform to an SFF SDR platform.
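
    A minimal sketch of the MDA idea, assuming hypothetical class and method names rather than the actual TETRA/USRP/SFF SDR APIs: the waveform logic is written once against a platform-independent model, and porting amounts to swapping a thin platform-specific adapter.

        from abc import ABC, abstractmethod

        class RadioPlatform(ABC):
            """Platform-independent model of the services a waveform needs."""
            @abstractmethod
            def tune(self, hz: float): ...
            @abstractmethod
            def transmit(self, samples: list): ...

        class UsrpAdapter(RadioPlatform):
            def tune(self, hz): print(f"USRP: tuned to {hz / 1e6:.3f} MHz")
            def transmit(self, samples): print(f"USRP: sent {len(samples)} samples")

        class SffSdrAdapter(RadioPlatform):
            def tune(self, hz): print(f"SFF SDR: tuned to {hz / 1e6:.3f} MHz")
            def transmit(self, samples): print(f"SFF SDR: sent {len(samples)} samples")

        def tetra_burst(platform: RadioPlatform):
            """Waveform logic written once; ported by swapping the adapter."""
            platform.tune(395e6)              # illustrative carrier frequency
            platform.transmit([0j] * 510)     # placeholder burst payload

        tetra_burst(UsrpAdapter())
        tetra_burst(SffSdrAdapter())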

    A Heterogeneous System Architecture for Low-Power Wireless Sensor Nodes in Compute-Intensive Distributed Applications

    Wireless Sensor Networks (WSNs) combine embedded sensing and processing capabilities with a wireless communication infrastructure, thus supporting distributed monitoring applications. WSNs have been investigated for more than three decades, and recent social and industrial developments such as home automation or the Internet of Things have increased the commercial relevance of this key technology. The communication bandwidth of the sensor nodes is limited by the transport medium and the restricted energy budget of the nodes. To still keep up with ever-increasing sensor counts and sampling rates, the basic data acquisition and collection capabilities of WSNs have been extended with decentralized smart feature extraction and data aggregation algorithms. Energy-efficient processing elements are thus required to meet the ever-growing compute demands of WSN motes within the available energy budget. The Hardware-Accelerated Low Power Mote (HaLoMote) is proposed and evaluated in this thesis to address the requirements of compute-intensive WSN applications. It is a heterogeneous system architecture that combines a Field Programmable Gate Array (FPGA) for hardware-accelerated data aggregation with an IEEE 802.15.4-based Radio Frequency System-on-Chip for network management and top-level application control. To properly support Dynamic Power Management (DPM) on the HaLoMote, a Microsemi IGLOO FPGA with non-volatile configuration storage was chosen for a prototype implementation, called the Hardware-Accelerated Low Energy Wireless Embedded Sensor Node (HaLOEWEn). As for every multi-processor architecture, inter-processor communication and coordination strongly influence the efficiency of the HaLoMote. Therefore, a generic communication framework is proposed in this thesis. It is tightly coupled with the DPM strategy of the HaLoMote, which supports fast transitions between active and idle modes. Low-power sleep periods can thus be scheduled within every sampling cycle, even at sampling rates of hundreds of hertz. In addition to the development of the heterogeneous system architecture, this thesis focuses on the energy consumption trade-off between wireless data transmission and in-sensor data aggregation. The HaLOEWEn is compared with typical software processors in terms of runtime and energy efficiency in the context of three monitoring applications. The building blocks of these applications comprise hardware-accelerated digital signal processing primitives, lossless data compression, a precise wireless time synchronization protocol, and a transceiver schedule for contention-free information flooding from multiple sources to all network nodes. Most of these concepts are applicable to similar distributed monitoring applications with in-sensor data aggregation. A Structural Health Monitoring (SHM) application is used for the system-level evaluation of the HaLoMote concept. The Random Decrement Technique (RDT) is a particular SHM data aggregation algorithm, which determines the free-decay response of the monitored structure for subsequent modal identification. The hardware-accelerated RDT executed on a HaLOEWEn mote requires only 43 % of the energy that a recent ARM Cortex-M based microcontroller consumes for this algorithm. The functionality of the overall WSN-based SHM system is shown with a laboratory-scale demonstrator. Compared to reference data acquired by a wire-bound laboratory measurement system, the HaLOEWEn network captures the structural information relevant for the SHM application with less than 1 % deviation.
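
    As an illustration of the kind of in-sensor aggregation evaluated here, the sketch below implements a plain software version of the Random Decrement Technique: fixed-length segments starting at upward crossings of a trigger level are averaged to approximate the free-decay response. The parameters and synthetic test signal are illustrative; the thesis realizes the RDT as an FPGA accelerator.

        import math, random

        def rdt_signature(x, trigger, seg_len):
            """Average all segments of x that start at an upward trigger crossing."""
            sig, count = [0.0] * seg_len, 0
            for i in range(1, len(x) - seg_len):
                if x[i - 1] < trigger <= x[i]:       # upward level crossing
                    for k in range(seg_len):
                        sig[k] += x[i + k]
                    count += 1
            return [s / count for s in sig] if count else sig

        # Synthetic response: a damped 5 Hz mode excited by broadband noise.
        fs = 200.0
        wn, zeta = 2 * math.pi * 5, 0.02
        x, v, data = 0.0, 0.0, []
        for _ in range(20000):
            a = -2 * zeta * wn * v - wn * wn * x + random.gauss(0, 50)
            v += a / fs
            x += v / fs
            data.append(x)

        sig = rdt_signature(data, trigger=0.5 * max(data), seg_len=100)
        print([round(s, 3) for s in sig[:10]])   # start of the decay estimate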

    Signal Processing for Digital Communications over Mobile Radio Channels (Traitement du signal pour les communications numériques au travers de canaux radio-mobiles)

    This manuscript for the "Habilitation à Diriger des Recherches" (accreditation to supervise research) gives me the opportunity to take stock of the last 14 years of my activities as an associate professor and of my research work in the field of signal processing for digital communications, particularly radio-mobile communications. The purpose of this signal processing is, broadly, to obtain a robust transmission despite the passage of the digital information through a communication channel disturbed by mobility between the transmitter and the receiver (Doppler effect), echoes (multi-path propagation), additive noise or interference, or limitations in bandwidth, transmitted power, or signal-to-noise ratio. In order to properly recover the digital information, the receiver generally needs accurate knowledge of the channel state. Much of my work has focused on receiver synchronization or, more generally, on the dynamic estimation of the channel parameters (delays, phases, amplitudes, Doppler shifts, ...). We have developed estimators, studied their asymptotic variance, and compared them to lower bounds (Cramér-Rao or Bayesian Cramér-Rao bounds). Other studies have focused on the recovery of the information itself (the "detection" or "equalization" task) once the channel has been estimated, or have proposed and analyzed transmission/reception schemes that are reliable in certain scenarios (a transmit-diversity scheme for flat-fading channels, a high-energy-efficiency scheme, ...).
    The synthesis of the work begins with a general introduction that describes communication channels and their potential impairments, and positions each of my contributions in those terms. A first part addresses reception techniques for the spread-spectrum signals of code-division multiple access (CDMA) systems. These wideband systems offer high temporal resolution and extra degrees of freedom, which we exploited to study equalization and synchronization (of delay and phase) in the presence of multiple paths and multiple users. This first part also gathers the other transmission/reception schemes proposed for their robustness in various scenarios (the diversity scheme for flat-fading channels, the high-energy-efficiency scheme, ...). The second part is devoted to dynamic Bayesian estimation of the channel parameters. Here, some of the parameters to be estimated are assumed to exhibit random temporal variations following a given a priori law. We first propose estimators and lower estimation bounds for transmission models that are relatively complex, owing to the temporal distortion caused by high mobility in multicarrier (OFDM) modulation, to the presence of several parameters to be estimated jointly, or to nonlinearities in the models. We then focus on estimating the complex amplitudes of the paths of a slowly evolving channel (with one or several hops). We propose recursive estimators (named CATL, for "Complex Amplitude Tracking Loop") with an imposed structure inspired by digital phase-locked loops, whose asymptotic performance is close to the lower bounds. Approximate analytical formulas for the asymptotic performance and the tuning of these estimators are established as simple functions of the physical parameters (Doppler spectrum, delays, noise level). Finally, given the links established between these CATL estimators and certain Kalman filters (built on integrated-random-walk state models), approximate formulas for the asymptotic performance and tuning of these Kalman filters are also derived.
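
    As a minimal illustration of the dynamic channel estimation theme, the sketch below tracks one path's complex amplitude with a scalar Kalman filter under a plain random-walk state model, from noisy pilot observations. The noise levels and Doppler rate are illustrative assumptions, and the manuscript's CATL estimators use an imposed PLL-like structure and integrated-random-walk models rather than this basic filter.

        import cmath, math, random

        sigma_w2, sigma_v2 = 1e-4, 0.05   # state- and observation-noise variances
        a_hat, P = 0 + 0j, 1.0            # initial estimate and error variance

        f_doppler, fs = 5.0, 1e4          # slow fading: 5 Hz Doppler, 10 kHz pilots
        for n in range(2000):
            # True amplitude: a unit-modulus path rotating at the Doppler frequency.
            a_true = cmath.exp(2j * math.pi * f_doppler * n / fs)
            y = a_true + complex(random.gauss(0, math.sqrt(sigma_v2 / 2)),
                                 random.gauss(0, math.sqrt(sigma_v2 / 2)))
            # Kalman recursion for the random-walk model a[n] = a[n-1] + w[n].
            P += sigma_w2                 # predict
            K = P / (P + sigma_v2)        # gain
            a_hat += K * (y - a_hat)      # correct
            P *= (1 - K)

        print(f"steady-state gain {K:.3f}, last error {abs(a_true - a_hat):.3f}")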