
    Automatic synthesis of TTA processor networks from RVC-CAL dataflow programs

    The RVC-CAL dataflow language has recently become standardized through its use as the official language of Reconfigurable Video Coding (RVC), a recent MPEG standard. The tools developed for RVC-CAL enable the transformation of RVC-CAL dataflow programs into C and VHDL (among others), enabling implementations on instruction processors and through HDL synthesis. This paper introduces new tools that enable the automatic creation of heterogeneous multiprocessor networks from RVC-CAL dataflow programs. Each processor in the network performs the functionality of one RVC-CAL actor. The processors are of the Transport Triggered Architecture (TTA) type, for which a complete co-design toolset exists. The existing tools enable customizing the processors according to the requirements of individual dataflow actors. The functionality of the tool chain has been demonstrated by synthesizing an MPEG-4 Simple Profile video decoder to an FPGA; this particular decoder is automatically realized as 21 tiny, heterogeneous processors.

    Scheduling of CAL actor networks based on dynamic code analysis

    CAL is a dataflow-oriented language for writing high-level specifications of signal processing applications. The language has recently been standardized and selected for the new MPEG Reconfigurable Video Coding standard. Application specifications written in CAL can be transformed into executable implementations through development tools. Unfortunately, the present tools provide no way to schedule the CAL entities efficiently at run-time. This paper proposes an automated approach to analyze specifications written in CAL and produce run-time schedules that perform on average 1.45× faster than implementations relying on default scheduling. The approach is based on quasi-static scheduling, which reduces conditional execution in the run-time system.
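    The contrast between dynamic and quasi-static scheduling can be illustrated with a toy sketch (the actors, token rates, and function names below are hypothetical, not taken from the paper): a fully dynamic scheduler tests each actor's firing rule on every iteration, whereas a quasi-static schedule exploits statically known token rates to fire a precomputed sequence with no run-time checks.

    ```python
    from collections import deque

    fifo = deque()

    def produce():           # source actor: pushes one token per firing
        fifo.append(1)

    def consume_pair():      # sink actor: pops two tokens per firing
        fifo.popleft(); fifo.popleft()

    # Dynamic scheduling: test the sink's firing rule on every iteration.
    def dynamic_schedule(steps):
        for _ in range(steps):
            produce()
            if len(fifo) >= 2:       # run-time check on every step
                consume_pair()

    # Quasi-static scheduling: the rates (1 token produced, 2 consumed) are
    # known at compile time, so the balanced sequence
    # [produce, produce, consume_pair] repeats with no conditional checks.
    def quasi_static_schedule(repetitions):
        for _ in range(repetitions):
            produce(); produce(); consume_pair()

    quasi_static_schedule(4)
    print(len(fifo))   # a balanced schedule leaves the FIFO empty
    ```

    Removing the per-firing condition checks is precisely the conditional-execution reduction the abstract attributes to quasi-static scheduling.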

    Few-shot Class-incremental Learning: A Survey

    Few-shot Class-Incremental Learning (FSCIL) presents a unique challenge in machine learning, as it necessitates the continuous learning of new classes from sparse labeled training samples without forgetting previous knowledge. While this field has seen recent progress, it remains an active area of exploration. This paper aims to provide a comprehensive and systematic review of FSCIL. In our in-depth examination, we delve into various facets of FSCIL, encompassing the problem definition, the discussion of the primary challenges of unreliable empirical risk minimization and the stability-plasticity dilemma, general schemes, and the related problems of incremental learning and few-shot learning. In addition, we offer an overview of benchmark datasets and evaluation metrics. Furthermore, we categorize classification methods in FSCIL into data-based, structure-based, and optimization-based approaches, and object detection methods in FSCIL into anchor-free and anchor-based approaches. Beyond these, we illuminate several promising research directions within FSCIL that merit further investigation.

    A Novel Application of Polynomial Solvers in mmWave Analog Radio Beamforming

    Beamforming is a signal processing technique in which an array of antenna elements can be steered to transmit and receive radio signals in a specific direction. The usage of millimeter wave (mmWave) frequencies and multiple input multiple output (MIMO) beamforming are considered key innovations of 5th Generation (5G) and beyond communication systems. The technique initially performs a beam alignment procedure, followed by data transfer in the aligned directions between the transmitter and the receiver. Traditionally, beam alignment involves periodic and exhaustive beam sweeping at both the transmitter and the receiver, which is a slow process causing extra communication overhead with MIMO and massive MIMO radio units. In applications such as beam tracking, angular velocity estimation, and beam steering, the beam alignment procedure is optimized by estimating the beam directions using first-order polynomial approximations. Recent learning-based SOTA strategies for fast mmWave beam alignment also require exploration over exhaustive beam pairs during the training procedure, causing overhead to learning strategies for higher antenna configurations. In this work, we first optimize the beam alignment cost function (e.g., the data rate) to reduce the beam sweeping overhead by applying polynomial approximations of its partial derivatives, which can then be solved as a system of polynomial equations using well-known tools from algebraic geometry. At this point, a question arises: 'what is a good polynomial approximation?' In this work, we attempt to obtain a 'good polynomial approximation'. Preliminary experiments indicate that our estimated polynomial approximations attain a so-called sweet spot in terms of solver speed and accuracy when evaluated on test beamforming problems. Comment: Accepted for publication in SIGSAM's ACM Communications in Computer Algebra, as an extended abstract.
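    The core idea of replacing an exhaustive beam sweep with a polynomial model can be sketched in one dimension (this is an illustrative toy, not the paper's method: the synthetic rate curve, probe angles, and peak location are all assumptions): sample the cost coarsely, fit a low-degree polynomial, and locate the best beam as a root of the fitted polynomial's derivative.

    ```python
    import numpy as np

    # Coarse probe of a synthetic "data rate" vs. beam angle, peaking at 0.35 rad.
    angles = np.linspace(-np.pi / 3, np.pi / 3, 25)
    rate = np.sinc(2.0 * (angles - 0.35))

    # Fit a low-degree polynomial to the sampled rate and differentiate it.
    p = np.polynomial.Polynomial.fit(angles, rate, deg=6)
    dp = p.deriv()

    # Stationary points of the fitted rate = real roots of the derivative,
    # found algebraically instead of by sweeping every candidate beam.
    roots = dp.roots()
    real = roots[np.isreal(roots)].real
    real = real[(real > angles[0]) & (real < angles[-1])]

    # Pick the stationary point with the highest fitted rate.
    best = real[np.argmax(p(real))]
    print(f"estimated beam angle: {best:.3f} rad")
    ```

    The paper works with multivariate cost functions, where the derivative conditions form a system of polynomial equations handled by algebraic-geometry solvers; the 1-D root-finding above is only the simplest instance of that idea.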

    Mapping Transient Hyperventilation Induced Alterations with Estimates of the Multi-Scale Dynamics of BOLD Signal

    Temporal blood oxygen level dependent (BOLD) contrast signals in functional MRI during rest may be characterized by power spectral distribution (PSD) trends of the form 1/fα. Trends with 1/f characteristics comprise fractal properties with repeating oscillation patterns on multiple time scales. Estimates of the fractal properties enable the quantification of phenomena that may otherwise be difficult to measure, such as transient, non-linear changes. In this study it was hypothesized that the fractal metrics of 1/f BOLD signal trends can map changes related to dynamic, multi-scale alterations in cerebral blood flow (CBF) after a transient hyperventilation challenge. Twenty-three normal adults were imaged in a resting state before and after hyperventilation. Several variables characterizing the trends (the 1/f trend constant α, the fractal dimension Df, and the Hurst exponent H) were measured from BOLD signals. The results show that fractal metrics of the BOLD signal follow the fractional Gaussian noise model, even during the dynamic CBF change that follows hyperventilation. The most dominant effect on the fractal metrics was detected in grey matter, in line with previous hyperventilation vaso-reactivity studies. The α was also able to differentiate blood-vessel changes from grey matter changes. Df was most sensitive to grey matter. H correlated with default mode network areas before hyperventilation, but this pattern vanished after hyperventilation due to a global increase in H. In the future, resting-state fMRI combined with fractal metrics of the BOLD signal may be used for analyzing multi-scale alterations of cerebral blood flow.
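    A common way to estimate the spectral exponent α of a 1/fα signal, sketched below, is a straight-line fit to the periodogram on log-log axes; for fractional Gaussian noise the Hurst exponent then follows as H = (α + 1) / 2. This is a minimal illustration, not the authors' pipeline: white noise stands in for the BOLD signal (expected α ≈ 0, hence H ≈ 0.5), and the signal length and seed are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 4096
    x = rng.standard_normal(n)                # white-noise stand-in signal

    f = np.fft.rfftfreq(n, d=1.0)[1:]         # frequency axis, DC bin dropped
    psd = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2 / n   # periodogram

    # Linear fit of log-PSD against log-frequency; the negated slope is alpha.
    slope, _ = np.polyfit(np.log10(f), np.log10(psd), 1)
    alpha = -slope
    H = (alpha + 1) / 2                        # fGn relation between alpha and H
    print(f"alpha ~ {alpha:.2f}, H ~ {H:.2f}")
    ```

    The α-to-H mapping above holds for the fractional Gaussian noise regime (−1 < α < 1) that the study reports the BOLD signals follow; other signal classes require a different relation.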

    Observations on Power-Efficiency Trends in Mobile Communication Devices

    Computing solutions used in mobile communications equipment are similar to those in personal and mainframe computers. The key differences between the implementations at chip level are the low-leakage silicon technology and lower clock frequency used in mobile devices. The hardware and software architectures, including the operating system principles, are strikingly similar, although mobile computing systems tend to rely more on hardware accelerators. As the performance expectations of mobile devices increase towards the personal computer level and beyond, power efficiency is becoming a major bottleneck. So far, the improvements in the silicon processes of mobile phones have been exploited by software designers to increase functionality and to cut development time, while usage times and energy efficiency have been kept at levels that satisfy the customers. Here we explain some of the observed developments and consider means of improving energy efficiency. We show that both processor and software architectures have a large impact on power consumption. Properly targeted research is needed to find the means to explicitly optimize system designs for energy efficiency, rather than maximize the nominal throughput of the processor cores used.