32 research outputs found

    Repetitive Model Refactoring for Design Space Exploration of Intensive Signal Processing Applications

    The efficient design of computation-intensive multidimensional signal processing applications requires dealing with three kinds of constraints: those implied by the data dependencies, the non-functional requirements (real-time, power consumption), and the availability of resources on the execution platform. We propose here a strategy for using a refactoring tool dedicated to this kind of application to help explore the design space. This strategy is illustrated on an industrial radar application modeled using the Modeling and Analysis of Real-time and Embedded systems (MARTE) UML profile. It makes it possible to find good trade-offs in the usage of storage and computation resources and in the exploitation of parallelism (both task and data parallelism).

    Testing for Associations between Loci and Environmental Gradients Using Latent Factor Mixed Models

    Adaptation to local environments often occurs through natural selection acting on a large number of loci, each having a weak phenotypic effect. One way to detect these loci is to identify genetic polymorphisms that exhibit high correlation with environmental variables used as proxies for ecological pressures. Here, we propose new algorithms based on population genetics, ecological modeling, and statistical learning techniques to screen genomes for signatures of local adaptation. Implemented in the computer program "latent factor mixed model" (LFMM), these algorithms employ an approach in which population structure is introduced using unobserved variables. These fast and computationally efficient algorithms detect correlations between environmental and genetic variation while simultaneously inferring background levels of population structure. Comparing these new algorithms with related methods provides evidence that LFMM can efficiently estimate random effects due to population history and isolation-by-distance patterns when computing gene-environment correlations, and decrease the number of false-positive associations in genome scans. We then apply these models to plant and human genetic data, identifying several genes with functions related to development that exhibit strong correlations with climatic gradients.
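    The latent-variable idea in this abstract can be sketched in a few lines. The toy function below is a PCA-residual approximation, not the LFMM estimation algorithm itself; all names (`lfmm_scan`, `G`, `x`, `k`) are illustrative assumptions:

```python
import numpy as np

def lfmm_scan(G, x, k):
    """Toy genome scan in the spirit of a latent factor mixed model.

    G : (n_individuals, n_loci) genotype matrix
    x : (n_individuals,) environmental variable
    k : number of latent factors modeling population structure

    Returns per-locus effect sizes of x after removing k latent factors.
    This PCA-residual shortcut only illustrates the idea of controlling
    for unobserved structure; it is not the paper's inference procedure.
    """
    G = G - G.mean(axis=0)           # center each locus
    x = (x - x.mean()) / x.std()     # standardize the environment
    # Estimate latent structure from the genotypes themselves.
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    F = U[:, :k]                     # latent factor scores per individual
    # Project both genotypes and environment off the latent factors.
    P = np.eye(len(x)) - F @ F.T     # projector orthogonal to the factors
    G_res, x_res = P @ G, P @ x
    # Per-locus least-squares slope of residual genotype on environment.
    beta = G_res.T @ x_res / (x_res @ x_res)
    return beta
```

    Loci whose `beta` remains large after the factors are removed are the candidate gene-environment associations.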

    A complete industrial multi-target graphical tool-chain for parallel implementations of signal/image applications

    SCOPES 2011 is co-located with the Mapping Applications to MPSoCs 2011 Workshop. Today, applications developed in an industrial environment need to shift from a sequential implementation to one or several different parallel ones, under stringent productivity and performance constraints. As this operation is generally complex and moreover requires particular skills that are not part of the traditional software engineering background, there is a clear need for tools to assist it. A parallel programming tool-chain must be able to address various parallel target hardware platforms. This may be due to obsolescence of components, to different environmental constraints, or to different requirements of applications that make some candidate targets more efficient than others. So the tool-chain must be easily adaptable to heterogeneous compositions of FPGAs, general-purpose processors, DSPs, GPUs, etc., while hiding as much as possible the cumbersome details of these architectures from the user. However, with the current state of the art, except for very simple cases, it seems difficult to find sufficiently robust, completely automated tool-chains tackling the often tricky architecture-specific trade-offs made for mapping, especially when real-time behavior and performance are needed. Thus, a man-in-the-loop approach is preferred, where the user can experiment with different variants through fast prototyping, performance analysis, and rapid design space exploration. This implies providing the user with sufficiently abstracted views of both application and architecture, giving visibility on the information that is relevant to parallelization and mapping, as well as the commands enabling him/her to drive the main mapping decisions. We present here such models of both applications and architectures, and two complementary tools handling them and covering the entire design flow. The first tool, based on the source-to-source compiler PIPS, generates application models from application code; the second tool, SpearDE, supports the flow from models to code generation. We address the signal and image processing domains. […]

    Sex Chromosome Degeneration by Regulatory Evolution

    Highlights: (i) a new theory for Y degeneration, not based on selective interference; (ii) initiated by X and Y cis-regulator divergence after the arrest of recombination; (iii) works faster and in larger populations than current theory; (iv) works on small non-recombining regions of the Y. eTOC blurb: Y chromosomes are often degenerate. This remarkable feature of the genome of plants and animals has been intensely investigated. Current theory proposes that this degeneration occurs from "selective interference" after the arrest of recombination between the X and Y chromosomes, meaning that natural selection tends to be inefficient in the absence of recombination. This theory has been in place for more than 40 years. In this paper, we propose a […]

    Introducing Dynamic Adaptation in High Performance Real-Time Computing Platforms for Sensors

    In high-end, data-intensive embedded sensor applications (radar, optronics), the evolution of algorithms is limited by the capabilities of the computation platform. These platforms impose Size, Weight and Power (SWaP) restrictions on top of reliability, cost, security, and (potentially hard) real-time constraints. Thus, mostly static mapping methods are used, negating the system's adaptation capabilities. Through the study of several industrial use-cases, our work aims at mitigating the aforementioned limitations by introducing a low-latency dynamic resource management system derived from techniques used in large-scale systems such as cloud and grid environments. We expect that this approach will be able to guarantee non-functional properties of applications and provide Quality of Service (QoS) negotiation on heterogeneous platforms.

    Communication-Aware Prediction-Based Online Scheduling in High-Performance Real-Time Embedded Systems

    Current high-end, data-intensive real-time embedded sensor applications (e.g., radar, optronics) require very specific computing platforms. The nature of such applications and the environment in which they are deployed impose numerous constraints, including real-time constraints and computing throughput and latency needs. Static application placement is traditionally used to deal with these constraints. However, this approach fails to provide adaptation capabilities in an environment in constant evolution. Through the study of an industrial radar use-case, our work aims at mitigating the aforementioned limitations by proposing a low-latency online resource manager derived from techniques used in large-scale systems, such as cloud and grid environments. The resource manager introduced in this paper is able to dynamically allocate resources to fulfill requests coming from several sensors, making the most of the computing platform while providing guarantees on non-functional properties and Quality of Service (QoS) levels. Thanks to the load prediction implemented in the manager, we are able to achieve an 83% load increase before overloading the platform, while reducing the incurred QoS penalty tenfold. Further methods to reduce the impact of the overload, as well as possible future improvements, are proposed and discussed.
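    The core mechanism described here, predicting load and trading QoS against overload at admission time, can be sketched as follows. This is a hypothetical simplification, not the paper's resource manager: the EWMA predictor, the capacity model, and the "degraded" fallback are all illustrative assumptions.

```python
class PredictiveScheduler:
    """Minimal sketch of prediction-based online request placement.

    Per-node load is forecast with an exponentially weighted moving
    average (EWMA); a request is admitted on the node with the most
    predicted headroom, and is served at a degraded QoS level when
    even that node would be pushed past its capacity.
    """

    def __init__(self, capacities, alpha=0.5):
        self.capacities = dict(capacities)        # node -> max load
        self.alpha = alpha                        # EWMA smoothing factor
        self.predicted = {n: 0.0 for n in capacities}

    def observe(self, node, measured_load):
        # Fold a new load measurement into the next-interval prediction.
        p = self.predicted[node]
        self.predicted[node] = self.alpha * measured_load + (1 - self.alpha) * p

    def place(self, cost):
        """Return (node, qos) for a request of predicted load `cost`."""
        # Pick the node with the lowest predicted utilization ratio.
        best = min(self.capacities,
                   key=lambda n: self.predicted[n] / self.capacities[n])
        if self.predicted[best] + cost <= self.capacities[best]:
            self.predicted[best] += cost
            return best, "nominal"
        # Overload: run at reduced quality, modeled as half the load.
        self.predicted[best] += cost * 0.5
        return best, "degraded"
```

    The degraded branch stands in for the paper's QoS negotiation: rather than rejecting the request outright, the manager trades output quality for a bounded load increase.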

    Towards a design space exploration tool for MPSoC platforms designs: a case study

    The deployment of an application onto a multicore architecture is often a long and difficult process. This is because the characteristics of both the architecture and the application are taken into account late in the development process. It is therefore necessary to have tools that prune the solution space efficiently and accurately. In order to define such a tool, this work defines the basic metrics and steps for a design space exploration (DSE) flow. To this end, our proposal has been tested and validated on a space application that requires both high computing power and an architecture compatible with space constraints. The results obtained are promising and show the viability of the approach.
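    The pruning step at the heart of such a DSE flow can be illustrated with a tiny exhaustive sketch. The cost models below (`latency[t][p]`, `power[p]`), the serialization-based makespan, and the budget/Pareto filters are illustrative assumptions, not the metrics defined in the paper.

```python
from itertools import product

def explore(tasks, processors, latency, power, max_latency, max_power):
    """Enumerate task-to-processor mappings, prune those violating the
    latency and power budgets, and keep the Pareto-optimal survivors."""
    candidates = []
    for mapping in product(processors, repeat=len(tasks)):
        used = set(mapping)
        # Tasks on the same processor serialize; overall latency is
        # the busiest processor's total (a crude makespan model).
        lat = max(sum(latency[t][p] for t, q in zip(tasks, mapping) if q == p)
                  for p in used)
        pwr = sum(power[p] for p in used)
        if lat <= max_latency and pwr <= max_power:
            candidates.append((lat, pwr, mapping))
    # Pareto filter: drop mappings beaten on both latency and power.
    pareto = [c for c in candidates
              if not any(o[0] <= c[0] and o[1] <= c[1] and o[:2] != c[:2]
                         for o in candidates)]
    return sorted(pareto)
```

    Real DSE tools replace the exhaustive loop with heuristics or analytical pruning, but the shape of the flow, evaluate metrics per candidate, discard budget violations, keep the Pareto front, is the same.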