
    Weaving Rules into Models@run.time for Embedded Smart Systems

    Smart systems are characterised by their ability to analyse measured data live and to react to changes according to expert rules. Such systems therefore exploit appropriate data models together with actions triggered by domain-related conditions. The challenge at hand is that smart systems usually need to process thousands of updates to detect which rules need to be triggered, often on restricted hardware like a Raspberry Pi. Although various approaches have been investigated to efficiently check conditions on data models, they either assume that the model fits into main memory or rely on high-latency persistent storage systems that severely degrade the reactivity of smart systems. To tackle this challenge, we propose a novel composition process that weaves executable rules into a data model with lazy loading abilities. We show quantitatively, on a smart building case study, that our approach can handle, at low latency, big sets of rules on top of large-scale data models on restricted hardware. Comment: pre-print version, published in the proceedings of the MOMO-17 Workshop.
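
    To make the weaving idea concrete, here is a minimal Python sketch (all names, such as LazyNode and weave, are illustrative assumptions, not the paper's actual API): each model element loads its attributes from storage only on first access, and rules are attached directly to the attributes they watch, so an update evaluates only the rules woven into that element.

        class DictStorage:
            """Stand-in persistent backend; a real deployment would page
            model elements from disk or a database."""
            def __init__(self, data):
                self._data = data

            def read(self, node_id):
                return dict(self._data[node_id])

        class LazyNode:
            """A model element whose attributes are loaded on demand."""
            def __init__(self, node_id, storage):
                self._id = node_id
                self._storage = storage
                self._attrs = None      # not loaded yet (lazy)
                self._rules = {}        # attribute name -> [(condition, action)]

            def weave(self, attr, condition, action):
                """Attach an executable rule to a single attribute."""
                self._rules.setdefault(attr, []).append((condition, action))

            def update(self, attr, value):
                if self._attrs is None:                 # load at most once
                    self._attrs = self._storage.read(self._id)
                self._attrs[attr] = value
                # Only rules woven into this attribute are checked, instead
                # of re-evaluating the whole rule base on every update.
                for condition, action in self._rules.get(attr, []):
                    if condition(value):
                        action(self)

        storage = DictStorage({"room-42": {"temperature": 21.0}})
        room = LazyNode("room-42", storage)
        room.weave("temperature", lambda t: t > 26.0,
                   lambda node: print("cooling triggered in", node._id))
        room.update("temperature", 27.5)   # fires the woven rule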

    Including antenna mispointing in a semi-analytical model for delay/Doppler altimetry

    Delay/Doppler altimetry aims at reducing the measurement noise and increasing the along-track resolution in comparison with conventional pulse-limited altimetry. In a previous paper, we proposed a semi-analytical model for delay/Doppler altimetry that made some simplifying assumptions, such as the absence of antenna mispointing. This paper first proposes a new semi-analytical model for delay/Doppler altimetry. The proposed analytical expression for the flat surface impulse response accounts for antenna mispointing angles, a circular antenna pattern, no vertical speed effect, and uniform scattering. The two-dimensional delay/Doppler map is obtained by numerical computation of the convolution between the proposed analytical function, the probability density function of the heights of the specular scatterers, and the time/frequency point target response of the radar. The approximations used to obtain the semi-analytical model are analyzed, and the associated errors are quantified by analytical bounds. The second contribution of this paper concerns the estimation of the parameters associated with the multi-look semi-analytical model. Two estimation strategies based on least squares are proposed. The proposed model and algorithms are validated on both synthetic and real waveforms. The results are very promising and show the accuracy of this generalized model with respect to the previous model assuming zero antenna mispointing.
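
    The structure of the model can be summarised compactly (the notation below is mine, not necessarily the paper's): writing FSIR(t, f) for the flat surface impulse response at delay t and Doppler frequency f, p_h for the probability density function of the specular scatterers' heights, and PTR(t, f) for the radar's time/frequency point target response, the delay/Doppler map is

        \[
          P(t, f) \;=\; \mathrm{FSIR}(t, f) \,*\, p_h(t) \,*\, \mathrm{PTR}(t, f),
        \]

    where * denotes convolution (over delay for p_h, over delay and Doppler for the PTR). Only the FSIR has a closed-form expression; the remaining convolutions are computed numerically.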

    Semi-analytical model for SAR/Doppler altimetry over the ocean

    The SAR/Doppler altimetric radar concept appeared in the mid-1990s. This technology reduces the measurement noise (by increasing the number of observations) and improves the along-track resolution by using the information contained in the Doppler frequency. This article proposes a new semi-analytical model, based on a geometrical approach, for modelling SAR/Doppler echoes. The model is validated on synthetic data and then on real data from the CryoSat-2 satellite. The results obtained are very promising and show that the proposed model performs well.

    A Generalized Semi-Analytical model for delay/Doppler altimetry

    This paper introduces a new model for delay/Doppler altimetry that takes into account the effect of antenna mispointing. After defining the proposed model, the effect of antenna mispointing on the altimetric waveform is analyzed as a function of the along-track and across-track angles. Two least squares approaches are investigated for estimating the parameters associated with the proposed model. The first algorithm estimates four parameters, including the across-track mispointing (which affects the echo's shape). The second algorithm uses the mispointing angles provided by the star-trackers and estimates the three remaining parameters. The proposed model and algorithms are validated via simulations conducted on both synthetic and real data.
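
    A hedged sketch of the two estimation strategies in Python: ddm_model below is a deliberately crude Brown-like stand-in (the paper's actual model is the semi-analytical one above), and the parameter names (tau for the epoch, swh for significant wave height, Pu for amplitude, xi for the across-track mispointing) are assumptions for illustration.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.special import erf

        def ddm_model(t, tau, swh, Pu, xi):
            """Crude stand-in waveform: an erf leading edge whose trailing-edge
            decay depends on the across-track mispointing xi."""
            rise = 0.5 * (1.0 + erf((t - tau) / (swh + 1e-9)))
            decay = np.exp(-np.clip(t - tau, 0.0, None) * (0.05 + 0.5 * xi ** 2))
            return Pu * rise * decay

        def fit_four_params(t, waveform, x0):
            """Strategy 1: estimate epoch, SWH, amplitude AND mispointing."""
            resid = lambda x: ddm_model(t, *x) - waveform
            return least_squares(resid, x0).x

        def fit_three_params(t, waveform, xi_star_tracker, x0):
            """Strategy 2: fix mispointing to the star-tracker value and
            estimate only the three remaining parameters."""
            resid = lambda x: ddm_model(t, x[0], x[1], x[2], xi_star_tracker) - waveform
            return least_squares(resid, x0).x

        # Synthetic check: recover the parameters from a noiseless waveform.
        t = np.linspace(0.0, 100.0, 128)
        truth = (30.0, 4.0, 1.0, 0.2)
        w = ddm_model(t, *truth)
        print(fit_four_params(t, w, x0=(25.0, 3.0, 0.8, 0.1)))
        print(fit_three_params(t, w, xi_star_tracker=0.2, x0=(25.0, 3.0, 0.8)))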

    The Next Evolution of MDE: A Seamless Integration of Machine Learning into Domain Modeling

    Machine learning algorithms are designed to resolve unknown behaviors by extracting commonalities over massive datasets. Unfortunately, learning such global behaviors can be inaccurate and slow for systems composed of heterogeneous elements that behave very differently, as is the case, for instance, for cyber-physical systems and Internet of Things applications. Instead, to make smart decisions, such systems have to continuously refine the behavior on a per-element basis and compose these small learning units together. However, combining and composing learned behaviors from different elements is challenging and requires domain knowledge. Therefore, there is a need to structure and combine the learned behaviors and the domain knowledge together in a flexible way. In this paper we propose to weave machine learning into domain modeling. More specifically, we suggest decomposing machine learning into reusable, chainable, and independently computable small learning units, which we refer to as micro learning units. These micro learning units are modeled together with, and at the same level as, the domain data. We show, based on a smart grid case study, that our approach can be significantly more accurate than learning a global behavior, while the performance is fast enough to be used for live learning.
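
    As a minimal illustration of the idea in Python (class and method names here are hypothetical, not the paper's framework): each domain element carries its own small, independently updatable learning unit, and composition happens by chaining the per-element units rather than training one global model.

        from dataclasses import dataclass, field

        @dataclass
        class MicroLearningUnit:
            """A small, per-element learner living next to the domain data.
            Here: an incremental (online) mean, as a trivial example."""
            n: int = 0
            mean: float = 0.0

            def learn(self, value: float):
                self.n += 1
                self.mean += (value - self.mean) / self.n   # online update

            def infer(self) -> float:
                return self.mean

        @dataclass
        class SmartMeter:
            """Domain element; its profile is a micro learning unit modeled
            at the same level as the domain attributes."""
            meter_id: str
            profile: MicroLearningUnit = field(default_factory=MicroLearningUnit)

            def on_reading(self, kwh: float):
                self.profile.learn(kwh)   # refine behavior per element, live

        # Composing units: a (hypothetical) district forecast chains the
        # per-meter predictions instead of learning one global model.
        def district_forecast(meters):
            return sum(m.profile.infer() for m in meters)

        meters = [SmartMeter(f"m{i}") for i in range(3)]
        for m, kwh in zip(meters, (1.2, 0.7, 2.1)):
            m.on_reading(kwh)
        print(district_forecast(meters))   # approximately 4.0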

    Black adzes in the Early Neolithic of Belgium: Contribution of the Raman microspectrometry and petrography in characterization and sourcing

    Early Neolithic (Linear Pottery Culture) adzes originating from settlements and workshops accompany the neolithization of Belgium. They are made from a wide range of extra-regional lithic raw materials, such as metamorphic green rocks (amphibolite) and black volcanic rocks (“basalt”), besides more local or regional raw materials such as flints, light-coloured (sedimentary and slightly metamorphic) quartzites, black lydites (the Cambrian nodular phtanite of Céroux-Mousty and the Lower Namurian banded phtanites) and dark grey Lower Namurian silicified sandstones previously called “micaceous sandstones of Horion-Hozémont”. The discovery of the Noirfontaine workshop near the city of LiÚge in the 1970s and 1980s provides an exceptional assemblage available for updated analytical studies. This research focuses on the multi-scale characterization, discrimination, and sourcing of both the Cambrian and the Namurian black sedimentary rocks rich in secondary silica composing Early Neolithic adzes found in Belgium. Their black colour results from finely dispersed organic matter, but the absence of palynomorphs does not allow a biostratigraphic ascription. Petrographical analyses (optical petrography, scanning electron microscopy), X-ray diffraction, chemical analyses (energy-dispersive spectroscopy) and measurement of the degree of graphitization of the organic matter through Raman microspectrometry have been decisive in identifying the geological and geographical provenances, by comparing the acquired results with geological reference samples collected in the field or obtained through reference collections. The Cambrian lydites come from a very restricted area and were preferred to other, more local rock sources.

    Analyzing Complex Data in Motion at Scale with Temporal Graphs

    Modern analytics solutions succeed in understanding and predicting phenomena in a large diversity of software systems, from social networks to Internet-of-Things platforms. This success challenges analytics algorithms to deal with more and more complex data, which can be structured as graphs and evolve over time. However, the underlying data storage systems that support large-scale data analytics, such as time-series or graph databases, fail to accommodate both dimensions, which limits the integration of more advanced analyses that take into account, for example, the history of complex graphs. This paper therefore introduces a formal and practical definition of temporal graphs. Temporal graphs provide a compact representation of time-evolving graphs that can be used to analyze complex data in motion. In particular, we demonstrate with our open-source implementation, named GREYCAT, that the performance of temporal graphs allows analytics solutions to deal with rapidly evolving large-scale graphs.
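
    A minimal Python sketch of the underlying idea (not GREYCAT's actual API): node state is stored only at its change points, so resolving the graph at an arbitrary timestamp is a binary search rather than a full copy per version; edges would be versioned the same way.

        import bisect

        class TemporalNode:
            """Node state stored as change points only: a sorted list of
            (timestamp, attributes) snapshots, so periods without changes
            cost nothing."""
            def __init__(self):
                self._times = []    # sorted timestamps of changes
                self._states = []   # attribute dicts, aligned with _times

            def set(self, t, **attrs):
                state = dict(self.resolve(t) or {})
                state.update(attrs)
                i = bisect.bisect_right(self._times, t)
                self._times.insert(i, t)
                self._states.insert(i, state)

            def resolve(self, t):
                """Return the node state as it was at time t (last change <= t)."""
                i = bisect.bisect_right(self._times, t) - 1
                return self._states[i] if i >= 0 else None

        n = TemporalNode()
        n.set(10, temp=20.0)
        n.set(50, temp=23.5)        # only the change is stored
        print(n.resolve(30))        # {'temp': 20.0}: state between changes
        print(n.resolve(60))        # {'temp': 23.5}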

    Experimental study of the low-frequency dynamics of a laminar recirculation bubble

    Le dĂ©collement est intrinsĂšque aux Ă©coulements le long des parois qu'elle qu'en soit la gĂ©omĂ©trie. Les bulles de recirculation ainsi formĂ©es sont sources de phĂ©nomĂšnes d'instabilitĂ©. Cette Ă©tude expĂ©rimentale vise Ă  quantifier les instabilitĂ©s crĂ©Ă©es par une couche limite laminaire oĂč le dĂ©collement est provoquĂ© par une bosse optimisĂ©e Ă  cet effet. La premiĂšre partie consiste Ă  Ă©tudier le mouvement transverse Ă  l'intĂ©rieur de la bulle de recirculation. Dans une seconde partie, nous nous intĂ©resserons au battement basse frĂ©quence produit prĂšs du point de rattachement de l'Ă©coulement

    CalcGraph: taming the high costs of deep learning using models

    Models based on differentiable programming, like deep neural networks, are well established in research and able to outperform manually coded counterparts in many applications. Today, there is rising interest in using this flexible modeling to solve real-world problems. A major challenge when moving from research to application is the strict constraints on computational resources (memory and time). It is difficult to determine and contain the resource requirements of differentiable models, especially during the early training and hyperparameter exploration stages. In this article, we address this challenge by introducing CalcGraph, a model abstraction of differentiable programming layers. CalcGraph allows the computational resources that should be used to be modeled, and CalcGraph’s model interpreter can then automatically schedule the execution so that it respects these specifications. We propose a novel way to efficiently switch models from storage to preallocated memory zones and vice versa, to maximize the number of model executions given the available resources. We demonstrate the efficiency of our approach by showing that it consumes fewer resources than state-of-the-art frameworks like TensorFlow and PyTorch for both single-model and multi-model execution.
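
    A small Python sketch of the swapping idea (illustrative only; CalcGraph's real interpreter and scheduler are not shown here): memory is allocated once as a fixed zone, and models whose declared footprint fits the budget are copied in from storage one after another, so execution respects the resource specification instead of allocating per model at run time.

        class MemoryZone:
            """Preallocated buffer reused across model executions, so that
            no per-model allocation happens at run time."""
            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.buffer = bytearray(capacity_bytes)   # allocated once
                self.resident = None                       # id of loaded model

            def load(self, model_id, weights: bytes):
                assert len(weights) <= self.capacity, "model exceeds zone budget"
                self.buffer[: len(weights)] = weights      # swap in from storage
                self.resident = model_id

        def schedule(zone, models, storage):
            """Run every model whose declared size fits the zone, swapping
            each one into the preallocated buffer in turn."""
            for model_id, size in models:
                if size > zone.capacity:
                    print(f"skip {model_id}: declared {size} B > budget")
                    continue
                zone.load(model_id, storage[model_id])
                print(f"executing {model_id} in place")   # inference/training here

        storage = {"mlp": bytes(1_000), "cnn": bytes(4_000)}
        zone = MemoryZone(capacity_bytes=2_000)
        schedule(zone, [("mlp", 1_000), ("cnn", 4_000)], storage)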