61 research outputs found
Automated runnable to task mapping
In this paper, we propose a method to automatically map runnables (blocks of code with dedicated functionality) with real-time constraints to tasks (or threads). We aim to reduce the number of tasks the runnables are mapped to, while preserving the schedulability of the initial system. We consider independent tasks running on a single processor. Our approach has been applied with fixed-task or fixed-job priorities assigned in a Deadline Monotonic (DM) or an Earliest Deadline First (EDF) manner.
Minimizing a real-time task set through Task Clustering
In industry, real-time systems are specified as sets of hundreds of functionalities with timing constraints. Implementing those functionalities as threads in a one-to-one relation is not realistic due to the overhead caused by the large number of threads. In this paper, we present task clustering, which aims at minimizing the number of threads while preserving schedulability. We prove that our clustering problem is NP-hard and describe a heuristic to tackle it. Our approach has been applied to fixed-task or fixed-job priority-based scheduling policies such as Deadline Monotonic (DM) or Earliest Deadline First (EDF).
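The schedulability check that such a clustering must preserve can be illustrated with a standard response-time analysis for fixed-priority scheduling. The sketch below is a minimal illustration, not the paper's algorithm; the task parameters are hypothetical.

```python
import math

def dm_schedulable(tasks):
    """Exact response-time analysis for independent periodic tasks on a
    single processor under Deadline Monotonic priorities (shorter
    deadline = higher priority).
    tasks: list of (wcet, deadline, period) tuples, deadline <= period.
    Returns True iff every task's worst-case response time meets its
    deadline."""
    ordered = sorted(tasks, key=lambda t: t[1])
    for i, (wcet, deadline, _) in enumerate(ordered):
        response = wcet
        while True:
            # Interference from all higher-priority tasks released
            # during the response window [0, response)
            interference = sum(math.ceil(response / period_hp) * wcet_hp
                               for wcet_hp, _, period_hp in ordered[:i])
            nxt = wcet + interference
            if nxt > deadline:
                return False        # deadline miss: not DM-schedulable
            if nxt == response:
                break               # fixed point reached
            response = nxt
    return True

# Hypothetical task sets: (wcet, deadline, period)
print(dm_schedulable([(1, 4, 4), (2, 6, 6), (1, 12, 12)]))  # True
print(dm_schedulable([(3, 5, 5), (3, 6, 6)]))               # False
```

Under EDF with implicit deadlines, the analogous single-processor check reduces to the total utilization not exceeding 1.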
Dynamic Load Balancing Based on Applications Global States Monitoring
The paper presents how to use a novel distributed program design framework with evolved global control mechanisms to ensure processor load balancing during the execution of application programs. The framework supports the programmer with an API and GUI for automated graphical design of program execution control based on monitoring of global application states. It provides high-level distributed control primitives at the process level and a special control infrastructure for global asynchronous execution control at the thread level. Both kinds of control rely on observations of current multicore processor performance and communication throughput in the underlying distributed system. Methods for designing processor load-balancing control, based on a system of program and system property metrics and on computational data migration between application processes, are presented and assessed through experiments with the execution of graph representations of distributed programs.
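A minimal sketch of the kind of load-balancing decision that such global-state monitoring can drive (the function name and the imbalance threshold are hypothetical, not the framework's API): migrate work from the most-loaded to the least-loaded node when the observed imbalance exceeds a threshold.

```python
def plan_migration(loads, threshold=0.25):
    """loads: mapping of node name -> observed CPU load in [0, 1].
    Returns a (source, target) migration pair, or None when the
    system is balanced within the given threshold."""
    busiest = max(loads, key=loads.get)
    idlest = min(loads, key=loads.get)
    if loads[busiest] - loads[idlest] <= threshold:
        return None          # imbalance too small to justify a migration
    return busiest, idlest

# Hypothetical global state gathered by the monitoring infrastructure
print(plan_migration({"n1": 0.9, "n2": 0.4, "n3": 0.5}))  # ('n1', 'n2')
print(plan_migration({"n1": 0.5, "n2": 0.45}))            # None
```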
Ab initio quantum scattering calculations and a new potential energy surface for the HCl-O2 system: collision-induced line-shape parameters for the O2-perturbed R(0) 0-0 line in HCl
The remote sensing of the abundance and properties of HCl -- the main atmospheric reservoir of Cl atoms, which directly participate in ozone depletion -- is important for monitoring the partitioning of chlorine between "ozone-depleting" and "reservoir" species. Such remote studies require knowledge of the shapes of molecular resonances of HCl, which are perturbed by collisions with the molecules of the surrounding air. In this work, we report the first fully quantum calculations of collisional perturbations of the shape of a pure rotational line in HCl perturbed by an air-relevant molecule (as the first model system we choose the R(0) line in HCl perturbed by O2). The calculations are performed on our new highly accurate HCl-O2 potential energy surface. In addition to pressure broadening and shift, we also determine their speed dependencies and the complex Dicke parameter. This gives important input to the community discussion on the physical meaning of the complex Dicke parameter and its relevance for atmospheric spectra (previously, the complex Dicke parameter for such systems was mainly determined from phenomenological fits to experimental spectra, and the physical meaning of its value in that context is questionable). We also calculate the temperature dependence of the line-shape parameters and obtain agreement with the available experimental data. We estimate the total combined uncertainty of our calculations at 2% relative RMSE residuals in the simulated line shape at 296 K. This result constitutes an important step towards the computational population of spectroscopic databases with accurate ab initio line-shape parameters for molecular systems of terrestrial atmospheric importance.
Comment: 15 pages, 7 figures. The following article has been accepted by The Journal of Chemical Physics. After it is published, it will be found at https://pubs.aip.org/aip/jc
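As a point of reference for the parameters mentioned above, the sketch below evaluates the simplest collisional line-shape model, a pressure-broadened and pressure-shifted Lorentzian; the speed dependencies and complex Dicke parameter discussed in the abstract refine this basic profile. The coefficient values are hypothetical, not the paper's results.

```python
import numpy as np

def lorentz_profile(nu, nu0, gamma_coef, delta_coef, pressure):
    """Area-normalized Lorentzian line shape.
    nu: wavenumber grid; nu0: unperturbed line position;
    gamma_coef, delta_coef: pressure-broadening and pressure-shift
    coefficients (per unit pressure); pressure: perturber pressure."""
    gamma = gamma_coef * pressure      # collisional half-width (HWHM)
    shift = delta_coef * pressure      # collisional line shift
    return (gamma / np.pi) / ((nu - nu0 - shift) ** 2 + gamma ** 2)

# Hypothetical coefficients; the profile integrates to ~1 over the line
nu = np.linspace(-50.0, 50.0, 400001)
profile = lorentz_profile(nu, 0.0, 0.05, -0.01, 1.0)
print(np.trapz(profile, nu))
```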
Utilization of Modified CoreGRID Ontology in an Agent-based Grid Resource Management System
ISBN 978-1-880843-75-8. The Agents in Grid project is devoted to the development of an agent-based intelligent high-level Grid middleware. In the proposed system, all data processing is ontology-driven and was initially based on an in-house developed mini-ontology of the Grid. Our recent analysis indicated that we should adapt and utilize the Grid ontology developed within the framework of the CoreGRID project. This note outlines how we have modified and extended the CoreGRID ontology to fulfill the needs of our approach.
Resource Management in Grids: Overview and a discussion of a possible approach for an Agent-Based Middleware
Resource management and job scheduling are important research issues in computational grids. When software agents are used as resource managers and brokers in the Grid, a number of additional issues and possible approaches materialize. The aim of this chapter is twofold. First, we discuss traditional job scheduling in grids and the case when agents are utilized as grid middleware. Second, we use this as a context for a discussion of how job scheduling can be done in the agent-based system under development.
A high-throughput and sensitive method to measure Global DNA Methylation: Application in Lung Cancer
Background: Genome-wide changes in DNA methylation are an epigenetic phenomenon that can lead to the development of disease. The study of global DNA methylation utilizes technology that requires both expensive equipment and highly specialized skill sets.
Methods: We have designed and developed an assay, CpGlobal, which is easy to use and does not require PCR, radioactivity, or expensive equipment. CpGlobal utilizes methyl-sensitive restriction enzymes, HRP-Neutravidin to detect the biotinylated nucleotides incorporated in an end-fill reaction, and a luminometer to measure the chemiluminescence. The assay shows high accuracy and reproducibility in measuring global DNA methylation. Furthermore, CpGlobal correlates significantly with High Performance Capillary Electrophoresis (HPCE), a gold-standard technology. We have applied the technology to understand the role of global DNA methylation in the natural history of lung cancer. Worldwide, it is the leading cause of death attributed to any cancer. The survival rate is 15% over 5 years due to the lack of any clinical symptoms until the disease has progressed to a stage where cure is limited.
Results: Through the use of cell lines and paired normal/tumor samples from patients with non-small cell lung cancer (NSCLC), we show that global DNA hypomethylation is highly associated with the progression of the tumor. In addition, the results provide the first indication that the normal part of the lung from a cancer patient has already experienced a loss of methylation compared to a normal individual.
Conclusion: By detecting these changes in global DNA methylation, CpGlobal may have a role as a barometer for the onset and development of lung cancer.
Affect Recognition using Psychophysiological Correlates in High Intensity VR Exergaming
User experience estimation of VR exergame players by recognizing their affective state could enable us to personalize and optimize their experience. Affect recognition based on psychophysiological measurements has been successful for moderate-intensity activities. High-intensity VR exergames pose challenges because the effects of exercise and VR headsets interfere with those measurements. We present two experiments that investigate the use of different sensors for affect recognition in a VR exergame. The first experiment compares the impact of physical exertion and gamification on psychophysiological measurements during rest, conventional exercise, VR exergaming, and sedentary VR gaming. The second experiment compares underwhelming, overwhelming and optimal VR exergaming scenarios. We identify gaze fixations, eye blinks, pupil diameter and skin conductivity as psychophysiological measures suitable for affect recognition in VR exergaming and analyse their utility in determining affective valence and arousal. Our findings provide guidelines for researchers of affective VR exergames. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665992.
Scaling of self-adaptive distributed Java applications
The rapid evolution of networks, workstations, large supercomputers and personal computers gives rise to new architectural alternatives for parallel and distributed computing. Clusters, grids and, more recently, cloud computing can therefore answer constantly growing demands for computational resources, thanks to new paradigms, software concepts and systems, all based on distributed programming. The main features of distributed and heterogeneous applications are their irregularity and unpredictability. To enable the efficient execution of such applications, we propose a programming environment for distributed applications in Java together with the runtime environment ADAJ (Adaptive Distributed Applications in Java), which optimizes the dynamic placement of application objects on clusters and computers within a grid. This distribution is based on a novel mechanism for observing the work of objects and the relationships between them. This flexible and adaptive object distribution yields better execution efficiency and better use of the power of different computers, while minimizing communication costs and the overhead associated with application control. With these mechanisms, ADAJ provides automatic and adaptive distribution of application elements across the execution platform, thus responding to changes in the computation and in resource availability. This operation is based on a cycle-stealing method and can control the granularity of application execution, so the programmer no longer needs to worry about it. The mechanisms have been implemented for various platforms and technologies. Initially, they were designed to run on clusters of workstations containing no more than a hundred computers. In order to scale up this solution designed for cluster computing, we have re-engineered and extended it.
Specifically, we have introduced a framework based on software components to help the designer build applications for grids of computers. This work was then extended so that the ADAJ platform is today a full middleware stack: it is based on web services, and its information system is based on agents. The mechanisms of ADAJ can now manage grid execution platforms consisting of thousands of nodes. Finally, we have tested this approach on data-mining problems with some specifically developed distributed algorithms. Through this work, we have responded to the current problems concerning the implementation and use of grids by designing a new SOKU (Service Oriented Knowledge Utilities) architecture. Finally, we show how this research can be integrated in the theme of embedded systems.
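The cycle-stealing idea behind ADAJ's adaptive distribution can be illustrated, in a much-reduced single-process form, by a work-stealing scheme in which an idle worker takes tasks from the back of a busy worker's queue; all names here are illustrative, not ADAJ's actual API.

```python
from collections import deque

class Worker:
    def __init__(self, name, tasks=()):
        self.name = name
        self.queue = deque(tasks)   # owner consumes from the front

    def steal_from(self, victim):
        """Take one task from the back of the victim's queue (the classic
        work-stealing discipline, which minimizes contention with the
        queue's owner). Returns True if a task was stolen."""
        if victim.queue:
            self.queue.append(victim.queue.pop())
            return True
        return False

busy = Worker("busy", ["t1", "t2", "t3"])
idle = Worker("idle")
idle.steal_from(busy)
print(list(idle.queue))  # ['t3']
print(list(busy.queue))  # ['t1', 't2']
```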