Deploying on the Grid with DeployWare
In Proceedings of the 8th IEEE International Symposium on Cluster Computing and the Grid (to appear). In this paper, we present DeployWare to address the deployment of distributed, heterogeneous software systems on large-scale infrastructures such as grids. Deploying software systems on grids raises many challenges: 1) the complexity of orchestrating all the deployment tasks and managing software dependencies, 2) the heterogeneity of both the physical infrastructures and the software composing the system to deploy, 3) validation, to detect errors early, before concrete deployments, and 4) scalability, to tackle thousands of nodes. To address these challenges, DeployWare provides a metamodel that abstracts the concepts of deployment, a virtual machine that executes deployment processes on grids from DeployWare descriptions, and a graphical console for managing deployed systems at runtime. To validate our approach, we have experimented with DeployWare on many software technologies, such as CORBA- and SOA-based systems, on one thousand nodes of Grid'5000, the French experimental grid infrastructure
An Aspect–Oriented Approach based on Multiparty Interactions to Specifying the Behaviour of a System
Isolating computation and coordination concerns into separate pure-computation and pure-coordination modules enhances the modularity, understandability, and reusability of parallel and/or distributed software. This can be achieved by moving interaction primitives, which are now commonly scattered in programs, into separate modules written in a language aimed at coordinating objects and expressing how information flows among them. The usual model for coordination is the client/server model, but it is not adequate when several objects need to collaborate simultaneously to solve a problem, because natural multiparty interactions must be decomposed into a set of low-level, binary interactions. In this paper, we introduce CAL, an IP-based language for describing the coordination aspect of a system. We show that this aspect can be successfully described in terms of simple multiparty interactions that can be animated and are also amenable to formal reasoning.
Comisión Interministerial de Ciencia y Tecnología (CICYT) MENHIR TIC 97-0593-C05-0
cGML: an XML language for mobile cartography
Increasing processing power and storage capabilities encourage the systematic adoption of high-end mobile devices, such as programmable cellular phones and wireless-enabled PDAs, to implement exciting new applications. The performance of modern mobile devices enables innovative scenarios based on the position-awareness and ambient-intelligence paradigms. The market is moving from the old 'Wireless Applications' approach to Mobile Computing, which aims to exploit mobile host capabilities. This paper presents the compact Geographic Markup Language (cGML), an XML-based language defined to enable the design and development of LBS applications specific to mobile devices, and an example of a client-server architecture using it
Compact GML: merging mobile computing and mobile cartography
The use of portable devices is moving from "Wireless Applications", typically implemented as browsing-on-the-road, to "Mobile Computing", which aims to exploit the increasing processing power of consumer devices. As users get connected with smartphones and PDAs, they look for geographic information and location-aware services. While browser-based approaches have been explored (using static images or graphics formats such as Mobile SVG), a data model tailored for local computation on mobile devices is still missing. This paper presents the Compact Geographic Markup Language (cGML), which enables the design and development of special-purpose GIS applications for portable consumer devices, where a cGML document can be used as a spatial query result as well
Boosting big data streaming applications in clouds with BurstFlow
The rapid growth of stream applications in financial markets, health care, education, social media, and sensor networks represents a remarkable milestone for data processing and analytics in recent years, leading to new challenges in handling Big Data in real time. Traditionally, a single cloud infrastructure often hosts the deployment of stream processing applications because it has extensive and adaptive virtual computing resources. Hence, data sources send data from distant and diverse locations to the cloud infrastructure, increasing application latency. The cloud infrastructure may be geographically distributed, and it must run a set of frameworks to handle communication. These frameworks often comprise a message queue system and a stream processing framework. In a multi-cloud setting, each service is deployed in a different cloud and communicates via high-latency network links. This creates challenges in meeting real-time application requirements, because the data streams have different and unpredictable latencies, forcing cloud providers' communication systems to adjust to environment changes continually. Previous work explores static micro-batches, demonstrating their potential to overcome communication issues. This paper introduces BurstFlow, a tool for enhancing communication between data sources located at the edges of the Internet and Big Data stream processing applications located in cloud infrastructures. BurstFlow introduces a strategy for adjusting micro-batch sizes dynamically according to the time required for communication and computation. BurstFlow also presents an adaptive data partition policy for distributing incoming streams across available machines by considering memory and CPU capacities. The experiments use a real-world multi-cloud deployment, showing that BurstFlow can reduce execution time by up to 77% compared to state-of-the-art solutions, improving CPU efficiency by up to 49%
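The central mechanism this abstract describes, sizing micro-batches from the observed communication and computation time, can be pictured as a simple feedback loop. The sketch below is illustrative only: the class name, parameters, and proportional rule are assumptions for exposition, not BurstFlow's actual API or algorithm.

```python
from collections import deque

class AdaptiveBatcher:
    """Grow or shrink the micro-batch size so that the observed
    end-to-end time (transfer + processing) tracks a target latency.
    Hypothetical sketch, not BurstFlow's actual interface."""

    def __init__(self, target_latency_s, initial_size=100,
                 min_size=10, max_size=10_000):
        self.target = target_latency_s
        self.size = initial_size
        self.min_size = min_size
        self.max_size = max_size
        self.history = deque(maxlen=5)  # smooth over recent batches

    def record(self, observed_latency_s):
        """Report how long the last batch took end to end;
        return the adjusted size for the next batch."""
        self.history.append(observed_latency_s)
        avg = sum(self.history) / len(self.history)
        # Proportional adjustment: batches finishing faster than the
        # target are enlarged (better throughput); batches finishing
        # slower are shrunk (better latency), within fixed bounds.
        ratio = self.target / avg
        self.size = int(min(self.max_size,
                            max(self.min_size, self.size * ratio)))
        return self.size

batcher = AdaptiveBatcher(target_latency_s=0.5, initial_size=100)
print(batcher.record(0.25))  # batch was fast: size grows to 200
print(batcher.record(1.0))   # smoothed average is now slow: size shrinks
```

A production controller would also need the CPU/memory-aware partitioning the abstract mentions; this sketch covers only the latency-driven batch sizing.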
A Better Alternative to Piecewise Linear Time Series Segmentation
Time series are difficult to monitor, summarize, and predict. Segmentation organizes a time series into a few intervals with uniform characteristics (flatness, linearity, modality, monotonicity, and so on). For scalability, we require fast, linear-time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model where the polynomial degree of each interval varies (constant, linear, and so on). Given a number of regressors, the cost of each interval is its polynomial degree: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean (l_2) error for a given model complexity. Experimentally, we investigate the model where intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are discussed. Comment: to appear in SIAM Data Mining 200
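The cost model in this abstract (a constant interval costs 1 regressor, a linear interval costs 2, minimize l_2 error under a regressor budget) can be expressed as a dynamic program over interval boundaries. The paper calls for fast linear-time algorithms; the O(n²·budget) sketch below, with hypothetical function names, only illustrates the objective being optimized, not the authors' algorithm.

```python
import numpy as np

def fit_cost(y):
    """Squared l_2 error of the best constant and best linear fit to y."""
    n = len(y)
    const_err = float(np.sum((y - y.mean()) ** 2))
    if n < 2:
        return const_err, 0.0  # a line through one point is exact
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    lin_err = float(np.sum((y - (slope * x + intercept)) ** 2))
    return const_err, lin_err

def adaptive_segmentation(y, budget):
    """best[j][b]: minimal l_2 error for the prefix y[:j] using exactly
    b regressors; constant intervals consume 1, linear intervals 2."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    INF = float("inf")
    best = [[INF] * (budget + 1) for _ in range(n + 1)]
    back = [[None] * (budget + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):  # last interval is y[i:j]
            c_err, l_err = fit_cost(y[i:j])
            for b in range(budget + 1):
                if b >= 1 and best[i][b - 1] + c_err < best[j][b]:
                    best[j][b] = best[i][b - 1] + c_err
                    back[j][b] = (i, b - 1, "constant")
                if b >= 2 and best[i][b - 2] + l_err < best[j][b]:
                    best[j][b] = best[i][b - 2] + l_err
                    back[j][b] = (i, b - 2, "linear")
    # recover the best segmentation over all regressor counts <= budget
    b = min(range(budget + 1), key=lambda k: best[n][k])
    total = best[n][b]
    segments, j = [], n
    while j > 0:
        i, b, kind = back[j][b]
        segments.append((i, j, kind))
        j = i
    return segments[::-1], total

# Flat stretch followed by an exact ramp: one constant + one linear
# interval (3 regressors) should fit with zero error.
segs, err = adaptive_segmentation([0] * 5 + list(range(5)), budget=3)
print(segs, err)
```

The quadratic enumeration of interval starts is what the paper's linear-time requirement rules out; this sketch trades speed for a direct statement of the cost function.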
Baghera Assessment Project, designing an hybrid and emergent educational society
Edited by Sophie Soury-Lavergne; available at: http://www-leibniz.imag.fr/LesCahiers/2003/Cahier81/BAP_CahiersLaboLeibniz.PDF. Research report. The Baghera Assessment Project (BAP) has the objective to explore a new avenue for the design of e-learning environments. The key features of BAP's approach are: (i) the concept of emergence in multi-agent systems as a modelling framework, and (ii) the shaping of a new theoretical framework for modelling student knowledge, namely the cK¢ model. This new model has been constructed, based on current research in cognitive science and education, to bridge research on education and research on the design of learning environments
An agile and adaptive holonic architecture for manufacturing control
Doctoral thesis. Electrical and Computer Engineering. 2004. Faculdade de Engenharia, Universidade do Port