
    An executable interface specification for industrial embedded system design.

    Nowadays, designers resort to abstraction techniques to manage the complexity of industrial embedded systems during the design process. However, due to the large semantic gap between the abstractions and the implementation, designers often fail to apply these techniques effectively. In this paper, an approach based on an executable interface specification (EIS) is proposed for embedded system design. The approach starts by using interface state diagrams to specify system architectures. A set of rules is introduced to translate these diagrams consistently into an executable model (the EIS model). By applying simulation and verification techniques, many architectural design errors can be detected in the EIS model at an early design stage. Finally, the EIS model can be systematically translated into either an interpreted or a compiled implementation, depending on the constraints of the embedded platform. In this way, inconsistencies between the high-level abstractions and the implementation can be largely reduced.
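
    As an illustration of the general idea, the following minimal Python sketch (not the paper's actual EIS formalism; the states, events, and transition rules are assumptions) shows how an interface state diagram might be encoded as an executable model, so that illegal event sequences surface during simulation:

        # Minimal sketch: an interface state diagram encoded as an executable model.
        # The states, events, and transitions below are illustrative assumptions,
        # not the paper's EIS notation.

        class InterfaceStateMachine:
            def __init__(self, initial, transitions):
                self.state = initial
                self.transitions = transitions  # {(state, event): next_state}

            def fire(self, event):
                key = (self.state, event)
                if key not in self.transitions:
                    # An event unspecified in the current state signals an
                    # architectural design error, caught early by simulation.
                    raise ValueError(f"illegal event {event!r} in state {self.state!r}")
                self.state = self.transitions[key]

        # Example: a simple request/acknowledge interface.
        bus = InterfaceStateMachine(
            initial="idle",
            transitions={
                ("idle", "request"): "busy",
                ("busy", "ack"): "idle",
            },
        )
        for event in ["request", "ack", "request"]:
            bus.fire(event)
        print(bus.state)  # -> 'busy'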

    Coordination and P2P computing

    Peer-to-peer (P2P) refers to a class of systems and applications that use distributed resources in a decentralized and autonomous manner to achieve a goal. A number of successful applications, such as BitTorrent (for file and content sharing) and SETI@home (for distributed computing), have demonstrated the feasibility of this approach. As a form of distributed computing, P2P computing faces the same coordination problems as other forms of distributed computing. Coordination has long been considered an important issue in distributed computing, and many coordination models and languages have been developed. This research focuses on how to solve coordination problems in P2P computing; in particular, it aims to provide a seamless P2P computing environment in which the migration of computation components is transparent. The research extends Manifold, an event-driven coordination model, to meet P2P computing requirements and integrates the resulting P2P-Manifold model into an existing platform. The integration hides the complexity of the coordination model and makes it easy to use.
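
    A minimal sketch of event-driven coordination in the spirit of Manifold appears below. Coordinators react to events raised by computation components, so the components stay unaware of who consumes their results; the EventBus API and event names are illustrative assumptions, not the P2P-Manifold implementation:

        # Event-driven coordination sketch: components raise events, a
        # coordinator subscribes and reroutes work, hiding peer migration.

        from collections import defaultdict

        class EventBus:
            def __init__(self):
                self.handlers = defaultdict(list)

            def subscribe(self, event, handler):
                self.handlers[event].append(handler)

            def raise_event(self, event, payload=None):
                for handler in self.handlers[event]:
                    handler(payload)

        bus = EventBus()

        # A coordinator reacts to topology changes and results, so the
        # migration of computation components stays transparent to them.
        bus.subscribe("peer_left", lambda peer: print(f"rerouting work away from {peer}"))
        bus.subscribe("result_ready", lambda data: print(f"forwarding result: {data}"))

        bus.raise_event("result_ready", {"task": 42, "value": 7})
        bus.raise_event("peer_left", "node-3")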

    On Distributed Verification and Verified Distribution

    Fokkink, W.J. [Promotor]; Pol, J.C. van de [Copromotor]

    Swarm Based Implementation of a Virtual Distributed Database System in a Sensor Network

    The deployment of unmanned aerial vehicles (UAVs) in recent military operations has been successful in carrying out surveillance and combat missions in sensitive areas. An area of intense UAV research has been controlling a group of small UAVs to carry out reconnaissance missions normally undertaken by large UAVs such as Predator or Global Hawk. One control strategy for coordinating the movements of such a group adopts the bio-inspired swarm model to produce autonomous group behavior. This research proposes establishing a distributed database system on a group of swarming UAVs to provide data storage during a reconnaissance mission. A distributed database system is simulated by treating each UAV as a database site connected by a wireless network. In this model, each UAV carries a sensor and communicates with a command center when queried. The network of UAVs thus forms a dynamic ad-hoc sensor network. The distributed database system based on a swarm of UAVs is evaluated against a set of reconnaissance test suites measuring system performance. The design of experiments focuses on the effects of varying the query input and the types of swarming UAVs on overall system performance. The results show that the topology of the UAVs has a distinct impact on the output of the sensor database. The experiments measuring system delays also confirm the expectation that, in a distributed system, inter-node communication costs outweigh processing costs.
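
    The simulated setup can be pictured with the following minimal Python sketch: each UAV acts as a database site holding its own sensor readings, and a command center answers a query by contacting the sites over a simulated wireless link. All names and the latency constants are illustrative assumptions, not the study's simulation model:

        # Each UAV is a distributed database site; the command center
        # aggregates query results over a (simulated) wireless network.

        import random

        class UavSite:
            def __init__(self, uav_id):
                self.uav_id = uav_id
                self.readings = []  # local sensor data stored during the mission

            def sense(self):
                self.readings.append(random.random())

            def query(self, predicate):
                return [r for r in self.readings if predicate(r)]

        def command_center_query(sites, predicate, link_delay=0.05, cpu_delay=0.001):
            # Inter-node communication dominates processing, matching the
            # experimental observation reported in the abstract.
            results, total_delay = [], 0.0
            for site in sites:
                total_delay += link_delay + cpu_delay
                results.extend(site.query(predicate))
            return results, total_delay

        swarm = [UavSite(i) for i in range(5)]
        for _ in range(10):
            for uav in swarm:
                uav.sense()
        results, delay = command_center_query(swarm, lambda r: r > 0.8)
        print(len(results), "matching readings, simulated delay", round(delay, 3), "s")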

    Interoperability based Dynamic Data Mediation using Adaptive Multi-Agent Systems for Co-Simulation

    A co-simulation is the coupling of several simulation tools, each handling part of a modular problem, which allows each designer to interact with the complex system while retaining their business expertise and continuing to use their own digital tools. For this co-simulation to work, the tools must be able to exchange data in meaningful ways, a capability known as interoperability. This paper describes the design of such interoperability based on the FMI (Functional Mock-up Interface) standard and dynamic data mediation using adaptive multi-agent systems for co-simulation. It is currently being applied in neOCampus, the ambient campus of the University of Toulouse III - Paul Sabatier.
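
    The mediation idea can be sketched in a few lines of Python. Two toy simulation units stand in for real FMI components, and a mediation agent adapts the data exchanged between them at each time step; all class names, units, and the conversion rule are illustrative assumptions, not the paper's architecture:

        # Dynamic data mediation between two coupled simulators: the agent
        # converts the thermal model's Celsius output into the Kelvin input
        # the HVAC model expects.

        class ThermalFmu:
            def step(self, dt):
                return {"temperature_C": 21.5}  # output in Celsius

        class HvacFmu:
            def step(self, dt, temperature_K):
                # expects Kelvin; derives heating power from the mediated input
                return {"power_W": max(0.0, (294.0 - temperature_K) * 100.0)}

        class MediationAgent:
            """Adapts data between tools whose interfaces do not match directly."""
            def mediate(self, outputs):
                return {"temperature_K": outputs["temperature_C"] + 273.15}

        thermal, hvac, agent = ThermalFmu(), HvacFmu(), MediationAgent()
        for step in range(3):
            out = thermal.step(dt=0.1)
            mediated = agent.mediate(out)
            print(hvac.step(dt=0.1, **mediated))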

    Constraint-based protocols for distributed problem solving

    Distributed Problem Solving (DPS) approaches decompose problems into subproblems to be solved by interacting, cooperative software agents. DPS is thus suitable for solving problems characterized by many interdependencies among subproblems in the context of parallel and distributed architectures. Concurrent Constraint Programming (CCP) provides a powerful execution framework for DPS, in which constraints declaratively define local problem solving and the exchange of information among agents. To optimize DPS, the protocol for constraint communication must be tuned to the specific kind of DPS problem and to the characteristics of the underlying system architecture. In this paper, we provide a formal framework for modeling different problems and show how the framework applies to simple yet generalizable examples.
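
    The flavor of CCP-style coordination can be shown with a minimal Python sketch built on CCP's core tell/ask primitives: agents cooperatively narrow a shared, monotonic store of constraints. The interval domain and the agents' contributions are illustrative assumptions, not the paper's formal framework:

        # Concurrent-constraint-style store: tell adds (narrows) constraints,
        # ask checks whether the store already entails a constraint.

        class ConstraintStore:
            def __init__(self, low, high):
                self.low, self.high = low, high  # interval domain for a shared variable

            def tell(self, low=None, high=None):
                # Adding a constraint can only narrow the interval (monotonic store).
                if low is not None:
                    self.low = max(self.low, low)
                if high is not None:
                    self.high = min(self.high, high)

            def ask(self, low, high):
                # An agent blocks (here simplified to a boolean) until the
                # store entails the queried constraint.
                return self.low >= low and self.high <= high

        store = ConstraintStore(0, 100)
        store.tell(low=10)    # agent A contributes x >= 10
        store.tell(high=30)   # agent B contributes x <= 30
        print(store.ask(5, 50))         # True: the store entails 5 <= x <= 50
        print((store.low, store.high))  # (10, 30)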

    8 - Coordination in Distributed Systems


    Model driven design and data integration in semantic web information systems

    The Web is quickly evolving in many ways. It has evolved from a Web of documents into a Web of applications, in which a growing number of designers offer new and interactive Web applications to people all over the world. However, application design and implementation remain complex, error-prone, and laborious. In parallel, there is also an evolution from a Web of documents into a Web of 'knowledge', as a growing number of data owners share their data sources with a growing audience. This creates the potential for new applications of these data sources, including scenarios in which the datasets are reused and integrated with other existing and new data sources. However, the heterogeneity of these data sources in syntax, semantics, and structure represents a great challenge for application designers. The Semantic Web is a collection of standards and technologies that offers solutions for at least the syntactic and some of the structural issues. It offers semantic freedom and flexibility, but this leaves the issue of semantic interoperability.

    In this thesis we present Hera-S, an evolution of the Model Driven Web Engineering (MDWE) method Hera. MDWE methods allow designers to create data-centric applications using models instead of programming. Hera-S especially targets Semantic Web sources and provides a flexible method for designing personalized adaptive Web applications. Hera-S defines several models that together define the target Web application, and we implemented a framework called Hydragen that executes the Hera-S models to run the desired Web application. Hera-S' core is the Application Model (AM), in which the main logic of the application is defined: the groups of data elements that form logical units or subunits, the personalization conditions, and the relationships between the units. Hera-S also uses a so-called Domain Model (DM) that describes the content and its structure. This DM is not Hera-S specific; any Semantic Web source representation can serve as DM, as long as its content can be queried with the standardized Semantic Web query language SPARQL. The same holds for the User Model (UM). The UM can be used for personalization conditions, but also as a source of user-related content if necessary. In fact, the difference between DM and UM is conceptual, as their implementation within Hydragen is the same. Hera-S also defines a Presentation Model (PM), which defines presentation details of elements such as order and style. To help designers build their Web applications we introduce a toolset, Hera Studio, which allows the different models to be built graphically. Hera Studio also provides additional functionality such as model checking and deployment of the models in Hydragen.

    Both Hera-S and its implementation Hydragen are designed to be flexible regarding the use of models. To achieve this, Hydragen is a stateless engine that queries the models for relevant information at every page request. This allows the models and data to be changed in the datastore at runtime. We show that one way to exploit this flexibility is by applying aspect-orientation to the AM, which allows us to dynamically inject functionality that pervades the entire application. Another way to exploit Hera-S' flexibility is by reusing specialized components, e.g. for presentation generation. We present a configuration of Hydragen in which we replace our native presentation generation functionality with the AMACONT engine. AMACONT provides more extensive multi-level presentation generation and adaptation capabilities, as well as aspect-orientation and a form of semantics-based adaptation.

    Hera-S was designed to allow the (re-)use of any (Semantic) Web data source. It even opens up the possibility of data integration at the back end, by using an extensible storage layer in our database of choice, Sesame. However, even though this is theoretically possible, much of the actual data integration issue remains. As this is a recurring issue in many domains, and a broader challenge than Hera-S design alone, we decided to look at it in isolation. We present a framework called Relco, which provides a language for expressing data transformation operations as well as a collection of techniques that can be used to (semi-)automatically find relationships between concepts in different ontologies. This is done with a combination of syntactic, semantic, and collaboration techniques, which together provide strong clues as to which concepts are most likely related.

    To prove the applicability of Relco we explore five application scenarios in different domains for which data integration is a central aspect. The first is a cultural heritage portal, Explorer, for which data from several data sources was integrated and made available through a map view, a timeline, and a graph view; Explorer also allows users to provide metadata for objects via a tagging mechanism. Another application is SenSee, an electronic TV guide and recommender: TV-guide data was integrated and enriched with semantically structured data from several sources, and recommendations are computed by exploiting the underlying semantic structure. ViTa was a project in which several techniques for tagging and searching educational videos were evaluated, including scenarios in which user tags are related to an ontology, or to other tags, using the Relco framework. The MobiLife project targeted the facilitation of a new generation of mobile applications using context-based personalization; this can be done with a context-based user profiling platform that can also be used for user-model data exchange between mobile applications using technologies like Relco. The final application scenario is from the GRAPPLE project, which targeted the integration of adaptive technology into current learning management systems; a large part of this integration is achieved through a user modeling component framework in which any application can store user model information, and which can also be used for the exchange of user model data.
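
    Since any SPARQL-queryable source can serve as the Domain Model, the kind of query a Hydragen-like engine could issue at page-request time can be sketched in Python with the rdflib library. The tiny Turtle dataset, prefixes, and query below are illustrative assumptions, not the thesis's actual models:

        # Any SPARQL-queryable source can act as DM: here a small in-memory
        # RDF graph is queried for the content elements of one logical unit.

        from rdflib import Graph

        data = """
        @prefix ex: <http://example.org/> .
        ex:painting1 ex:title "Night Watch" ; ex:artist ex:rembrandt .
        ex:rembrandt ex:name "Rembrandt van Rijn" .
        """

        g = Graph()
        g.parse(data=data, format="turtle")

        # Fetch the data elements a logical unit needs for one page request.
        results = g.query("""
            PREFIX ex: <http://example.org/>
            SELECT ?title ?name WHERE {
                ?work ex:title ?title ; ex:artist ?artist .
                ?artist ex:name ?name .
            }
        """)
        for title, name in results:
            print(title, "-", name)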