
    Smart technologies for effective reconfiguration: the FASTER approach

    Current and future computing systems increasingly require that their functionality stay flexible after the system is operational, in order to cope with changing user requirements and improvements in system features, e.g. changing protocols and data-coding standards, evolving demands for support of different user applications, and newly emerging applications in communication, computing and consumer electronics. Extending the functionality and the lifetime of products therefore requires adding new functionality to track and satisfy customers' needs and market and technology trends. Many contemporary products incorporate hardware accelerators alongside software for reasons of performance and power efficiency. While adapting software is straightforward, adapting hardware to changing requirements is a challenging problem that requires delicate solutions. The FASTER (Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration) project aims to introduce a complete methodology that allows designers to easily implement a system specification on a platform comprising a general-purpose processor combined with multiple accelerators running on an FPGA, taking a high-level description as input and fully exploiting, both at design time and at run time, the capabilities of partial dynamic reconfiguration. The goal is that, for selected application domains, the FASTER toolchain will reduce the design and verification time of complex reconfigurable systems while providing novel verification features not available in existing tool flows.
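    At run time, such a platform needs a manager that decides which accelerator occupies which reconfigurable region and triggers partial reconfiguration only when necessary. The sketch below illustrates that scheduling idea only; it is not the FASTER toolchain, and every name in it (Region, RuntimeManager, the task names) is hypothetical.

```python
# Illustrative sketch (not the FASTER toolchain): a run-time manager that
# decides which accelerator occupies which reconfigurable FPGA region.
from dataclasses import dataclass

@dataclass
class Region:
    """One partially reconfigurable region of the FPGA."""
    name: str
    loaded: str | None = None  # accelerator currently loaded, if any

    def load(self, accelerator: str) -> None:
        # A real flow would program a partial bitstream into the device here;
        # this sketch only models the run-time manager's bookkeeping.
        print(f"[{self.name}] reconfigure: {self.loaded} -> {accelerator}")
        self.loaded = accelerator

class RuntimeManager:
    """Maps incoming tasks to regions, reconfiguring only when needed."""

    def __init__(self, regions: list[Region]):
        self.regions = regions

    def dispatch(self, task: str) -> Region:
        # Reuse a region that already hosts the right accelerator...
        for region in self.regions:
            if region.loaded == task:
                return region
        # ...otherwise reconfigure an empty region, or evict the first one.
        victim = next((r for r in self.regions if r.loaded is None),
                      self.regions[0])
        victim.load(task)
        return victim

mgr = RuntimeManager([Region("pr0"), Region("pr1")])
for task in ["fft", "aes", "fft", "crc"]:
    mgr.dispatch(task)
```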

    Challenges in Bridging Social Semantics and Formal Semantics on the Web

    This paper describes several results of Wimmics, a research lab whose name stands for web-instrumented man-machine interactions, communities, and semantics. The approaches introduced here rely on graph-oriented knowledge representation, reasoning and operationalization to model and support actors, actions and interactions in web-based epistemic communities. The research results are applied to support and foster interactions in online communities and to manage their resources.
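    As a toy illustration of graph-oriented representation of actors and interactions (not taken from the paper; the tiny vocabulary below is invented), a graph of one reply interaction can be built and queried with the rdflib library:

```python
# Toy illustration of graph-based modelling of community interactions.
# The vocabulary (ex:replies_to, ex:author) is invented for this sketch.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/community#")
g = Graph()
g.bind("ex", EX)

# Two actors and one interaction: bob replies to a post by alice.
g.add((EX.post1, EX.author, EX.alice))
g.add((EX.post2, EX.author, EX.bob))
g.add((EX.post2, EX.replies_to, EX.post1))

# Who interacted with alice? Follow replies_to back to the original author.
query = """
SELECT ?who WHERE {
    ?reply ex:replies_to ?post .
    ?reply ex:author ?who .
    ?post  ex:author ex:alice .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.who)  # -> http://example.org/community#bob
```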

    Mapping for the Masses: Accessing Web 2.0 through Crowdsourcing

    The authors describe how they are harnessing the power of Web 2.0 technologies to create new approaches to collecting, mapping, and sharing geocoded data. They begin with GMapCreator, which lets users fashion new maps using Google Maps as a base. They then describe MapTube, which enables users to archive maps, and demonstrate how it can be used in a variety of contexts to share map information, to put existing maps into a shareable form, and to create new maps from the bottom up using a combination of crowdcasting, crowdsourcing, and traditional broadcasting. They conclude by arguing that such tools are helping to define a neogeography that is essentially "mapping for the masses", while noting that there are many issues of quality, accuracy, copyright, and trust that will influence the impact of these tools on map-based communication.
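    Underneath tools of this kind, a crowdsourced map layer typically amounts to collecting user-submitted geocoded records and publishing them in a format any web map can overlay. A minimal sketch of that pattern follows, with invented data and standard GeoJSON rather than the GMapCreator or MapTube APIs:

```python
# Minimal sketch of the crowdsourcing pattern: collect user-submitted
# geocoded points and publish them as a GeoJSON layer that any web map
# (Google Maps, Leaflet, ...) can overlay. The data below is invented.
import json

submissions = [
    {"user": "anna", "lat": 51.5226, "lon": -0.1340, "label": "Euston Road"},
    {"user": "ben",  "lat": 51.5007, "lon": -0.1246, "label": "Westminster"},
]

layer = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [s["lon"], s["lat"]]},  # GeoJSON order is lon, lat
            "properties": {"label": s["label"], "submitted_by": s["user"]},
        }
        for s in submissions
    ],
}

with open("layer.geojson", "w") as f:
    json.dump(layer, f, indent=2)
```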

    Evolving Objects in Temporal Information Systems

    This paper presents a semantic foundation for temporal conceptual models used to design temporal information systems. We consider a modelling language able to express both timestamping and evolution constraints. We conduct a deeper investigation of evolution constraints, eventually devising a model-theoretic semantics for a full-fledged model with both timestamping and evolution constraints. The proposed formalization is meant both to clarify the meaning of the various temporal constructors that have appeared in the literature and to give a rigorous definition, in the context of temporal information systems, to notions like satisfiability, subsumption and logical implication. Furthermore, we show how to express temporal constraints using a subset of first-order temporal logic, namely DLR_US, the description logic DLR extended with the temporal operators Since and Until. We show how DLR_US is able to capture the various modelling constraints in a succinct way and to perform automated reasoning on temporal conceptual models.
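    For reference, a standard (strict) semantics for the two temporal operators over a linear flow of time reads as follows; this is the textbook definition rather than a quotation from the paper:

```latex
% Standard (strict) semantics of Until and Since over a linear flow (T,<);
% M,t |= phi means formula phi holds in model M at instant t.
\begin{align*}
\mathcal{M},t \models \varphi\,\mathcal{U}\,\psi
  &\iff \exists t' > t \,\big(\mathcal{M},t' \models \psi \;\wedge\;
        \forall t''\,(t < t'' < t' \rightarrow \mathcal{M},t'' \models \varphi)\big)\\
\mathcal{M},t \models \varphi\,\mathcal{S}\,\psi
  &\iff \exists t' < t \,\big(\mathcal{M},t' \models \psi \;\wedge\;
        \forall t''\,(t' < t'' < t \rightarrow \mathcal{M},t'' \models \varphi)\big)
\end{align*}
```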

    Model Creation and Equivalence Proofs of Cellular Automata and Artificial Neural Networks

    Computational methods and mathematical models have invaded arguably every scientific discipline, forming a field of research of its own called computational science. Mathematical models are the theoretical foundation of computational science. Since Newton's time, differential equations in mathematical models have been widely and successfully used to describe the macroscopic or global behaviour of systems. With spatially inhomogeneous, time-varying, local, element-specific, and often non-linear interactions, the dynamics of complex systems is, in contrast, more efficiently described by local rules and thus in an algorithmic and local or microscopic manner. A theory of mathematical modelling that takes these characteristics of complex systems into account has yet to be established. We recently presented a so-called allagmatic method, including a system metamodel, to provide a framework for describing, modelling, simulating, and interpreting complex systems. Implementations of cellular automata and artificial neural networks were described and created with that method. Guidance from philosophy was helpful in these first studies, which focused on programming and feasibility; a rigorous mathematical formalism, however, is still missing. Such a formalism would not only describe and define the system metamodel more precisely, it would also generalise it further, thereby extending its reach to formal treatment in applied mathematics and theoretical aspects of computational science, as well as its applicability to other mathematical and computational models such as agent-based models. Here, a mathematical definition of the system metamodel is provided. Based on the presented formalism, model creation and equivalence of cellular automata and artificial neural networks are proved. The paper thus provides a formal approach for studying the creation of mathematical models as well as their structural and operational comparison.
    Comment: 13 pages, 1 table
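    To make the contrast between global equations and local rules concrete, the following minimal elementary cellular automaton (Rule 110) shows dynamics generated purely by a local update rule; it is a generic illustration, not code from the paper's system metamodel:

```python
# Generic illustration of dynamics defined by local rules (not code from
# the paper): an elementary cellular automaton. Each cell's next state
# depends only on its own state and those of its two neighbours.

RULE = 110  # Wolfram rule number encoding the 8-entry local update table

def step(cells):
    n = len(cells)
    nxt = []
    for i in range(n):
        # Read the 3-cell neighbourhood (periodic boundary) as a number 0..7.
        nbhd = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # Bit `nbhd` of RULE gives the cell's next state.
        nxt.append((RULE >> nbhd) & 1)
    return nxt

cells = [0] * 31 + [1] + [0] * 31  # single live cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```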

    Ontology-based composition and matching for dynamic cloud service coordination

    Recent cross-organisational software service offerings, such as cloud computing, create higher integration needs. In particular, as services are combined through brokers and mediators, solutions are required that allow individual services to collaborate and their interactions to be coordinated. The need for dynamic management, caused by cloud and on-demand environments, can be addressed through service coordination based on ontology-based composition and matching techniques. Our solution to composition and matching utilises a service coordination space that acts as a passive infrastructure for collaboration, where users submit requests that are then selected and taken on by providers. We discuss the information models and the coordination principles of such a collaboration environment in terms of an ontology and its underlying description logics. We provide ontology-based solutions for structural composition of descriptions and for matching between requested and provided services.
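    Very roughly, the matching step checks that a provider's capabilities cover what a request asks for. The sketch below caricatures this with plain set inclusion standing in for description-logic subsumption and a list standing in for the coordination space; all names and capabilities are invented:

```python
# Deliberately simplified sketch of request/offer matching: set inclusion
# stands in for description-logic subsumption, and a tuple-space style
# list stands in for the coordination space. All names are invented.

coordination_space = []  # published requests, awaiting a provider

def submit_request(capabilities: frozenset):
    coordination_space.append(capabilities)

def take_matching(provided: frozenset):
    """A provider selects every request whose needs it can cover."""
    taken = [r for r in coordination_space if r <= provided]
    for r in taken:
        coordination_space.remove(r)
    return taken

submit_request(frozenset({"store", "encrypt"}))
submit_request(frozenset({"transcode"}))

# This provider offers storage and encryption, so it takes the first request.
print(take_matching(frozenset({"store", "encrypt", "replicate"})))
```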

    A method for rigorous design of reconfigurable systems

    Reconfigurability, understood as the ability of a system to behave differently in different modes of operation and to switch between them over its lifetime, is a cross-cutting concern in modern software engineering. This paper introduces a specification method for reconfigurable software based on a global transition structure, which captures the system's reconfiguration space, and a local specification of each operation mode in whatever logic (equational, first-order, partial, fuzzy, probabilistic, etc.) is expressive enough for its requirements. In the method these two levels are not only made explicit and juxtaposed, but formally interrelated. The key to achieving this is a systematic process of hybridisation of logics, through which the relationship between the local and global levels of a specification becomes internalised in the logic itself.

    This work is financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation – COMPETE 2020 Programme and by National Funds through the Portuguese funding agency, FCT – Fundação para a Ciência e a Tecnologia, within projects POCI-01-0145-FEDER-016692 and UID/MAT/04106/2013. The first author is further supported by the BPD FCT Grant SFRH/BPD/103004/2014, and R. Neves is sponsored by FCT Grant SFRH/BD/52234/2013. M.A. Martins is also funded by the EU FP7 Marie Curie PIRSES-GA-2012-318986 project GeTFun: Generalizing Truth-Functionality.
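    The two-level idea (a global transition structure over modes, plus a local specification attached to each mode) can be caricatured in a few lines; the sketch below illustrates the idea only, not the paper's hybridisation machinery, and all names in it are invented:

```python
# Caricature of a two-level specification (not the paper's hybridisation
# machinery): a global transition structure over modes, and a local
# specification attached to each mode. All names are invented.

transitions = {("standard", "fault"): "safe",   # the reconfiguration space
               ("safe", "reset"): "standard"}

# Local specifications: each mode constrains behaviour in its own terms.
local_spec = {
    "standard": lambda speed: 0 <= speed <= 120,
    "safe":     lambda speed: 0 <= speed <= 30,
}

def run(mode, events, speeds):
    for event, speed in zip(events, speeds):
        assert local_spec[mode](speed), f"local spec of {mode} violated"
        mode = transitions.get((mode, event), mode)  # the global level
    return mode

print(run("standard", ["fault", "reset"], [100, 20]))  # -> "standard"
```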

    Verification of Agent-Based Artifact Systems

    Artifact systems are a novel paradigm for specifying and implementing business processes, described in terms of interacting modules called artifacts. Artifacts consist of data and lifecycles, accounting respectively for the relational structure of the artifacts' states and their possible evolutions over time. In this paper we put forward artifact-centric multi-agent systems, a novel formalisation of artifact systems in the context of the multi-agent systems operating on them. Unlike the usual process-based models of services, the semantics we give explicitly accounts for the data structures on which artifact systems are defined. We study the model checking problem for artifact-centric multi-agent systems against specifications written in a quantified version of temporal-epistemic logic expressing the knowledge of the agents in the exchange. We begin by noting that the problem is undecidable in general. We then identify two noteworthy restrictions, one syntactic and one semantic, that enable us to find bisimilar finite abstractions and therefore reduce the model checking problem to its instance on finite models. Under these assumptions we show that the model checking problem for these systems is EXPSPACE-complete. We then introduce artifact-centric programs, compact and declarative representations of the programs governing both the artifact system and the agents. We show that, while these in principle generate infinite-state systems, under natural conditions their verification problem can be solved on finite abstractions that can be effectively computed from the programs. Finally, we exemplify the theoretical results of the paper through a mainstream procurement scenario from the artifact systems literature.
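    Once a bisimilar finite abstraction exists, verification reduces to an explicit-state check over finitely many states. The sketch below illustrates that general idea with a breadth-first reachability check; it is unrelated to the paper's abstraction construction, and the toy lifecycle in it is invented:

```python
# Generic explicit-state check over a finite abstraction (an illustration
# of the general idea, not the paper's construction). The states and the
# transition relation below are invented for the example.
from collections import deque

def reachable_violation(initial, successors, bad):
    """BFS over a finite state space; return a bad state if one is reachable."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if bad(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Toy procurement-like lifecycle: an order moves through abstract stages.
succ = {"created": ["approved", "rejected"],
        "approved": ["shipped"],
        "rejected": [],
        "shipped": []}
print(reachable_violation("created", lambda s: succ[s],
                          bad=lambda s: s == "rejected"))  # -> "rejected"
```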

    Robotic ubiquitous cognitive ecology for smart homes

    Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision they require in real-world applications. The RUBICON project develops learning solutions that yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and proactively assist users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organising the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned in adopting such an approach and outlines promising directions for future work.
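    As a generic picture of feedback-driven incremental adaptation (not RUBICON's actual learning architecture), the sketch below keeps a running score per candidate action and gradually prefers whatever the user's feedback rewards; all names and the simulated feedback are invented:

```python
# Generic sketch of feedback-driven incremental adaptation (not RUBICON's
# learning architecture): keep a running score per way of serving a
# request and gradually prefer whatever the user rates well.
import random

scores = {"ceiling_light": 0.0, "desk_lamp": 0.0}  # candidate actuators
ALPHA = 0.2      # learning rate for the incremental update
EPSILON = 0.1    # occasional exploration of the alternatives

def choose():
    if random.random() < EPSILON:
        return random.choice(list(scores))
    return max(scores, key=scores.get)

def update(action, feedback):  # feedback in [-1, 1], e.g. from the user
    scores[action] += ALPHA * (feedback - scores[action])

for _ in range(50):
    action = choose()
    feedback = 1.0 if action == "desk_lamp" else -0.5  # simulated user
    update(action, feedback)

print(max(scores, key=scores.get))  # -> "desk_lamp" after adaptation
```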