29 research outputs found

    Building Smart Space Applications with PErvasive Computing in Embedded Systems (PECES) Middleware

    The increasing number of devices invisibly embedded into our surrounding environment, together with the proliferation of wireless communication and sensing technologies, forms the basis for visions such as ambient intelligence and ubiquitous and pervasive computing. The PErvasive Computing in Embedded Systems (PECES) project develops the technological basis to enable the global cooperation of embedded devices residing in different smart spaces in a context-dependent, secure and trustworthy manner. This paper presents the PECES middleware, which combines a flexible context ontology with the ability to dynamically form secure and trustworthy execution environments. It also presents a set of tools that facilitate application development with the PECES middleware.
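
    As a rough illustration of the role-assignment concept described above, the sketch below shows devices advertising context properties and a coordinator granting a role to every device whose context satisfies a role specification. All class and method names are hypothetical and do not reflect the actual PECES middleware API.

        // Hypothetical sketch of context-dependent role assignment in the spirit of
        // the PECES middleware: devices advertise context properties, and roles are
        // assigned to the devices whose context matches a role specification.
        // Every name here is illustrative, not the real PECES API.
        import java.util.*;
        import java.util.function.Predicate;

        public class RoleAssignmentSketch {

            // A device is described by its identifier and a set of context properties.
            record Device(String id, Map<String, String> context) {}

            // A role specification names a role and the context filter a device must satisfy.
            record RoleSpec(String roleName, Predicate<Device> filter) {}

            // Assign every matching role to every device; the result defines the smart space.
            static Map<String, List<String>> assignRoles(List<Device> devices, List<RoleSpec> specs) {
                Map<String, List<String>> assignment = new HashMap<>();
                for (RoleSpec spec : specs) {
                    for (Device d : devices) {
                        if (spec.filter().test(d)) {
                            assignment.computeIfAbsent(spec.roleName(), r -> new ArrayList<>()).add(d.id());
                        }
                    }
                }
                return assignment;
            }

            public static void main(String[] args) {
                List<Device> devices = List.of(
                    new Device("phone-1", Map.of("location", "livingRoom", "hasDisplay", "true")),
                    new Device("sensor-7", Map.of("location", "livingRoom", "senses", "temperature")));

                List<RoleSpec> specs = List.of(
                    new RoleSpec("DisplayNode", d -> "true".equals(d.context().get("hasDisplay"))),
                    new RoleSpec("TemperatureSource", d -> "temperature".equals(d.context().get("senses"))));

                System.out.println(assignRoles(devices, specs));
                // prints e.g. {DisplayNode=[phone-1], TemperatureSource=[sensor-7]}
            }
        }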

    Development tools for context aware and secure pervasive computing in embedded systems middleware

    PhD thesis. The increasing number of devices invisibly embedded into our surrounding environment, together with the proliferation of wireless communication and sensing technologies, forms the basis for visions such as ambient intelligence and ubiquitous and pervasive computing. The PErvasive Computing in Embedded Systems (PECES) project developed the technological basis to enable the global cooperation of embedded devices residing in different smart spaces in a context-dependent, secure and trustworthy manner. The PECES development tools help application developers to build applications with the PECES middleware and to simulate smart space dynamics such as device connections and context changes. To ease the development process, the tools are implemented as Eclipse plugins and integrated into the Eclipse Integrated Development Environment (IDE). They provide a graphical user interface (GUI) to configure, model and test smart space applications based on the PECES middleware. This thesis presents the design, implementation and evaluation of three groups of tools, namely the Configuration Tool (Peces Project, Peces Device Definition, Peces Ontology Instantiation, Peces Security Configuration, Peces Service Definition, Peces Role Specification Definition, Peces Hierarchical Role Specification Definition), the Modelling Tool (Peces Event Editor, Peces Event Diagram) and the Testing Tool, which enable application developers to build, model and test PECES-based smart space applications using novel concepts such as role assignment, context ontologies and security.
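
    To make the testing workflow concrete, the following self-contained sketch mimics the kind of smart space dynamics the tools simulate: a scripted sequence of device-connection and context-change events replayed against an application under test. The event types and listener interface are invented for illustration and are not the actual tool or middleware interfaces.

        // Illustrative sketch of simulating smart space dynamics for testing:
        // a script of connection and context-change events is replayed against
        // the application under test. Names are hypothetical, not the real tools.
        import java.util.List;

        public class SmartSpaceSimulationSketch {

            // The events a developer might script on a timeline.
            interface Event {}
            record DeviceConnected(String deviceId) implements Event {}
            record DeviceDisconnected(String deviceId) implements Event {}
            record ContextChanged(String deviceId, String property, String value) implements Event {}

            // The application under test reacts to replayed events through this callback.
            interface SmartSpaceListener { void onEvent(Event e); }

            static void replay(List<Event> script, SmartSpaceListener app) {
                script.forEach(app::onEvent); // a real tool would also honor timestamps
            }

            public static void main(String[] args) {
                List<Event> script = List.of(
                    new DeviceConnected("phone-1"),
                    new ContextChanged("phone-1", "location", "kitchen"),
                    new DeviceDisconnected("phone-1"));
                replay(script, e -> System.out.println("event: " + e));
            }
        }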

    Context modelling for natural Human Computer Interaction applications in e-health

    The Internet of Things (IoT) is widely accepted in academia and industry as the future direction of the Internet. It will enable people and things to be connected at any time and any place, with anything and anyone. IoT has been proposed for application in many areas such as healthcare, transportation, logistics and smart environments. This thesis, however, focuses on home healthcare, a model with the potential to solve problems that the traditional healthcare model cannot, such as limited medical resources and the increasing demand for care from elderly and chronically ill patients. A remarkable feature of the semantic-oriented vision of IoT is that vast numbers of sensors and devices are involved, which can generate enormous amounts of data, so methods to manage the data, including acquiring, interpreting, processing and storing it, need to be implemented. Beyond this, further capabilities that IoT currently lacks are identified, namely interoperation, context awareness, and security and privacy. Context awareness is an emerging technology for managing and taking advantage of context to enable any type of system to provide personalized services. The aim of this thesis is to explore ways to facilitate context awareness in IoT, and a preliminary study is carried out towards this objective. The most basic premise of context awareness is to collect, model, understand, reason about and make use of context. A complete literature review of existing context modelling and context reasoning techniques is conducted, concluding that ontology-based context modelling and ontology-based context reasoning are the most promising and efficient techniques for managing context. To integrate ontologies into IoT, a specific ontology-based context awareness framework is proposed for IoT applications. The framework is composed of eight components: hardware, a user interface (UI), context modelling, context fusion, context reasoning, a context repository, a security unit and context dissemination. Moreover, on the basis of TOVE (Toronto Virtual Enterprise), a formal ontology development methodology is proposed and illustrated, consisting of four stages: specification and conceptualization, competency formulation, implementation, and validation and documentation. In addition, a home healthcare scenario is elaborated by listing its well-defined functionalities. To represent this specific scenario, the proposed ontology development methodology is applied and the ontology-based model is developed in Protégé, a free and open-source ontology editor. Finally, the accuracy and completeness of the proposed ontology are validated to show that it is able to accurately represent the scenario of interest.
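
    As a small, concrete illustration of ontology-based context modelling, the sketch below builds one home-healthcare context instance as RDF using Apache Jena. The thesis itself develops its ontology in Protégé; Jena, the namespace and the property names used here are assumptions made purely for the example.

        // A minimal sketch of representing a home-healthcare context instance as RDF
        // using Apache Jena. The URIs and property names are illustrative assumptions.
        import org.apache.jena.rdf.model.*;
        import org.apache.jena.vocabulary.RDF;

        public class ContextModelSketch {
            static final String NS = "http://example.org/homecare#"; // hypothetical namespace

            public static void main(String[] args) {
                Model model = ModelFactory.createDefaultModel();
                model.setNsPrefix("hc", NS);

                // Classes and properties a home-healthcare context ontology might define.
                Resource patientClass = model.createResource(NS + "Patient");
                Property hasHeartRate = model.createProperty(NS, "hasHeartRate");
                Property locatedIn    = model.createProperty(NS, "locatedIn");

                // One context snapshot: patient 01 is in the bedroom with a heart rate of 72 bpm.
                model.createResource(NS + "patient01")
                        .addProperty(RDF.type, patientClass)
                        .addLiteral(hasHeartRate, 72)
                        .addProperty(locatedIn, model.createResource(NS + "Bedroom"));

                // Serialize the context so a reasoner or repository component can consume it.
                model.write(System.out, "TURTLE");
            }
        }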

    inContexto: framework to obtain people context using wearable sensors and social network sites

    Doctoral thesis with International Mention. Ambient Intelligence (AmI) technology is developing fast and will enable a new generation of applications in areas such as context awareness, anticipatory behaviour, home security, monitoring, healthcare and video surveillance. AmI environments are equipped with multiple sensors in order to discover people's needs. Such scenarios are characterized by intelligent environments that can inconspicuously recognize the presence of individuals and react to their needs. In these systems, people are conceived as the main actor, always in control and playing multiple roles, and this is perhaps the genuinely new facet of AmI research: it introduces a new dimension, creating synergies between the user and the environment. The AmI paradigm sets out the principles for designing pervasive and transparent infrastructures capable of observing people without prying into their lives while adapting to their needs. Several basic concepts must be considered when retrieving people's context, but the most important one for users is that sensing devices must be unobtrusive. Many technologies are conceived as hand-held or wearable, taking advantage of the intelligence embedded in the environment. Mobile technologies and social network sites (SNS) make it possible to collect people's information anywhere and at any time, and provide users with up-to-date information ready for decision-making. Nevertheless, managing these sensors to collect user context poses several challenges. Besides the limited computational capabilities of mobile devices, mobile systems face specific problems that cannot be solved by traditional knowledge-management methodologies and tools, and thus require creative new solutions. This dissertation proposes a set of techniques, interfaces and algorithms for inferring context information from new kinds of sensors: smartphones and social network sites. The huge potential of both has motivated the design of a framework that can intelligently capture different sensory data in real time. Smartphones can obtain and process physical phenomena from embedded sensors (accelerometer, gyroscope, compass, magnetometer, proximity sensor, light sensor, GPS, etc.), while SNS capture affective ones; this information can subsequently be transmitted to remote locations without any human intervention. The mechanisms proposed here are based on the implementation of a basic framework that transforms raw data into the most descriptive action. To this end, this thesis builds on the inContexto framework, which exploits off-the-shelf sensor-enabled mobile phones and people's presence on SNS to automatically infer their context. The main goals of the architecture are: (i) collection, storage, analysis and sharing of user context information; (ii) plug-and-play support for a wide variety of sensing devices; (iii) privacy preservation for individuals sharing their data; and (iv) easy application development. Furthermore, inContexto allows third-party applications to participate and improve people's context.
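
    The following self-contained sketch illustrates the kind of low-level inference step the framework's mobile side performs: turning a window of raw accelerometer samples into a coarse activity label. On a smartphone the samples would come from the platform's sensor API (e.g., Android's SensorManager); here they are hard-coded, and the threshold is an illustrative assumption rather than a value from the thesis.

        // Sketch of inferring a coarse activity label from 3-axis accelerometer
        // samples (m/s^2) by the variance of the acceleration magnitude.
        // Threshold and data are illustrative assumptions.
        public class ActivityInferenceSketch {

            static String classify(float[][] window) {
                double[] magnitudes = new double[window.length];
                double mean = 0;
                for (int i = 0; i < window.length; i++) {
                    float[] s = window[i];
                    magnitudes[i] = Math.sqrt(s[0] * s[0] + s[1] * s[1] + s[2] * s[2]);
                    mean += magnitudes[i];
                }
                mean /= window.length;

                double variance = 0;
                for (double m : magnitudes) variance += (m - mean) * (m - mean);
                variance /= window.length;

                // Near-constant magnitude means the device is still; otherwise it is moving.
                return variance < 0.5 ? "STILL" : "WALKING"; // hypothetical threshold
            }

            public static void main(String[] args) {
                float[][] still   = {{0.1f, 0.2f, 9.8f}, {0.0f, 0.1f, 9.8f}, {0.1f, 0.1f, 9.7f}};
                float[][] walking = {{1.5f, 0.3f, 11.2f}, {-2.0f, 0.8f, 8.1f}, {3.1f, -0.5f, 12.4f}};
                System.out.println(classify(still));   // STILL
                System.out.println(classify(walking)); // WALKING
            }
        }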

    A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems

    As geographical observational data capture, storage and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance and heterogeneity remain an issue due to the complexity of geo-spatial data and information representation. Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures, and so they often do not fully capture the true meaning of the associated datasets. Furthermore, the data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way. The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health) two categories or levels of domain concepts exist: those that remain stable over a long period of time, and those that are prone to change as the domain knowledge evolves and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years and, with the use of archetypes, have shown how data, information and knowledge interoperability among heterogeneous systems can be achieved. This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches. A detailed review of state-of-the-art SDIs, geo-spatial standards and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium’s (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level modelling method to show how existing historical ocean observing datasets can be expressed semantically and harmonized using two-level modelling. Also, the flexibility of the approach was investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system. This work has demonstrated that two-level modelling can be translated to the geo-spatial domain and then further developed for use within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies and Internet of Things based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios. This has been demonstrated through the re-purposing of selected existing geospatial data models and standards. However, it was found that re-using existing standards requires careful ontological analysis per domain concept, and so caution is recommended in assuming the wider applicability of the approach. While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, it was found that translation to a new domain is complex. The complexity of the approach was found to be a barrier to adoption, especially in commercial projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods and fundamental geo-archetypes has been developed. However, during this work it was not possible to form the required rich community of supporters to fully validate geo-archetypes. Therefore, the findings of this work are not exhaustive, and the archetype models produced are only indicative. The findings can be used as the basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, this work recommends further development and evaluation of the approach, building on the positive results thus far and on the base software artefacts developed to support it.
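
    To make the two-level idea concrete, the sketch below pairs a deliberately generic, stable reference model (level one) with an archetype expressed as data that constrains it for one domain concept (level two). The class names, the sea-surface-temperature archetype and its bounds are hypothetical and are not the geo-archetypes produced in this work.

        // Illustrative sketch of two-level modelling: a stable reference model plus
        // archetype-style constraints defined as data. All names are hypothetical.
        import java.util.*;

        public class TwoLevelModellingSketch {

            // Level one: a deliberately generic, stable observation record.
            record Observation(String observedProperty, double value, String unit) {}

            // Level two: an "archetype" constrains the generic model for one domain concept.
            record Archetype(String observedProperty, String requiredUnit, double min, double max) {
                boolean accepts(Observation o) {
                    return o.observedProperty().equals(observedProperty)
                            && o.unit().equals(requiredUnit)
                            && o.value() >= min && o.value() <= max;
                }
            }

            public static void main(String[] args) {
                // A hypothetical archetype for sea-surface temperature observations.
                Archetype sst = new Archetype("SeaSurfaceTemperature", "degC", -2.0, 40.0);

                Observation ok  = new Observation("SeaSurfaceTemperature", 14.3, "degC");
                Observation bad = new Observation("SeaSurfaceTemperature", 14.3, "degF");

                System.out.println(sst.accepts(ok));  // true
                System.out.println(sst.accepts(bad)); // false: unit violates the archetype
            }
        }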

    Secure Web Services for Ambient Embedded Systems

    The Internet of Things and smart homes have in common that they rely on very small devices which lack the capacity to execute classical security mechanisms and which must also be operated securely by non-experts. This work proposes the security infrastructure DPWSec, which satisfies an extensive list of requirements. It is shown that the core concepts of DPWSec can be transferred to other base technologies, making secure protocol interoperability possible.

    Sensor Fusion for Location Estimation Technologies

    Location estimation performance is not always satisfactory, and improving it can be expensive. Performance can be increased by refining the existing location estimation technologies, but a better way is to use multiple technologies and combine the data they provide in order to obtain better results. Maintaining one's location privacy while using location estimation technology is also a challenge. How can these problems be solved? To make it easier to perform sensor fusion on the available data and to speed up development, a flexible framework centered around a component-based architecture was designed. To test the performance of location estimation using the proposed sensor fusion framework, the framework and all the necessary components were implemented and tested. To address the location privacy issues, a comprehensive design is proposed that considers all aspects of the problem, from the physical aspects of using radio transmissions to communicating and using location data. The experimental results of testing the sensor fusion framework show that sensor fusion always increases the availability of location estimation and increases its accuracy on average. The results also allow the framework's time and energy consumption to be profiled: on average, time consumption splits 0.32% / 17.06% / 5.05% / 77.58% between results overhead, engine overhead, component communication time and component execution time. The more measurements the data-gathering components collect, the more the component execution time increases relative to the other times, because it is the only one that increases while the others remain constant.
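
    As an illustration of the fusion step such a component-based framework might perform, the sketch below combines 2D position estimates from several technologies by inverse-variance weighting, so that more precise sources dominate the result. The technique, source names and numbers are assumptions for the example and are not taken from the paper.

        // Sketch of fusing 2D position estimates from multiple location technologies
        // by inverse-variance weighting. Sources and values are illustrative.
        import java.util.List;

        public class PositionFusionSketch {

            // One estimate from one location technology, with its estimated variance (m^2).
            record Estimate(String source, double x, double y, double variance) {}

            // Weight each estimate by 1/variance so that more precise sources dominate.
            static double[] fuse(List<Estimate> estimates) {
                double wx = 0, wy = 0, wSum = 0;
                for (Estimate e : estimates) {
                    double w = 1.0 / e.variance();
                    wx += w * e.x();
                    wy += w * e.y();
                    wSum += w;
                }
                return new double[] { wx / wSum, wy / wSum };
            }

            public static void main(String[] args) {
                List<Estimate> estimates = List.of(
                    new Estimate("wifi", 12.0, 4.5, 9.0),   // coarse
                    new Estimate("ble",  11.2, 5.1, 1.0));  // finer
                double[] fused = fuse(estimates);
                System.out.printf("fused position: (%.2f, %.2f)%n", fused[0], fused[1]);
            }
        }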

    A linguistic approach to concurrent, distributed, and adaptive programming across heterogeneous platforms

    Two major trends in computing hardware during the last decade have been an increase in the number of processing cores found in individual computer hardware platforms and the ubiquity of distributed, heterogeneous systems. Together, these changes can improve not only the performance of a range of applications but also the types of applications that can be created. Despite the advances in hardware technology, advances in programming such systems have not kept pace. Traditional concurrent programming has always been challenging, and it is only set to become more so as the level of hardware concurrency increases. The different hardware platforms that make up heterogeneous systems come with domain-specific programming models which are not designed to interact or to take into account the different resource constraints present across different hardware devices, motivating a need for runtime reconfiguration or adaptation. This dissertation investigates the actor model of computation as an appropriate abstraction to address the issues present in programming concurrent, distributed and adaptive applications across different scales and types of computing hardware. Given the limitations of other approaches, this dissertation describes a new actor-based programming language (Ensemble) and its runtime to address these challenges. The goal of the language is to enable non-specialist programmers to take advantage of parallel, distributed and adaptive programming without requiring in-depth knowledge of hardware architectures or software frameworks. The design and implementation of the runtime system, which executes Ensemble applications across a range of heterogeneous platforms, are also described. To show the suitability of the actor-based abstraction for creating applications for such systems, the language and runtime were evaluated in terms of linguistic complexity and performance. These evaluations covered programming embedded, concurrent, distributed and adaptable applications, as well as combinations thereof. The results show that the actor model provides an objectively simple way to program such systems without sacrificing performance.
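
    Ensemble's own syntax is not shown in the abstract, so the sketch below illustrates the underlying actor abstraction in plain Java: an actor owns a private mailbox, processes one message at a time, and shares no mutable state with other actors. It is a generic illustration, not Ensemble code or its runtime.

        // Minimal actor sketch: a mailbox drained by a single thread, so the
        // actor's private state is never accessed concurrently.
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class ActorSketch {

            static class CounterActor implements Runnable {
                private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
                private int count = 0; // private state, never touched by other threads

                void send(String message) { mailbox.add(message); }

                @Override
                public void run() {
                    try {
                        while (true) {
                            String msg = mailbox.take();     // block until a message arrives
                            if (msg.equals("stop")) break;
                            count++;
                            System.out.println("received " + msg + ", count=" + count);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }

            public static void main(String[] args) throws InterruptedException {
                CounterActor actor = new CounterActor();
                Thread t = new Thread(actor);
                t.start();
                actor.send("ping");
                actor.send("ping");
                actor.send("stop");
                t.join();
            }
        }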

    Correct-by-Construction Development of Dynamic Topology Control Algorithms

    Wireless devices influence our everyday lives today and will do so even more in the future. A wireless sensor network (WSN) consists of dozens to hundreds of small, cheap, battery-powered, resource-constrained sensor devices (motes) that cooperate to serve a common purpose. These networks are applied in safety- and security-critical areas (e.g., e-health, intrusion detection). The topology of such a system is an attributed graph consisting of nodes representing the devices and edges representing the communication links between devices. Topology control (TC) improves the energy consumption behavior of a WSN by blocking costly links, which allows a mote to reduce its transmission power. A TC algorithm must fulfill important consistency properties (e.g., that the resulting topology is connected). The traditional development process for TC algorithms only considers consistency properties during the initial specification phase; the actual implementation is carried out manually, which is error-prone and time-consuming. Thus, it is difficult to verify that the implementation fulfills the required consistency properties, and the problem becomes even more severe if the development process is iterative. Additionally, many TC algorithms are batch algorithms, which process the entire topology irrespective of the extent of the topology modifications since the last execution. Therefore, dynamic TC, which reacts to change events of the topology, is desirable. In this thesis, we propose a model-driven correct-by-construction methodology for developing dynamic TC algorithms. We model local consistency properties using graph constraints and global consistency properties using second-order logic. Graph transformation rules capture the different types of topology modifications. To specify the control flow of a TC algorithm, we employ the programmed graph transformation language story-driven modeling. We presume that the local consistency properties jointly imply the global consistency properties. We ensure the fulfillment of the local consistency properties by synthesizing weakest preconditions for each rule. The synthesized preconditions prohibit the application of a rule if and only if the application would lead to a violation of a consistency property. Still, this restriction is infeasible for topology modifications that need to be executed in any case. Therefore, as a major contribution of this thesis, we propose the anticipation loop synthesis algorithm, which transforms the synthesized preconditions into routines that anticipate all violations of these preconditions. This algorithm also enables the correct-by-construction runtime reconfiguration of adaptive WSNs. We provide tooling for both common evaluation steps: Cobolt allows the specified TC algorithms to be evaluated rapidly using the network simulator Simonstrator, and cMoflon generates embedded C code for hardware testbeds that build on the sensor operating system Contiki.
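
    The sketch below illustrates a consistency-preserving TC step in plain Java: the costliest link is blocked only if the "topology remains connected" property still holds afterwards (checking after the fact and rolling back stands in for the synthesized weakest precondition). The graph representation and cost rule are illustrative; this is not the code generated by Cobolt or cMoflon.

        // Sketch of a topology control step that blocks the costliest link only if
        // the topology stays connected. Assumes a symmetric adjacency map.
        import java.util.*;

        public class TopologyControlSketch {

            // Undirected weighted topology: node -> (neighbor -> link cost).
            static boolean isConnected(Map<String, Map<String, Double>> g) {
                if (g.isEmpty()) return true;
                Set<String> seen = new HashSet<>();
                Deque<String> stack = new ArrayDeque<>();
                String start = g.keySet().iterator().next();
                stack.push(start);
                seen.add(start);
                while (!stack.isEmpty()) {
                    for (String n : g.get(stack.pop()).keySet()) {
                        if (seen.add(n)) stack.push(n);
                    }
                }
                return seen.size() == g.size();
            }

            // Block (remove) the single most expensive link, but only if the
            // "topology remains connected" consistency property still holds afterwards.
            static void blockCostliestLinkIfSafe(Map<String, Map<String, Double>> g) {
                String a = null, b = null;
                double worst = -1;
                for (var e1 : g.entrySet())
                    for (var e2 : e1.getValue().entrySet())
                        if (e2.getValue() > worst) { worst = e2.getValue(); a = e1.getKey(); b = e2.getKey(); }
                if (a == null) return;

                g.get(a).remove(b);                       // tentatively apply the rule
                g.get(b).remove(a);
                if (!isConnected(g)) {                    // consistency violated: roll back
                    g.get(a).put(b, worst);
                    g.get(b).put(a, worst);
                }
            }

            public static void main(String[] args) {
                Map<String, Map<String, Double>> g = new HashMap<>();
                g.put("A", new HashMap<>(Map.of("B", 1.0, "C", 5.0)));
                g.put("B", new HashMap<>(Map.of("A", 1.0, "C", 2.0)));
                g.put("C", new HashMap<>(Map.of("A", 5.0, "B", 2.0)));
                blockCostliestLinkIfSafe(g); // removes A-C (cost 5); A-B-C keeps the graph connected
                System.out.println(g);
            }
        }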