
    Building Smart Space Applications with PErvasive Computing in Embedded Systems (PECES) Middleware

    The increasing number of devices invisibly embedded into our surrounding environment, together with the proliferation of wireless communication and sensing technologies, forms the basis for visions such as ambient intelligence and ubiquitous and pervasive computing. The PErvasive Computing in Embedded Systems (PECES) project develops the technological basis to enable the global cooperation of embedded devices residing in different smart spaces in a context-dependent, secure and trustworthy manner. This paper presents the PECES middleware, which combines a flexible context ontology with a middleware capable of dynamically forming secure and trustworthy execution environments. The paper also presents a set of tools that facilitate application development with the PECES middleware.
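    The context-dependent formation of execution environments described above can be pictured as filtering a device population by a context predicate. The following is a hypothetical sketch only, with invented names and fields; it is not the actual PECES API.

    ```python
    def form_smart_space(devices, predicate):
        """Return the ids of devices whose context satisfies the predicate."""
        return {d["id"] for d in devices if predicate(d["context"])}

    devices = [
        {"id": "sensor-1", "context": {"room": "kitchen", "trusted": True}},
        {"id": "display-2", "context": {"room": "kitchen", "trusted": False}},
        {"id": "sensor-3", "context": {"room": "garage", "trusted": True}},
    ]

    # Context-dependent grouping: only trusted devices in the kitchen join.
    space = form_smart_space(devices, lambda c: c["room"] == "kitchen" and c["trusted"])
    print(space)  # {'sensor-1'}
    ```

    In a real middleware the predicate would be evaluated against an ontology-backed context model and combined with security checks, but the grouping step itself reduces to this kind of selection.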

    Visual Development Environment for Semantically Interoperable Smart Cities Applications

    This paper presents an IoT architecture for the semantic interoperability of diverse IoT systems and applications in smart cities. The architecture virtualizes diverse IoT systems and ensures their modelling and representation according to common standards-based IoT ontologies. Furthermore, based on this architecture, the paper introduces a first-of-a-kind visual development environment which eases the development of semantically interoperable applications in smart cities. The development environment comes with a range of visual tools, which enable the assembly of non-trivial data-driven applications in smart cities, including applications that leverage data streams from diverse IoT systems. Moreover, these tools allow developers to leverage the functionalities and building blocks of the presented architecture. Overall, the introduced visual environment advances the state of the art in IoT development for smart cities towards semantic interoperability for data-driven applications.
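    The core of the semantic-interoperability step is mapping vendor-specific readings onto one shared vocabulary so applications can consume them uniformly. The sketch below illustrates that idea only; all field names and schemas are invented, and a real system would use standards-based IoT ontologies rather than dictionaries.

    ```python
    def normalise(reading, mapping):
        """Rename vendor-specific fields to the shared schema's terms."""
        return {common: reading[vendor] for common, vendor in mapping.items()}

    # Two hypothetical IoT systems reporting the same physical quantity
    # under different field names.
    vendor_a = {"tmp_c": 21.5, "dev": "A-17"}
    vendor_b = {"temperature": 22.0, "sensor_id": "B-03"}

    schema_a = {"temperature_celsius": "tmp_c", "device_id": "dev"}
    schema_b = {"temperature_celsius": "temperature", "device_id": "sensor_id"}

    readings = [normalise(vendor_a, schema_a), normalise(vendor_b, schema_b)]
    print(sorted(r["device_id"] for r in readings))  # ['A-17', 'B-03']
    ```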

    The SEMAINE API: Towards a Standards-Based Framework for Building Emotion-Oriented Systems

    This paper presents the SEMAINE API, an open source framework for building emotion-oriented systems. By encouraging and simplifying the use of standard representation formats, the framework aims to contribute to interoperability and reuse of system components in the research community. By providing a Java and C++ wrapper around a message-oriented middleware, the API makes it easy to integrate components running on different operating systems and written in different programming languages. The SEMAINE system 1.0 is presented as an example of a full-scale system built on top of the SEMAINE API. Three small example systems are described in detail to illustrate how integration between existing and new components is realised with minimal effort.
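    The integration style described above, components that exchange messages over topics of a message-oriented middleware, can be sketched with a toy in-memory broker. This is an illustration of the pattern only, not the SEMAINE API itself (which wraps a real middleware and uses Java/C++); topic names and the transform are invented.

    ```python
    from collections import defaultdict

    class MessageBus:
        """In-memory stand-in for a message-oriented middleware topic broker."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self._subscribers[topic].append(callback)

        def publish(self, topic, message):
            for callback in self._subscribers[topic]:
                callback(message)

    class Component:
        """Consumes messages on one topic and emits results on another."""
        def __init__(self, bus, in_topic, out_topic, transform):
            self.bus = bus
            self.out_topic = out_topic
            self.transform = transform
            bus.subscribe(in_topic, self.on_message)

    def on_message(self, message):
            self.bus.publish(self.out_topic, self.transform(message))

    Component.on_message = on_message  # bind handler to the class

    # Wire a two-stage pipeline: analysis component feeds a state topic.
    bus = MessageBus()
    received = []
    Component(bus, "semaine.data.analysis", "semaine.data.state", lambda m: m.upper())
    bus.subscribe("semaine.data.state", received.append)
    bus.publish("semaine.data.analysis", "user-smile-detected")
    print(received)  # ['USER-SMILE-DETECTED']
    ```

    Because components only share topic names and message formats, they can be swapped or reimplemented in another language without touching the rest of the pipeline, which is the interoperability argument the paper makes.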

    Gaze, Posture and Gesture Recognition to Minimize Focus Shifts for Intelligent Operating Rooms in a Collaborative Support System

    This paper describes the design of intelligent, collaborative operating rooms based on highly intuitive, natural and multimodal interaction. Intelligent operating rooms minimize the surgeon’s focus shifts by minimizing both the focus spatial offset (distance moved by the surgeon’s head or gaze to the new target) and the movement spatial offset (distance the surgeon covers physically). These spatio-temporal measures have an impact on the surgeon’s performance in the operating room. I describe how machine vision techniques are used to extract spatio-temporal measures and to interact with the system, and how computer graphics techniques can be used to display visual medical information effectively and rapidly. Design considerations are discussed and examples showing the feasibility of the different approaches are presented.
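    The two offset measures defined above reduce to straightforward distance computations. A minimal sketch, assuming 2D positions in arbitrary units (the paper's actual measures may be angular for gaze and three-dimensional):

    ```python
    import math

    def focus_spatial_offset(gaze_from, gaze_to):
        """Distance the surgeon's head or gaze travels to the new target."""
        return math.dist(gaze_from, gaze_to)

    def movement_spatial_offset(path):
        """Total distance the surgeon covers physically along a path."""
        return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

    # Example: gaze shifts from the patient at (0, 0) to a monitor at (3, 4),
    # while the surgeon walks there via an intermediate point.
    print(focus_spatial_offset((0, 0), (3, 4)))               # 5.0
    print(movement_spatial_offset([(0, 0), (3, 0), (3, 4)]))  # 7.0
    ```

    Minimizing both quantities is what motivates placing displays and controls where gaze and body already are, rather than forcing the surgeon to turn away from the operating field.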

    Kompensation positionsbezogener Artefakte in Aktivitätserkennung (Compensation of Placement-Related Artifacts in Activity Recognition)

    This thesis investigates how placement variations of electronic devices influence the possibility of using sensors integrated in those devices for context recognition. The vast majority of context recognition research assumes well-defined, fixed sensor locations. Although this might be acceptable for some application domains (e.g. in an industrial setting), users in general will have a hard time coping with these limitations. If one needs to remember to carry dedicated sensors and to adjust their orientation from time to time, the activity recognition system is more distracting than helpful. How can we deal with device location and orientation changes to make context sensing mainstream? This thesis presents a systematic evaluation of device placement effects in context recognition. We first deal with detecting whether a device is carried on the body or placed somewhere in the environment. If the device is placed on the body, it is useful to know on which body part. We also address how to deal with sensors changing their position and orientation during use. For each of these topics some highlights are given in the following.

    Regarding environmental placement, we introduce an active sampling approach to infer symbolic object location. This approach requires only simple sensors (acceleration, sound) and no infrastructure setup. The method works for specific placements such as "on the couch" or "in the desk drawer" as well as for general location classes such as "closed wood compartment" or "open iron surface". In the experimental evaluation we reach a recognition accuracy of 90% and above over a total of more than 1200 measurements from 35 specific locations (taken from 3 different rooms) and 12 abstract location classes.

    To derive the coarse device placement on the body, we present a method based solely on rotation and acceleration signals from the device; it works independently of the device orientation. The on-body placement recognition rate is around 80% over 4 min of unconstrained motion data in the worst scenario and up to 90% over a 2 min interval in the best scenario. We use over 30 hours of motion data for the analysis.

    Two special issues of device placement are orientation and displacement. This thesis proposes a set of heuristics that significantly increase the robustness of motion-sensor-based activity recognition with respect to sensor displacement. We show how, within certain limits and with modest quality degradation, motion-sensor-based activity recognition can be implemented in a displacement-tolerant way. We evaluate our heuristics first on a set of synthetic lower-arm motions, which are well suited to illustrate the strengths and limits of our approach, then on an extended modes-of-locomotion problem (sensors on the upper leg), and finally on a set of exercises performed on various gym machines (sensors placed on the lower arm). In this example our heuristics raise the recognition rate for a displaced accelerometer from 24% to 82%; the same accelerometer achieved 96% recognition when not displaced.
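    The abstract does not spell out the heuristics, but a standard building block for orientation tolerance, which the thesis's orientation-independent method also relies on, is working with the acceleration magnitude: the norm of the acceleration vector is unchanged by any rotation of the sensor. A minimal sketch of such rotation-invariant windowed features (the window data is invented):

    ```python
    import math

    def magnitude_series(samples):
        """|a| per sample; the norm is invariant under sensor rotation."""
        return [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]

    def features(samples):
        """Mean and variance of |a| over a window of (x, y, z) samples."""
        mags = magnitude_series(samples)
        mean = sum(mags) / len(mags)
        var = sum((m - mean) ** 2 for m in mags) / len(mags)
        return mean, var

    # An axis permutation stands in for a re-oriented sensor: the raw axes
    # change completely, but the magnitude features do not.
    window = [(0.0, 0.0, 9.8), (1.0, 0.0, 9.7), (0.0, 2.0, 9.5)]
    rotated = [(z, x, y) for (x, y, z) in window]
    print(features(window) == features(rotated))  # True
    ```

    Features like these degrade gracefully when the sensor is rotated or slightly displaced, which is the kind of robustness the reported 24% to 82% improvement targets.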

    The SEMAINE API: a component integration framework for a naturally interacting and emotionally competent embodied conversational agent

    The present thesis addresses the topic area of Embodied Conversational Agents (ECAs) with capabilities for natural interaction with a human user and emotional competence with respect to the perception and generation of emotional expressivity. The focus is on the technological underpinnings that facilitate the implementation of a real-time system with these capabilities, built from re-usable components. The thesis comprises three main contributions. First, it describes a new component integration framework, the SEMAINE API, which makes it easy to build emotion-oriented systems from components which interact with one another using standard and pre-standard XML representations. Second, it presents a prepare-and-trigger system architecture which substantially speeds up the time to animation for system utterances that can be pre-planned. Third, it reports on the W3C Emotion Markup Language, an upcoming web standard for representing emotions in technological systems. We assess critical aspects of system performance, showing that the framework provides a good basis for implementing real-time interactive ECA systems, and illustrate by means of three examples that the SEMAINE API makes it easy to build new emotion-oriented systems from new and existing components.
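    The prepare-and-trigger architecture mentioned as the second contribution decouples costly preparation of a system utterance from the instant it is triggered. A toy sketch of the pattern, with invented names and a string standing in for the expensive synthesis and animation work:

    ```python
    class PrepareAndTrigger:
        """Prepare behaviour ahead of time; triggering is then a cheap lookup."""
        def __init__(self):
            self._prepared = {}

        def prepare(self, utterance_id, text):
            # Expensive work (synthesis, animation planning) happens here,
            # e.g. while the user is still speaking.
            self._prepared[utterance_id] = f"<animation for '{text}'>"

        def trigger(self, utterance_id):
            # At trigger time only a dictionary lookup remains.
            return self._prepared.pop(utterance_id)

    agent = PrepareAndTrigger()
    agent.prepare("u1", "Hello there")  # pre-planned utterance
    print(agent.trigger("u1"))          # <animation for 'Hello there'>
    ```

    The latency win comes from moving the preparation off the critical path: the time from deciding to speak to the first animation frame shrinks to the trigger cost alone.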