199 research outputs found

    Modular Middleware for Gestural Data and Devices Management

    In the last few years, the use of gestural data has become a key enabler for human-computer interaction (HCI) applications. The growing diffusion of low-cost acquisition devices has led to the development of a class of middleware aimed at ensuring fast and easy integration of such devices within HCI applications. The purpose of this paper is to present a modular middleware for gestural data and device management. First, we provide a brief review of the state of the art of similar middleware. Then, we discuss the proposed architecture and the motivation behind its design choices. Finally, we present a use case that demonstrates the potential uses as well as the limitations of our middleware.
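    The abstract above does not include code; as a loose illustration of the kind of device-abstraction layer such a middleware might expose, the following Python sketch treats acquisition devices as plugins that push normalized gesture events to subscribing applications. All names (GestureEvent, GestureDevice, MiddlewareHub) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a modular gesture middleware: device drivers are
# plugins that push normalized gesture events to subscribing applications.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GestureEvent:
    device_id: str      # which acquisition device produced the event
    gesture: str        # normalized gesture label, e.g. "swipe_left"
    confidence: float   # recognizer confidence in [0, 1]


class GestureDevice(ABC):
    """Plugin interface each low-cost acquisition device implements."""

    @abstractmethod
    def poll(self) -> List[GestureEvent]:
        """Return gesture events observed since the last poll."""


class MiddlewareHub:
    """Routes events from registered devices to application callbacks."""

    def __init__(self) -> None:
        self._devices: List[GestureDevice] = []
        self._subscribers: List[Callable[[GestureEvent], None]] = []

    def register_device(self, device: GestureDevice) -> None:
        self._devices.append(device)

    def subscribe(self, callback: Callable[[GestureEvent], None]) -> None:
        self._subscribers.append(callback)

    def dispatch_once(self) -> None:
        # Pull pending events from every device and fan them out.
        for device in self._devices:
            for event in device.poll():
                for callback in self._subscribers:
                    callback(event)
```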

    A multimodal framework for interactive sonification and sound-based communication


    An electronic architecture for mediating digital information in a hallway façade

    Ubiquitous computing requires integration of physical space with digital information. This presents the challenges of integrating electronics, physical space, software, and the interaction tools that can effectively communicate with the audience. Many research groups have embraced different techniques, depending on location, context, space, and the availability of the necessary skills, to make the world around us an interface to the digital world. Encouraged by early successes and fostered by a project undertaken by the tangible visualization group, we introduce an architecture of Blades and Tiles for the development and realization of interactive wall surfaces. It provides an inexpensive, open-ended platform for constructing large-scale tangible and embedded interfaces. In this paper, we propose tiles built using inexpensive pegboards and a gateway for each of these tiles to provide access to digital information. The paper describes the architecture using a corridor façade application. The corridor façade uses full-spectrum LEDs, physical labels and stencils, and capacitive touch sensors to provide mediated representation, monitoring, and querying of physical and digital content. Example contents include the physical and online status of people and the activity and dynamics of online research content repositories. Several complementary devices, such as Microsoft PixelSense and smart devices, can support additional user interaction with the system. This enables interested people in synergistic physical environments to observe, explore, understand, and engage in ongoing activities and relationships. This paper describes the hardware architecture and software libraries employed and how they are used in our research center hallway and academic semester projects.
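    As a rough, purely illustrative sketch of the Blades-and-Tiles idea described above, the following Python model treats each tile as a grid of LEDs and touch-sensitive cells driven by a per-tile gateway; the class names and the content-source callback are assumptions for illustration, not the authors' actual software.

```python
# Hypothetical model: a pegboard tile exposes LEDs and touch cells, and a
# per-tile gateway maps digital content (e.g. online status) onto the LEDs.
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

RGB = Tuple[int, int, int]


@dataclass
class Tile:
    """One pegboard tile with addressable LEDs and touch-sensitive cells."""
    rows: int
    cols: int
    leds: Dict[Tuple[int, int], RGB] = field(default_factory=dict)

    def set_led(self, row: int, col: int, color: RGB) -> None:
        self.leds[(row, col)] = color


class TileGateway:
    """Per-tile gateway: pulls digital content and renders it on the tile."""

    def __init__(self, tile: Tile,
                 content_source: Callable[[], Dict[Tuple[int, int], RGB]]):
        self.tile = tile
        self.content_source = content_source  # e.g. status of people or repos

    def refresh(self) -> None:
        # Re-render the tile from the latest digital content snapshot.
        for cell, color in self.content_source().items():
            self.tile.set_led(*cell, color)

    def on_touch(self, row: int, col: int) -> None:
        # A touch on a cell could trigger a query of the content it represents.
        print(f"query content mapped to cell ({row}, {col})")
```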

    Multimodal Interface for Human–Robot Collaboration

    Human–robot collaboration (HRC) is one of the key aspects of Industry 4.0 (I4.0) and requires intuitive modalities for humans to communicate seamlessly with robots, such as speech, touch, or bodily gestures. However, utilizing these modalities is usually not enough to ensure a good user experience and proper consideration of human factors. Therefore, this paper presents a software component, Multi-Modal Offline and Online Programming (M2O2P), which considers such characteristics and establishes a communication channel with a robot through predefined yet configurable hand gestures. The solution was evaluated within a smart factory use case in the Smart Human Oriented Platform for Connected Factories (SHOP4CF) EU project. The evaluation focused on the effects of gesture personalization on the perceived workload of the users, measured with NASA-TLX, and on the usability of the component. The results of the study showed that personalization of the gestures reduced the physical and mental workload and was preferred by the participants, while the overall workload of the tasks did not differ significantly. Furthermore, the high system usability scale (SUS) score of the application, with a mean of 79.25, indicates good overall usability of the component. Additionally, the gesture recognition accuracy of M2O2P was measured at 99.05%, which is similar to the results of state-of-the-art applications.
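    The abstract does not show M2O2P's interface, but the configurable gesture-to-command idea it describes can be illustrated with a minimal Python sketch; the class, gesture labels, and command names below are hypothetical, not the component's actual API.

```python
# Illustrative sketch (not the actual M2O2P API): predefined hand gestures are
# mapped to robot commands, and the mapping is user-configurable so gestures
# can be personalized without changing the robot-side logic.
from typing import Dict, Optional


class GestureCommandMapper:
    def __init__(self, mapping: Dict[str, str]):
        # mapping: recognized gesture label -> robot command identifier
        self.mapping = dict(mapping)

    def personalize(self, gesture: str, command: str) -> None:
        """Reassign a gesture to a different command for this user."""
        self.mapping[gesture] = command

    def to_command(self, gesture: str) -> Optional[str]:
        """Return the robot command for a recognized gesture, if any."""
        return self.mapping.get(gesture)


# Example: a default configuration, then one user-specific override.
mapper = GestureCommandMapper({"fist": "stop", "open_palm": "resume"})
mapper.personalize("thumbs_up", "confirm_task")
assert mapper.to_command("fist") == "stop"
```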

    In-hand object detection and tracking using 2D and 3D information

    As robots are increasingly introduced into human-inhabited areas, they need a perception system able to detect the actions that the humans around them are performing. This information is crucial in order to act accordingly in a changing environment. Humans use different objects and tools in various tasks, and hence one of the most useful cues for recognizing their actions is the object they are using. For example, if a person is holding a book, they are probably reading. Information about the objects humans are holding is therefore useful to determine the activities they are engaged in. This thesis presents a system that is able to track the user's hand and to learn and recognize the object being held. When instructed to learn, the software extracts key information about the object and stores it with a unique identification number for later recognition. If the user triggers the recognition mode, the system compares the current object's information with the data previously stored and outputs the best match. The system uses both 2D and 3D descriptors to improve the recognition stage. In order to reduce noise, two separate matching procedures for 2D and 3D each output a preliminary prediction at a rate of 30 predictions per second. Finally, a weighted average is computed over these 30 predictions for both 2D and 3D to obtain the final prediction of the system. The experiments carried out to validate the system reveal that it is capable of recognizing objects from a pool of 6 different objects with an F1 score near 80% in each case. The experiments demonstrate that the system performs better when it combines the information from the 2D and 3D descriptors than when it uses either type of descriptor separately. The performance tests show that the system is able to run in real time with minimum computer requirements of roughly one physical core (at 2.4 GHz) and less than 1 GB of RAM. It is also possible to implement the software in a distributed system, since the bandwidth measurements carried out show a maximum bandwidth below 7 MB/s. This system is, to the best of my knowledge, the first to implement an in-hand object learning and recognition algorithm using both 2D and 3D information. The combination of both types of data and the inclusion of a posterior decision step improve the robustness and the accuracy of the system. The software developed in this thesis is intended to serve as a building block for further research on the topic, in order to create a more natural human-robot interaction and understanding. Giving robots a human-like interaction with their environment is a crucial step towards their complete autonomy and acceptance in human-inhabited areas.
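    A minimal sketch of the decision step described in this abstract, assuming each matcher emits (object id, confidence) predictions over a one-second window; the equal weights and data layout are illustrative assumptions, not values taken from the thesis.

```python
# Rough sketch of the fusion step: 2D and 3D matchers each emit ~30 preliminary
# predictions per second, and a weighted vote over that window yields the final
# prediction. Weights and data layout here are assumptions for illustration.
from collections import defaultdict
from typing import Dict, List, Tuple

Prediction = Tuple[int, float]  # (object id, matcher confidence)


def fuse_predictions(preds_2d: List[Prediction],
                     preds_3d: List[Prediction],
                     weight_2d: float = 0.5,
                     weight_3d: float = 0.5) -> int:
    """Weighted vote over one window of preliminary 2D and 3D predictions."""
    scores: Dict[int, float] = defaultdict(float)
    for obj_id, conf in preds_2d:
        scores[obj_id] += weight_2d * conf
    for obj_id, conf in preds_3d:
        scores[obj_id] += weight_3d * conf
    return max(scores, key=scores.get)


# Example: over one second, 2D mildly favors object 3 and 3D clearly does.
window_2d = [(3, 0.6), (1, 0.5)] * 15
window_3d = [(3, 0.8)] * 30
assert fuse_predictions(window_2d, window_3d) == 3
```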

    An Evidence-based Roadmap for IoT Software Systems Engineering

    Context: The Internet of Things (IoT) has brought expectations for software inclusion in everyday objects. However, it has challenges and requires multidisciplinary technical knowledge involving different areas that should be combined to enable IoT software systems engineering. Goal: To present an evidence-based roadmap for IoT development to support developers in specifying, designing, and implementing IoT systems. Method: An iterative approach based on experimental studies to acquire evidence to define the IoT Roadmap. Next, the Systems Engineering Body of Knowledge life cycle was used to organize the roadmap and set temporal dimensions for IoT software systems engineering. Results: The studies revealed seven IoT Facets influencing IoT development. The IoT Roadmap comprises 117 items organized into 29 categories representing different concerns for each Facet. In addition, an experimental study was conducted observing a real case of a healthcare IoT project, indicating the roadmap's applicability. Conclusions: The IoT Roadmap can be a feasible instrument to assist IoT software systems engineering because it can (a) support researchers and practitioners in understanding and characterizing the IoT and (b) provide a checklist to identify the applicable recommendations for engineering IoT software systems.
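    As a toy illustration of how a facet/category/item roadmap could serve as a filterable checklist, the following sketch uses invented placeholder items and names; it does not reproduce the roadmap's actual 117 items or 29 categories.

```python
# Toy illustration of the roadmap's structure (facets -> categories -> items)
# used as a filterable checklist. The strings below are invented placeholders.
from dataclasses import dataclass
from typing import List


@dataclass
class RoadmapItem:
    facet: str           # one of the seven IoT Facets
    category: str        # one of the categories under that facet
    recommendation: str  # the checklist item itself
    phase: str           # life-cycle phase, e.g. "specification", "design"


def checklist_for(items: List[RoadmapItem], facet: str, phase: str) -> List[str]:
    """Select the recommendations applicable to a given facet and phase."""
    return [i.recommendation for i in items if i.facet == facet and i.phase == phase]


items = [
    RoadmapItem("Connectivity", "Protocols",
                "Choose a transport suited to device constraints", "design"),
    RoadmapItem("Things", "Sensors",
                "Specify sampling rates per sensor", "specification"),
]
print(checklist_for(items, facet="Connectivity", phase="design"))
```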

    TiFEE: an input event-handling framework with touchless device support

    Integrated master's thesis. Informatics and Computing Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    A Pervasive Middleware for Activity Recognition with Smartphones

    Title from PDF of title page, viewed on August 28, 2015. Thesis advisor: Yugyung Lee. Vita. Includes bibliographic references (pages 61-67). Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2015.
    Activity Recognition (AR) is an important research topic in pervasive computing. With the rapid increase in the use of pervasive devices, huge amounts of sensor data are generated from diverse devices on a daily basis. Analysis of these sensor data is a significant area of research for AR. There are several devices and techniques available for AR, but the increasing number of sensor devices and the volume of data demand new approaches for adaptive, lightweight, and accurate AR. We propose a new middleware called the Pervasive Middleware for Activity Recognition (PEMAR) to address these problems. We implemented PEMAR on a Big Data platform, incorporating machine-learning techniques to make it adaptive and accurate for the AR of sensor data. The middleware is composed of the following: (1) filtering and segmentation to detect different activities; (2) a human-centered adaptive approach to create accurate personal models, leveraging existing impersonal models; (3) an activity library to serve different mobile applications; and (4) activity recognition services to accurately perform AR. We evaluated the recognition accuracy of PEMAR using a generated dataset (15 activities, 50 subjects) and the USC Human Activity Dataset (12 activities, 14 subjects) and observed better accuracy for personally trained AR compared to impersonally trained AR. We tested the applicability and adaptivity of PEMAR using several motion-based applications.
    Contents: Introduction -- Related work -- Middleware for gesture recognition -- Implementation and applications -- Results and evaluation -- Conclusion and future work
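    A minimal sketch of the pipeline idea described above, assuming windowed segmentation of the sensor stream and a personal model that falls back to a shared impersonal model; the function names and window size are assumptions, not PEMAR's actual implementation.

```python
# Illustrative sketch (not the actual PEMAR code): a sensor stream is split
# into fixed windows, and each window is classified with a user's personal
# model when one exists, falling back to a shared impersonal model otherwise.
from typing import Callable, Dict, List

Window = List[float]                  # one segment of raw sensor samples
Classifier = Callable[[Window], str]  # window -> activity label


def segment(stream: List[float], size: int) -> List[Window]:
    """Fixed-size, non-overlapping segmentation of the raw sensor stream."""
    return [stream[i:i + size] for i in range(0, len(stream) - size + 1, size)]


def recognize(stream: List[float],
              user_id: str,
              personal_models: Dict[str, Classifier],
              impersonal_model: Classifier,
              window_size: int = 128) -> List[str]:
    """Classify each window, preferring the user's personal model."""
    model = personal_models.get(user_id, impersonal_model)
    return [model(window) for window in segment(stream, window_size)]
```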

    Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods

    Riedenklau E. Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods. Bielefeld: Universität Bielefeld; 2016.
    Making information understandable and literally graspable is the main goal of tangible interaction research. By giving digital data physical representations (Tangible User Interface Objects, or TUIOs), they can be used and manipulated like everyday objects with the users' natural manipulation skills. Such physical interaction is essentially uni-directional, directed from the user to the system, which limits the possible interaction patterns: the system has no means to actively support the physical interaction. Within the frame of tabletop tangible user interfaces, this problem was addressed by the introduction of actuated TUIOs, which are controllable by the system. Within this thesis, we present the development of our own actuated TUIOs and address multiple interaction concepts we identified as research gaps in the literature on actuated Tangible User Interfaces (TUIs). Gestural interaction is a natural means for humans to communicate non-verbally using their hands. TUIs should be able to support gestural interaction, since our hands are already heavily involved in the interaction, yet this has rarely been investigated in the literature. For a tangible social network client application, we investigate two methods for collecting user-defined gestures that our system should be able to interpret for triggering actions. Versatile systems often understand a wide palette of commands. Another approach for triggering actions is the use of menus. We explore the design space of menu metaphors used in TUIs and present our own actuated dial-based approach. Rich interaction modalities may support the understandability of the represented data and make the interaction with them more appealing, but they also place high demands on real-time processing. We highlight new research directions for integrated feature-rich and multi-modal interaction, such as graphical display, sound output, tactile feedback, our actuated menu, and automatically maintained relations between actuated TUIOs within a remote collaboration application. We also tackle the introduction of more sophisticated measures for the evaluation of TUIs to provide further evidence for the theories on tangible interaction. We tested our enhanced measures within a comparative study. Since one of the key factors in effective manual interaction is speed, we benchmarked the human hand's manipulation speed and compared it with the capabilities of our own implementation of actuated TUIOs and the systems described in the literature. After briefly discussing applications that lie beyond the scope of this thesis, we conclude with a collection of design guidelines gathered in the course of this work and integrate them, together with our findings, into a larger framework.