
    dWatch: a Personal Wrist Watch for Smart Environments

    Intelligent environments, such as smart homes or domotic systems, have the potential to support people in many of their ordinary activities, by allowing complex control strategies for managing various capabilities of a house or a building: lights, doors, temperature, power and energy, music, etc. Such environments typically provide these control strategies by means of computers, touch screen panels, mobile phones, tablets, or in-house displays. An unobtrusive and typically wearable device, like a bracelet or a wrist watch, that lets users perform various operations in their homes and receive notifications from the environment, could strengthen the interaction with such systems, in particular for those people not accustomed to computer systems (e.g., the elderly) or in contexts where they are not in front of a screen. Moreover, such wearable devices reduce the technological gap introduced in the environment by home automation systems, thus permitting a higher level of acceptance in daily activities and improving the interaction between the environment and its inhabitants. In this paper, we introduce the dWatch, an off-the-shelf personal wearable notification and control device, integrated in an intelligent platform for domotic systems, designed to optimize the way people use the environment, and built as a wrist watch so that it is easily accessible, worn by people on a regular basis, and unobtrusive.

    Automotive gestures recognition based on capacitive sensing

    Master's dissertation in Industrial Electronics and Computers Engineering. Driven by technological advancements, vehicles have steadily increased in sophistication, especially in the way drivers and passengers interact with them. For example, the driver-controlled systems of the BMW 7 Series contain over 700 functions. While this makes it easier to navigate streets, talk on the phone, and more, it may lead to visual distraction, since when paying attention to a task not related to driving, the brain focuses on that activity. According to studies, such distraction is the third leading cause of accidents, surpassed only by speeding and drunk driving. Driver distraction is stressed as the main concern by regulators, in particular the National Highway Traffic Safety Administration (NHTSA), which is developing recommended limits for the amount of time a driver needs to spend glancing away from the road to operate in-car features. Diverting attention from driving can be fatal; therefore, automakers have been challenged to design safer and more comfortable human-machine interfaces (HMIs) without forgoing the latest technological achievements. This dissertation aims to mitigate driver distraction by developing a gesture recognition system that gives the user a more comfortable and intuitive experience while driving. The developed system outlines the algorithms to recognize gestures using capacitive technology. It is worth noting that this work has been financially supported by the Portugal Incentive System for Research and Technological Development under the co-promotion projects number 036265/2013 (HMIExcel 2013-2015), number 002814/2015 (iFACTORY 2015-2018), and number 002797/2015 (INNOVCAR 2015-2018).
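A capacitive gesture recognizer of the kind described above often reduces to tracking where the strongest electrode response lies over time. The sketch below is an illustrative assumption, not the dissertation's actual algorithm: it classifies a swipe by checking whether the peak electrode of a small linear array moves monotonically across the array.

```python
# Hypothetical sketch: swipe detection over a row of capacitive
# electrodes. Each frame is one capacitance sample per electrode;
# a peak that travels monotonically across the array is a swipe.

def classify_swipe(frames):
    """frames: list of per-electrode readings, one list per time step.
    Returns 'right', 'left', or None."""
    if not frames:
        return None
    # Index of the strongest electrode in each frame.
    peaks = [max(range(len(f)), key=f.__getitem__) for f in frames]
    deltas = [b - a for a, b in zip(peaks, peaks[1:])]
    if deltas and all(d >= 0 for d in deltas) and peaks[-1] > peaks[0]:
        return "right"
    if deltas and all(d <= 0 for d in deltas) and peaks[-1] < peaks[0]:
        return "left"
    return None
```

A real system would add baseline calibration and debouncing, but the peak-trajectory idea is the core of many capacitive swipe detectors.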

    Interactive, Mobile Internet of Things Device with Image Capture Capability

    Aspects of the present disclosure are directed to an interactive, mobile Internet of Things (IoT) device with image capture capability. In particular, one example embodiment of the present disclosure is a "life-size" (e.g., around 4-6 feet in height) anthropomorphic device that can move around an environment and interact with humans and/or other devices in the area. For example, the device can respond to commands and/or capture images of and/or with humans in the area. As one example, the captured images can be in the style of a self-portrait photograph (also known as a "selfie") and can depict the device positioned alongside one or more humans.

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    We present 3DTouch, a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap left by the lack of a 3D input device that is self-contained, mobile, and universally working across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. 3DTouch is self-contained and designed to work universally on various 3D platforms. The device employs touch input for the benefits of passive haptic feedback and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques including selection, translation, and rotation using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on.
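The core of a relative-positioning device like the one described is fusing the optical sensor's on-surface 2-D displacement with the IMU's orientation so that cursor motion stays consistent as the finger rotates. The sketch below is an illustrative simplification (yaw-only rotation), not the authors' implementation:

```python
import math

# Illustrative sketch: mapping the 2-D displacement (dx, dy) reported
# by an optical sensor into world-frame coordinates using the device
# yaw reported by an IMU. A full implementation would use the complete
# 3-D rotation from the 9-DOF unit; this shows the planar case only.

def surface_to_world(dx, dy, yaw_rad):
    """Rotate an on-surface displacement by the device yaw so cursor
    motion stays aligned with the world frame."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    wx = c * dx - s * dy
    wy = s * dx + c * dy
    return wx, wy
```

With the device rotated 90 degrees, a purely forward optical displacement correctly maps to a sideways world-frame motion.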

    Sensitive and Makeable Computational Materials for the Creation of Smart Everyday Objects

    The vision of computational materials is to create smart everyday objects using materials that have sensing and computational capabilities embedded into them. However, today's development of computational materials is limited because its interfaces (i.e., sensors) are unable to support wide ranges of human interactions or withstand the fabrication methods of everyday objects (e.g., cutting and assembling). These barriers hinder citizens from creating smart everyday objects using computational materials on a large scale. To overcome the barriers, this dissertation presents approaches to develop computational materials that are 1) sensitive to a wide variety of user interactions, including explicit interactions (e.g., user inputs) and implicit interactions (e.g., user contexts), and 2) makeable against a wide range of fabrication operations, such as cutting and assembling. I exemplify the approaches through five research projects on two common materials, textile and wood. For each project, I explore how a material interface can be made to sense user inputs or activities, and how it can be optimized to balance sensitivity and fabrication complexity. I discuss the sensing algorithms and machine learning models that interpret the sensor data as high-level abstractions and interactions. I show practical applications of the developed computational materials, and I demonstrate evaluation studies that validate their performance and robustness. At the end of this dissertation, I summarize the contributions of my thesis and discuss future directions for the vision of computational materials.
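The "sensor data to high-level interaction" pipeline mentioned above can be sketched in miniature. The features, labels, and centroid values below are illustrative assumptions, not the dissertation's models: raw readings from a material-embedded sensor are summarized into simple features and mapped to an interaction label with a nearest-centroid classifier.

```python
# Minimal sketch of a sensing pipeline: featurize a window of raw
# sensor samples, then pick the closest known interaction prototype.
# Labels and centroid values are made up for illustration.

def features(samples):
    mean = sum(samples) / len(samples)     # overall contact intensity
    spread = max(samples) - min(samples)   # sharpness of the event
    return (mean, spread)

def nearest_label(x, centroids):
    """Return the label whose centroid is closest (squared distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lbl])))

CENTROIDS = {
    "tap":   (0.8, 1.5),  # short, sharp spike
    "press": (0.9, 0.2),  # sustained, flat contact
}
```

Real computational-material interfaces would use richer features and learned models, but the structure (featurize, then classify) is the same.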

    On-Body Sensing: From Gesture-Based Input to Activity-Driven Interaction


    Smart Floor for In Room Detection

    Determining the state of rooms, including whether a room is occupied, can improve the functionality and operation of many devices and services that are part of the Internet of Things (IoT). Furthermore, sensors associated with a smart floor can perform operations including detecting the state of a room and communicating the state to other IoT devices that can perform operations based on the state of the room (e.g., turning on lights when a person is detected in the room).
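The detect-and-communicate pattern described above can be sketched as a floor that aggregates its pressure cells into an occupancy state and notifies subscribed devices. All names and the threshold value here are illustrative assumptions, not part of the disclosure:

```python
# Hedged sketch: a smart floor derives room occupancy from its
# pressure cells and pushes the state to subscribed IoT devices.

PRESSURE_THRESHOLD = 30.0  # arbitrary units; assumed calibration value

class SmartFloor:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        """Register a device callback to receive occupancy updates."""
        self.subscribers.append(callback)

    def update(self, cell_readings):
        """Aggregate one frame of cell pressures and notify devices."""
        occupied = any(p > PRESSURE_THRESHOLD for p in cell_readings)
        for cb in self.subscribers:
            cb(occupied)

events = []
floor = SmartFloor()
floor.subscribe(lambda occ: events.append("lights on" if occ else "lights off"))
floor.update([0.0, 42.5, 1.2])  # someone standing on one cell
floor.update([0.0, 0.0, 0.0])   # room empty
```

In practice the notification would go over a network protocol (e.g., MQTT) rather than an in-process callback, but the observer structure is the same.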

    Capacitive Sensing and Communication for Ubiquitous Interaction and Environmental Perception

    During the last decade, the functionality of electronic devices within the living environment has constantly increased. Besides the personal computer, tablet PCs, smart household appliances, and smartwatches now enrich the technology landscape. The trend towards an ever-growing number of computing systems has resulted in many highly heterogeneous human-machine interfaces. Users are forced to adapt to technology instead of having the technology adapt to them. Gathering context information about the user is a key factor for improving the interaction experience. Emerging wearable devices show the benefits of sophisticated sensors which make interaction more efficient, natural, and enjoyable. However, many technologies still lack these desirable properties, motivating me to work towards new ways of sensing a user's actions and thus enriching the context. In my dissertation I follow a human-centric approach which ranges from sensing hand movements to recognizing whole-body interactions with objects. This goal can be approached with a vast variety of novel and existing sensing approaches. I focused on perceiving the environment with quasi-electrostatic fields by making use of capacitive coupling between devices and objects. Following this approach, it is possible to implement interfaces that are able to recognize gestures, body movements, and manipulations of the environment at typical distances of up to 50 cm. These sensors usually have a limited resolution and can be sensitive to other conductive objects or electrical devices that affect electric fields. The technique allows for designing very energy-efficient and high-speed sensors that can be deployed unobtrusively underneath any kind of non-conductive surface. Compared to other sensing techniques, exploiting capacitive coupling also has a low impact on a user's perceived privacy. In this work, I also aim at enhancing the interaction experience with new perceptional capabilities based on capacitive coupling.
I follow a bottom-up methodology and begin by presenting two low-level approaches for environmental perception. In order to perceive a user in detail, I present a rapid prototyping toolkit for capacitive proximity sensing. The prototyping toolkit shows significant advancements in terms of temporal and spatial resolution. Due to some limitations, namely the inability to determine the identity and fine-grained manipulations of objects, I contribute a generic method for communication based on capacitive coupling. The method allows for designing highly interactive systems that can exchange information through the air and the human body. I furthermore show how human body parts can be recognized by capacitive proximity sensors. The method is able to extract multiple object parameters and track body parts in real time. I conclude my thesis with contributions in the domain of context-aware devices and explicit gesture-recognition systems.
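Capacitive proximity sensing of the kind this thesis builds on exploits the fact that measured capacitance rises as a conductive body approaches the electrode. A common first-order model is C = C0 + k/d, which can be inverted to estimate distance. The sketch below is a hedged illustration with assumed calibration constants, not the thesis's sensor pipeline:

```python
# Illustrative sketch: inverting a simple C = C0 + k/d model to
# estimate the distance of a conductive body from one electrode.
# c_baseline (C0) and k are assumed per-electrode calibration
# constants, not values from the dissertation.

def estimate_distance_cm(c_measured, c_baseline=10.0, k=50.0):
    """Invert C = C0 + k/d. Returns None when no coupling is
    detectable or the estimate exceeds the technique's typical
    usable range of about 50 cm."""
    delta = c_measured - c_baseline
    if delta <= 0:
        return None  # reading at or below baseline: nothing nearby
    d = k / delta
    return d if d <= 50.0 else None
```

Real deployments track the baseline adaptively (temperature and humidity drift it) and combine several electrodes to localize body parts rather than just ranging them.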

    Multimodal human hand motion sensing and analysis - a review
