
    Tangible UI by object and material classification with radar

    Radar signals penetrate, scatter into, are absorbed by and reflect off proximate objects, and ground-penetrating and aerial radar systems are well established. We describe a highly accurate system that combines a monostatic radar (Google Soli) with supervised machine learning to support object- and material-classification-based UIs. Building on RadarCat techniques, we explore the development of tangible user interfaces without modification of the objects or complex infrastructure. This affords new forms of interaction with digital devices, proximate objects and micro-gestures.
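The supervised classification step the abstract describes can be illustrated with a minimal sketch. This is not the RadarCat pipeline; the feature vectors, material names and values below are purely hypothetical, and a simple nearest-centroid rule stands in for the actual learned model.

```python
# Hypothetical sketch: classifying a material from radar-signal feature
# vectors with a nearest-centroid classifier. All data are illustrative,
# not taken from RadarCat.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    # Euclidean distance to each material's centroid; pick the closest.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda m: dist(sample, centroids[m]))

# Toy training data: per-material lists of (amplitude, phase-spread) features.
training = {
    "glass": [[0.9, 0.1], [0.85, 0.15]],
    "wood":  [[0.4, 0.5], [0.45, 0.55]],
    "metal": [[1.5, 0.05], [1.4, 0.1]],
}
centroids = {m: centroid(vs) for m, vs in training.items()}
print(classify([0.42, 0.52], centroids))  # → wood
```

In practice a system like this would use many more features per radar frame and a stronger supervised learner, but the train-then-match structure is the same.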

    SpeCam: sensing surface color and material with the front-facing camera of a mobile device

    SpeCam is a lightweight surface color and material sensing approach for mobile devices which uses only the front-facing camera and the display as a multi-spectral light source. We leverage the natural use of mobile devices (placing them face-down) to detect the material underneath and thereby infer the location or placement of the device. SpeCam can then be used to support discreet micro-interactions that avoid the numerous distractions users face daily with today's mobile devices. Our two-part study shows that SpeCam can (i) recognize colors in the HSB space that are 10 degrees apart near the three dominant colors and 4 degrees apart otherwise, and (ii) recognize 30 types of surface materials with 99% accuracy. These findings are further supported by a spectroscopy study. Finally, we suggest a series of applications based on simple mobile micro-interactions suitable for using the phone when placed face-down.
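The color-recognition claim involves matching a measured hue to the nearest reference hue on the 360° HSB color wheel. The sketch below shows only that nearest-match step under assumed reference hues; it is not SpeCam's actual algorithm, which also uses the display as a controlled light source.

```python
# Hypothetical sketch: matching a measured hue (degrees on the HSB wheel)
# to the nearest of a set of reference hues. Reference hues are illustrative.

def hue_distance(a, b):
    # Shortest angular distance between two hues, in degrees.
    d = abs(a - b) % 360
    return min(d, 360 - d)

def nearest_hue(measured, references):
    return min(references, key=lambda name: hue_distance(measured, references[name]))

references = {"red": 0, "green": 120, "blue": 240}
print(nearest_hue(350, references))  # → red (350° is only 10° from 0°)
```

The circular distance matters: a naive `abs(a - b)` would put 350° far from red at 0°, whereas on the wheel they are 10° apart, exactly the resolution the study reports near dominant colors.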

    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    How can we determine highly effective and intuitive gesture sets for interactive systems tailored to end users' preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functions to control for an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified through a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which poses challenges to researchers and practitioners who wish to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users' gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
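The agreement analysis mentioned above is often computed as a pairwise agreement rate: for each referent, the fraction of participant pairs that proposed the identical gesture. The sketch below shows one common formulation of that statistic; the referent name and gesture labels are illustrative, not drawn from the review's corpus.

```python
# Sketch of the pairwise agreement rate used in gesture elicitation
# analysis: agreeing pairs of proposals divided by all pairs.
# Gesture labels and counts below are illustrative.

from collections import Counter

def agreement_rate(proposals):
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals)  # size of each group of identical gestures
    agreeing_pairs = sum(k * (k - 1) for k in groups.values())
    return agreeing_pairs / (n * (n - 1))

# 20 participants proposing gestures for a hypothetical "zoom in" referent.
proposals = ["pinch-out"] * 12 + ["double-tap"] * 5 + ["spread-fingers"] * 3
print(round(agreement_rate(proposals), 3))  # → 0.416
```

A rate of 1.0 means every participant proposed the same gesture; values near 0 indicate little consensus, which is when elicitation studies typically report fragmented gesture sets.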

    Development of flight experiment task requirements. Volume 2: Technical Report. Part 1: Program report and Appendices A-G

    Activities of the study to determine the skills required of on-orbit crew personnel of the space shuttle are documented. The material is presented in four sections: (1) methodology for identifying flight-experiment task-skill requirements, (2) task-skill analysis of selected flight experiments, (3) study results and conclusions, and (4) new technology.

    Investigation of 3D body shapes and robot control algorithms for a virtual fitting room

    The electronic version of this thesis does not include the publications. Virtual fitting constitutes a fundamental element of the developments expected to raise the commercial prosperity of online garment retailers to a new level, as it is expected to reduce the manual labor and physical effort required. Nevertheless, most previously proposed computer vision and graphics methods have failed to accurately and realistically model the human body, especially when it comes to 3D modeling of the whole body. The failure is largely related to the huge amounts of data and computation required, which in turn is caused mainly by the inability to properly account for simultaneous variations in the body surface. In addition, most of the foregoing techniques cannot render realistic movement representations in real time. This project intends to overcome the aforementioned shortcomings so as to satisfy the requirements of a virtual fitting room. The proposed methodology consists of scanning and analyzing both the user's body and the prospective garment to be virtually fitted; modeling, extracting measurements and assigning reference points on them; segmenting the 3D visual data imported from the mannequins; and, finally, superimposing, adapting and depicting the resulting garment model on the user's body. The project gathered visual data using a 3D laser scanner and the Kinect optical camera and managed it in the form of a usable database, in order to experimentally implement the algorithms devised. The latter provide a realistic visual representation of the garment on the body and enhance the size-advisor system in the context of the virtual fitting room under study.

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. Taking advantage of the innate human abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interacting in smart environments, bringing interaction with computers beyond the screen, back to the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims at supporting future work in this field. The Tangible Gesture Interaction Framework provides support on three levels. First, it supports theoretical reflection on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps in conceiving new tangible gesture interactive systems and designing new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps in building new tangible gesture interactive systems, supporting the choice among four different technological approaches (embedded and embodied, wearable, environmental or hybrid) and providing general guidance for each approach. As an application of this framework, this thesis also presents seven tangible gesture interactive systems for three different application domains: interacting with the in-vehicle infotainment system (IVIS) of a car, emotional and interpersonal communication, and interaction in a smart home.
    For the first application domain, four different systems that use gestures on the steering wheel as the means of interaction with the IVIS were designed, developed and evaluated. For the second application domain, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication was conceived and developed. A second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand in the context of the smart home was investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.

    Holographic reality: enhancing the artificial reality experience through interactive 3D holography

    Holography was made known by several science-fiction productions; however, this technology dates back to the year 1940. Despite the considerable age of this discovery, the technology remains inaccessible to the average consumer. The main goal of this manuscript is to advance the state of the art in interactive holography, providing an accessible and low-cost solution. The final product intends to nudge the HCI community to explore potential applications, in particular ones that are aquatic-centric and environmentally friendly. Two main user studies were performed in order to determine the impact of the proposed solution on a sample audience. The first study used a prototype tangible user interface (TUI) for holographic reality (HR); the second used a head-mounted display (HMD) for the proposed HR interface, further analyzing the interactive holographic experience without hand-held devices. Both of these studies were also compared with an augmented reality (AR) setting. The obtained results demonstrate a significantly higher score for the HMD approach. This suggests it is the better solution, most likely due to its added simplicity and immersiveness. However, the TUI study did score higher on several key parameters and should be considered for future studies. Compared with the AR experience, the HMD study scores slightly lower, but manages to surpass AR on several parameters. Several approaches were outlined and evaluated, depicting different methods for the creation of interactive holographic reality experiences. In spite of the low maturity of holographic technology, it can be concluded that it is comparable to, and can keep up with, other more developed and mature artificial reality settings, further supporting the need for the holographic reality concept.

    Earth as Interface: Exploring chemical senses with Multisensory HCI Design for Environmental Health Communication

    As environmental problems intensify, the chemical senses, that is, smell and taste, are the most relevant senses for evidencing them. The environmental exposure vectors that can reach human beings comprise air, food, soil and water [1]. Within this context, understanding the link between environmental exposures and health [2] is crucial to make informed choices, protect the environment and adapt to new environmental conditions [3]. Smell and taste therefore lead to multi-sensorial experiences which convey multi-layered information about local and global events [4]. However, these senses are usually absent when those problems are represented in digital systems. The multisensory HCI design framework investigates the inclusion of the chemical senses in digital systems [5]. Ongoing efforts tackle the digitalization of smell and taste for digital delivery, transmission or substitution [6]. Although experiments have proved technological feasibility, dissemination depends on the development of relevant applications [7]. This thesis aims to fill those gaps by demonstrating how the chemical senses provide the means to link environment and health based on scientific and geolocation narratives [8], [9], [10]. We present a multisensory HCI design process which accomplished the symbolic display of smell and taste and led us to the new multi-sensorial interaction system presented herein. We describe the conceptualization, design and evaluation of Earthsensum, an exploratory case study project. Earthsensum offered the 16 study participants environmental smell and taste experiences about real geolocations. These experiences were represented digitally using mobile virtual reality (MVR) and mobile augmented reality (MAR). These technologies bridge the real and digital worlds through digital representations in which we can reproduce the multi-sensorial experiences. Our study findings showed that the proposed interaction system is intuitive and can lead not only to a better understanding of smell and taste perception but also of environmental problems. Participants' comprehension of the link between environmental exposures and health was successful, and they would recommend this system as an educational tool. Our conceptual design approach was validated and further developments were encouraged. In this thesis, we demonstrate how to apply multisensory HCI methodology to design with the chemical senses. We conclude that the presented symbolic representation model of smell and taste allows these experiences to be communicated on digital platforms. Due to their context-dependency, MVR and MAR platforms are adequate technologies for this purpose. Future developments intend to explore the conceptual approach further and are centred on using the system to induce behaviour change. This thesis opens up new application possibilities for digital chemical sense communication, multisensory HCI design and environmental health communication.