
    Designing a mobile collaborative system for navigating and reviewing oil industry CAD models

    Get PDF
    In this paper, we describe an industrial experience with the creation of a new product for collaboratively navigating and reviewing 3D engineering models, applied to the oil industry. Together with professional oil industry engineers from a large oil company, a team of HCI researchers performed task analysis and storyboards, designed, implemented and qualitatively evaluated a prototype that combines the power of mobility brought by tablets with new navigation modes that employ every sensor present in the tablet to deliver a better experience. The system was the target of a qualitative assessment made by architects and oil industry engineering experts. Lessons learned are valuable, both in terms of performance and experience design, issues that necessarily arise when creating new collaborative virtual reality systems.

    The challenges in computer supported conceptual engineering design

    Get PDF
    Computer Aided Engineering Design (CAED) supports the engineering design process during detail design, but it is not commonly used in the conceptual design stage. This article explores through literature why this is and how the engineering design research community is responding through the development of new conceptual CAED systems and HCI (Human Computer Interface) prototypes. First the requirements and challenges for future conceptual CAED and HCI solutions to better support conceptual design are explored and categorised. Then the prototypes developed in both areas, since 2000, are discussed. Characteristics already considered and those required for future development of CAED systems and HCIs are proposed and discussed, one of the key ones being experience. The prototypes reviewed offer innovative solutions, but only address selected requirements of conceptual design, and are thus unlikely to provide a solution which would fit the wider needs of the engineering design industry. More importantly, while the majority of prototypes show promising results, they are of low maturity and require further development.

    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Get PDF
    Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos, taken from the front-facing camera of smartphones, as well as associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly against photos taken from front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications to address the state of the art's limitations.
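    The per-activity visibility rates described above can be derived from annotated photo logs with a simple aggregation. The sketch below is a minimal illustration, assuming a hypothetical record format of (activity, face_visible, eyes_visible) per captured photo; it is not the paper's actual data schema.

    ```python
    from collections import defaultdict

    def visibility_by_activity(records):
        """Aggregate face/eye visibility rates per user activity.

        `records` is an iterable of (activity, face_visible, eyes_visible)
        tuples -- a hypothetical log format, one entry per captured photo.
        Returns {activity: (face_rate, eyes_rate)}.
        """
        counts = defaultdict(lambda: [0, 0, 0])  # photos, face hits, eye hits
        for activity, face, eyes in records:
            c = counts[activity]
            c[0] += 1
            c[1] += face
            c[2] += eyes
        return {a: (c[1] / c[0], c[2] / c[0]) for a, c in counts.items()}

    # Example: while watching videos, the eyes are often visible even
    # when the full face is not.
    log = [
        ("video", False, True),
        ("video", False, True),
        ("video", True, True),
        ("video", False, False),
        ("browsing", True, True),
        ("browsing", False, False),
    ]
    rates = visibility_by_activity(log)
    print(rates["video"])     # (0.25, 0.75)
    print(rates["browsing"])  # (0.5, 0.5)
    ```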

    Designing for Cross-Device Interactions

    Get PDF
    Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increased amount of digital data we produce, use, and maintain. However, while there is a substantial increase in computing power and availability of devices and data, many tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing. In particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI)—and more specifically to the area of cross-device computing—in three ways: first, this work conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies were conducted that show how cross-device interactions can support curation work as well as augment users’ existing devices for individual and collaborative work. These case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool that facilitates interaction analysis of spatial measures and video recordings to facilitate such evaluations of cross-device work.
Overall, the work in this thesis advances the field of cross-device computing with its taxonomy guiding research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into and tools for effective evaluation of cross-device systems.
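A minimal example of the kind of spatial measure such an analysis tool might compute: the distance between two tracked devices over time, given logged 2-D positions. The track schema below is a hypothetical sketch; the thesis's actual tool and data format may differ.

```python
import math

def pairwise_distance(track_a, track_b):
    """Per-timestamp Euclidean distance between two device tracks.

    Each track is a dict {timestamp: (x, y)} of logged positions
    (a hypothetical schema). Only timestamps present in both tracks
    are compared.
    """
    shared = sorted(track_a.keys() & track_b.keys())
    return {t: math.dist(track_a[t], track_b[t]) for t in shared}

# Two tablets moving on a tabletop, positions in centimetres.
tablet_1 = {0: (0.0, 0.0), 1: (10.0, 0.0), 2: (20.0, 0.0)}
tablet_2 = {0: (30.0, 40.0), 1: (10.0, 30.0), 3: (0.0, 0.0)}
print(pairwise_distance(tablet_1, tablet_2))  # {0: 50.0, 1: 30.0}
```

Measures like this, plotted over time alongside video, support the kind of spatial interaction analysis the abstract describes.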

    Freeform User Interfaces for Graphical Computing

    Get PDF
    Report number: 甲15222 ; Date of degree conferral: 2000-03-29 ; Degree type: Course doctorate ; Degree: Doctor of Engineering ; Degree certificate number: 博工第4717号 ; Graduate school / department: Graduate School of Engineering, Information Engineering

    Handheld Augmented Reality in education

    Full text link
    In this thesis we conduct research in Augmented Reality (AR) aimed at learning environments, where the interaction with students is carried out using handheld devices. Through three studies we explore the learning outcomes that can be obtained using handheld AR in a game that we developed for children. We explored the influence of AR in Virtual Reality Learning Environments (VRLE) and the advantages it can bring, as well as its limits. We also tested the game on two different handheld devices (a smartphone and a Tablet PC) and present conclusions comparing them with respect to satisfaction and interaction. Finally, we compare the use of touch and tangible user interfaces in AR applications for children from a Human-Computer Interaction perspective. González Gancedo, S. (2012). Handheld Augmented Reality in education. http://hdl.handle.net/10251/17973

    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    Get PDF
    How to determine highly effective and intuitive gesture sets for interactive systems tailored to end users’ preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functions to control for an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified following a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which poses challenges to researchers and practitioners to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users’ gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
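    The agreement analysis mentioned above is commonly based on an agreement score per referent: the sum of squared proportions of identical gesture proposals, following Wobbrock et al.'s formulation. The sketch below assumes gestures are compared by simple label equality, which is a simplification of how real studies group equivalent proposals.

    ```python
    from collections import Counter

    def agreement_score(proposals):
        """Agreement score for one referent.

        Sum, over each group of identical proposals, of
        (group size / total proposals) squared. `proposals` is a list
        of gesture labels elicited from participants; comparing by
        string label is a simplifying assumption.
        """
        n = len(proposals)
        return sum((count / n) ** 2 for count in Counter(proposals).values())

    # Three of four participants propose the same gesture for "next page".
    print(agreement_score(["swipe-left", "swipe-left", "tap", "swipe-left"]))
    # 0.625
    ```

    A score of 1.0 means all participants agreed; lower values signal more fragmented preferences, which is what consensus-set construction must resolve.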