
    Evaluating Context-Aware Applications Accessed Through Wearable Devices as Assistive Technology for Students with Disabilities

    The purpose of these two single-subject design studies was to evaluate the use of wearable and context-aware technologies as tools to increase independence and vocational skills for college students with intellectual disability and autism. There is a compelling need for the development of tools and strategies that will facilitate independence and self-sufficiency and address poor outcomes in adulthood for students with disabilities. Technology is considered to be a great equalizer for people with disabilities. The proliferation of new technologies allows access to real-time, contextually based information as a means to compensate for limitations in cognitive functioning and decreases the complexity of prerequisite skills needed for successful use compared with previous technologies. Six students participated in two single-subject design studies: three students participated in Study I and three different students participated in Study II. The results of these studies are discussed in the context of applying new technology applications that help individuals with intellectual disability and autism self-manage technological supports to learn new skills, set reminders, and enhance independence. During Study I, students were successfully taught to use a wearable smartglasses device, which delivered digital auditory and visual information, to complete three novel vocational tasks. The results indicated that all students learned all vocational tasks using the wearable device. Students also continued to use the device beyond the initial training phase to self-direct their learning and self-manage prompts for task completion as needed. During Study II, students were successfully taught to use a wearable smartwatch device to enter novel appointments for the coming week, as well as complete the tasks associated with each appointment. The results indicated that all students were able to self-operate the wearable device to enter appointments, attend all appointments on time, and complete all associated tasks

    Potential of mobile applications in human-centric production and logistics management

    With the increasing market penetration of smart devices (smartphones, smartwatches, and tablets), various mobile applications (apps) have been developed to fulfill tasks in daily life. Recently, efforts have been made to develop apps that support human operators in industrial work. With apps installed on commercial devices, tasks that were formerly done purely manually or with investment-intensive special-purpose devices can be performed more efficiently, at lower cost, and with fewer errors. Despite their advantages, smart devices have limitations because their embedded sensors (e.g., accelerometers) and components (e.g., cameras) are usually designed for nonindustrial use. Hence, validation experiments and case studies for industrial applications are needed to ensure the reliability of app usage. In this study, a systematic literature review was employed to identify the state of knowledge about the use of mobile apps in production and logistics management. The results show how apps can support human centricity based on the enabling technologies and components of smart devices. An outlook for future research and applications is provided, including the need for proper validation studies to ensure the diversity and reliability of apps and more research on psychosocial aspects of human-technology interaction

    Blending the Material and Digital World for Hybrid Interfaces

    The development of digital technologies in the 21st century is progressing continuously, and new device classes such as tablets, smartphones, and smartwatches are finding their way into our everyday lives. However, this development also poses problems, as the prevailing touch and gestural interfaces often lack tangibility, take little account of haptic qualities, and therefore require full attention from their users. Compared to traditional tools and analog interfaces, the human skills to experience and manipulate material in its natural environment and context remain unexploited. To combine the best of both, a key question is how the material and digital worlds can be blended to design and realize novel hybrid interfaces in a meaningful way. Research on Tangible User Interfaces (TUIs) investigates the coupling between physical objects and virtual data. In contrast, hybrid interfaces, which specifically aim to digitally enrich analog artifacts of everyday work, have not yet been sufficiently researched and systematically discussed. Therefore, this doctoral thesis rethinks how user interfaces can provide useful digital functionality while maintaining their physical properties and familiar patterns of use in the real world. The development of such hybrid interfaces raises overarching research questions about the design: Which kinds of physical interfaces are worth exploring? What type of digital enhancement will improve existing interfaces? How can hybrid interfaces retain their physical properties while enabling new digital functions? What are suitable methods to explore different designs? And how can technology-enthusiast users be supported in prototyping? For a systematic investigation, the thesis builds on a design-oriented, exploratory and iterative development process using digital fabrication methods and novel materials.
As a main contribution, four specific research projects are presented that apply and discuss different visual and interactive augmentation principles along real-world applications. The applications range from digitally enhanced paper and interactive cords to visual watch-strap extensions and novel prototyping tools for smart garments. While almost all of them integrate visual feedback and haptic input, none of them are built on rigid, rectangular pixel screens or use standard input modalities, as they all aim to reveal new design approaches. The dissertation shows how valuable it can be to rethink familiar, analog applications while thoughtfully extending them digitally. Finally, this thesis' extensive work of engineering versatile research platforms is accompanied by overarching conceptual work, user evaluations and technical experiments, as well as literature reviews

    Personal Shopping Assistance and Navigator System for Visually Impaired People

    In this paper, a personal assistant and navigator system for visually impaired people will be described. The showcase presented intends to demonstrate how partially sighted people could be aided by the technology in performing an ordinary activity, like going to a mall and moving inside it to find a specific product. We propose an Android application that integrates Pedestrian Dead Reckoning and Computer Vision algorithms, using an off-the-shelf smartphone connected to a smartwatch. The detection, recognition, and pose estimation of specific objects or features in the scene derive an estimate of user location with sub-meter accuracy when combined with a hardware-sensor pedometer. The proposed prototype interfaces with a user by means of Augmented Reality, exploring a variety of sensorial modalities other than just visual overlay, namely audio and haptic modalities, to create a seamless immersive user experience. The interface and interaction of the preliminary platform have been studied through specific evaluation methods. The feedback gathered will be taken into consideration to further improve the proposed system
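The core of the localization approach described above can be sketched as a dead-reckoning loop corrected by occasional vision-based fixes. The sketch below is illustrative only: the function names, the fixed stride length, and the blending weight are assumptions, not details from the paper.

```python
import math

# Hypothetical sketch of a Pedestrian Dead Reckoning (PDR) core: each
# detected step advances the (x, y) estimate along the current heading,
# and a computer-vision fix (an absolute position derived from a
# recognized object) is blended in to bound the accumulated drift.

STEP_LENGTH_M = 0.7  # assumed average stride length, in meters


def pdr_step(position, heading_rad, step_length=STEP_LENGTH_M):
    """Advance the (x, y) estimate by one step along the heading."""
    x, y = position
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))


def vision_fix(position, observed, weight=0.8):
    """Blend a vision-based position fix into the dead-reckoned estimate.

    A higher weight trusts the vision fix more, pulling the estimate
    toward the observed position and correcting PDR drift.
    """
    x, y = position
    ox, oy = observed
    return (x + weight * (ox - x), y + weight * (oy - y))


# Walk three steps due east (heading 0 rad), then apply a vision fix.
pos = (0.0, 0.0)
for _ in range(3):
    pos = pdr_step(pos, heading_rad=0.0)
pos = vision_fix(pos, observed=(2.0, 0.1))
print(pos)
```

In a real system the stride length would be calibrated per user and the blending weight replaced by a proper filter (e.g., a Kalman filter), but the structure of the update loop is the same.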

    Touch-Move-Release: Studies of Surface and Motion Gestures for Mobile Augmented Reality

    Recent advancements in both hardware and software for mobile devices have allowed developers to create better mobile Augmented Reality (AR) experiences, which has led to an increase in the number of mobile AR applications and users engaging in these experiences. However, despite a broad range of mobile AR applications available to date, the majority of the applications that we surveyed still primarily use surface gestures, i.e., gesturing on the touch screen surface of the device, as the default interaction method and do not utilise the affordance of three-dimensional user interaction that AR interfaces support. In this research, we have investigated and compared two methods of gesture interaction for mobile AR applications: Surface Gestures, which are commonly used in mainstream applications, and Motion Gestures, which take advantage of the spatial information of the mobile device. Our goal is to determine if motion gestures are comparable or even superior to surface gestures for mobile AR applications. To achieve this, we have conducted two user studies: an elicitation study and a validation study. The first study recruited twenty-one participants and elicited two sets of gestures, surface and motion gestures, for twelve everyday mobile AR tasks, yielding a total of five hundred and four gestures. The two sets of gestures were classified and compared in terms of goodness, ease of use, and engagement. As expected, the participants' elicited surface gestures were familiar and easy to use, while motion gestures were found more engaging. Using design patterns derived from the elicited motion gestures, we proposed a novel interaction technique called "TMR" (Touch-Move-Release). We developed a mobile AR game similar to Pokémon GO to validate this new technique and implemented a selected gesture chosen from the two gesture sets. A validation study was conducted with ten participants, and we found that the motion gesture enhanced engagement and provided a better game experience. In contrast, the surface gesture provided higher precision, resulting in higher accuracy, and was easier to use. Finally, we discuss the implications of our findings and give our design recommendations for using the elicited gestures
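The Touch-Move-Release pattern described above can be summarized as a small state machine: a touch begins the gesture, device motion while touching aims it, and the release triggers the action. The class below is a hypothetical sketch; the event names and state layout are assumptions for illustration, not the paper's implementation.

```python
# Illustrative state machine for a "Touch-Move-Release" (TMR) style
# gesture: touch down to start, move the handheld device to aim (a
# motion gesture tracked via the device pose), release to commit.


class TMRGesture:
    IDLE, AIMING = "idle", "aiming"

    def __init__(self):
        self.state = self.IDLE
        self.pose = None  # latest device pose recorded while aiming

    def on_touch_down(self):
        # Touching the screen arms the gesture.
        self.state = self.AIMING

    def on_device_move(self, pose):
        # While armed, track the spatial movement of the device itself.
        if self.state == self.AIMING:
            self.pose = pose

    def on_touch_release(self):
        # Releasing commits the gesture; the pose at release drives
        # the AR action (e.g., throwing an object in the game).
        if self.state != self.AIMING:
            return None
        self.state = self.IDLE
        return self.pose


g = TMRGesture()
g.on_touch_down()
g.on_device_move((0.1, 0.0, 0.3))  # hypothetical (x, y, z) device pose
result = g.on_touch_release()
print(result)
```

A release without a preceding touch returns `None`, so spurious events are ignored, which is one reason a gesture like this is practical to layer on top of existing touch handling.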

    Leveraging smart technologies to improve the management of diabetic foot ulcers and extend ulcer-free days in remission

    © 2020 John Wiley & Sons Ltd. The prevalent and long-neglected diabetic foot ulcer (DFU) and its related complications rank among the most debilitating and costly sequelae of diabetes. Management of the DFU is multifaceted and requires constant monitoring from patients, caregivers, and healthcare providers. The alarmingly high rates of recurrence of ulcerations in the diabetic foot require a change in our approach to care and to the vernacular in the medical literature. Our efforts should be directed not only at healing open wounds but also at maximizing ulcer-free days for the patient in diabetic foot remission. The increasing development and use of technology within every aspect of our lives represents an opportunity for creative solutions to prevent or better manage this devastating condition. In particular, recent advances in wearable and mobile health technologies appear to show promise in measuring and modulating dangerous foot pressure and inflammation to extend remission and improve the quality of life for these most complex patients. This review article discusses how harnessing wearables and digital technologies may improve the management and optimize prevention of DFUs by identifying high-risk patients for triage and timely intervention, personalizing prescription of offloading, and improving adherence to protective footwear. While still in their infancy, we envisage a future network of skin-worn, jewellery-worn, and implantable sensors that, if allowed to effectively communicate with one another and the patient, could dramatically impact measuring, personalizing, and managing how we and the patients we serve move through our collective world
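The kind of pressure-monitoring logic the review envisions can be sketched as a simple sustained-threshold alert: flag a sensor region when plantar pressure stays high across consecutive readings, prompting the patient to offload. The threshold and window below are purely illustrative assumptions, not clinical values.

```python
# Hypothetical alerting sketch for a skin-worn plantar-pressure sensor:
# raise an alert only when pressure stays above a limit for several
# consecutive samples, so brief spikes from normal gait are ignored.

PRESSURE_LIMIT_KPA = 200   # assumed threshold; not a clinical value
SUSTAINED_SAMPLES = 3      # consecutive readings required to alert


def sustained_overpressure(readings, limit=PRESSURE_LIMIT_KPA,
                           window=SUSTAINED_SAMPLES):
    """Return True if `window` consecutive readings exceed `limit`."""
    run = 0
    for reading in readings:
        run = run + 1 if reading > limit else 0
        if run >= window:
            return True
    return False


print(sustained_overpressure([180, 210, 220, 230, 150]))  # sustained: alert
print(sustained_overpressure([210, 150, 220, 150, 230]))  # spikes only: no alert
```

A production system would need per-patient calibration and debouncing of the alert itself, but the sustained-window idea is what separates actionable offloading prompts from noise.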

    A comparison of surface and motion user-defined gestures for mobile augmented reality.

    Augmented Reality (AR) technology permits interaction between the virtual and physical worlds. Recent advancements in mobile devices allow for a better mobile AR experience, in turn improving the user adoption rate and increasing the number of mobile AR applications across a wide range of disciplines. Nevertheless, the majority of mobile AR applications that we have surveyed adopted surface gestures as the default interaction method for the AR experience and have not utilised the three-dimensional (3D) spatial interaction supported by AR interfaces. This research investigates two types of gestures for interacting in mobile AR applications: surface gestures, which have been deployed by mainstream applications, and motion gestures, which take advantage of the 3D movement of the handheld device. Our goal is to find out whether there exists a gesture-based interaction suitable for handheld devices that can utilise the 3D interaction of mobile AR applications. We conducted two user studies, an elicitation study and a validation study. In the elicitation study, we elicited two sets of gestures, surface and motion, for mobile AR applications. We recruited twenty-one participants to perform twelve common mobile AR tasks, which yielded a total of five hundred and four gestures. We classified and illustrated the two sets of gestures and compared them in terms of goodness, ease of use, and engagement. The elicitation process yielded two separate sets of user-defined gestures: legacy surface gestures, which participants found familiar and easy to use, and motion gestures, which were found to be more engaging. From the design patterns of the motion gestures, we proposed a novel interaction technique for mobile AR called TMR (Touch-Move-Release). To validate our elicited gestures in an actual application, we conducted a second study in which we developed a mobile AR game similar to Pokémon GO and implemented the selected gestures from the elicitation study. The study was conducted with ten participants, and we found that the motion gesture provided more engagement and a better game experience. Nevertheless, surface gestures were more accurate and easier to use. We discuss the implications of our findings and give design recommendations on the use of the elicited gestures. Our research can be further explored in the future: it can serve as a "prequel" to the design of better gesture-based interaction techniques for different tasks in various mobile AR applications

    Enabling Collaborative Visual Analysis across Heterogeneous Devices

    We are surrounded by novel device technologies emerging at an unprecedented pace. These devices are heterogeneous in nature: in large and small sizes with many input and sensing mechanisms. When many such devices are used by multiple users with a shared goal, they form a heterogeneous device ecosystem. A device ecosystem has great potential in data science to act as a natural medium for multiple analysts to make sense of data using visualization. It is essential as today's big data problems require more than a single mind or a single machine to solve them. Towards this vision, I introduce the concept of collaborative, cross-device visual analytics (C2-VA) and outline a reference model to develop user interfaces for C2-VA. This dissertation covers interaction models, coordination techniques, and software platforms to enable full stack support for C2-VA. Firstly, we connected devices to form an ecosystem using software primitives introduced in the early frameworks from this dissertation. To work in a device ecosystem, we designed multi-user interaction for visual analysis in front of large displays by finding a balance between proxemics and mid-air gestures. Extending these techniques, we considered the roles of different devices–large and small–to present a conceptual framework for utilizing multiple devices for visual analytics. When applying this framework, findings from a user study showcase flexibility in the analytic workflow and potential for generation of complex insights in device ecosystems. Beyond this, we supported coordination between multiple users in a device ecosystem by depicting the presence, attention, and data coverage of each analyst within a group. Building on these parts of the C2-VA stack, the culmination of this dissertation is a platform called Vistrates. This platform introduces a component model for modular creation of user interfaces that work across multiple devices and users. 
A component is an analytical primitive–a data processing method, a visualization, or an interaction technique–that is reusable, composable, and extensible. Together, components can support a complex analytical activity. On top of the component model, support for collaboration and device ecosystems comes for free in Vistrates. Overall, this enables the exploration of new research ideas within C2-VA
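The component model described above can be illustrated with a minimal sketch: each analytical primitive exposes the same interface, so components compose into a pipeline regardless of whether they process, visualize, or handle interaction. The class and function names below are assumptions for illustration and are not the actual Vistrates API (which is web-based).

```python
# Illustrative sketch of a component model for visual analytics: every
# component is an analytical primitive with a uniform run() interface,
# making components reusable and composable into larger activities.


class Component:
    """Base analytical primitive: processing, visualization, or interaction."""

    def run(self, data):
        raise NotImplementedError


class FilterComponent(Component):
    """A data-processing primitive: keep rows matching a predicate."""

    def __init__(self, predicate):
        self.predicate = predicate

    def run(self, data):
        return [row for row in data if self.predicate(row)]


class SummaryComponent(Component):
    """A primitive that reduces data to summary statistics."""

    def run(self, data):
        return {"count": len(data), "mean": sum(data) / len(data)}


def compose(*components):
    """Chain components so the output of one feeds the next."""
    def pipeline(data):
        for component in components:
            data = component.run(data)
        return data
    return pipeline


analyze = compose(FilterComponent(lambda v: v > 0), SummaryComponent())
result = analyze([-2, 1, 3, -1, 4])
print(result)
```

Because each component only depends on the uniform interface, the same pipeline could be split across devices or shared among collaborators, which is the property the dissertation builds on.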