3,188 research outputs found

    Evaluating students readiness, expectancy, acceptance and effectiveness of augmented reality based construction technology education

    Augmented reality (AR) has the potential to enhance teaching and learning in construction technology, which involves learning construction processes and understanding construction elements. AR can also change and improve the nature of education, because media can be overlaid onto the real world for content consumption using smartphones and tablet devices, enabling students to access information anywhere and anytime. However, before implementing a new teaching approach, it must be determined whether students are ready to use AR, what they expect when using AR in learning, whether they accept it, and how effective it is. Therefore, the purpose of this study is to (1) identify students' readiness to use AR in teaching; (2) identify what students expect when using AR to learn construction technology; (3) identify students' acceptance of AR in learning; and (4) evaluate the effectiveness of AR in construction technology learning. A quantitative analysis measured the mean scores for objectives 1-3 based on students' questionnaire responses. The second phase of the study, determining whether AR is effective for learning, compared pre-test and post-test results. The results show assuring indicators that students accept the use of AR in construction technology education as a learning tool and that the application fulfils their expectations of how AR can aid the learning process. Meanwhile, the results on AR effectiveness in construction technology showed noticeable improvements between students' pre-test and post-test results, with 68% of students improving their scores.
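As a rough illustration of the quantitative method this abstract describes (mean scores for objectives 1-3 plus a pre-test/post-test comparison), here is a minimal sketch; the scores and helper names are hypothetical, not the study's data:

```python
# Hedged sketch of the analysis described in the abstract: mean Likert scores
# for the questionnaire objectives, and the share of students whose post-test
# score exceeds their pre-test score. All values are illustrative.

def mean_score(responses):
    """Mean Likert score across students for one objective."""
    return sum(responses) / len(responses)

def improvement_rate(pre, post):
    """Fraction of students whose post-test score beats their pre-test score."""
    improved = sum(1 for a, b in zip(pre, post) if b > a)
    return improved / len(pre)

readiness = [4, 5, 3, 4, 4]     # hypothetical 5-point Likert responses
pre = [40, 55, 60, 35, 70]      # hypothetical pre-test scores
post = [65, 70, 58, 60, 85]     # hypothetical post-test scores

print(mean_score(readiness))        # mean readiness score
print(improvement_rate(pre, post))  # share of students who improved
```

In the study, the reported figure of 68% corresponds to the improvement-rate computation applied to the full cohort.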

    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Augmented reality in architecture and construction education: state of the field and opportunities

    Over the past decade, the architecture and construction (AC) industries have been evolving from traditional practices into more current, interdisciplinary and technology-integrated methods. Complex digital technologies and mobile computing, such as simulation, computational design and immersive technologies, have been exploited for purposes such as reducing cost and time, improving design and enhancing overall project efficiency. Immersive technologies, and augmented reality (AR) in particular, have proven extremely beneficial in this field. However, the application of these technologies and devices in higher-education teaching and learning environments remains scarce and largely unexplored. More importantly, there is still a significant gap in developing pedagogies and teaching methods that embrace such technologies in the AC curricula. This study therefore aims to critically analyse the current state of the art and present developed and improved AR approaches in AC teaching and learning methods, addressing the identified gap in the extant literature while developing transformational frameworks that link the gaps to a future research agenda. The analysis incorporates the critical role of AR implications for AC students' skillsets, pedagogical philosophies in AC curricula, and techno-educational aspects and content domains in the design and implementation of AR environments for AC learning. The outcomes of this comprehensive study prepare trainers, instructors, and the future generation of AC workers for the rapid advancements in this industry.

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges related to chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, if the image is taken at close range. In our solution, end users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we have demonstrated an approach to 3D wound reconstruction that works even for a single wound image.
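The core of projective texture mapping, as mentioned in this abstract, is projecting each mesh vertex through a camera model to obtain texture (UV) coordinates in the photograph. A minimal sketch follows; the intrinsic matrix, image size, and vertices are illustrative placeholders, not values from the paper, and the paper's actual algorithm may differ in detail:

```python
import numpy as np

def project_to_uv(vertices, K, img_w, img_h):
    """Project Nx3 camera-space vertices to normalized UV texture coordinates.

    vertices: points in the camera coordinate frame (z > 0, in front of camera)
    K: 3x3 pinhole intrinsic matrix (focal lengths and principal point)
    """
    pts = (K @ vertices.T).T          # apply intrinsics
    uv = pts[:, :2] / pts[:, 2:3]     # perspective divide -> pixel coordinates
    uv[:, 0] /= img_w                 # normalize u into [0, 1]
    uv[:, 1] /= img_h                 # normalize v into [0, 1]
    return uv

# Assumed camera: 800 px focal length, principal point at image center (640x480).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
verts = np.array([[0.0, 0.0, 2.0],    # a vertex on the optical axis
                  [0.1, -0.05, 2.5]])
print(project_to_uv(verts, K, 640, 480))
```

A vertex on the optical axis maps to the image center (UV 0.5, 0.5), which is a quick sanity check for the projection. A full implementation would additionally handle occlusion, so that only surfaces visible from the camera receive the wound texture.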

    Spherical tangible user interfaces in mixed reality

    The popularity of virtual reality (VR) and augmented reality (AR) has grown rapidly in recent years, both in academia and in commercial applications. This is rooted in technological advances and affordable head-mounted displays (HMDs). Whether in games or professional applications, HMDs allow for immersive audio-visual experiences that transport users to compelling digital worlds or convincingly augment the real world. However, as true to life as these experiences have become in a visual and auditory sense, the question remains how we can model interaction with these virtual environments in an equally natural way. Solutions providing intuitive tangible interaction would have the potential to make the mixed reality (MR) spectrum fundamentally more accessible, especially for novice users. Research on tangible user interfaces (TUIs) has pursued this goal by coupling virtual to real-world objects. Tangible interaction has been shown to provide significant advantages for numerous use cases. Spherical tangible user interfaces (STUIs) present a special case of these devices, mainly due to their ability to fully embody any spherical virtual content. In general, spherical devices are increasingly transitioning from mere technology demonstrators to usable multi-modal interfaces. In this dissertation, we explore the application of STUIs in MR environments, primarily by comparing them to state-of-the-art input techniques in four different contexts. We thereby investigate questions of embodiment, overall user performance, and the ability of STUIs, relying on their shape alone, to support complex interaction techniques. First, we examine how spherical devices can embody immersive visualizations. In an initial study, we test the practicality of a tracked sphere embodying three kinds of visualizations. We examine simulated multi-touch interaction on a spherical surface and compare two different sphere sizes to VR controllers. Results confirmed our prototype's viability and indicate improved pattern recognition and advantages for the smaller sphere. Second, to further substantiate VR as a prototyping technology, we demonstrate how a large tangible spherical display can be simulated in VR. We show how VR can fundamentally extend the capabilities of real spherical displays by adding physical rotation to a simulated multi-touch surface. After a first study evaluating the general viability of simulating such a display in VR, our second study revealed the superiority of a rotating spherical display. Third, we present a concept for a spherical input device for tangible AR (TAR). We show how such a device can provide basic object manipulation capabilities utilizing two different modes, and compare it to controller techniques of increasing hardware complexity. Our results show that our button-less sphere-based technique is only outperformed by a mode-less controller variant that uses physical buttons and a touchpad. Fourth, to study the intrinsic problem of VR locomotion, we explore two opposing approaches: a continuous and a discrete technique. For the first, we demonstrate a spherical locomotion device supporting two different locomotion paradigms that propel a user's first-person avatar accordingly. We found that a position-control paradigm applied to a sphere mostly performed superior to button-supported controller interaction. For discrete locomotion, we evaluate the concept of a spherical world in miniature (SWIM) used for avatar teleportation in a large virtual environment. Results showed that users subjectively preferred the sphere-based technique over regular controllers and, on average, achieved lower task times and higher accuracy.
To conclude the thesis, we discuss our findings, insights, and subsequent contributions to our central research questions to derive recommendations for designing techniques based on spherical input devices and an outlook on the future development of spherical devices in the mixed reality spectrum.

    IMMERSIVE VIRTUAL REALITY FOR EXPERIENTIAL LEARNING

    Immersive virtual reality is any computer-generated environment capable of fooling the user's senses with a feeling of presence (being there). Two types of hardware are usually used to access immersive virtual reality: Head-Mounted Displays (HMDs) or the Cave Automatic Virtual Environment (CAVE). Due to its ability to generate any kind of environment, either real or imaginary, immersive virtual reality can be used as a tool to deliver experiential learning, as described by Kolb (1984) in his experiential learning cycle model. The model identifies four steps that, as parts of a cycle, describe the process of learning by experience: (1) concrete experience, (2) observations and reflections, (3) formulation of abstract concepts and generalization, and (4) testing the implications of concepts in new situations. Immersive virtual reality has existed for decades, but in spite of the big buzz around it, large-scale adoption of the technology has not occurred yet. One of the main barriers to adoption is the high cost of the gear needed. However, recent developments in technology are pushing prices down. For instance, Google Cardboard offers a very inexpensive way to experience virtual reality through smartphones. Moreover, the prices of HMDs and of the powerful computers needed to run virtual reality software are expected to fall, as has already happened with desktop computers. The Technology Acceptance Model (TAM), introduced by Davis (1989), is an attempt to understand the factors behind the adoption of new technologies. In particular, this model introduces the two key concepts of (1) perceived usefulness and (2) perceived ease of use. Through this lens, the manuscript attempts to shed some light on the current state of adoption. The findings of this study have both theoretical and managerial implications, useful to schools and vendors alike.
The main finding of this study is that more research is needed to understand how people learn in immersive virtual reality and how to develop software capable of delivering experiential learning. A tighter collaboration between schools, students, manufacturers, and software developers seems to be the most viable way forward.

    SERENITY: THE FUTURE OF COGNITIVE MODULATION FOR THE HYPER ENABLED OPERATOR

    In the Special Operations community, cognitive enhancement and resilience are at the forefront of the 2035 Hyper Enabled Operator (HEO) Program. The United States Special Operations Command's vision is to combine cutting-edge communications and data capabilities into a next-generation tactical system for the end user. Using algorithms and autonomous systems to enhance the ability to make rational decisions faster can ultimately determine life or death on the battlefield. Over the past several years, cognitive enhancement through brain-computer interface (BCI) technology has seen major breakthroughs in the medical and scientific fields. This thesis analyzes BCI technology for future cognitive dominance and cognitive overmatch in the Hyper Enabled Operator. Machine-assisted cognitive enhancement is not beyond reach for special operations; based on the research and multiple interviews with subject matter experts, it is concluded that interfaces using transcranial alternating current stimulation (tACS), median nerve stimulation (MNS), and several other exploratory procedures have succeeded in enhancing cognition and reducing cognitive load. Special Operations should not shy away from transformational innovative technology or wait for commercial or lab-tested solutions. To start, Special Operations should foster avant-garde theories that provide solutions and evolve ideas into unsophisticated prototypes that can be fielded immediately.
    Major, United States Army
    Approved for public release. Distribution is unlimited.

    Proceedings of the 1st joint workshop on Smart Connected and Wearable Things 2016

    These are the proceedings of the 1st Joint Workshop on Smart Connected and Wearable Things (SCWT'2016, co-located with IUI 2016). The SCWT workshop integrates the SmartObjects and IoWT workshops. It focuses on advanced interactions with smart objects in the context of the Internet of Things (IoT), and on the increasing popularity of wearables as advanced means to facilitate such interactions.

    The Virtual Reality Revolution: The Vision and the Reality
