
    Factors influencing reading news on the mobile devices in Qatar in light of augmented reality (AR) & Virtual reality (VR)

    Purpose: The purpose of this study is to examine and better understand the factors influencing reading news on mobile devices in Qatar from the viewpoint of intention to adopt augmented reality (AR) and virtual reality (VR). Design/Methodology/Approach: A large convenience sample of 699 respondents from Qatar was surveyed. The researcher employed the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity to establish the construct validity of the instrument using SPSS. The four extracted and rotated dimensions were found to be reliable and valid. Main Findings: Findings from multiple regression analysis confirmed two out of six hypotheses. Of the six explanatory variables, two, (1) nationality and (2) interest in using AR and VR, are significant in predicting the use of mobile devices to follow regional and international news in Qatar from the viewpoint of intention to adopt AR and VR. Implications: Gone are the days when Qataris had to wait for traditional printed newspapers. A more proactive approach from decision makers in Qatar's newspaper printing industry is needed to benefit from the findings of this research and the digitalization of international news. Novelty: This article empirically correlates two fields of research: the digital copy of the real world and reading international news. This work was supported by the Social and Economic Survey Research Institute, Qatar University, and is sponsored by the Qatar National Research Fund. The researcher would like to acknowledge Dr. Justin Gengler for making the data available. The author would also like to thank Mr. John Lee Holmes, Mr. Abdulrahman Rahmany, Mr. Anis Miladi and Mr. Isam Mohamed Abdel Hameed for giving so generously of their time at various stages of the data collection and entry.
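The abstract names two standard factorability checks, the KMO measure of sampling adequacy and Bartlett's test of sphericity. As an illustrative sketch (not the study's SPSS output; function names and the synthetic data are my own), both can be computed directly from a data matrix with NumPy/SciPy:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: H0 = the correlation matrix is identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)

def kmo_overall(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Partial correlations obtained from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / d
    np.fill_diagonal(R, 0)
    np.fill_diagonal(partial, 0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())
```

A significant Bartlett result (small p-value) and a KMO above roughly 0.6 are the conventional signals that factor extraction, as performed in the study, is defensible.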


    A Toolkit for Exploring Augmented Reality Through Construction with Children

    Augmented reality is becoming mainstream among children, thanks to interactive successes in video games and social networks. Building on this interest, we present CartonEd, an open and complete toolkit, suitable for children, dedicated to the construction of an augmented reality headset device. The toolkit lets children play with and explore augmented reality with and beyond handheld devices. Inspired by the Do-It-Yourself movement, the toolkit includes different components such as blueprints, tutorials, videos, mobile apps, a software development kit, and an official website. Among the mobile applications, one is designed to guide children through the construction process while experiencing augmented reality. To validate our solution (in particular the construction process and the guiding app) and understand its effect on children's relation to augmented reality, we conducted four construction sessions. Our study examines the usability of the guiding app and the construction process. We report in this paper the main components of the CartonEd toolkit and the results of an evaluation among 57 children and teenagers (aged 8-16), showing a positive outcome: all constructed devices were functional, and the participants reported positive feelings and wishes regarding augmented reality.

    Enriching mobile interaction with garment-based wearable computing devices

    Wearable computing is on the brink of moving from research to mainstream. The first simple products, such as fitness wristbands and smart watches, have hit the mass market and achieved considerable market penetration. However, the number and versatility of research prototypes in the field of wearable computing are far beyond the devices available on the market. In particular, smart garments, as a specific type of wearable computer, have high potential to change the way we interact with computing systems. Due to their proximity to the user's body, smart garments allow implicit and explicit user input to be sensed unobtrusively. Smart garments are capable of sensing physiological information, detecting touch input, and recognizing the movement of the user. In this thesis, we explore how smart garments can enrich mobile interaction. Employing a user-centered design process, we demonstrate how different input and output modalities can enrich the interaction capabilities of mobile devices such as mobile phones or smart watches. To understand the context of use, we chart the design space for mobile interaction through wearable devices, focusing on device placement on the body as well as interaction modality. We use a probe-based research approach to systematically investigate the possible inputs and outputs for garment-based wearable computing devices. We develop six different research probes showing how mobile interaction benefits from wearable computing devices and what requirements these devices pose for mobile operating systems. On the input side, we look at explicit input using touch and mid-air gestures as well as implicit input using physiological signals. Although touch input is well known from mobile devices, the limited screen real estate as well as the occlusion of the display by the input finger are challenges that can be overcome with touch-enabled garments. Additionally, mid-air gestures provide a more sophisticated and abstract form of input. We present a gesture elicitation study to address the special requirements of mobile interaction and present the resulting gesture set. As garments are worn, they allow different physiological signals to be sensed, and we explore how to leverage these signals for implicit input. We conduct a study assessing physiological information by focusing on the workload of drivers in an automotive setting, and show that we can infer the driver's workload from these physiological signals. Besides the input capabilities of garments, we explore how garments can be used as output. We present research probes covering the most important output modalities, namely visual, auditory, and haptic. We explore how low-resolution displays can serve as context displays and how and where content should be placed on such a display. For auditory output, we investigate a novel authentication mechanism utilizing the closeness of wearable devices to the body: by probing audio cues through the head of the user and re-recording them, user authentication is feasible. Lastly, we investigate electrical muscle stimulation (EMS) as a haptic feedback method and show that, by actuating the user's body, an embodied form of haptic feedback can be achieved. From the aforementioned research probes, we distil a set of design recommendations, grouped into interaction-based and technology-based recommendations, which serve as a basis for designing novel ways of mobile interaction. We implement a system based on these recommendations that supports developers in integrating wearable sensors and actuators by providing an easy-to-use API for accessing these devices. In conclusion, this thesis broadens the understanding of how garment-based wearable computing devices can enrich mobile interaction, and outlines challenges and opportunities on both an interaction and a technological level. The unique characteristics of smart garments make them a promising technology for taking the next step in mobile interaction.
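The abstract reports inferring driver workload from garment-sensed physiological signals without detailing the pipeline. As a hedged sketch of the general approach (the feature choice and threshold are my assumptions, not the thesis's method), one widely used heart-rate-variability index can be computed from inter-beat (RR) intervals like this:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals, a common
    HRV index; lower RMSSD typically accompanies higher workload or stress."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def workload_flag(rr_intervals_ms, threshold_ms=25.0):
    """Crude binary workload indicator: RMSSD below threshold suggests high
    workload. The threshold value here is purely illustrative."""
    return rmssd(rr_intervals_ms) < threshold_ms
```

A real classifier would combine several such indices across signals (heart rate, skin conductance, respiration) rather than thresholding one feature.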

    Utilising presence in places to support mobile interaction

    Physical places are given contextual meaning by the objects and people that make up the space. Presence in physical places can be utilised to support mobile interaction by making access to media and notifications on a smartphone easier and more visible to other people. Smartphone interfaces can be extended into the physical world in a meaningful way by anchoring digital content to artefacts, and interactions situated around physical artefacts can provide contextual meaning to private manipulations with a mobile device. Additionally, places themselves are designed to support a set of tasks, and the logical structure of places can be used to organise content on the smartphone. Menus that adapt the functionality of a smartphone can support the user by presenting the tools most likely to be needed just in time, so that information needs can be satisfied quickly and with little cognitive effort. Furthermore, places are often shared with people whom the user knows, and the smartphone can facilitate social situations by providing access to content that stimulates conversation. However, the smartphone can also disrupt a collaborative environment by alerting the user with unimportant notifications, or by drawing the user into the digital world with attractive content that is only shown on a private screen. Sharing smartphone content on a situated display creates an inclusive and unobtrusive user experience, and can increase focus on a primary task by allowing content to be read at a glance. Mobile interaction situated around artefacts of personal places is investigated as a way to support users in accessing content from their smartphone while managing their physical presence. A menu that adapts to personal places is evaluated to reduce the time and effort of app navigation, and coordinating smartphone content on a situated display is found to support social engagement and the negotiation of notifications. Improving the sensing of smartphone users in places is a challenge that is outside the scope of this thesis. Instead, interaction designers and developers should be provided with low-cost positioning tools that utilise presence in places and enable quantitative and qualitative data to be collected in user evaluations. Two lightweight positioning tools are developed with the low-cost sensors currently available: the Microsoft Kinect depth sensor allows the movements of a smartphone user to be tracked in a limited area of a place, and Bluetooth beacons enable the larger context of a place to be detected. Positioning experiments with each sensor are performed to highlight the capabilities and limitations of current sensing techniques for designing interactions with a smartphone. Both tools enable prototypes to be built with a rapid prototyping approach, and mobile interactions can be tested with more advanced sensing techniques as they become available. Sensing technologies are becoming pervasive, and it will soon be possible to perform reliable place detection in the wild. Novel interactions that utilise presence in places can support smartphone users by making access to useful functionality easier and more visible to the people who matter most in everyday life.
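The Bluetooth-beacon tool described above detects the larger context of a place. A common way such tools interpret beacon readings, sketched here with illustrative calibration constants and a made-up place map (the thesis's actual implementation is not given in the abstract), is the log-distance path-loss model plus a strongest-beacon rule:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance (metres) to a beacon from a received signal strength
    using the log-distance path-loss model. tx_power_dbm is the calibrated
    RSSI measured at 1 m; both constants here are illustrative defaults."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def nearest_place(readings, beacon_places):
    """Map the strongest beacon reading to its labelled place.
    readings: {beacon_id: rssi_dbm}; beacon_places: {beacon_id: place_name}."""
    strongest = max(readings, key=readings.get)
    return beacon_places[strongest]
```

In practice RSSI is noisy, so deployed systems smooth readings over a window before applying rules like these; the thesis's positioning experiments probe exactly such limitations.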

    A comparison of processing techniques for producing prototype injection moulding inserts.

    This project investigates processing techniques for producing low-cost moulding inserts for the particulate injection moulding (PIM) process. Prototype moulds were made by both additive and subtractive processes, as well as a combination of the two. The general motivation was to reduce the entry cost for users considering PIM. PIM cavity inserts were first made by conventional machining from a polymer block using the Pocket NC desktop mill. PIM cavity inserts were also made by fused filament deposition modelling using the Tiertime UP Plus 3D printer. The injection moulding trials revealed surface-finish and part-removal defects. The feedstock was a titanium metal blend, which is brittle in comparison to commodity polymers; this brittleness, in combination with the mesoscale features, small cross-sections, and complex geometries, was considered the main source of the problems. For both processing methods, fixes were identified and made to test this theory. These consisted of a blended approach that combined the additive and subtractive processes. The parts produced by the three processing methods are investigated, and their respective merits and issues are discussed.

    Reducing risk in pre-production investigations through undergraduate engineering projects.

    This poster is the culmination of final-year Bachelor of Engineering Technology (B.Eng.Tech) student projects in 2017 and 2018. The B.Eng.Tech is a level-seven qualification that aligns with the Sydney Accord for a three-year engineering degree and hence is internationally benchmarked. The enabling mechanism of these projects is the industry connectivity that creates real-world projects and highlights the benefits of process investigation at the technologist level. The methodologies we use are basic and transparent, with enough depth of technical knowledge to ensure the industry partners gain from the collaboration. The process we use minimizes the disconnect between the student and the industry supervisor while maintaining the academic freedom of the student and the commercial sensitivities of the supervisor. The general motivation for this approach is to reduce industry's entry cost in considering new technologies, thereby reducing risk to core business and shareholder profits. The poster presents several images and interpretive dialogue to explain the positive and negative aspects of the student process.

    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. The subtlety and unpredictability of people's social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, in which the interaction complexity can be adjusted to account for these individuals' needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system which uses an interactive computer character as a pedagogical agent (PA) that simulates a human tutor teaching sight-word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprising an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of the intervention, and post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase augments the tutoring system developed in the first phase with an autonomous humanoid robot that adopts a peer metaphor, serving the instructional role of a peer for the student. With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is extended to a novel triadic interaction, in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight-word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of the intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-targeted words by observing the instruction given to the RP, but they also acquired their target words more efficiently and with fewer errors thanks to the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants' affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear, EmotiGO, for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions. EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the user's usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase models the students' engagement levels using their physiological signals, collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students' engagement levels during the interaction with the tutoring system are rated by two trained coders using video recordings of the instructional sessions. Supervised pattern recognition algorithms are then used to map the physiological indices to the engagement scores. The results indicate that the trained models classify participants' engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
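The fourth phase maps physiological indices to coder-rated engagement labels with supervised pattern recognition, but the abstract does not name the algorithms. As a minimal stand-in for that mapping step (a simple nearest-centroid classifier over made-up two-dimensional feature vectors; the dissertation's actual models and features may differ):

```python
import numpy as np

class NearestCentroid:
    """Minimal supervised classifier mapping physiological feature vectors
    to engagement labels -- an illustrative stand-in, not the dissertation's."""

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.labels_ = sorted(set(y))
        # One centroid (mean feature vector) per engagement level
        self.centroids_ = {
            c: X[[i for i, t in enumerate(y) if t == c]].mean(axis=0)
            for c in self.labels_
        }
        return self

    def predict(self, X):
        return [
            min(self.labels_,
                key=lambda c: np.linalg.norm(np.asarray(x, float)
                                             - self.centroids_[c]))
            for x in X
        ]
```

In the dissertation's setting, the "features" would be the indices extracted from skin conductivity, photoplethysmography, and skin temperature, and the labels the coders' engagement ratings; accuracy would be estimated with held-out sessions rather than training data.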

    A Utility Framework for Selecting Immersive Interactive Capability and Technology for Virtual Laboratories

    There has been an increase in the use of virtual reality (VR) technology in the education community, since VR is emerging as a potent educational tool that offers students a rich source of educational material and makes learning exciting and interactive. With the rising popularity and market expansion of VR technology in the past few years, a variety of consumer VR devices has boosted the interest of educators and researchers in using these devices for engineering and science laboratory experiments. However, little is known about how well such devices are suited to active learning in a laboratory environment. This research aims to address this gap by formulating a utility framework to help educators and decision-makers efficiently select the type of VR device that matches the design and capability requirements of their virtual laboratory blueprint. Furthermore, a use case of the framework is demonstrated, not only by surveying five types of VR devices, ranging from low-immersive to fully immersive, along with their capabilities (i.e., hardware specifications, cost, and availability), but also by considering the interaction techniques each VR device supports for the desired laboratory task. To validate the framework, a research study is carried out to compare these five VR devices and investigate which device provides the overall best fit for the 3D virtual laboratory content that we implemented, based on interaction level, usability, and performance effectiveness.
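The abstract describes a utility framework for matching device capabilities to laboratory requirements without giving its scoring scheme. One common shape for such a framework is a weighted-sum utility over normalised criteria; the sketch below uses criteria names, weights, and device ratings that are entirely illustrative, not the paper's:

```python
def utility_score(ratings, weights):
    """Weighted-sum utility over normalised capability ratings in [0, 1].
    Criteria names and weights are illustrative assumptions."""
    return sum(weights[c] * ratings[c] for c in weights)

def best_device(devices, weights):
    """Return the device name with the highest utility for the blueprint."""
    return max(devices, key=lambda name: utility_score(devices[name], weights))

# Hypothetical ratings for two of the five surveyed device classes
weights = {"immersion": 0.5, "cost": 0.3, "availability": 0.2}
devices = {
    "low-immersive (desktop)": {"immersion": 0.2, "cost": 1.0, "availability": 1.0},
    "full-immersive (HMD)": {"immersion": 1.0, "cost": 0.3, "availability": 0.6},
}
```

Changing the weights shifts the recommendation, which is exactly the lever such a framework gives educators: a budget-constrained lab would weight cost and availability higher and may land on a low-immersive device instead.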