
    Mobile, collaborative augmented reality using cloudlets

    The evolution of mobile applications toward advanced interactivity and demanding multimedia features is still ongoing. Novel application concepts such as mobile Augmented Reality (AR) are, however, hindered by the inherently limited resources available on mobile platforms (notwithstanding the dramatic performance increases of mobile hardware). Offloading resource-intensive application components to the cloud, also known as "cyber foraging", has proven to be a valuable solution in a variety of scenarios, and it is also highly promising for collaborative scenarios in which data and its processing are shared between multiple users. In this paper, we investigate the challenges posed by offloading collaborative mobile applications. We present a middleware platform capable of autonomously deploying software components to minimize average CPU load while guaranteeing smooth collaboration. As a use case, we present and evaluate a collaborative AR application offering interaction between users, the physical environment, and the virtual objects superimposed on that environment.
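The placement decision at the heart of such a cyber-foraging middleware can be illustrated with a small sketch. This is not the paper's algorithm: the function name, cost model, and latency threshold below are illustrative assumptions, showing only the general trade-off of minimizing CPU load while keeping interaction responsive.

```python
# Hypothetical sketch of a cyber-foraging placement decision: run a component
# wherever the projected CPU load is lowest, but only offload when the network
# round-trip still fits a latency budget needed for smooth collaboration.
# All names and thresholds are illustrative, not taken from the paper.

def choose_placement(local_cpu_load, cloudlet_cpu_load,
                     component_cpu_cost, network_rtt_ms,
                     latency_budget_ms=100.0):
    """Return 'local' or 'cloudlet' for a single application component."""
    # Offloading only helps if the round trip stays within the budget.
    if network_rtt_ms > latency_budget_ms:
        return "local"
    # Compare projected CPU load after placing the component on each side.
    local_after = local_cpu_load + component_cpu_cost
    cloudlet_after = cloudlet_cpu_load + component_cpu_cost
    return "cloudlet" if cloudlet_after < local_after else "local"

print(choose_placement(0.8, 0.3, 0.4, 20.0))   # lightly loaded cloudlet wins
print(choose_placement(0.2, 0.9, 0.4, 20.0))   # overloaded cloudlet: stay local
print(choose_placement(0.8, 0.1, 0.4, 250.0))  # RTT exceeds budget: stay local
```

A real middleware would refresh these load and RTT measurements continuously and re-deploy components as conditions change.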

    PlaceRaider: Virtual Theft in Physical Spaces with Smartphones

    As smartphones become more pervasive, they are increasingly targeted by malware. At the same time, each new generation of smartphones features increasingly powerful onboard sensor suites. A new strain of sensor malware has been developing that leverages these sensors to steal information from the physical environment (e.g., researchers have recently demonstrated how malware can listen for spoken credit card numbers through the microphone, or feel keystroke vibrations using the accelerometer). Yet the possibilities of what malware can see through a camera have been understudied. This paper introduces a novel visual malware called PlaceRaider, which allows remote attackers to engage in reconnaissance and what we call virtual theft. Through completely opportunistic use of the phone's camera and other sensors, PlaceRaider constructs rich, three-dimensional models of indoor environments. Remote burglars can thus download the physical space, study the environment carefully, and steal virtual objects from it (such as financial documents, information on computer monitors, and personally identifiable information). Through two human-subject studies we demonstrate the effectiveness of using mobile devices as powerful surveillance and virtual-theft platforms, and we suggest several possible defenses against visual malware.

    SIMNET: simulation-based exercises for computer network curriculum through gamification and augmented reality

    In recent years, gamification and Augmented Reality techniques have been applied to many subjects and environments. In particular, their implementation can strengthen teaching and learning processes in schools and universities, enabling new forms of knowledge based on interaction with objects and contributing play, experimentation, and collaborative work. Using the technologies mentioned above, we develop an application that serves as a didactic tool supporting the area of Computer Networks. The application provides simulated, controlled environments for creating computer networks, taking into account the necessary physical devices and the different physical and logical topologies. The main goal is to enrich students' learning experiences and contribute to teacher-student interaction through the collaborative learning provided by the tool, minimizing the need for expensive equipment in learning environments.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech

    Smart Photos

    Recent technological leaps have been a great catalyst for changing how people interact with the world around them. Specifically, the field of Augmented Reality has led to many software and hardware advances that form a digital intermediary between humans and their environment. As of now, Augmented Reality is available to the select few with the means of obtaining Google Glass, Oculus Rift, and other relatively expensive platforms. Be that as it may, the tech industry's current goal has been the integration of this technology into the public's smartphones and everyday devices. One inhibitor of this goal is the difficulty of finding an Augmented Reality application whose usage satisfies an everyday need or attraction. Augmented Reality presents our world from a unique perspective that can be found nowhere else in the natural world. However, visual impact is weak without substance or meaning. The best technology is invisible, and what makes a good product is its ability to fill a void in a person's life. The most important researchers in this field are those who have been augmenting tasks that most would consider mundane, such as overlaying nutritional information directly onto a meal [4]. In the same vein, we hope to incorporate Augmented Reality into everyday life by unlocking the full potential of a technology often believed to have already reached its peak. The humble photograph, a classic invention and unwavering enhancement to the human experience, captures moments in space and time and compresses them into a single permanent state. These two-dimensional assortments of pixels give us a physical representation of the memories we form in specific periods of our lives. We believe this representation can be further enhanced in what we like to call a Smart Photo. The idea behind a Smart Photo is to unlock the full potential in the way that people can interact with photographs.
    This same notion is explored in the field of Virtual Reality with inventions such as 3D movies, which provide a special appeal that ordinary 2D films cannot: the 3D technology places the viewer inside the film's environment. We intend to bridge these seemingly mutually exclusive formats by processing 2D photos alongside their 3D counterparts.

    Mobile learning: benefits of augmented reality in geometry teaching

    As a consequence of technological advances and the widespread use of mobile devices to access information and communication in recent decades, mobile learning has become a spontaneous learning model, providing more flexible and collaborative technology-based learning. Mobile technologies can thus create new opportunities for enhancing pupils' learning experiences. This paper presents the development of a game to assist teaching and learning, aiming to help students acquire knowledge in the field of geometry. The game was intended to develop the following competences in primary school learners (8-10 years): better visualization of geometric objects on a plane and in space; understanding of the properties of geometric solids; and familiarization with the vocabulary of geometry. Findings show that by using the game, students improved their rate of correct responses in classifying and differentiating between edge, vertex, and face in 3D solids by around 35%.
    This research was supported by the Arts and Humanities Research Council Design Star CDT (AH/L503770/1), the Portuguese Foundation for Science and Technology (FCT) projects LARSyS (UID/EEA/50009/2013), and CIAC-Research Centre for Arts and Communication.

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic-lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic-lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user-perspective rendering by applying lightweight optical-flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices, and we compare it to device-perspective rendering, to head-tracked user-perspective rendering, and to fixed-point-of-view user-perspective rendering.
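The "lightweight" motion estimate the abstract contrasts with full face tracking can be illustrated with a toy sketch. This is not the paper's implementation: it estimates a single global translation between two tiny grayscale frames by exhaustive block matching, the kind of cheap signal that could gate when a more expensive head tracker needs to run. Frame contents and sizes are made up.

```python
# Illustrative sketch (assumed, not the paper's method): coarse global motion
# between two grayscale frames via exhaustive block matching over small shifts,
# scoring candidates by sum of absolute differences (SAD).

def estimate_shift(prev, curr, max_shift=2):
    """Return the (dy, dx) that best aligns curr to prev (lowest SAD)."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Compare only the interior so shifted indices stay in bounds.
            err = 0
            for y in range(max_shift, h - max_shift):
                for x in range(max_shift, w - max_shift):
                    err += abs(prev[y][x] - curr[y + dy][x + dx])
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Two 6x6 frames whose bright pixels moved one column right between captures:
prev = [[0] * 6 for _ in range(6)]
curr = [[0] * 6 for _ in range(6)]
prev[2][2] = prev[2][3] = 255
curr[2][3] = curr[2][4] = 255
print(estimate_shift(prev, curr))  # → (0, 1)
```

Real optical-flow trackers (e.g. pyramidal Lucas-Kanade) are far more refined, but the cost profile is the point: a search like this is much cheaper than running a face detector on every frame.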

    Vuforia v1.5 SDK: Analysis and evaluation of capabilities

    This thesis explores the world of augmented reality and, more specifically, the uses of Vuforia, with the aim of analyzing its capabilities. The first objective is to give a short explanation of what is understood by augmented reality and of the current varieties of AR applications, followed by the SDK's features, architecture, and elements. To understand the basis of the detection process performed by the Vuforia library, it is also important to cover the fundamentals of image recognition, because this is how Vuforia recognizes the different patterns. Another objective is to survey the possible fields of application of this library and to outline the main steps for creating an implementation with Unity3D, since Vuforia is only an SDK, not an IDE. This route was chosen because of the facilities Unity3D provides when creating the application itself: it already implements everything needed to access the smartphone's hardware, as well as the components that control Vuforia's elements. The Vuforia version used throughout the thesis is 1.5; two months ago Qualcomm launched the new 2.0 version, which is not intended to form part of this study, although some of its most significant new capabilities are briefly explained. Finally, the last and perhaps most important objectives are the tests and results, for which three different smartphones were used to compare values. Following this methodology, it has been possible to conclude which part of the results is due to the features and capabilities of the different smartphones and which part depends only on the Vuforia library.

    Semantic multimedia remote display for mobile thin clients

    Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user-interactivity handling. The experimental setup considers the Linux X Window System and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft RDP), considering text-editing and WWW-browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB and SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user-event management expressed by network round-trip-time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.
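For readers unfamiliar with the PSNR figures quoted in the benchmark (30-42 dB), the metric is a simple function of mean squared error. A minimal sketch, with made-up sample pixel values rather than data from the paper:

```python
# Minimal PSNR sketch over flat lists of 8-bit pixel values. The sample
# frames below are illustrative; they are not data from the benchmark.
import math

def psnr(reference, degraded, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_value ** 2 / mse)

ref = [100, 120, 130, 140]
deg = [101, 119, 131, 141]   # off by one level per pixel -> MSE = 1
print(round(psnr(ref, deg), 2))  # → 48.13
```

Higher is better: every halving of the MSE adds about 3 dB, so the 30-42 dB range reported above corresponds to small per-pixel errors on 8-bit content.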