282 research outputs found

    Aplikasi Android Untuk Terapi Arachnophobia Berbasis Markerless Augmented Reality

    A phobia is an excessive fear of an object, condition, or phenomenon. Phobias can impair the lives of those who suffer from them, ranging from mild stress to suicide; fundamentally, a phobia can be considered a mental abnormality. Phobias are grouped into types based on the object of fear; one of these is the specific phobia, whose object of fear is a particular thing, such as an animal. A phobia is not an incurable disorder: it can be treated, among other ways, with therapy. In the modern era, however, phobia therapy using the actual object directly is considered cruel and unethical, especially when it uses a live animal that may be killed during the therapy session; therapists also have difficulty obtaining phobia objects for treatment, particularly animals. Therefore, to make the therapists' work easier and to attract people who suffer from phobias, therapy through virtual objects using Augmented Reality (AR) needs to be developed. Augmented Reality itself is often used on smartphones as a medium for learning and entertainment, so therapy sessions become easier and more engaging with the application built here. The aim of this research is to develop an application that supports therapeutic treatment with the help of Augmented Reality objects. The result of this research is an application that serves as a medium for spider-phobia therapy based on markerless Augmented Reality, making it easier for therapists to conduct therapy.

    Using Haar-like feature classifiers for hand tracking in tabletop augmented reality

    We propose in this paper a hand interaction approach for Augmented Reality tabletop applications. We detect the user's hands using Haar-like feature classifiers and correlate their positions with the fixed markers on the table. This gives the user the ability to move, rotate, and resize the virtual objects located on the table with their bare hands.
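The paper's detector rests on Haar-like features evaluated over an integral image. As a minimal sketch of that building block (illustrative only; the authors' actual system, like OpenCV-style cascades, combines thousands of such features in a boosted cascade), a two-rectangle feature can be computed as follows:

```python
# Minimal sketch of a two-rectangle Haar-like feature evaluated via an
# integral image. Function names and the toy feature layout are assumptions
# for illustration, not the paper's implementation.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h,
    in constant time using four integral-image lookups."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect_vertical(ii, x, y, w, h):
    """Left-half minus right-half intensity: responds to vertical edges."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because every rectangle sum takes four lookups regardless of its size, such features can be evaluated densely across a video frame fast enough for interactive hand tracking.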

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among other things, the most commonly used features, methods, challenges, and opportunities within the field.

    Augmented Reality Trends to the Field of Business and Economics: A Review of 20 years of Research

    Augmented Reality (AR) is emerging as a technology that is reshaping contemporary society, especially the fields of Business and Economics (B&E). The scientific studies produced on AR therefore call for an interdisciplinary systematic review that structures the generated knowledge into an organized framework. Three main questions are addressed: How has the production of AR scientific knowledge evolved? What user-related aspects does AR affect? And which set of subtopics is associated with each motivation to develop an AR solution? The content of 328 papers produced between 1997 and 2016 in the field of AR is analyzed, unveiling 58 coding categories. Thirteen digital media characteristics assume instrumental roles in addressing four major motivations to develop AR solutions. Technological topics dominate the research focus over behavioral ones, and investigations of AR on mobile displays show the highest increase. This research identifies the main scientific topics that have led the researchers' agenda, thereby contributing to the development and adoption of AR solutions and to forecasting their future application in organizations' strategies.

    Object tracking in augmented reality remote access laboratories without fiducial markers

    Remote Access Laboratories provide students with access to learning resources without the need to be in situ (with the assets). The technology gives users access to physical experiments anywhere and anytime, while also minimising or distributing the cost of operating expensive laboratory equipment. Augmented Reality is a technology that provides interactive sensory feedback to users: the user experiences reality through a computer-based user interface with additional computer-generated information in a form suited to the targeted senses. Recent advances in high-definition video capture devices, video screens, and mobile computers have driven a resurgence in mainstream Augmented Reality technologies. The lower cost and greater processing power of microprocessors and memory place these resources in the hands of developers and users alike, allowing educational institutions to invest in technologies that enhance the delivery of course content. This increase in pedagogical resources has already allowed education at a distance to reach students from a wide range of demographics, improving access and outcomes in multiple disciplines. Incorporating Augmented Reality into Remote Access Laboratory resources improves the user's overall immersion in the remote experiment, thereby improving student engagement with and understanding of the delivered material. Visual implementations of Augmented Reality rely on providing the user with seamless integration of the current environment (through a mobile device, desktop PC, or head-up display) with computer-generated artificial visual artefacts. Virtual objects must appear in context with the current environment and respond within a realistic period, or the user suffers from a disjointed and confusing blend of real and virtual information.
Understanding and interacting with the visual scene is handled by Computer Vision algorithms, which are crucial in ensuring that the AR system cooperates with the data it discovers. While Augmented Reality has begun to expand in the educational environment, there is currently still very little overlap between Augmented Reality technologies and Remote Access Laboratories. This research has investigated Computer Vision models that support Augmented Reality technologies such that live video streams from remote laboratories are enhanced by synthetic overlays pertinent to the experiments. Orienting synthetic visual overlays requires knowledge of key reference points, a task often performed by fiducial markers. Removing the equipment's need for fiducial markers and a priori knowledge simplifies and accelerates the uptake and expansion of the technology. This work develops hybrid Computer Vision models that require no prior knowledge of the laboratory environment, including no fiducial markers or tags to track important objects and references. The developed models derive all relevant data from the live video stream and require no previous knowledge of the configuration of the physical scene. The new image analysis paradigms (Two-Dimensional Colour Histograms and Neighbourhood Gradient Signature) improve the current state of markerless tracking through the unique attributes discovered within sequential video frames. Novel methods are also established with which to assess and measure the performance of Computer Vision models. Objective ground-truth images minimise the level of subjective interference in measuring the efficacy of CV edge and corner detectors. Additionally, an effective method for contrasting the detected attributes associated with an image or object provides a means to measure the likelihood of an image match between video frames.
In combination with existing material and new contributions, this research demonstrates effective object detection and tracking for Augmented Reality systems within a Remote Access Laboratory environment, with no requirement for fiducial markers or prior knowledge of the environment. The models proposed in this work can be generalised to any cyber-physical environment equipped with peripherals such as cameras and other sensors.
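The thesis's Two-Dimensional Colour Histogram paradigm belongs to the general family of histogram-based appearance matching. As a generic sketch of that family (the bin count, chromaticity channels, and intersection metric below are illustrative assumptions, not the thesis's exact formulation), an object patch can be matched across frames like this:

```python
# Generic sketch of 2D colour-histogram matching between video frames.
# Channel choice ((r, g) chromaticity), bin count, and the histogram
# intersection similarity are assumed for illustration.

def colour_histogram_2d(pixels, bins=8):
    """Quantise the (r, g) chromaticity of RGB pixels into a bins x bins grid."""
    hist = [[0] * bins for _ in range(bins)]
    for r, g, b in pixels:
        total = r + g + b or 1                      # guard against black pixels
        ri = min(int(r / total * bins), bins - 1)
        gi = min(int(g / total * bins), bins - 1)
        hist[ri][gi] += 1
    return hist

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1.0 means identical normalised histograms."""
    n1 = sum(map(sum, h1)) or 1
    n2 = sum(map(sum, h2)) or 1
    return sum(min(a / n1, b / n2)
               for row1, row2 in zip(h1, h2) for a, b in zip(row1, row2))
```

In a tracking loop, the histogram of the object patch in frame t is compared against candidate patches in frame t+1, and the highest intersection score identifies the most likely match, with no fiducial marker required.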

    Optical and hyperspectral image analysis for image-guided surgery


    SiTAR: Situated Trajectory Analysis for In-the-Wild Pose Error Estimation

    Virtual content instability caused by device pose tracking error remains a prevalent issue in markerless augmented reality (AR), especially on smartphones and tablets. However, when examining environments that will host AR experiences, it is challenging to determine where those instability artifacts will occur: we rarely have access to ground-truth pose to measure pose error, and even when pose error is available, traditional visualizations do not connect that data with the real environment, limiting their usefulness. To address these issues we present SiTAR (Situated Trajectory Analysis for Augmented Reality), the first situated trajectory analysis system for AR that incorporates estimates of pose tracking error. We start by developing the first uncertainty-based pose error estimation method for visual-inertial simultaneous localization and mapping (VI-SLAM), which allows us to obtain pose error estimates without ground truth; we achieve an average accuracy of up to 96.1% and an average F1 score of up to 0.77 in our evaluations on four VI-SLAM datasets. Next we present our SiTAR system, implemented for ARCore devices, combining a backend that supplies uncertainty-based pose error estimates with a frontend that generates situated trajectory visualizations. Finally, we evaluate the efficacy of SiTAR in realistic conditions by testing three visualization techniques in an in-the-wild study with 15 users and 13 diverse environments; this study reveals the impact that both environment scale and the properties of the surfaces present can have on user experience and task performance.
    Comment: To appear in Proceedings of IEEE ISMAR 202
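When ground-truth poses are available, pose tracking error is conventionally summarised as the absolute trajectory error (ATE): the RMSE of the translational differences between estimated and reference positions. SiTAR's contribution is estimating this kind of error without ground truth; the sketch below is only the conventional baseline metric, not the paper's method, and assumes the two trajectories are already time-synchronised and aligned:

```python
import math

def absolute_trajectory_error(estimated, reference):
    """RMSE over per-pose Euclidean position errors between two pre-aligned
    trajectories, each given as a sequence of (x, y, z) positions."""
    assert len(estimated) == len(reference) and estimated
    sq_errors = [sum((e - r) ** 2 for e, r in zip(p, q))
                 for p, q in zip(estimated, reference)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

A high ATE over a segment of the trajectory corresponds to the regions where virtual content would visibly drift, which is exactly the information SiTAR's situated visualizations place back into the physical environment.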