
    The Arena: An indoor mixed reality space

    In this paper, we introduce the Arena, an indoor space for mobile mixed reality interaction. The Arena includes a new user tracking system appropriate for AR/MR applications and a new toolkit oriented to the augmented and mixed reality applications developer, the MX Toolkit. This toolkit is defined at a somewhat higher abstraction level, hiding low-level implementation details from the programmer and facilitating AR/MR object-oriented programming. The system uniformly handles video input, video output (for headsets and monitors), sound auralisation, and multimodal human-computer interaction in AR/MR, including tangible interfaces, speech recognition, and gesture recognition.
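    The "higher abstraction level" claimed for the toolkit is easiest to picture in code. The following is a minimal, hypothetical sketch of the object-oriented style such a toolkit could expose; every class and method name is invented for illustration and does not reflect the actual MX Toolkit API.

```python
# Hypothetical sketch of object-oriented AR/MR programming at the abstraction
# level the abstract describes; all names are invented, not the MX Toolkit API.

class ARObject:
    """A virtual object anchored in the tracked Arena space."""
    def __init__(self, model: str, position: tuple):
        self.model = model
        self.position = position  # (x, y, z) in room coordinates

    def on_speech(self, command: str) -> None:
        # Multimodal input arrives as high-level events, not raw device data.
        print(f"{self.model} received voice command: {command}")

class Scene:
    """Hides video I/O, tracking, and audio rendering behind one container."""
    def __init__(self):
        self.objects: list[ARObject] = []

    def add(self, obj: ARObject) -> None:
        self.objects.append(obj)

    def dispatch_speech(self, command: str) -> None:
        for obj in self.objects:
            obj.on_speech(command)

scene = Scene()
scene.add(ARObject("teapot", (1.0, 0.0, 2.0)))
scene.dispatch_speech("rotate")  # teapot received voice command: rotate
```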

    Enhancing the museum experience with a sustainable solution based on contextual information obtained from an on-line analysis of users’ behaviour

    Human-computer interaction has evolved in recent years to enhance users’ experiences and provide more intuitive and usable systems. A major leap forward in this scenario is obtained by embedding, in the physical environment, sensors capable of detecting and processing users’ context (position, pose, gaze, ...). Fed by the information flows collected in this way, user interface paradigms may shift from stereotyped gestures on physical devices to more direct and intuitive ones that reduce the semantic gap between an action and the corresponding system reaction, or even anticipate the user’s needs, thus limiting the overall learning effort and increasing user satisfaction. To make this process effective, the user’s context (i.e. where s/he is, what s/he is doing, who s/he is, and what her/his preferences, actual perceptions, and needs are) must be properly understood. While collecting data on some aspects can be easy, interpreting them all in a meaningful way so as to improve the overall user experience is much harder. This is most evident in informal learning environments like museums, i.e. places designed to elicit visitor response towards the artifacts on display and the cultural themes proposed. In such a situation, the system should adapt to the attention paid by the user, choosing content appropriate for the user’s purposes and presenting an intuitive interface to navigate it. My research goal is to collect, in a simple, unobtrusive, and sustainable way, contextual information about visitors with the purpose of creating more engaging and personalized experiences.
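    As a concrete illustration of the attention-driven adaptation described above, the sketch below maps a visitor's dwell time near an exhibit to a content depth. The thresholds and content tiers are invented for illustration and are not taken from this research.

```python
# Illustrative sketch: the longer a visitor dwells near an exhibit, the deeper
# the content offered. Thresholds and tiers are hypothetical values.

def content_for_dwell(dwell_seconds: float) -> str:
    """Pick a content tier from how long the visitor has stayed nearby."""
    if dwell_seconds < 5:
        return "title only"             # passing by: do not interrupt
    elif dwell_seconds < 30:
        return "short caption"          # brief interest: one-line context
    else:
        return "full multimedia story"  # engaged visitor: rich content

print(content_for_dwell(12))  # short caption
```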

    Ultrasonic-Based Environmental Perception for Mobile 5G-Oriented XR Applications

    One of the sectors expected to benefit significantly from 5G network deployment is eXtended Reality (XR). Besides the very high bandwidth, reliability, and Quality of Service (QoS) to be delivered to end users, XR also requires accurate environmental perception for safety reasons: this is fundamental when a user wearing XR equipment is immersed in a “virtual” world but moves in a “real” environment. To overcome this limitation (especially with low-cost XR equipment, such as cardboard viewers worn by the end user), it is possible to exploit the potential offered by Internet of Things (IoT) nodes with sensing/actuating capabilities. In this paper, we rely on ultrasonic sensor-based IoT systems to perceive the surrounding environment and provide “side information” to XR systems, and we then perform a preliminary experimental characterization campaign with different ultrasonic IoT system configurations worn by the end user. The combination of the information flows associated with the XR and IoT components is enabled by 5G technology. An illustrative experimental scenario, relating to a “Tourism 4.0” IoT-aided VR application deployed by Vodafone in Milan, Italy, is presented.
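    A minimal sketch of the core computation such an ultrasonic IoT node performs: converting an echo round-trip time into a one-way distance and raising a proximity warning for the immersed user. The speed of sound is standard physics; the warning threshold and function names are assumptions, not details from the paper.

```python
# Sketch of ultrasonic obstacle warning, assuming the sensor reports the echo
# round-trip time; the 0.5 m threshold is a hypothetical safety margin.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C
WARN_DISTANCE = 0.5     # metres (hypothetical)

def echo_to_distance(round_trip_s: float) -> float:
    """One-way distance (m) from an echo round-trip time (s)."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def obstacle_near(round_trip_s: float) -> bool:
    """True if the reflecting obstacle is within the warning distance."""
    return echo_to_distance(round_trip_s) < WARN_DISTANCE

# Example: a 2.3 ms round trip is ~0.39 m away, close enough to warn the user.
print(obstacle_near(0.0023))  # True
```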

    Implementation of an exhibition guide system using inaudible high-frequency sound

    Master’s thesis, Graduate School of Convergence Science and Technology, Seoul National University, Department of Convergence Science (Digital Information Convergence major), February 2014. Advisor: Kyogu Lee (이교구). This thesis presents a new method for exhibition guide systems: an application for mobile devices that utilizes near-ultrasonic sound waves as communication signals. The system substitutes for existing museum or gallery guide systems that use technologies such as infrared sensors, QR codes, RFID, or other manual input. In the proposed system, a near-field tweeter speaker stands near each piece of artwork and transmits mixed tones in the inaudible frequency range. The receiver application filters interfering noise, pinpoints the signal coming from the nearest artwork, identifies the artwork, and requests the corresponding information from the data server. This process is done automatically and seamlessly, requiring no input from the user. Experiments show that the method is highly accurate and robust to noise, indicating its potential application to other areas such as indoor positioning systems. In addition, a case study shows that the proposed system compares favorably in many aspects with existing guide systems.
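    The detection step described above (filtering noise and pinpointing the nearest artwork’s tone) can be illustrated with the Goertzel algorithm, a standard way to measure signal power at a single frequency. In the sketch below, the pilot-tone frequencies and artwork identifiers are invented; the thesis's actual signal design may differ.

```python
import math

SAMPLE_RATE = 44100
# Hypothetical near-ultrasonic pilot tones, one per artwork.
ARTWORK_TONES = {18000: "artwork-A", 18500: "artwork-B", 19000: "artwork-C"}

def goertzel_power(samples, target_freq, sample_rate=SAMPLE_RATE):
    """Signal power at one frequency bin, via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)       # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def identify_artwork(samples):
    """Return the artwork whose pilot tone is strongest in the recording."""
    return ARTWORK_TONES[max(ARTWORK_TONES,
                             key=lambda f: goertzel_power(samples, f))]

# Example: a synthetic 18.5 kHz tone is attributed to artwork-B.
tone = [math.sin(2 * math.pi * 18500 * t / SAMPLE_RATE) for t in range(2048)]
print(identify_artwork(tone))  # artwork-B
```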

    Navigation system based in motion tracking sensor for percutaneous renal access

    Doctoral thesis in Biomedical Engineering. Minimally invasive kidney interventions are performed daily to diagnose and treat several renal diseases. Percutaneous renal access (PRA) is an essential but challenging stage of most of these procedures, since its outcome is directly linked to the physician’s ability to precisely visualize and reach the anatomical target. Nowadays, PRA is always guided with medical imaging assistance, most frequently X-ray based imaging (e.g. fluoroscopy). Radiation in the surgical theater therefore represents a major risk to the medical team, and excluding it from PRA would directly diminish the dose exposure of both patients and physicians. To solve these problems, this thesis aims to develop a new hardware/software framework to intuitively and safely guide the surgeon during PRA planning and puncturing. In terms of surgical planning, a set of methodologies was developed to increase the certainty of reaching a specific target inside the kidney. The abdominal structures most relevant for PRA were automatically clustered into different 3D volumes. To that end, primitive volumes were merged as a local optimization problem using the minimum description length principle and statistical properties of the image. A multi-volume ray casting method was then used to highlight each segmented volume. Results show that it is possible to detect all abdominal structures surrounding the kidney and to correctly estimate a virtual trajectory. Concerning the percutaneous puncturing stage, both electromagnetic and optical tracking solutions were developed and tested in multiple in vitro, in vivo and ex vivo trials. The optical tracking solution aids in establishing the desired puncture site and choosing the best virtual puncture trajectory. However, this system requires a line of sight to different optical markers placed at the needle base, limiting the accuracy when tracking inside the human body. Results show that the needle tip can deflect from its initial straight-line trajectory with an error higher than 3 mm. Moreover, a complex registration procedure and initial setup are needed. A real-time electromagnetic tracking solution was therefore developed. Here, a catheter was inserted trans-urethrally towards the renal target. This catheter carries a position and orientation electromagnetic sensor on its tip that functions as a real-time target locator; a needle integrating a similar sensor is then used for the puncture. From the data provided by both sensors, a virtual puncture trajectory is computed and displayed in 3D visualization software. In vivo tests showed median renal and ureteral puncture times of 19 and 51 seconds, respectively (ranges 14 to 45 and 45 to 67 seconds). These results represent a puncture time improvement of between 75% and 85% compared to state-of-the-art methods. 3D sound and vibrotactile feedback were also developed to provide additional information about the needle orientation. With this kind of feedback, the surgeon tends to follow the virtual puncture trajectory with fewer deviations from the ideal path and can anticipate movements even without looking at a monitor. Best results show that 3D sound sources were correctly identified 79.2 ± 8.1% of the time, with an average angulation error of 10.4°, and vibration sources 91.1 ± 3.6% of the time, with an average angulation error of 8.0°.
In addition to the electromagnetic tracking (EMT) framework, three circular ultrasound transducers with a needle working channel were built. Different fabrication setups were explored in terms of piezoelectric materials, transducer construction, single- versus multi-array configurations, and backing and matching material design. The A-scan signals retrieved from each transducer were filtered and processed to automatically detect reflected echoes and alert the surgeon when undesirable anatomical structures lie along the puncture path. The transducers were mapped in a water tank and tested in a study involving 45 phantoms. Results showed that the beam cross-sectional area oscillates around the ceramic’s radius and that echo signals could be automatically detected in phantoms longer than 80 mm. It is therefore expected that introducing the proposed system into the PRA procedure will guide the surgeon along the optimal path towards the precise kidney target, increasing the surgeon’s confidence and reducing complications (e.g. organ perforation) during PRA. Moreover, the developed framework has the potential to make PRA radiation-free for both patient and surgeon and to broaden the use of PRA to less specialized surgeons.
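    The guidance step above reduces to simple vector geometry once the two electromagnetic sensors report their poses: one marks the renal target, the other the needle tip. The sketch below is a minimal illustration under that assumption; the function name, inputs, and example numbers are invented, not taken from the thesis.

```python
import numpy as np

def puncture_guidance(needle_tip, needle_dir, target):
    """Distance to the target (same units as input) and the angle (degrees)
    between the needle axis and the straight-line path to the target."""
    needle_tip = np.asarray(needle_tip, float)
    target = np.asarray(target, float)
    to_target = target - needle_tip                  # virtual trajectory
    distance = np.linalg.norm(to_target)
    d = np.asarray(needle_dir, float)
    cos_a = np.dot(d, to_target) / (np.linalg.norm(d) * distance)
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return distance, angle

# Example: needle 60 mm from the target, pointing ~10 degrees off the path.
dist, ang = puncture_guidance((0, 0, 0), (0.17, 0, 0.98), (0, 0, 60))
print(f"{dist:.0f} mm to target, {ang:.1f} deg off axis")  # 60 mm, ~9.9 deg
```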
The present work was only possible thanks to the support of the Portuguese Science and Technology Foundation through PhD grant SFRH/BD/74276/2010, funded by FCT/MEC (PIDDAC) and by the Fundo Europeu de Desenvolvimento Regional (FEDER), Programa COMPETE - Programa Operacional Factores de Competitividade (POFC) do QREN.

    Asynchronous Ultrasonic Trilateration for Indoor Positioning of Mobile Phones

    Spatial awareness is fast becoming the key feature of today’s mobile devices. While accurate outdoor navigation has been widely available for some time through Global Positioning Systems (GPS), accurate indoor positioning is still largely an unsolved problem. One major reason is that GPS and other Global Navigation Satellite Systems (GNSS) offer accuracy of a scale far different from that required for effective indoor navigation. Indoor positioning is also hindered by poor GPS signal quality, a major issue when developing dedicated indoor locationing systems. In addition, many indoor systems use specialized hardware to calculate accurate device position, as readily available wireless protocols have so far not delivered sufficient levels of accuracy. This research investigates how the mobile phone’s innate ability to produce sound (notably ultrasound) can be utilised to deliver more accurate indoor positioning than current methods. Experimental work covers the limitations of mobile phone speakers in generating high frequencies, the propagation patterns of ultrasound and their impact on maximum range, and asynchronous trilateration. This is followed by accuracy and reliability tests of an ultrasound positioning system prototype. This thesis proposes a new method of positioning a mobile phone indoors with accuracy substantially better than other contemporary positioning systems available on off-the-shelf mobile devices. Given that smartphones can be programmed to correctly estimate direction, this research outlines a potentially significant advance towards a practical platform for indoor Location Based Services. A novel asynchronous trilateration algorithm is also proposed that eliminates the need for synchronisation between the mobile device and the positioning infrastructure.
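    For context, the geometry underlying trilateration can be written as a small linear least-squares problem. The sketch below is the standard synchronous formulation from known beacon ranges, not the thesis's novel asynchronous algorithm; the beacon layout in the example is invented.

```python
import numpy as np

def trilaterate(beacons, distances):
    """Least-squares position from beacon coordinates and measured ranges.

    Linearizes |x - p_i|^2 = d_i^2 by subtracting the first equation."""
    p = np.asarray(beacons, float)
    d = np.asarray(distances, float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0]**2 - d[1:]**2) + np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: a phone at (2, 1) ranged by three beacons on a 4 m x 4 m grid.
beacons = [(0, 0), (4, 0), (0, 4)]
true_pos = np.array([2.0, 1.0])
ranges = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
print(trilaterate(beacons, ranges))  # ~[2. 1.]
```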

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
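    To make the feature idea concrete, the sketch below classifies a tap's surface from two simple hand-picked features and a nearest-centroid rule. The feature choice and centroid values are assumptions for illustration, not the dissertation's actual pipeline.

```python
import numpy as np

def tap_features(window):
    """RMS energy and zero-crossing rate of a short microphone/IMU window."""
    w = np.asarray(window, float)
    rms = np.sqrt(np.mean(w ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(w))) > 0)
    return np.array([rms, zcr])

# Hypothetical per-surface centroids, as if learned from labeled taps.
CENTROIDS = {
    "wood table": np.array([0.30, 0.10]),
    "glass pane": np.array([0.20, 0.45]),
    "wall":       np.array([0.08, 0.20]),
}

def classify_surface(window):
    """Assign the tap to the surface with the nearest feature centroid."""
    f = tap_features(window)
    return min(CENTROIDS, key=lambda s: np.linalg.norm(f - CENTROIDS[s]))

print(classify_surface([0.0, 0.25, -0.2, 0.3, -0.28, 0.2]))  # glass pane
```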

    Location-based technologies for learning

    Emerging technologies for learning report - Article exploring location-based technologies and their potential for education.