
    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motion by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
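    The vehicle-in-the-loop idea described above can be sketched in a few lines: the real vehicle's pose comes from motion capture, so its dynamics need not be modelled, and only the exteroceptive measurement is synthesized at that pose. This is a minimal illustration; the `Pose` class, `render_camera` stand-in, and step function below are assumptions for the sketch, not the actual FlightGoggles API.

```python
# Minimal vehicle-in-the-loop sketch (illustrative names, not the real API).
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple      # (x, y, z) in metres, from motion capture
    orientation: tuple   # quaternion (w, x, y, z)

def render_camera(pose: Pose) -> list:
    """Stand-in for a photorealistic renderer: returns a synthetic image
    (here just a placeholder frame) for the vehicle's current pose."""
    return [[0] * 640 for _ in range(480)]  # dummy 480x640 frame

def vehicle_in_the_loop_step(mocap_pose: Pose):
    # 1. The real vehicle flies; motion capture supplies its true pose, so
    #    aerodynamics, motor mechanics, etc. arise naturally, unmodelled.
    # 2. The exteroceptive sensor is rendered in silico at that pose.
    image = render_camera(mocap_pose)
    # 3. The perception/control stack consumes the synthetic measurement.
    return image

frame = vehicle_in_the_loop_step(Pose((0.0, 0.0, 1.5), (1.0, 0.0, 0.0, 0.0)))
print(len(frame), len(frame[0]))  # 480 640
```

    The design choice worth noting is the split of responsibilities: everything hard to model stays physical, everything expensive to build physically (photorealistic scenes, obstacles, other agents) stays virtual.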

    Combining inertial and visual sensing for human action recognition in tennis

    In this paper, we present a framework for both the automatic extraction of the temporal location of tennis strokes within a match and the subsequent classification of these strokes as a serve, forehand, or backhand. We employ low-cost visual sensing and low-cost inertial sensing to achieve these aims, whereby a single modality can be used, or a fusion of both classification strategies can be adopted if both modalities are available within a given capture scenario. This flexibility allows the framework to be applied to a variety of user scenarios and hardware infrastructures. Our proposed approach is quantitatively evaluated using data captured from elite tennis players. Results point to the highly accurate performance of the proposed approach irrespective of the input modality configuration.
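    The single-modality-or-fusion flexibility described above can be illustrated with a simple late-fusion sketch: each available modality produces per-class scores for a detected stroke, and scores are combined by a weighted sum when both are present. The class names, score format, and weighting below are assumptions for illustration, not the paper's exact method.

```python
# Illustrative late fusion of visual and inertial stroke classifiers.
CLASSES = ["serve", "forehand", "backhand"]

def fuse_and_classify(visual_scores=None, inertial_scores=None, w_visual=0.5):
    """Return the predicted stroke class from one or both modalities."""
    if visual_scores is None and inertial_scores is None:
        raise ValueError("at least one modality is required")
    if visual_scores is None:
        combined = inertial_scores          # inertial-only capture scenario
    elif inertial_scores is None:
        combined = visual_scores            # visual-only capture scenario
    else:
        # Both modalities available: weighted sum of per-class scores.
        combined = {c: w_visual * visual_scores[c]
                       + (1 - w_visual) * inertial_scores[c]
                    for c in CLASSES}
    return max(combined, key=combined.get)

# Single-modality and fused use:
print(fuse_and_classify(inertial_scores={"serve": 0.7, "forehand": 0.2, "backhand": 0.1}))  # serve
print(fuse_and_classify({"serve": 0.2, "forehand": 0.6, "backhand": 0.2},
                        {"serve": 0.1, "forehand": 0.5, "backhand": 0.4}))  # forehand
```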

    Vision based interactive toys environment


    Augmented reality system with application in physical rehabilitation

    The aging phenomenon increases demand for physiotherapy services, with higher costs associated with long rehabilitation periods. Traditional rehabilitation methods rely on the subjective assessment of physiotherapists, without supporting training data. To overcome the shortcomings of traditional rehabilitation methods and improve the efficiency of rehabilitation, AR (Augmented Reality), a promising technology that provides immersive interaction with real and virtual objects, is used. AR devices can capture body posture and scan the real environment, which has led to a growing number of AR applications focused on physical rehabilitation. In this MSc thesis, an AR platform used to materialize a physical rehabilitation plan for stroke patients is presented. Gait training is a significant part of physical rehabilitation for stroke patients. AR represents a promising solution for training assessment, providing information to patients and physiotherapists about the exercises to be done and the results achieved. As part of the MSc work, an iOS application was developed on the Unity 3D platform. This application immerses patients in a mixed environment that combines real-world and virtual objects. The human-computer interface is materialized by an iPhone used as a head-mounted 3D display and a set of wireless sensors for physiological and motion parameter measurement. The position and velocity of the patient are recorded by a smart carpet that includes capacitive sensors connected to a computation unit with Wi-Fi communication capabilities. The AR training scenario and the corresponding experimental results are part of the thesis.
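    As a rough illustration of the smart-carpet measurement described above, the patient's position can be estimated as the centroid of the activated capacitive cells, and velocity as a finite difference between successive position estimates. The grid geometry, cell size, and function names below are assumptions for the sketch, not the thesis implementation.

```python
# Illustrative position/velocity estimation from a capacitive sensor grid.
CELL_SIZE = 0.25  # metres between sensor centres (assumed)

def centroid(active_cells):
    """Position estimate (x, y) in metres from activated (col, row) indices."""
    xs = [c * CELL_SIZE for c, _ in active_cells]
    ys = [r * CELL_SIZE for _, r in active_cells]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def velocity(p_prev, p_curr, dt):
    """Finite-difference velocity (vx, vy) in m/s between two readings."""
    return (p_curr[0] - p_prev[0]) / dt, (p_curr[1] - p_prev[1]) / dt

# Two successive footfalls, 0.5 s apart:
p0 = centroid([(2, 2), (3, 2)])   # (0.625, 0.5)
p1 = centroid([(4, 2), (5, 2)])   # (1.125, 0.5)
print(velocity(p0, p1, dt=0.5))   # (1.0, 0.0) -> walking 1 m/s along x
```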

    The design-by-adaptation approach to universal access: learning from videogame technology

    This paper proposes an alternative approach to the design of universally accessible interfaces to that provided by formal design frameworks applied ab initio to the development of new software. This approach, design-by-adaptation, involves the transfer of interface technology and/or design principles from one application domain to another, in situations where the recipient domain is similar to the host domain in terms of modelled systems, tasks, and users. Using the example of interaction in 3D virtual environments, the paper explores how principles underlying the design of videogame interfaces may be applied to a broad family of visualization and analysis software which handles geographical data (virtual geographic environments, or VGEs). One of the motivations behind the current study is that VGE technology lags some way behind videogame technology in the modelling of 3D environments, and has a less-developed track record in providing the variety of interaction methods needed to undertake varied tasks in 3D virtual worlds by users with varied levels of experience. The current analysis extracted a set of interaction principles from videogames, which were used to devise a set of 3D task interfaces that have been implemented in a prototype VGE for formal evaluation.