
    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games offer a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to use a software-only method to estimate user emotion.
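    The event-appraisal idea described above can be sketched in a few lines. The snippet below is a hedged illustration only: the gameplay statistics, membership functions and rules are hypothetical stand-ins, not the FLAME-derived model the paper actually implements for Unreal Tournament 2004.

```python
# Minimal sketch of event-based fuzzy emotion estimation (hypothetical rules,
# not the authors' FLAME-derived implementation). Gameplay statistics are
# fuzzified with triangular membership functions and combined into a score.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_frustration(deaths_per_min, hit_ratio):
    # Fuzzify the observed gameplay statistics (assumed input ranges).
    dying_often = tri(deaths_per_min, 1.0, 3.0, 6.0)
    missing_shots = 1.0 - min(max(hit_ratio, 0.0), 1.0)

    # Rule: IF dying often AND missing shots THEN frustration is high.
    high = min(dying_often, missing_shots)
    # Rule: IF rarely dying THEN frustration is low.
    low = tri(deaths_per_min, -1.0, 0.0, 1.5)

    # Defuzzify with a weighted average of rule activations (0 = calm, 1 = frustrated).
    return (0.9 * high + 0.1 * low) / max(high + low, 1e-6)

print(estimate_frustration(deaths_per_min=4.0, hit_ratio=0.2))
```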

    Mobile Robot Localization Using Bar Codes as Artificial Landmarks

    "Where am I' is the central question in mobile robot navigation. Robust and reliable localization are of vital importance for an autonomous mobile robot because the ability to constantly monitor its position in an unpredictable, unstructured, and dynamic environment is the essential prerequisite to build up and/or maintain environmental maps consistently and to perform path planning. Thus, selflocalization as precondition for goal-oriented behavior is a fundamental property an autonomous mobile robot needs to be equipped with. Accurate, flexible and low-cost localization are important issues for achieving autonomous and cooperative motions of mobile robots. Mobile robots usually perform self-localization by combining position estimates obtained from odometry or inertial navigation with external sensor data. The objective of the thesis is to present a pragmatic idea which utilizes a camera-based bar code recognition technique in order to support mobile robot localization In indoor environments. The idea is to further improve already existing localization capabilities, obtained from dead-reckoning, by furnishing relevant environmental spots such as doors, stairs, etc. with semantic information. In order to facilitate the detection of these landmarks the employment of bar codes is proposed. The important contribution of the thesis is the designing of two software programs. The first program is the bar code generation program which is able to generate five types of bar code labels that play a major role in the proposed localization method. The second program is the bar code recognition program that analyzes image files looking for a bar code label. If a label is found the program recognizes it and display both the information it contains and its coding type. Results concerning the generation of five types of bar code labels which are code 2 of 5, code 3 of9 , codabar code, code 128 and code 2 of 5 interleaved and the detection and identification of these labels from image files are obtained. In conclusion the thesis proposes a solution to mobile robot self-localization problem, which is the central significant for implementing an autonomous mobile robot, by utilizing a camera-based bar code recognition technique to support the basic localization capabilities obtained from a dead-reckoning method in an indoor environment

    Design and evaluation of an integrated GPS/INS system for shallow-water AUV Navigation

    The major problem addressed by this research is the large and/or expensive equipment required by a conventional navigation system to accurately determine the position of an Autonomous Underwater Vehicle (AUV) during all phases of an underwater search or mapping mission. The approach taken was to prototype an integrated navigation system which combines Global Positioning System (GPS), Inertial Measurement Unit (IMU), waterspeed and heading information using Kalman filtering techniques. Actual implementation was preceded by a computer simulation to test where the unit would fit into a larger hardware and software hierarchy of an AUV. The system was then evaluated in experiments which began with land-based cart tests and progressed to open-water trials, where the unit was placed in a towed body behind a boat and alternately submerged and surfaced to provide periodic GPS updates to the Inertial Navigation System (INS). Test results and qualitative error estimates indicate that submerged navigation accuracy comparable to that of differential GPS may be attainable for periods of 30 seconds or more with low-cost components of a small physical size.
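    The GPS/INS fusion described above can be illustrated with a one-dimensional Kalman filter: the filter predicts from the inertial estimate and corrects whenever a GPS fix is available, for example while surfaced. The noise values and the update schedule below are assumptions for the sketch, not the thesis's tuned parameters.

```python
# Minimal sketch of GPS/INS fusion with a Kalman filter in one dimension
# (constant-velocity model). Noise levels and the 30 s fix interval are assumed.

import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition: [position, velocity]
Q = np.diag([0.05, 0.01])                      # assumed process noise (INS drift)
H = np.array([[1.0, 0.0]])                     # GPS observes position only
R = np.array([[4.0]])                          # assumed GPS variance (m^2)

x = np.array([0.0, 1.0])                       # initial state estimate
P = np.eye(2)

for step in range(120):
    # Predict: dead reckoning from the inertial/waterspeed estimate.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: a GPS fix is available only while surfaced (every 30 s here).
    if step % 30 == 0:
        z = np.array([x[0] + np.random.normal(0.0, 2.0)])   # simulated fix
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

print(x, np.diag(P))
```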

    Perception and intelligent localization for autonomous driving

    Computer vision and sensor fusion are relatively recent subjects, yet they are widely adopted in the development of autonomous robots that must adapt to their surrounding environment. This thesis addresses both in order to achieve perception in the context of autonomous driving. The use of cameras to achieve this goal is a rather complex subject. Unlike classic sensing devices, which always provide the same type of precise information obtained deterministically, the successive images acquired by a camera are replete with the most varied information, all of it ambiguous and extremely difficult to extract. The use of cameras for robotic sensing is the closest we come to the system of greatest importance in human perception, the vision system. Computer vision is a scientific discipline that encompasses areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology and physics. The platform supporting the study developed in this thesis is ROTA (RObô Triciclo Autónomo) together with all the elements that make up its environment. In this context, the thesis describes the approaches introduced to address the challenges the robot faces in its environment: detection of lane markings and the consequent perception of the road, detection of obstacles, traffic lights, the crosswalk zone and the roadworks zone. It also describes a calibration procedure and the removal of the image perspective, developed in order to map the perceived elements to real-world distances. Building on the perception system, the thesis further addresses self-localization, integrated into a distributed architecture that includes navigation with intelligent planning. All the work developed in the course of this thesis is essentially centred on robotic perception in the context of autonomous driving.
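    The removal of the image perspective mentioned above is commonly done with a ground-plane homography. The sketch below uses OpenCV to map image pixels to ground coordinates; the point correspondences and distances are placeholders, not ROTA's actual calibration values.

```python
# Hedged sketch of inverse perspective mapping: a homography estimated from a
# known rectangle on the road maps image pixels to ground-plane distances.

import cv2
import numpy as np

# Pixel corners of a known rectangle on the road (placeholder calibration)...
image_pts = np.float32([[420, 560], [860, 560], [1010, 700], [300, 700]])
# ...and their real-world ground coordinates in centimetres.
ground_pts = np.float32([[0, 200], [100, 200], [100, 100], [0, 100]])

H = cv2.getPerspectiveTransform(image_pts, ground_pts)

def to_ground(u, v):
    """Map an image pixel to ground-plane coordinates (cm) via the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

frame = cv2.imread("frame.png")                       # placeholder input image
if frame is not None:
    birds_eye = cv2.warpPerspective(frame, H, (200, 300))   # top-down view
    print(to_ground(640, 620))
```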

    The Automated Wingman: An Airborne Companion for Users of DIS Compatible Flight Simulators

    A major problem encountered by users of distributed virtual environments is the lack of simulators available to populate these environments. This problem is usually remedied by using computer generated entities. Unfortunately, these entities often lack adequate human behavior and are readily identified as non-human. This violates the realism premise of distributed virtual reality and is a major problem, especially in training situations. This thesis addresses the problem by presenting a computer generated entity called the Automated Wingman. The Automated Wingman is a semi-automated computer generated aircraft simulator that operates under the control of a designated lead simulator and integrates distributed virtual environments with intelligence. Access to distributed virtual environments is provided through the DIS protocol suite, while human behavior is obtained through the use of a fuzzy expert system and a voice interface. The fuzzy expert system is designed around a hierarchy of knowledge bases, each containing a set of fuzzy logic based linguistic variables that control the actions of the Automated Wingman. The voice interface allows the pilot of the lead simulator to direct the activity of the Automated Wingman. This thesis describes the design of the Automated Wingman and presents the current status of its implementation.
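    The hierarchy of fuzzy knowledge bases can be pictured as one knowledge base feeding another. In the hedged sketch below, a formation knowledge base judges the separation from the lead aircraft and a throttle knowledge base defuzzifies that judgement into a command; the linguistic variables, ranges and rules are hypothetical, not the Automated Wingman's actual rule set.

```python
# Illustrative two-level hierarchy of fuzzy knowledge bases with linguistic
# variables (hypothetical ranges and rules, for demonstration only).

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising over [a, b] and falling over [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def formation_kb(separation_m):
    """Linguistic variable 'separation' -> degrees of membership per term."""
    return {
        "too_close": trapezoid(separation_m, -1, 0, 50, 150),
        "in_position": trapezoid(separation_m, 100, 200, 300, 400),
        "too_far": trapezoid(separation_m, 350, 500, 10_000, 10_001),
    }

def throttle_kb(formation):
    """Second-level knowledge base: map the formation judgement to a throttle command."""
    singletons = {"too_close": 0.3, "in_position": 0.6, "too_far": 0.95}
    total = sum(formation.values()) or 1e-6
    # Weighted-average defuzzification over singleton throttle settings.
    return sum(formation[k] * singletons[k] for k in singletons) / total

print(throttle_kb(formation_kb(separation_m=420.0)))
```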

    An inertial motion capture framework for constructing body sensor networks

    Motion capture is the process of measuring and subsequently reconstructing the movement of an animated object or being in virtual space. Virtual reconstructions of human motion play an important role in numerous application areas such as animation, medical science and ergonomics. While optical motion capture systems are the industry standard, inertial body sensor networks are becoming viable alternatives due to portability, practicality and cost. This thesis presents an innovative inertial motion capture framework for constructing body sensor networks through software environments, smartphones and web technologies. The first component of the framework is a unique inertial motion capture software environment aimed at providing an improved experimentation environment, accompanied by programming scaffolding and a driver development kit, for users interested in studying or engineering body sensor networks. The software environment provides a bespoke 3D engine for kinematic motion visualisations and a set of tools for hardware integration. The software environment is used to develop the hardware behind a prototype motion capture suit focused on low-power consumption and hardware-centricity. Additional commercially available inertial measurement units are also integrated to demonstrate the functionality of the software environment while providing the framework with additional sources of motion data. The smartphone is the most ubiquitous computing technology, and its worldwide uptake has prompted many advances in wearable inertial sensing technologies. Smartphones contain gyroscopes, accelerometers and magnetometers, a combination of sensors that is commonly found in inertial measurement units. This thesis presents a mobile application that investigates whether the smartphone is capable of inertial motion capture by constructing a novel omnidirectional body sensor network. This thesis also proposes a novel use for web technologies through the development of the Motion Cloud, a repository and gateway for inertial data. Web technologies have the potential to replace motion capture file formats with online repositories and to set a new standard for how motion data is stored. From a single inertial measurement unit to a more complex body sensor network, the proposed architecture is extendable and facilitates the integration of any inertial hardware configuration. The Motion Cloud's data can be accessed through an application programming interface or through a web portal that provides users with the functionality for visualising and exporting the motion data.
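    As a small illustration of smartphone-based inertial sensing, the sketch below fuses gyroscope and accelerometer readings into a pitch estimate with a complementary filter. The sampling rate, blend factor and sample values are assumptions; the thesis framework would feed such estimates into its body sensor network.

```python
# Hedged sketch: complementary filter fusing gyroscope rate and accelerometer
# gravity direction into a pitch angle (assumed 100 Hz sampling, 0.98 blend).

import math

ALPHA = 0.98          # trust the gyro short-term, the accelerometer long-term
DT = 0.01             # assumed 100 Hz sampling interval in seconds

def update_pitch(pitch, gyro_y_rad_s, accel_x, accel_z):
    # Integrate the gyro rate, then pull gently toward the gravity-derived angle.
    gyro_pitch = pitch + gyro_y_rad_s * DT
    accel_pitch = math.atan2(-accel_x, accel_z)
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

pitch = 0.0
samples = [(0.2, 0.0, 9.81), (0.2, 0.1, 9.80), (0.0, 1.7, 9.66)]   # (gyro_y, ax, az)
for gyro_y, ax, az in samples:
    pitch = update_pitch(pitch, gyro_y, ax, az)
print(math.degrees(pitch))
```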

    Advanced Navigation for Planetary Vehicles Applying an Approximate Mapping Technique

    This thesis provides a method for compressing the information provided by JPL Mars rover obstacle sensors by creating an approximate map of the terrain around the vehicle, and it demonstrates that this method provides adequate information for a human operator to negotiate complex obstacle fields. By dividing the area around the vehicle into regions and classifying each region by how dangerous (impassable) it is, the sensor data can be accumulated with minimal overhead. The terrain in each region is assigned a number between zero and one, with zero meaning completely passable and one meaning completely impassable; the continuum of possible values between the extremes classifies terrain in the sense of fuzzy set theory. This process allows obstacles to be represented in the map as an abstraction of the data instead of being arduously tracked individually, which would require much memory and complex processing. The map concept is also valuable in that, through translation with the vehicle, information is passed to regions without direct sensor inputs. This allows the system to track obstacles to the side and, to some extent, behind the vehicle. The system could therefore potentially deal with complex situations where this information would be valuable, such as recognizing and backing out of a trap. This thesis includes the development of the approximate mapping algorithm, an explanation of its integration with a test bed vehicle, a demonstration of the algorithm using the test bed vehicle, and ground work for the development of an automatic decision making scheme, which will constitute the continuing research effort.
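    The approximate map can be pictured as a coarse grid of fuzzy danger values that is shifted as the vehicle moves, so obstacles stay tracked without per-obstacle bookkeeping. The grid size, fusion rule and coordinates below are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch of the approximate mapping idea: regions around the vehicle
# hold fuzzy danger values in [0, 1]; sensor readings are accumulated into the
# grid and the grid is translated when the vehicle moves.

import numpy as np

GRID = 9                                   # 9 x 9 regions centred on the vehicle
danger = np.zeros((GRID, GRID))            # 0 = passable, 1 = impassable

def sense(region_row, region_col, reading):
    """Fuse a new sensor reading into a region (keep the worst case seen)."""
    danger[region_row, region_col] = max(danger[region_row, region_col], reading)

def translate(drow, dcol):
    """Shift the map contents by (drow, dcol); newly exposed regions start unknown (0)."""
    global danger
    shifted = np.zeros_like(danger)
    src = danger[max(0, -drow):GRID - max(0, drow), max(0, -dcol):GRID - max(0, dcol)]
    shifted[max(0, drow):GRID - max(0, -drow), max(0, dcol):GRID - max(0, -dcol)] = src
    danger = shifted

sense(7, 4, 0.8)                           # a mostly impassable region ahead of the vehicle
translate(-1, 0)                           # vehicle advances one region: contents shift back
print(danger[6, 4])                        # the obstacle is now one row closer (0.8)
```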

    Intelligent mobile sensor system for drum inspection and monitoring: Topical report, October 1, 1993--April 22, 1995
