
    Development of Extrospective Systems for Mobile Robotic Vehicles.

    Extrospection is the process of receiving knowledge of the outside world through the senses. On robotic platforms this is primarily focused on determining distances to objects of interest and is achieved through the use of ranging sensors. Any hardware implemented on mobile robotic platforms, including sensors, should ideally be small in size and weight, power efficient, self-contained, and easy to interface with the existing platform hardware. The development of stable, expandable and interchangeable mobile-robot-based sensing systems is crucial to establishing platforms on which complex robotic research can be conducted and evaluated in real-world situations. This thesis details the design and development of two extrospective systems for incorporation into Victoria University of Wellington's fleet of mobile robotic platforms. The first system is a generic intelligent sensor network. Fundamental to this system has been the development of a network architecture and protocols that provide a stable scheme for connecting a large number of sensors to a mobile robotic platform with little or no dependence on the platform's existing hardware configuration. A prototype sensor network comprising fourteen infrared position-sensitive detectors, providing a short- to medium-distance ranging system (0.2 - 3 m) with a 360° field of view, has been successfully developed and tested. The second system is a redesign of an existing prototype full-field image ranger system. The redesign has yielded a smaller, mobile version of the prototype system capable of ranging medium to long distances (0 - 15 m) with a 22.2° - 16.5° field of view. This ranger system can now be incorporated onto mobile robotic platforms for further research into the capabilities of full-field image ranging as a form of extrospection on a mobile platform.
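    As an illustration of how a ring of such detectors might be used on the platform, the minimal sketch below scans fourteen range readings (assumed evenly spaced around the robot) and returns the bearing and distance of the nearest in-range object. The layout, units, and function names are assumptions for illustration, not the network protocol developed in the thesis.

```python
import math

NUM_SENSORS = 14                    # evenly spaced around the platform
SECTOR = 2 * math.pi / NUM_SENSORS  # angular spacing between detectors
MIN_RANGE, MAX_RANGE = 0.2, 3.0     # usable span of the IR detectors (m)

def nearest_obstacle(ranges):
    """Return (bearing_rad, distance_m) of the closest valid reading,
    or None if every detector reports out-of-range."""
    best = None
    for i, r in enumerate(ranges):
        if MIN_RANGE <= r <= MAX_RANGE:
            if best is None or r < best[1]:
                best = (i * SECTOR, r)
    return best

# Hypothetical snapshot of the fourteen detector readings
readings = [3.5, 2.9, 1.1, 0.8, 1.4, 3.5, 3.5, 2.2, 3.5, 3.5, 3.0, 2.7, 3.5, 3.5]
print(nearest_obstacle(readings))   # -> (bearing of sensor 3, 0.8 m)
```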

    Light sensor development for ARA platform

    Some years ago, Google ATAP (the Google innovation department) announced the ARA initiative: a modular phone in which parts of the phone, such as cameras, sensors, or network modules, can be swapped. When a new feature appears or is required by the user, there is no need to change the mobile phone, just to buy the module with that functionality. See https://www.youtube.com/watch?v=2pr9cV6lvws for further information. The Wireless Networks Group will receive a development kit in December (http://projectara.com/s/ProjectAraSpiral1DeveloperHardwareManual.pdf) and start working with it in January. In recent years, Visible Light Communication (VLC), a novel technology that enables standard Light-Emitting Diodes (LEDs) to transmit data, has been gaining significant attention. In the near future, this technology could enable devices containing LEDs, such as car lights, city lights, screens, and home appliances, to carry information to end users through their smartphones. However, VLC is currently limited by the end-point receiver, such as the mobile camera or a peripheral connected through the jack input; to unleash the full potential of VLC, more advanced receivers are required. This Master's thesis presents the design and development of a simple module that supports communication by light (VLC) using the ARA Module Developer Kit provided by Google. It consists of building a front-end circuit connecting a photodiode that measures the light level and uses it as a data carrier, in order to receive and display data inside a custom Android application on the ARA smartphone.
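    As a rough illustration of the receive path such a module might implement, the sketch below decodes an on-off-keyed stream of sampled light levels into bytes. The sampling rate, threshold, and absence of framing or clock recovery are simplifying assumptions and do not describe the actual front-end built in the thesis.

```python
def decode_ook(samples, samples_per_bit, threshold):
    """Decode on-off-keyed light-level samples into bytes.
    Each bit is held for `samples_per_bit` samples; a mean level above
    `threshold` counts as 1. Framing and clock recovery are omitted."""
    bits = []
    for i in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        window = samples[i:i + samples_per_bit]
        avg = sum(window) / len(window)
        bits.append(1 if avg > threshold else 0)
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b          # MSB-first bit packing
        out.append(byte)
    return bytes(out)

# 'A' (0x41) sent at 4 samples per bit with idealized light levels
samples = []
for bit in [0, 1, 0, 0, 0, 0, 0, 1]:
    samples += [900 if bit else 100] * 4
print(decode_ook(samples, samples_per_bit=4, threshold=500))  # b'A'
```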

    An Inertial-Optical Tracking System for Quantitative, Freehand, 3D Ultrasound

    Three dimensional (3D) ultrasound has become an increasingly popular medical imaging tool over the last decade. It offers significant advantages over Two Dimensional (2D) ultrasound, such as improved accuracy, the ability to display image planes that are physically impossible with 2D ultrasound, and reduced dependence on the skill of the sonographer. Among 3D medical imaging techniques, ultrasound is the only one portable enough to be used by first responders, on the battlefield, and in rural areas. There are three basic methods of acquiring 3D ultrasound images. In the first method, a 2D array transducer is used to capture a 3D volume directly, using electronic beam steering. This method is mainly used for echocardiography. In the second method, a linear array transducer is mechanically actuated, giving a slower and less expensive alternative to the 2D array. The third method uses a linear array transducer that is moved by hand. This method is known as freehand 3D ultrasound. Whether using a 2D array or a mechanically actuated linear array transducer, the position and orientation of each image is known ahead of time. This is not the case for freehand scanning. To reconstruct a 3D volume from a series of 2D ultrasound images, assumptions must be made about the position and orientation of each image, or a mechanism for detecting the position and orientation of each image must be employed. The most widely used method for freehand 3D imaging relies on the assumption that the probe moves along a straight path with constant orientation and speed. This method requires considerable skill on the part of the sonographer. Another technique uses features within the images themselves to form an estimate of each image's relative location. However, these techniques are not well accepted for diagnostic use because they are not always reliable. The final method for acquiring position and orientation information is to use a six Degree-of-Freedom (6 DoF) tracking system. Commercially available 6 DoF tracking systems use magnetic fields, ultrasonic ranging, or optical tracking to measure the position and orientation of a target. Although accurate, all of these systems have fundamental limitations in that they are relatively expensive and they all require sensors or transmitters to be placed in fixed locations to provide a fixed frame of reference. The goal of the work presented here is to create a probe tracking system for freehand 3D ultrasound that does not rely on any fixed frame of reference. This system tracks the ultrasound probe using only sensors integrated into the probe itself. The advantages of such a system are that it requires no setup before it can be used, it is more portable because no extra equipment is required, it is immune from environmental interference, and it is less expensive than external tracking systems. An ideal tracking system for freehand 3D ultrasound would track in all 6 DoF. However, current sensor technology limits this system to five. Linear transducer motion along the skin surface is tracked optically and transducer orientation is tracked using MEMS gyroscopes. An optical tracking system was developed around an optical mouse sensor to provide linear position information by tracking the skin surface. Two versions were evaluated. One included an optical fiber bundle and the other did not.
The purpose of the optical fiber is to allow the system to integrate more easily into existing probes by allowing the sensor and electronics to be mounted away from the scanning end of the probe. Each version was optimized to track features on the skin surface while providing adequate Depth Of Field (DOF) to accept variation in the height of the skin surface. Orientation information is acquired using a 3 axis MEMS gyroscope. The sensor was thoroughly characterized to quantify performance in terms of accuracy and drift. This data provided a basis for estimating the achievable 3D reconstruction accuracy of the complete system. Electrical and mechanical components were designed to attach the sensor to the ultrasound probe in such a way as to simulate its being embedded in the probe itself. An embedded system was developed to perform the processing necessary to translate the sensor data into probe position and orientation estimates in real time. The system utilizes a Microblaze soft core microprocessor and a set of peripheral devices implemented in a Xilinx Spartan 3E field programmable gate array. The Xilinx Microkernel real time operating system performs essential system management tasks and provides a stable software platform for implementation of the inertial tracking algorithm. Stradwin 3D ultrasound software was used to provide a user interface and perform the actual 3D volume reconstruction. Stradwin retrieves 2D ultrasound images from the Terason t3000 portable ultrasound system and communicates with the tracking system to gather position and orientation data. The 3D reconstruction is generated and displayed on the screen of the PC in real time. Stradwin also provides essential system features such as storage and retrieval of data, 3D data interaction, reslicing, manual 3D segmentation, and volume calculation for segmented regions. The 3D reconstruction performance of the system was evaluated by freehand scanning a cylindrical inclusion in a CIRS model 044 ultrasound phantom. Five different motion profiles were used and each profile was repeated 10 times. This entire test regimen was performed twice, once with the optical tracking system using the optical fiber bundle, and once with the optical tracking system without the optical fiber bundle. 3D reconstructions were performed with and without the position and orientation data to provide a basis for comparison. Volume error and surface error were used as the performance metrics. Volume error ranged from 1.3% to 5.3% with tracking information versus 15.6% to 21.9% without for the version of the system without the optical fiber bundle. Volume error ranged from 3.7% to 7.6% with tracking information versus 8.7% to 13.7% without for the version of the system with the optical fiber bundle. Surface error ranged from 0.319 mm RMS to 0.462 mm RMS with tracking information versus 0.678 mm RMS to 1.261 mm RMS without for the version of the system without the optical fiber bundle. Surface error ranged from 0.326 mm RMS to 0.774 mm RMS with tracking information versus 0.538 mm RMS to 1.657 mm RMS without for the version of the system with the optical fiber bundle. The prototype tracking system successfully demonstrated that accurate 3D ultrasound volumes can be generated from 2D freehand data using only sensors integrated into the ultrasound probe. One serious shortcoming of this system is that it only tracks 5 of the 6 degrees of freedom required to perform complete 3D reconstructions. 
The optical system provides information about linear movement, but because it tracks a surface, it cannot measure vertical displacement. Overcoming this limitation is the most obvious candidate for future research using this system. The overall tracking platform developed and integrated in this work, meaning the embedded tracking computer and the PC software, is ready to take advantage of vertical displacement data, should a method be developed for sensing it.
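    As a rough sketch of the kind of dead-reckoning update such a probe-mounted tracker performs, the code below rotates the current orientation by the gyroscope rates and advances the position by the optical sensor's in-plane displacement. The interfaces, the first-order integration, and the function names are illustrative assumptions, not the embedded implementation described above.

```python
import numpy as np

def skew(w):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_pose(R, p, gyro_rate, optical_dxdy, dt):
    """One tracking update: rotate orientation R by the body-frame gyro
    rates (rad/s) over dt, then advance position p by the optical
    sensor's in-plane displacement expressed in the probe frame.
    Out-of-plane (vertical) motion is unobserved -- the missing sixth
    degree of freedom noted in the abstract."""
    # First-order rotation update, then re-orthonormalize via SVD
    R = R @ (np.eye(3) + skew(np.asarray(gyro_rate) * dt))
    u, _, vt = np.linalg.svd(R)
    R = u @ vt
    # Optical displacement lies in the probe's x-y plane (z unknown)
    d_body = np.array([optical_dxdy[0], optical_dxdy[1], 0.0])
    p = p + R @ d_body
    return R, p

R, p = np.eye(3), np.zeros(3)
R, p = integrate_pose(R, p, gyro_rate=[0.0, 0.0, 0.1],
                      optical_dxdy=[0.001, 0.0], dt=0.02)
print(R, p)
```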

    Design, modeling and analysis of object localization through acoustical signals for cognitive electronic travel aid for blind people

    The objective of this thesis is the study and analysis of the localization of objects in a real environment by means of sounds, together with the subsequent integration and testing of a real device based on this technique and intended for people with visual impairment. In order to understand and analyse object localization, an in-depth state of the art has been compiled on the navigation systems developed over recent decades for people with different degrees of visual impairment. In this state of the art, the existing navigation devices have been analysed and organised, classifying them according to the components they use to acquire data from the environment. In this respect, three classes of navigation devices are known to date: 'obstacle detectors', which are based on ultrasonic devices and sensors installed on the electronic navigation device with the aim of detecting objects that appear in the system's working area; 'environmental sensors', which aim to detect both the object and the user. Devices of this class are installed at bus, metro and train stations, pedestrian crossings, etc., so that when the user's sensor enters the range of the sensors installed at the station, they inform the user of its presence. Likewise, the user's sensor also detects means of transport fitted with the corresponding laser- or ultrasound-based device, giving the user information such as the bus number, route, etc. The third class of electronic navigation systems are 'navigation devices'. These are based on GPS devices and indicate to the user both their location and the route to follow to reach their destination. Following the first stage, the preparation of the state of the art…
    Dunai, L. (2010). Design, modeling and analysis of object localization through acoustical signals for cognitive electronic travel aid for blind people [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8441
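    To illustrate the kind of acoustic cue generation such a travel aid performs, the sketch below maps an object's azimuth and distance to an approximate interaural time difference (using the standard Woodworth approximation) and a distance-based gain. The constants and the cue model are illustrative assumptions, not the acoustic model developed in the thesis.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, typical head radius used in ITD models

def binaural_cues(azimuth_deg, distance_m):
    """Approximate interaural time difference (s) and a distance-based
    gain for spatialising an object at the given azimuth and distance.
    Positive azimuth places the source to the right of the listener."""
    az = math.radians(azimuth_deg)
    # Woodworth approximation of the interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    # Simple inverse-distance attenuation, clamped near the listener
    gain = 1.0 / max(distance_m, 0.5)
    return itd, gain

print(binaural_cues(azimuth_deg=30, distance_m=2.0))
```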

    Joint University Program for Air Transportation Research, 1981

    Navigation, guidance, control, and display concepts and hardware are discussed, with special emphasis on applications to general aviation aircraft.

    Small business innovation research. Abstracts of completed 1987 phase 1 projects

    Non-proprietary summaries of Phase 1 Small Business Innovation Research (SBIR) projects supported by NASA in the 1987 program year are given. Work in the areas of aeronautical propulsion, aerodynamics, acoustics, aircraft systems, materials and structures, teleoperators and robotics, computer sciences, information systems, spacecraft systems, spacecraft power supplies, spacecraft propulsion, bioastronautics, satellite communication, and space processing is covered.

    Micro/Nano Structures and Systems

    Micro/Nano Structures and Systems: Analysis, Design, Manufacturing, and Reliability is a comprehensive guide to the various aspects of micro- and nanostructures and systems. From analysis and design to manufacturing and reliability, it provides a thorough understanding of the latest methods and techniques used in the field. With an emphasis on modern computational and analytical methods and their integration with experimental techniques, this reprint is an invaluable resource for researchers and engineers working on micro- and nanosystems, including micromachines, additive manufacturing at the microscale, micro/nano-electromechanical systems, and more. Written by leading experts, it offers a complete understanding of the physical and mechanical behavior of micro- and nanostructures, making it an essential reference for professionals in the field.

    Multi-access laser communications terminal

    The Optical Multi-Access (OMA) Terminal is capable of establishing up to six simultaneous high-data-rate communication links between low-Earth-orbit satellites and a host satellite at synchronous orbit with only one 16-inch-diameter antenna on the synchronous satellite. The advantage over equivalent RF systems in space weight, power, and swept volume is great when applied to NASA satellite communications networks. A photograph of the 3-channel prototype constructed under the present contract to demonstrate the feasibility of the concept is presented. The telescope has a 10-inch clear aperture and a 22 deg full field of view. It consists of 4 refractive elements to achieve a telecentric focus, i.e., the focused beam is normal to the focal plane at all field angles. This feature permits image pick-up optics in the focal plane to track satellite images without tilting their optic axes to accommodate field angle. The geometry of the image pick-up concept and the coordinate system of the swinging arm and disk mechanism for image pick-up are shown. Optics in the arm relay the telescope focus to a communications and tracking receiver and introduce the transmitted beacon beam on a path collinear with the receive path. The electronic circuits for the communications and tracking receivers are contained on the arm and disk assemblies and relay signals to an associated PC-based operator's console for control of the arm and disk motor drive through a flexible cable which permits +/- 240 deg of travel for each arm and disk assembly. Power supplies and laser transmitters are mounted in the cradle for the telescope. A single-mode fiber in the cable is used to carry the laser transmitter signal to the arm optics. The promise of the full optical multi-access terminal, towards which the prototype effort worked, is shown. The emphasis in the prototype development was the demonstration of the unique aspect of the concept, and where possible, cost-avoidance compromises were implemented in areas already proven on other programs. The design details are described in section 2, the prototype test results in section 3, the additional development required in section 4, and conclusions in section 5.
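    As a rough sketch of the pick-up geometry that the telecentric focus enables, the code below maps a satellite's field angle and azimuth to a polar position on the focal plane (radius roughly f*tan(theta)), which the swinging arm and disk mechanism would then track. The focal length and the simple mapping are illustrative assumptions, not the prototype's actual optical prescription.

```python
import math

FOCAL_LENGTH_IN = 30.0   # illustrative effective focal length, inches

def image_position(field_angle_deg, azimuth_deg):
    """Polar position of a satellite image on the telecentric focal
    plane: radius ~ f*tan(theta), azimuth equal to the satellite's
    azimuth about the telescope axis. The swinging arm would follow the
    azimuth and the disk offset would follow the radius."""
    theta = math.radians(field_angle_deg)
    radius_in = FOCAL_LENGTH_IN * math.tan(theta)
    return radius_in, azimuth_deg

# A satellite near the edge of the 22 deg full field (11 deg half-angle)
print(image_position(field_angle_deg=11.0, azimuth_deg=135.0))
```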

    Aural Multitasking in Personal Media Devices

    The use of personal media devices (PMDs) in traffic can lead to safety-critical situations, because visual attention is divided between the device and the environment. This thesis considers the use of auditory interfaces for multitasking in PMDs. Aural multitasking refers to performing several simultaneous tasks using sound as the primary display modality. In order to create such an eyes-free multitasking interface, the problems of presenting information from various sound sources and the issues regarding interaction must be solved. This thesis consists of three distinct topics. The first topic presents a gesture controller for auditory interfaces. The controller uses acoustic classification to recognize four tactile gestures and can be operated, for example, through a pocket. The second topic presents a multilayer auditory interface. The multilayer interface incorporates ideas from ambient displays and creates a personal, layered soundscape that enables the management of auditory attention. The method divides the information and tasks into foreground and background streams according to their priorities. The last topic presents a rapid head-related transfer function (HRTF) personalization method for PMD usage. The method is implemented as an auditory game and does not require additional accessories besides the headphones.
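    As a simple illustration of the foreground/background layering idea, the sketch below assigns each audio stream a gain based on its priority: high-priority streams stay in the foreground at full level, the rest are attenuated into an ambient background. The threshold and gain values are illustrative assumptions, not parameters from the thesis.

```python
def layer_streams(streams, foreground_threshold=5, background_gain=0.25):
    """Split audio streams into foreground/background layers by priority
    and return a per-stream gain. `streams` maps stream name -> priority
    (higher = more urgent). Gains are illustrative placeholders."""
    gains = {}
    for name, priority in streams.items():
        if priority >= foreground_threshold:
            gains[name] = 1.0              # foreground: full level
        else:
            gains[name] = background_gain  # background: ambient level
    return gains

streams = {"navigation_prompt": 8, "incoming_call": 7,
           "music": 2, "email_notification": 3}
print(layer_streams(streams))
```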

    Preliminary study and design of the avionics system for an eVTOL aircraft.

    The project consists of the study, creation, implementation, and development of the avionics system of an electric Vertical Take-Off and Landing (eVTOL) aircraft for an ongoing project from the company ONAEROSPACE. The aircraft is intended to carry 7 passengers and 1 pilot, with a maximum range of 1000+ km. The fuselage will be made of carbon-fiber composite to reduce weight, and the aircraft will use electric motors powered by batteries. The avionics system will provide the aircraft with communication and navigation systems and an autonomous Take-Off (T/O) and landing system, as well as cockpit management systems. This document is divided into two parts. The first part begins with the study of all the tools necessary for the communication and navigation systems, that is, all mandatory antennas and sensors used to obtain information about the surroundings (weather, obstacles, other aircraft, etc.). The internal communication network that carries data from these sensors and antennas to the main flight management systems is also studied in this first part. The second part of the project is dedicated to the cockpit systems and to the study of the future development of autonomous systems. The cabin will have a full glass cockpit, with touch screens and an intelligent voice assistant; it will be ergonomic and simple, with plenty of space in the cabin. To give an idea of the cost of implementing all the systems for the aircraft, a weight and cost estimation analysis is carried out at the end of each section. The last part of the project consists of the study of the design of a virtual intelligent voice assistant and the implementation of autonomous systems. Nowadays, a virtual intelligent voice assistant is an artificial intelligence system that works as a pilot monitoring system and assists the pilot in order to decrease the pilot's workload. The future idea is that the pilot could give commands to the voice assistant and do nothing with the hands, just check that everything works correctly. Regarding the autonomous system, the conclusion is that it is not possible with existing technology today. Nevertheless, in the future, when fully autonomous aircraft are possible, the idea is that although the aircraft is fully autonomous, the pilot can take control of it at any moment.
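    As a small illustration of the weight and cost roll-up mentioned above, the sketch below sums per-subsystem estimates to aircraft level. The item names and figures are placeholders for illustration, not values from the project.

```python
from dataclasses import dataclass

@dataclass
class AvionicsItem:
    name: str
    mass_kg: float
    cost_eur: float

def totals(items):
    """Aircraft-level roll-up of the per-section weight/cost estimates."""
    return (sum(i.mass_kg for i in items), sum(i.cost_eur for i in items))

# Placeholder entries -- illustrative only, not the project's estimates
items = [
    AvionicsItem("VHF/SATCOM antennas", 4.0, 18000.0),
    AvionicsItem("Air-data and GNSS sensors", 3.5, 25000.0),
    AvionicsItem("Glass-cockpit displays", 9.0, 40000.0),
]
mass, cost = totals(items)
print(f"{mass:.1f} kg, {cost:.0f} EUR")
```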