79 research outputs found
Improving the precision and reducing the blind zones of ultrasonic sensors through the use of Golay complementary sequences
Sensorial systems based on ultrasonic transducers present constraints that reduce their performance. Typical problems are the low precision of the measurements and the existence of a blind zone in front of the transducers, in which reflectors cannot be detected when the transducers are used as both emitters and receivers. To improve the measurement precision achieved by these transducers, the emissions are most often coded with binary sequences, so that distances can then be determined using correlation techniques. In this context, this work presents the results obtained using Golay complementary sequences, which significantly increase the precision obtained from a single ultrasonic transducer and, thanks to their autocorrelation characteristics, also eliminate the blind zones mentioned above. Both features are of particular interest for the use of ultrasonic transducers in more complex sensorial associations.
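The autocorrelation property the abstract relies on is easy to demonstrate. The following is a minimal sketch, not the authors' implementation: it uses the standard recursive construction of a Golay complementary pair and shows the sidelobe cancellation that enables precise correlation-based ranging. The sequence length and echo delay are arbitrary illustration choices.

```python
import numpy as np

# Standard recursive construction of a Golay complementary pair of
# length 2^k:  a' = a ++ b,  b' = a ++ (-b)
def golay_pair(k):
    a, b = np.array([1]), np.array([1])
    for _ in range(k):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(5)  # length-32 pair

# Key property: the SUM of the two aperiodic autocorrelations is a delta
# (2N at zero lag, 0 at every other lag), so correlation sidelobes -- a
# main source of ranging error -- cancel exactly.
acf = lambda x: np.correlate(x, x, mode="full")
sidelobe_free = acf(a) + acf(b)

# Ranging sketch: correlate the received echoes of both sequences with
# their templates and sum; the peak lag is the round-trip delay in samples.
delay = 40  # arbitrary illustrative delay
rx_a = np.concatenate([np.zeros(delay), a])
rx_b = np.concatenate([np.zeros(delay), b])
cc = np.correlate(rx_a, a, "full") + np.correlate(rx_b, b, "full")
estimated_delay = int(np.argmax(cc)) - (len(a) - 1)
```

Because the summed correlation has no sidelobes, weak or closely spaced echoes cannot be masked by the sidelobes of stronger ones, which is consistent with the blind-zone elimination the abstract describes.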
Autonomous mobility scooters as assistive tools for the elderly
The aim of this research is to investigate the development of an autonomous navigation system that could be used as an assistive tool for elderly and disabled people in their activities of daily living. The navigation environment is an urban environment and the platform is a Mobility Scooter (MoS). To achieve this aim, a differentially steered MoS was modified to receive motion commands from a computer and outfitted with onboard sensors that included a Global Positioning System (GPS) receiver and two 2D planar laser range sensors. Perception methods were developed to detect the presence of an outdoor pedestrian walkway. These methods achieved this by processing the range data produced by the laser sensors to identify features that are typically found around walkways, such as curbs, low vegetation, walls and barriers. A method that utilises GPS localisation information to plan and navigate a route in an outdoor urban environment was also developed. Extensive experimental work was conducted to test the accuracy, repeatability and usefulness of the sensory devices. The developed perception methodologies were evaluated in real-world environments, while the navigation algorithms were predominantly tested in virtual environments. A navigation system that plans a route in an urban environment and follows it using behaviours arranged in a hierarchy is presented and shown to have the ability to safely navigate an MoS along an outdoor pedestrian path.
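The walkway-feature detection described above can be illustrated with a toy sketch. Everything below is hypothetical (the thesis' actual algorithms, thresholds and sensor geometry are not reproduced here): given lateral ground-height samples recovered from a downward-tilted planar laser scan, a curb shows up as a height step well above sensor noise.

```python
import numpy as np

def find_curb(heights, min_step=0.08):
    """Return the index of the largest adjacent height step if it
    exceeds min_step metres, else None. Parameters are illustrative."""
    steps = np.abs(np.diff(heights))
    i = int(np.argmax(steps))
    return i if steps[i] >= min_step else None

# Synthetic profile: flat pavement, then a 10 cm curb at sample 30,
# with 5 mm Gaussian sensor noise (all values made up).
rng = np.random.default_rng(0)
profile = np.concatenate([np.zeros(30), np.full(20, 0.10)])
profile += rng.normal(0.0, 0.005, profile.size)
curb_at = find_curb(profile)  # step detected between samples 29 and 30
```

A real system would of course also have to reject slopes, cope with missing returns, and fuse consecutive scans, but the step-detection idea is the core of curb finding from a 2D height profile.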
Mechatronic Systems
Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims at realizing high-quality products while guaranteeing a substantial reduction in manufacturing time and cost. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices, such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics, with applications in fields such as robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors, who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to readers. Chapters 1 to 6 deal with applications of mechatronics to the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools.
Proceedings of the 1st European conference on disability, virtual reality and associated technologies (ECDVRAT 1996)
The proceedings of the conference.
Advanced Knowledge Application in Practice
The integration and interdependency of the world economy are leading towards the creation of a global market that offers more opportunities, but is also more complex and competitive than ever before. Widespread research activity is therefore necessary for anyone who wants to remain successful in the market. This book is the result of research and development activities by a number of researchers worldwide, covering concrete fields of research.
Proceedings of the 2nd European conference on disability, virtual reality and associated technologies (ECDVRAT 1998)
The proceedings of the conference.
Human robot interaction in a crowded environment
Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body, such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3].
Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we propose a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them or, if an individual is receptive to the robot's interaction, it may approach the person.
Finally, if the user is moving in the environment, the system can analyse further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
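To make the idea of Bayesian cue fusion concrete, here is an illustrative naive-Bayes sketch. The cue names, prior and likelihood values below are invented for illustration; the thesis' actual network structure and probabilities are not reproduced here.

```python
# Naive-Bayes fusion of independent visual cues into a posterior that a
# person intends to interact with the robot:
#   P(intent | cues)  is proportional to  P(intent) * prod_i P(cue_i | intent)
LIKELIHOODS = {
    # cue: (P(cue | intent), P(cue | no intent)) -- made-up values
    "facing_robot": (0.9, 0.4),
    "waving":       (0.7, 0.05),
    "approaching":  (0.6, 0.2),
}

def intent_posterior(cues, prior=0.2):
    p_yes, p_no = prior, 1.0 - prior
    for name, present in cues.items():
        li, ln = LIKELIHOODS[name]
        p_yes *= li if present else (1.0 - li)
        p_no  *= ln if present else (1.0 - ln)
    return p_yes / (p_yes + p_no)

# A person facing the robot and waving is judged likely to want interaction.
p = intent_posterior({"facing_robot": True, "waving": True, "approaching": False})
```

Contextual feedback, as described above, would correspond to adjusting the prior and likelihood tables over time rather than keeping them fixed as in this sketch.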
Technical aid for independent mobility
The project developed in this thesis involves the design, implementation and evaluation of a new technical aid intended to ease the mobility of people with visual impairments. The system combines a stereo-vision processor with a sound synthesizer: through bone conduction, users hear a sonification protocol that, after training, informs them of the position and distance of the various obstacles that may be in their path, helping them avoid accidents.
In this project, surveys were conducted with experts in the fields of rehabilitation, blindness, and image- and sound-processing techniques, which defined the user requirements that served as guidelines for the design.
The thesis consists of three self-contained blocks: (i) image processing, where four stereo-vision processing algorithms are proposed; (ii) sonification, which details the proposed transformation of visual information into sound; and (iii) a final central chapter on integrating the above, evaluated sequentially in two implementation modes (software and hardware).
Both versions have been tested with both sighted and blind participants, obtaining qualitative and quantitative results that define future improvements to the project as finally implemented.
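A sonification protocol of the kind described can be sketched as a simple mapping from obstacle position to sound parameters. The mapping below is purely illustrative and does not reproduce the thesis' actual sound code: stereo pan encodes azimuth, while pitch and loudness encode distance.

```python
def sonify(azimuth_deg, distance_m, max_range=5.0):
    """Map one obstacle to (left_gain, right_gain, frequency_hz).
    All ranges and constants are illustrative, not the thesis' values."""
    d = min(max(distance_m, 0.1), max_range)       # clamp distance
    pan = max(-1.0, min(1.0, azimuth_deg / 45.0))  # -1 hard left, +1 hard right
    loudness = 1.0 - d / max_range                 # closer -> louder
    freq = 220.0 + (1.0 - d / max_range) * 660.0   # closer -> higher pitch
    left = loudness * (1.0 - pan) / 2.0
    right = loudness * (1.0 + pan) / 2.0
    return left, right, freq

# Obstacle ahead and to the right, 1 m away: sound comes from the right ear.
l, r, f = sonify(azimuth_deg=45, distance_m=1.0)
```

In a real device these parameters would drive a synthesizer over bone-conduction headphones, and the mapping would be tuned with user training, as the abstract notes.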
- …