Sensory Augmentation for Navigation in Difficult Urban Environments by People With Visual Impairment
Independent mobility in completing such tasks as walking through a town centre is taken for granted by able-bodied individuals. However, for those with a disability such as visual impairment, mobility and navigation can become challenging tasks not easily undertaken. The barriers to access for blind and partially sighted individuals are increased when familiar navigational cues are removed in difficult urban environments such as Shared Space. The research consisted of investigating the methods of navigation employed by people with visual impairment and designing a device to restore confidence to this group, so as to lower the barriers to accessing such environments.
Investigation was carried out through the deployment of a questionnaire; discussions with groups representing blind and partially sighted people; and a site visit to Shared Space environments. Statistical analysis was carried out on the results of the questionnaire to ascertain the navigational habits of blind and partially sighted individuals in different environments. From the analysis and the results of the discussions and site visit it was established that it would be socially acceptable to design a secondary aid to navigation that would complement the primary aids of long cane or guide dog. A concept experiment was carried out to test the idea that knowledge about changes in surface colour could help with navigation.
A prototype device that could be used by individuals with visual impairment to increase their confidence when navigating a difficult environment was designed, built and tested. Different programming methods were researched and trialled to use machine vision effectively to analyse the video feed from a passive camera and return useful information to a blind or partially sighted user.
The device was tested indoors and outdoors and found to be effective at detecting changes in surface colour. Further work is needed to run the software on a more compact platform such as a mobile phone, but initial results show that the concept is viable and that the barriers faced by blind and partially sighted people navigating difficult urban environments can be much reduced through the use of this technology.
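The surface-colour-change detection described above can be sketched as a comparison of mean colours between image regions. This is a minimal illustration, not the authors' implementation; the patch sizes, colour values, and threshold are assumptions:

```python
import numpy as np

def mean_colour(region):
    """Mean RGB colour of an image region of shape (h, w, 3)."""
    return region.reshape(-1, 3).mean(axis=0)

def surface_change(prev_region, next_region, threshold=30.0):
    """Flag a surface-colour change when the Euclidean distance
    between the mean colours of two regions exceeds a tuned threshold."""
    dist = np.linalg.norm(mean_colour(prev_region) - mean_colour(next_region))
    return dist > threshold

# Two synthetic 4x4 patches: grey pavement vs. red tactile paving.
pavement = np.full((4, 4, 3), 128.0)
tactile = np.zeros((4, 4, 3))
tactile[..., 0] = 200.0

print(surface_change(pavement, tactile))   # True: large colour shift
print(surface_change(pavement, pavement))  # False: no change
```

In a real system the regions would be successive patches of ground sampled from the camera feed, and the threshold would be calibrated against lighting variation.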
Navigation of Automatic Vehicle using AI Techniques
Mobile robot navigation has been studied as an important task for a new generation of mobile robots, here the Corobot, and is considered for an unknown environment. We use the four-wheeled Corobot as an autonomous robot for path planning with sensor-based obstacle and collision avoidance. We propose maintaining a predefined distance from the robot to the target, making the robot follow the target at this distance and improving its trajectory-tracking characteristics. The robot then navigates among obstacles without hitting them and reaches the specified goal point. To achieve these goals, we use two techniques from the study of neural networks: radial basis functions and the back-propagation algorithm. A robotic arm is assembled on the Corobot and a kinematic analysis of the arm is carried out; with the help of the Phidget Control Panel, the wheels are driven in both forward and reverse directions by a two-motor controller. The kinematic analysis establishes the relationships between the positions and orientations of the links of the manipulator. These artificial-intelligence techniques and their control strategies are shown to have potential applications in industry, security, defence, investigation, and other fields. Finally, simulation results obtained with the neural network in the Webots simulator are compared with experimental data for different training patterns.
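A radial-basis-function network of the kind named above maps sensor readings to a control command through Gaussian hidden units and linear output weights. The sketch below is a generic illustration under assumed toy data (two obstacle-distance inputs, one steering output), not the Corobot controller itself:

```python
import numpy as np

class RBFNet:
    """Minimal radial-basis-function network: Gaussian hidden units
    centred on chosen points, linear output weights fitted by least squares."""
    def __init__(self, centres, width=1.0):
        self.centres = np.asarray(centres, float)
        self.width = width
        self.w = None

    def _phi(self, X):
        # Gaussian activation of each centre for each input row.
        d = np.linalg.norm(X[:, None, :] - self.centres[None, :, :], axis=2)
        return np.exp(-(d / self.width) ** 2)

    def fit(self, X, y):
        Phi = self._phi(np.asarray(X, float))
        self.w, *_ = np.linalg.lstsq(Phi, np.asarray(y, float), rcond=None)

    def predict(self, X):
        return self._phi(np.asarray(X, float)) @ self.w

# Toy data: [left, right] obstacle distances -> steering command.
X = [[1.0, 0.2], [0.2, 1.0], [1.0, 1.0], [0.2, 0.2]]
y = [0.8, -0.8, 0.0, 0.0]  # steer away from the nearer obstacle
net = RBFNet(centres=X, width=0.5)
net.fit(X, y)
print(net.predict([[0.9, 0.3]]))  # positive: steer away from the right
```

Centring the basis functions on the training points makes the design matrix well conditioned here; a back-propagation variant would instead train the weights iteratively by gradient descent.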
Flex-Ro: A Robotic High Throughput Field Phenotyping System
Research in agriculture is critical to developing techniques to meet the world’s demand for food, fuel, fiber, and feed. Optimization of crop production per unit of land requires scientists across disciplines to collaborate and investigate new areas of science and tools for data collection. The use of robotics has been adopted in several industries to supplement labor, and accurately perform repetitious tasks. However, the use of autonomous robots in commercial agricultural production is still limited. The Flex-Ro (Flexible structured Robotic platform) was developed for use in large area fields as a multipurpose tool to perform monotonous agricultural tasks.
This work presents the design and implementation of the control system for the Flex-Ro machine. The machine control architecture was developed for safe operation with redundant emergency stops and checks. Operators use the remote-control device to maneuver the machine in uncontrolled environments. Autonomous field coverage was developed using global positioning system (GPS) guidance. The guidance system tracked within 4 cm of the guidance line 95% of the time at a travel speed of 4 kph. Waypoint guidance was implemented and demonstrated such that Flex-Ro could be programmed to follow complex paths and curves.
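Guidance-line tracking of the kind quantified above (within 4 cm of the line) is commonly driven by a cross-track-error feedback loop. The following is a generic sketch, not the Flex-Ro controller; the waypoint coordinates, sign convention, and gain are assumptions:

```python
import math

def cross_track_error(pos, a, b):
    """Signed lateral offset (m) of point `pos` from the guidance line
    through waypoints a -> b (positive when left of the line)."""
    (ax, ay), (bx, by), (px, py) = a, b, pos
    # 2D cross product of the line direction with the offset vector,
    # normalised by the segment length.
    return ((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / math.hypot(bx - ax, by - ay)

def steering_correction(pos, a, b, gain=0.5):
    """Proportional correction: steer back toward the guidance line."""
    return -gain * cross_track_error(pos, a, b)

line_a, line_b = (0.0, 0.0), (10.0, 0.0)
print(cross_track_error((5.0, 0.04), line_a, line_b))    # 0.04 m = 4 cm offset
print(steering_correction((5.0, 0.04), line_a, line_b))  # -0.02: steer right
```

Waypoint guidance then amounts to switching the (a, b) pair to the next segment of the programmed path as each waypoint is reached.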
High-throughput plant phenotyping is a continuously developing and evolving field of plant science. The methods used to collect phenotyping data include drones, satellites, manual measurement, and ground rovers. A suite of phenotyping sensors was installed on the Flex-Ro to cover large field areas. The system was verified in soybean research plots at the University of Nebraska-Lincoln (UNL) Spidercam phenotyping facility. Positive correlations between the Spidercam and Flex-Ro phenotyping data were established. The Flex-Ro was able to statistically distinguish differences in emergence and maturity between soybean varieties. The late-season phenotyping data showed statistical differences between the fully irrigated and deficit-irrigated plots. Basic economic calculations estimated the cost of operating the Flex-Ro machine for field phenotyping at approximately $5.50/ha.
Advisor: Santosh K. Pitla
Autonomous Vehicles
This edited volume, Autonomous Vehicles, is a collection of reviewed and relevant research chapters offering a comprehensive overview of recent developments in the field of vehicle autonomy. The book comprises nine chapters authored by various researchers and edited by an expert active in the field of study. Each chapter is complete in itself, but all are united under a common research topic. This publication aims to provide a thorough overview of the latest research efforts by international authors, to open possible paths for further novel developments, and to inspire younger generations to pursue relevant academic studies and professional careers in the autonomous vehicle field.
Ubiquitous haptic feedback in human-computer interaction through electrical muscle stimulation
[no abstract]
An Orientation & Mobility Aid for People with Visual Impairments
Orientation & Mobility (O&M) comprises a set of techniques that help people with visual impairments find their way in everyday life. Even so, they need extensive and very labour-intensive one-on-one training with O&M instructors to integrate these techniques into their daily routines. While some of these techniques use assistive technologies, such as the long cane, points-of-interest databases, or a compass-based orientation system, an inconspicuous communication gap exists between the available aids and navigation systems.
In recent years, mobile computing systems, smartphones in particular, have become ubiquitous. This gives modern computer-vision techniques the opportunity to support human vision with everyday problems that arise from non-accessible design. Nevertheless, particular care must be taken not to interfere with a person's specific competencies and trained behaviours or, in the worst case, even to contradict O&M techniques.
In this dissertation we identify a spatial and a systemic gap between orientation aids and navigation systems for people with visual impairments. The spatial gap exists mainly because assistive orientation aids, such as the long cane, can only help perceive the environment within a limited range, while navigation information is kept very coarse. In addition, the gap also arises systemically between these two components: the long cane does not know the route, while a navigation system takes no account of nearby obstacles or O&M techniques. We therefore propose several approaches to closing this gap, improving the connection and communication between orientation aids and navigation information, and we consider the problem from both directions. To provide useful, relevant information, we first identify the most important requirements for assistive systems and establish several key concepts that we observe in our algorithms and prototypes.
Existing assistive systems for orientation are based mainly on global navigation satellite systems. We attempt to improve on these by creating a guideline-based routing algorithm that can be adapted to, and takes into account, individual needs. The generated routes are imperceptibly longer but much safer, according to objective criteria developed in collaboration with O&M instructors. We also improve the availability of the relevant geo-referenced databases required for such need-based routing. To this end, we create a machine-learning approach that detects zebra crossings in aerial imagery, which also works across country borders, improving on the state of the art.
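Need-based routing of this kind can be sketched as a shortest-path search whose edge costs penalise segments without a usable guideline (kerb, tactile paving, building line), trading a slightly longer route for a safer one. The graph layout, edge format, and penalty value below are assumptions for illustration, not the dissertation's algorithm:

```python
import heapq

def safest_route(graph, start, goal, penalty=5.0):
    """Dijkstra over a pedestrian graph whose edges are
    (neighbour, length_m, has_guideline); unguided edges cost extra."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nb, length, guided in graph.get(node, []):
            cost = d + length * (1.0 if guided else penalty)
            if cost < dist.get(nb, float("inf")):
                dist[nb], prev[nb] = cost, node
                heapq.heappush(heap, (cost, nb))
    # Reconstruct the path by walking the predecessor map backwards.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy network: the direct edge A-D lacks a guideline; the detour has one.
graph = {
    "A": [("D", 100, False), ("B", 60, True)],
    "B": [("C", 60, True)],
    "C": [("D", 60, True)],
}
print(safest_route(graph, "A", "D"))  # ['A', 'B', 'C', 'D']
```

With the penalty set to 1.0 the cost reduces to plain distance and the shorter direct edge wins, which is exactly the trade-off between route length and safety described above.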
To maximise the benefit of computer-vision-based mobility assistance, we create approaches modelled on O&M techniques to increase spatial awareness of the immediate surroundings. We first consider the available free space and also report possible obstacles. We further create a novel approach to detecting and precisely localising the available guidelines, generating virtual guidelines that bridge interruptions and provide early information about the next guideline. Finally, we improve the accessibility of pedestrian crossings, in particular zebra crossings and pedestrian traffic lights, with a deep-learning approach.
To analyse whether the approaches and algorithms we created provide real added value for people with visual impairments, we conduct a small Wizard-of-Oz experiment on our need-based routing, with a very encouraging result. We also carry out a more extensive study with several components, focused on pedestrian crossings. Although our statistical evaluations show only a marginal improvement, affected by technical problems with the first prototype and too little time for participants to grow accustomed to the system, we receive very promising comments from almost all study participants. This shows that we have already taken an important first step toward closing the identified gap and have thereby improved orientation and mobility for people with visual impairments.