138 research outputs found

    Biologically Inspired Visual Control of Flying Robots

    Insects possess an incredible ability to navigate their environment at high speed, despite having small brains and limited visual acuity. Through selective pressure they have evolved computationally efficient means of simultaneously performing navigation tasks and generating instantaneous control responses. The insect's main source of information is visual, and through a hierarchy of processes this information is used for perception: at the lowest level are local neurons that detect image motion and edges; at higher levels are interneurons that spatially integrate the output of the earlier stages. These higher-level processes can be considered models of the insect's environment, reducing the amount of information to only that which evolution has determined to be relevant. The scope of this thesis is experimenting with biologically inspired visual control of flying robots through information processing, models of the environment, and flight behaviour. In order to test these ideas I developed a custom quadrotor robot and experimental platform: the 'wasp' system. All algorithms ran on the robot, in real time or better, and hypotheses were always verified with flight experiments. I developed a new optical flow algorithm that is computationally efficient and can be applied in a regular pattern across the image. This technique is used later in my work when considering patterns in the image motion field. Using optical flow in the log-polar coordinate system I developed attitude estimation and time-to-contact algorithms. I find that the log-polar domain is useful for analysing global image motion, and is in many ways equivalent to the retinotopic arrangement of neurons in the optic lobe of insects, which is used for the same task. I investigated the role of depth in insect flight using two experiments. In the first experiment, to study how concurrent visual control processes might be combined, I developed a control system using the combined output of two algorithms. The first algorithm was a wide-field optical flow balance strategy and the second an obstacle avoidance strategy that used inertial information to estimate the depth to objects in the environment, specifically objects whose depth differed significantly from their surroundings. In the second experiment I created an altitude control system that used a model of the environment in Hough space, together with a biologically inspired sampling strategy, to efficiently detect the ground. Both control systems were used to control the flight of a quadrotor in an indoor environment. The methods that insects use to perceive edges and control their flight in response had not previously been applied to artificial systems. I developed a quadrotor control system that used the distribution of edges in the environment to regulate the robot's height and avoid obstacles. I also developed a model that predicted the distribution of edges in a static scene, and using this prediction I was able to estimate the quadrotor's altitude.
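
    As a rough illustration of the wide-field optical flow balance strategy mentioned above, the Python sketch below steers away from the side of the visual field with the larger average flow magnitude (the nearer side). It is a minimal sketch under assumed inputs and gains, not the thesis implementation.

        import numpy as np

        def balance_steering(flow, gain=1.0):
            """Wide-field optical flow balance: command a turn away from the side
            with the larger average flow magnitude, i.e. the nearer side.

            flow : (H, W, 2) array of per-pixel optical flow vectors (u, v).
            Returns a yaw-rate command; positive means turn toward the right half.
            """
            magnitude = np.linalg.norm(flow, axis=2)   # per-pixel flow speed
            mid = flow.shape[1] // 2
            left = magnitude[:, :mid].mean()           # mean flow in the left field
            right = magnitude[:, mid:].mean()          # mean flow in the right field
            # Zero when the flow is balanced (vehicle centred in the corridor).
            return gain * (left - right) / (left + right + 1e-9)

    In a corridor, larger flow on the left indicates that the left wall is closer, so the command turns the vehicle to the right until the two half-fields balance.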

    Evaluation of gaming environments for mixed reality interfaces and human supervisory control in telerobotics

    Telerobotics refers to a branch of technology that deals with controlling a robot from a distance. It is commonly used to access difficult environments, reduce operating costs, and improve comfort and safety. However, difficulties have emerged in telerobotics development. Effective telerobotics requires maximising operator performance, and previous research has identified issues that reduce it, such as operator attention being divided across numerous custom-built interfaces, and continuous operator involvement in high-workload situations causing exhaustion and subsequent operator error. This thesis evaluates mixed reality and human supervisory control concepts in a gaming-engine environment for telerobotics. This concept is proposed in order to improve the effectiveness of current telerobotic interfaces. Four experiments are reported in this thesis, covering virtual gaming environments, mixed reality interfaces, and human supervisory control, with the aim of advancing telerobotics technology. This thesis argues that gaming environments are useful for building telerobotic interfaces and examines the properties required for telerobotics. A useful feature provided by gaming environments is the ability to overlay video on virtual objects to support mixed reality interfaces. Experiments in this thesis show that mixed reality interfaces provide useful information without distracting the operator from the task. This thesis introduces two response models based on the planning process of human supervisory control: the Adaptation and Queue response models. The experimental results show superior user performance under these two response models compared with direct/manual control. In the final experiment a large number of novice users, with a diversity of backgrounds, used a robot arm under these two response models to push blocks into a hole. Further analysis showed that user performance on the interfaces with the two response models was well fitted by a Weibull distribution. Operators preferred the interface with the Queue response model over the interface with the Adaptation response model, and human supervisory control over direct/manual control. The control commands in a production system will usually be more sophisticated than those tested in this thesis, where limited time was available for automation development; in that case the gains in human productivity from human supervisory control found in these experiments can be expected to be even greater. The research conducted here has shown that mixed reality in gaming environments, when combined with human supervisory control, offers a good route to overcoming limitations in current telerobotics technology. Practical applications would benefit from these methods, making it possible for the operator to have the necessary information available in a convenient and non-distracting form, considerably improving productivity.
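
    The Weibull fit mentioned above can be reproduced in outline as follows. This is a generic sketch with hypothetical data and assumed variable names, not the thesis analysis: task completion times for one interface condition are fitted with a two-parameter Weibull distribution and checked with a goodness-of-fit test.

        import numpy as np
        from scipy import stats

        # Hypothetical task-completion times (seconds) for one interface condition.
        completion_times = np.array([38.2, 41.5, 44.0, 47.3, 52.8, 55.1, 61.9, 70.4])

        # Fit a two-parameter Weibull (location fixed at zero, as usual for durations).
        shape, loc, scale = stats.weibull_min.fit(completion_times, floc=0)

        # Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
        ks_stat, p_value = stats.kstest(completion_times,
                                        stats.weibull_min(shape, loc, scale).cdf)
        print(f"shape={shape:.2f}, scale={scale:.1f}, KS p={p_value:.3f}")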

    Augmented Reality for Space Applications

    Future space exploration will inevitably require astronauts to have a higher degree of autonomy in decision-making and in contingency identification and resolution. Space robotics will eventually become a major aspect of this new challenge, so the ability to access digital information will become crucial to mission success. In order to give suited astronauts the ability to operate robots and to access all necessary information for nominal operations and contingencies, this thesis proposes the introduction of In-Field-Of-View Head Mounted Display Systems in current Extravehicular Activity Spacesuits. The system will be capable of feeding task-specific information on request and, through Augmented Reality technology, of recognizing and overlaying information on the real world for error checking and status purposes. The system will increase the astronaut's overall situational awareness and nominal task accuracy, reducing execution time and the risk of human error. The aim of the system is to relieve astronauts of trivial cognitive workload by guiding them through their operations and checking their progress. Secondary objectives are the introduction of electronic checklists, the ability to display the status of the suit and surrounding systems, and interaction capabilities. The nature of the system allows an almost unlimited range of further features, giving great flexibility and room for future evolution without major design changes. This work focuses on the preliminary design of an experimental Head Mounted Display and its testing for initial evaluation and comparison with existing information-feed methods. The system will also be integrated and tested in the University of Maryland Space Systems Laboratory MX-2 experimental spacesuit analogue.

    3D visualization of in-flight recorded data.

    Humans acquire information more easily by being shown an object than by reading a description of it. The brain stores the images that the eyes see, and through mental imagery people can later analyze that information. This is why visualization is important and powerful: it helps people remember the scene later. Visualization transforms the symbolic into the geometric, enabling researchers to observe their simulations and computations (Flurchick, 2001). As a consequence, many computer scientists and programmers devote their time to building better visualizations of data for users. Flight data from an aircraft are better understood as 3D computer graphics than as mere numbers. The flight data consist of several fields such as elapsed time, latitude, longitude, altitude, ground speed, roll angle, pitch angle, heading, wind speed, and so on. Filtering these variables is the first step of visualization, used to gather the important information. The processed data are then transformed into 3D graphics by generating Keyhole Markup Language (KML) files in the system. KML is an XML grammar and file format for modeling and storing geographic features such as points, lines, images, polygons, and models for display in Google Earth or Google Maps. Like HTML, KML has a tag-based structure with names and attributes used for specific display purposes. In the present work, new approaches to visualizing flights using Google Earth are developed. Because of limitations of the Google Earth API, great-circle distance calculations and trigonometric functions are implemented to handle position, roll and pitch angles, and a range of camera positions that generate several points of view. Currently, visual representation of flight data relies on 2D graphics, although an aircraft flies in 3D space. The graphical interface allows flight analysts to create ground traces in 2D, and flight ribbons and flight paths with altitude in 3D. Additionally, by incorporating weather information, fog and clouds can be generated as part of the animation effects. With a 3D stereoscopic technique, a realistic visual representation of the flights is realized.
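
    As a minimal illustration of the KML-generation step described above (a generic sketch, not the thesis code; the sample fields and file name are assumptions), the snippet below writes a flight path as a KML LineString with per-point altitude, which Google Earth renders as a 3D track.

        # Minimal KML flight-path writer: (lat, lon, alt) samples -> LineString track.
        flight_samples = [
            (37.6213, -122.3790, 120.0),   # hypothetical (lat, lon, altitude in metres)
            (37.6300, -122.3600, 450.0),
            (37.6450, -122.3300, 900.0),
        ]

        # KML expects "lon,lat,alt" triples separated by whitespace.
        coordinates = " ".join(f"{lon},{lat},{alt}" for lat, lon, alt in flight_samples)

        kml = f"""<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Placemark>
            <name>Flight path</name>
            <LineString>
              <altitudeMode>absolute</altitudeMode>
              <coordinates>{coordinates}</coordinates>
            </LineString>
          </Placemark>
        </kml>"""

        with open("flight_path.kml", "w") as f:
            f.write(kml)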

    Augmented Reality Navigation Interfaces Improve Human Performance In End-Effector Controlled Telerobotics

    On the International Space Station (ISS) and the space shuttles, the National Aeronautics and Space Administration (NASA) has used robotic manipulators extensively to perform payload handling and maintenance tasks. Teleoperating robots requires expert skill, and optimal performance is crucial to mission completion and crew safety. Degradation in performance is observed when manual control is mediated through remote camera views, resulting in poor end-effector navigation quality and extended task completion times. This thesis explores the application of three-dimensional augmented reality (AR) interfaces specifically designed to improve human performance during end-effector controlled teleoperation. A modular telerobotic test bed was developed for this purpose and several experiments were conducted. In the first experiment, the effect of camera placement on end-effector manipulation performance was evaluated. Results show that increasing misalignment between the displayed end-effector axes and the hand-controller axes (display-control misalignment) increases the time required to process a movement input. Simple AR movement cues were found to mitigate the adverse effects of camera-based teleoperation and made performance invariant to misalignment. Applying these movement cues to payload transport tasks correspondingly demonstrated improvements in free-space navigation quality over conventional end-effector control using multiple cameras. Collision-free teleoperation is also a critical requirement in space. To help operators guide robots safely, a novel method was evaluated: navigation plans computed by a planning agent are presented to the operator sequentially through an AR interface. In combination with the interface, the plans allow the operator to guide the end-effector safely through collision-free regions of the remote environment. Experimental results show significant benefits in control performance, including reduced path deviation and travel distance. Overall, the results show that AR interfaces can improve performance during manual control of remote robots and have tremendous potential in current and future teleoperated space robotic systems, as well as in contemporary military and surgical applications.
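
    A minimal sketch of the kind of display-control misalignment compensation that the AR movement cues address (an illustrative assumption, not the method evaluated in the thesis): hand-controller inputs expressed in the display (camera) frame are rotated into the robot frame, so that "push right on the screen" moves the end-effector to the right on the screen regardless of where the camera is placed.

        import numpy as np

        def compensate_misalignment(stick_input_display, camera_yaw_deg):
            """Rotate a 2D hand-controller input from the display (camera) frame
            into the robot base frame, assuming the camera is rotated about the
            vertical axis by camera_yaw_deg relative to the base frame.
            """
            theta = np.radians(camera_yaw_deg)
            rotation = np.array([[np.cos(theta), -np.sin(theta)],
                                 [np.sin(theta),  np.cos(theta)]])
            return rotation @ np.asarray(stick_input_display, dtype=float)

        # With a camera rotated 90 degrees, "right" on the display maps to "forward"
        # in the robot base frame.
        print(compensate_misalignment([1.0, 0.0], 90.0))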

    Augmented reality device for first response scenarios

    A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment, and to provide capability for user tracking. Areas of applicability include primarily first response scenarios, with possible applications in maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to non-invasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and of the user's position. With the system, the user has on-demand access to information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualization of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality (AR)[1] techniques, incorporating a video see-through Head Mounted Display (HMD) and a finger-bending sensor glove.[*]
    [1] Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. Advanced research includes the use of motion-tracking data, fiducial marker recognition using machine vision, and the construction of controlled environments containing any number of sensors and actuators. (Source: Wikipedia)
    [*] This dissertation is a compound document (it contains both a paper copy and a CD). The CD requires Adobe Acrobat, Microsoft Office, and Windows Media Player or RealPlayer.
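
    A minimal sketch of the fiducial-marker localisation idea described above (an illustrative assumption, not the dissertation's implementation): given a marker's surveyed pose in the building map and the marker's pose as estimated by the wearable camera, the user's position in the map frame follows by composing the two transforms.

        import numpy as np

        def user_pose_in_map(T_map_marker, T_camera_marker):
            """Compose transforms to locate the camera (user) in the building map.

            T_map_marker    : 4x4 pose of the fiducial marker in the map frame (surveyed).
            T_camera_marker : 4x4 pose of the marker in the camera frame (from vision).
            Returns the 4x4 pose of the camera in the map frame.
            """
            T_marker_camera = np.linalg.inv(T_camera_marker)
            return T_map_marker @ T_marker_camera

        # Example: marker surveyed 2 m along x in the map; camera sees the marker
        # 1 m straight ahead (along the camera z axis).
        T_map_marker = np.eye(4)
        T_map_marker[0, 3] = 2.0
        T_camera_marker = np.eye(4)
        T_camera_marker[2, 3] = 1.0
        print(user_pose_in_map(T_map_marker, T_camera_marker)[:3, 3])  # camera position in the map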

    DIVE on the internet

    This dissertation reports research and development of a platform for Collaborative Virtual Environments (CVEs). It has particularly focused on two major challenges: supporting the rapid development of scalable applications and easing their deployment on the Internet. This work employs a research method based on prototyping and refinement and promotes the use of this method for application development. A number of the solutions herein are in line with other CVE systems. One of the strengths of this work lies in its global approach to the issues raised by CVEs and the recognition that such complex problems are best tackled using a multi-disciplinary approach that understands both user and system requirements. CVE application deployment is aided by an overlay network that is able to complement any IP multicast infrastructure in place. Apart from complementing the weakly deployed worldwide multicast, this infrastructure provides a certain degree of introspection, remote control and visualisation. As such, it forms an important aid in assessing the scalability of running applications. This scalability is further facilitated by specialised object-distribution algorithms and an open framework for the implementation of novel partitioning techniques. CVE application development is eased by a scripting language, which enables rapid development and favours experimentation. This scripting language interfaces with many aspects of the system and enables the prototyping of distribution-related components as well as user interfaces. It is the key construct of a distributed environment to which components, written in different languages, connect and on which they operate in a network-abstracted manner. The solutions proposed are exemplified and strengthened by three collaborative applications. The Dive room system is a virtual environment modelled after the room metaphor and supporting asynchronous and synchronous cooperative work. WebPath is a companion application to a Web browser that seeks to make the current history of page visits more visible and usable. Finally, the London travel demonstrator supports travellers by providing an environment where they can explore the city, use group collaboration facilities, rehearse particular journeys and access tourist information data.
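
    As an illustration of the kind of object-distribution and partitioning technique mentioned above (a generic grid-based sketch under assumed names and cell size, not DIVE's actual algorithm), each object maps to a partition derived from its position, and a client subscribes only to the partitions overlapping its area of interest.

        # Grid-based interest management: objects map to cells, and a client
        # subscribes only to the cells overlapping its area of interest.
        CELL_SIZE = 10.0   # metres per grid cell (assumed)

        def cell_of(position):
            x, y, _ = position
            return (int(x // CELL_SIZE), int(y // CELL_SIZE))

        def cells_of_interest(position, radius):
            """All grid cells overlapping a square of half-width `radius` around `position`."""
            cx, cy = cell_of(position)
            reach = int(radius // CELL_SIZE) + 1
            return {(cx + dx, cy + dy)
                    for dx in range(-reach, reach + 1)
                    for dy in range(-reach, reach + 1)}

        # A client at (12, 7, 0) with a 15 m interest radius subscribes to these cells;
        # only updates for objects inside them need to be delivered to that client.
        print(cells_of_interest((12.0, 7.0, 0.0), 15.0))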

    Automatic Plant Annotation Using 3D Computer Vision


    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control of heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve on the current approach to teleoperating heterogeneous robot teams.