
    Coordination of an Unmanned Vehicle with Active Suspension over Extreme Terrain

    Active suspension is now a well-tried technology in road vehicles. It has been installed on a HMMWV and demonstrated to significantly improve performance in rough road conditions. This capability presents an opportunity for improved mobility in off-road conditions. The challenge is to devise a means of translating the desired trajectory of the vehicle into commands to the suspension actuators and the traction motors in an optimal, or near-optimal, manner. In this paper we describe part of a software architecture that was developed to enable such performance from a six-wheeled vehicle with active suspension and independent wheel drives. The vehicle was a concept developed under the DARPA Unmanned Ground Combat Vehicle Program.
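
    The allocation problem the abstract describes, mapping a desired body motion onto per-wheel commands, can be illustrated with a least-squares force distribution. The sketch below is a minimal illustration under an assumed six-wheel geometry, not the paper's actual architecture; all dimensions and names are invented for the example.

```python
# Minimal sketch of trajectory-to-actuator allocation (hypothetical geometry,
# not the DARPA vehicle's architecture): solve for the six suspension forces
# whose net body wrench best matches the commanded heave/roll/pitch.
import numpy as np

# Wheel positions in the body frame (x forward, y left), metres -- assumed layout.
wheels = np.array([[ 1.2,  0.8], [ 1.2, -0.8],
                   [ 0.0,  0.8], [ 0.0, -0.8],
                   [-1.2,  0.8], [-1.2, -0.8]])

# Each column maps one wheel's vertical force to [heave force, roll moment, pitch moment].
A = np.vstack([np.ones(6),          # heave: forces sum directly
               wheels[:, 1],        # roll moment arm is the lateral offset y
               -wheels[:, 0]])      # pitch moment arm is -x (force along +z)

def allocate(heave_N, roll_Nm, pitch_Nm):
    """Least-squares, minimum-norm distribution of the commanded body wrench."""
    wrench = np.array([heave_N, roll_Nm, pitch_Nm])
    forces, *_ = np.linalg.lstsq(A, wrench, rcond=None)
    return forces  # one vertical force command per wheel

print(allocate(9000.0, 0.0, 400.0))
```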

    SARSCEST (human factors)

    People interact with the processes and products of contemporary technology. Individuals are affected by these in various ways, and individuals shape them. Such interactions come under the label 'human factors'. To expand the understanding of those to whom the term is relatively unfamiliar, its domain includes both an applied science and applications of knowledge. It means both research and development, with implications of research both for basic science and for development. It encompasses not only design and testing but also training and personnel requirements, even though some unwisely try to split these apart both by name and institutionally. The territory includes more than performance at work, though concentration on that aspect, epitomized in the derivation of the term ergonomics, has overshadowed human factors interest in interactions between technology and the home, health, safety, consumers, children and later life, the handicapped, sports and recreation, education, and travel. Two aspects of technology considered most significant for work performance, systems and automation, and several approaches to these, are discussed.

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in vivo by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
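
    The vehicle-in-the-loop pattern the abstract describes can be summarized in a short loop: real dynamics and proprioception come from a vehicle in a motion-capture facility, while camera images are rendered synthetically from the measured pose. The classes below are toy stand-ins, not the actual FlightGoggles API.

```python
# Conceptual sketch of FlightGoggles-style vehicle-in-the-loop simulation.
# Every class is a placeholder: real dynamics are measured by motion capture,
# exteroception is rendered in silico from the measured pose.
import time

class MotionCapture:                        # stand-in for the mocap facility
    def pose(self):
        return (0.0, 0.0, 1.5, 0.0)         # x, y, z, yaw of the real vehicle

class Renderer:                             # stand-in for the photorealistic renderer
    def render(self, pose):
        return f"synthetic image at pose {pose}"

def perception_controller(image):
    return (0.1, 0.0, 1.5, 0.0)             # next setpoint from the perceived image

mocap, renderer = MotionCapture(), Renderer()
for _ in range(3):                          # one measure/render/perceive cycle per tick
    pose = mocap.pose()                     # proprioception from the real vehicle
    image = renderer.render(pose)           # exteroception generated in silico
    setpoint = perception_controller(image) # perception-driven autonomy under test
    time.sleep(1 / 60)                      # nominal 60 Hz loop
```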

    Marine Vessel Inspection as a Novel Field for Service Robotics: A Contribution to Systems, Control Methods and Semantic Perception Algorithms.

    This cumulative thesis introduces a novel field for service robotics: the inspection of marine vessels using mobile inspection robots. In this thesis, three scientific contributions are provided and experimentally verified in the field of marine inspection, but are not limited to this type of application. The inspection scenario is merely a golden thread to combine the cumulative scientific results presented in this thesis. The first contribution is an adaptive, proprioceptive control approach for hybrid leg-wheel robots, such as the robot ASGUARD described in this thesis. The robot is able to deal with rough terrain and stairs, due to the control concept introduced in this thesis. The proposed system is a suitable platform to move inside the cargo holds of bulk carriers and to deliver visual data from inside the hold. Additionally, the proposed system also has stair-climbing abilities, allowing the system to move between different decks. The robot adapts its gait pattern dynamically based on proprioceptive data received from the joint motors and on the pitch and tilt angle of the robot's body during locomotion. The second major contribution of the thesis is an independent ship inspection system, consisting of a magnetic wall-climbing robot for bulkhead inspection, a particle filter based localization method, and a spatial content management system (SCMS) for spatial inspection data representation and organization. The system described in this work was evaluated in several laboratory experiments and field trials on two different marine vessels in close collaboration with ship surveyors. The third scientific contribution of the thesis is a novel approach to structural classification using semantic perception approaches. By these methods, a structured environment can be semantically annotated, based on the spatial relationships between spatial entities and spatial features. This method was verified in the domain of indoor perception (logistics and household environments), for soil sample classification, and for the classification of the structural parts of a marine vessel. The proposed method allows the description of the structural parts of a cargo hold in order to localize the inspection robot or any detected damage. The algorithms proposed in this thesis are based on unorganized 3D point clouds, generated by a LIDAR within a ship's cargo hold. Two different semantic perception methods are proposed in this thesis. One approach is based on probabilistic constraint networks; the second approach is based on Fuzzy Description Logic and spatial reasoning using a spatial ontology about the environment.
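
    As a concrete illustration of the second contribution, the sketch below shows a minimal bootstrap particle filter of the kind the localization method builds on. The one-dimensional motion and measurement models are toy stand-ins, not the thesis' actual models for the wall-climbing robot.

```python
# Minimal bootstrap particle filter: predict with a noisy motion model,
# weight by measurement likelihood, then resample.
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0.0, 10.0, N)       # robot position along a bulkhead, metres
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, control, measurement, noise=0.05, sigma=0.2):
    particles = particles + control + rng.normal(0.0, noise, N)   # predict
    likelihood = np.exp(-0.5 * ((measurement - particles) / sigma) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()                                      # update
    idx = rng.choice(N, N, p=weights)                             # resample
    return particles[idx], np.full(N, 1.0 / N)

particles, weights = pf_step(particles, weights, control=0.1, measurement=5.2)
print(particles.mean())   # position estimate near the measurement
```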

    Haptic robot-environment interaction for self-supervised learning in ground mobility

    Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering. This dissertation presents a system for haptic interaction and self-supervised learning mechanisms to ascertain navigation affordances from depth cues. A simple pan-tilt telescopic arm and a structured-light sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback. The system aims to incrementally develop the ability to assess the cost of navigating in natural environments. For this purpose the robot learns a mapping between the appearance of objects, given sensory data provided by the sensor, and their bendability, perceived by the pan-tilt telescopic arm. The object descriptor, representing the object in memory and used for comparisons with other objects, is rich enough for robust comparison yet simple enough to allow fast computation. The output of the memory-learning mechanism, allied with the haptic interaction-point evaluation, prioritizes interaction points to increase confidence in the interaction and to correctly identify obstacles, reducing the risk of the robot getting stuck or damaged. If the system concludes that the object is traversable, the environment change detection system allows the robot to overcome it. A set of field trials shows the ability of the robot to progressively learn which elements of the environment are traversable.
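
    The self-supervised mechanism described above can be illustrated as a loop that labels appearance descriptors with the outcome of the haptic probe and trains a classifier on the growing memory. The sketch below is a minimal illustration; the descriptor, the force threshold, and the choice of a k-nearest-neighbors classifier are assumptions, not the dissertation's actual design.

```python
# Self-supervised traversability learning: haptic probing provides the labels,
# so no human annotation is needed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

BEND_THRESHOLD = 0.5        # assumed: probe force below this means the object bends away

memory_X, memory_y = [], []                  # incrementally grown experience memory

def record_interaction(descriptor, probe_force):
    memory_X.append(descriptor)
    memory_y.append(int(probe_force < BEND_THRESHOLD))   # 1 = traversable

# Toy experience: soft grass-like vs rigid rock-like appearance descriptors.
rng = np.random.default_rng(1)
for _ in range(20):
    record_interaction(rng.normal(0.2, 0.05, 8), probe_force=0.2)   # bendable
    record_interaction(rng.normal(0.8, 0.05, 8), probe_force=2.0)   # rigid

clf = KNeighborsClassifier(n_neighbors=3).fit(memory_X, memory_y)
print(clf.predict([rng.normal(0.2, 0.05, 8)]))   # expect traversable (1)
```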

    A novel method of sensing and classifying terrain for autonomous unmanned ground vehicles

    Unmanned Ground Vehicles (UGVs) play a vital role in preserving human life during hostile military operations and extend our reach by exploring extraterrestrial worlds during space missions. These systems generally have to operate in unstructured environments containing dynamic variables and unpredictable obstacles, making the seemingly simple task of traversing from A to B extremely difficult. Terrain is one of the biggest obstacles within these environments, as it can cause a vehicle to become stuck and render it useless; therefore autonomous systems must possess the ability to directly sense terrain conditions. Current autonomous vehicles use look-ahead vision systems and passive laser scanners to navigate a safe path around obstacles; however, these methods lack detail when considering terrain, as they make predictions from estimations of the terrain's appearance alone. This study establishes a more accurate method of measuring, classifying and monitoring terrain in real time. A novel instrument for measuring direct terrain features at the wheel-terrain contact interface is presented in the form of the Force Sensing Wheel (FSW). Additionally, a classification method using unique parameters of the wheel-terrain interaction is used to identify and monitor terrain conditions in real time. The combination of the FSW and the real-time classification method facilitates better traversal decisions, creating a more terrain-capable system.
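
    A minimal sketch of the classification idea: summarize each window of wheel-contact force samples with a few statistics and feed them to a classifier. The features, classes, and synthetic signals below are illustrative; the FSW's actual interaction parameters are defined in the study itself.

```python
# Terrain classification from wheel-terrain contact forces: rougher terrain
# produces a more variable force signal, which simple statistics capture.
import numpy as np
from sklearn.svm import SVC

def force_features(window):
    """Summarize one window of vertical contact-force samples."""
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean()]

rng = np.random.default_rng(2)
X, y = [], []
for _ in range(50):
    X.append(force_features(rng.normal(100, 2, 64)));  y.append("asphalt")  # smooth
    X.append(force_features(rng.normal(100, 12, 64))); y.append("gravel")   # rough

clf = SVC().fit(X, y)
print(clf.predict([force_features(rng.normal(100, 12, 64))]))  # expect gravel
```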

    An Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data

    In this thesis, we introduce a novel architecture called Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data (iARTEC). The proposed architecture integrates different terrain characterization and classification approaches with other robotic system components. Within iARTEC, we consider the problem of having a legged robot autonomously learn to identify different terrains. Robust terrain identification can be used to enhance the capabilities of legged robot systems, both in terms of locomotion and navigation. For example, a robot that has learned to differentiate sand from gravel can autonomously modify (or even select a different) path in favor of traversing over a better terrain. The same knowledge of the terrain type can also be used to guide a robot in order to avoid specific terrains. To tackle this problem, we developed four approaches for terrain characterization, classification, path planning, and control for a mobile legged robot. We developed a particle-system-inspired approach to estimate the robot foot-ground contact interaction forces. The approach is derived from the well-known Bekker theory and estimates the contact forces based on its point-contact model concepts. It realistically models real-time three-dimensional contact behaviors between rigid-body objects and the soil. For a real-time-capable implementation of this approach, it is reformulated to use a lookup table generated from simple contact experiments of the robot foot with the terrain. Also, we introduced a short-range terrain classifier using the robot's embodied data. The classifier is based on a supervised machine learning approach to optimize the classifier parameters and train it using proprioceptive sensor measurements. The learning framework preprocesses sensor data through channel reduction and filtering such that the classifier is trained on the feature vectors that are closely associated with the terrain class. For long-range terrain type prediction using the robot's exteroceptive data, we present an online visual terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm which is robust to changes in illumination and viewpoint. For this algorithm, we extract local features of terrains using Speeded-Up Robust Features (SURF), encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs). In addition, we describe a terrain-dependent navigation and path planning approach that is based on the E* planner and employs a proposed metric that specifies the navigation costs associated with terrain types. The generated path naturally avoids obstacles and favors terrains with lower values of the metric. At the low level, a proportional input-scaling controller is designed and implemented to autonomously steer the robot to follow the desired path in a stable manner. iARTEC's performance was tested and validated experimentally using several different sensing modalities (proprioceptive and exteroceptive) on the six-legged robotic platform CREX. The results show that the proposed architecture, integrating the aforementioned approaches with the robotic system, allowed the robot to learn both robot-terrain interaction and remote terrain perception models, as well as the relations linking those models. This learning mechanism is performed according to the robot's own embodied data.
    Based on the knowledge available, the approach makes use of the detected remote terrain classes to predict the most probable navigation behavior. With the assigned metric, the performance of the robot on a given terrain is predicted. This allows the navigation of the robot to be influenced by the learned models. Finally, we believe that iARTEC and the methods proposed in this thesis can likely also be implemented on other robot types (such as wheeled robots), although we did not test this option in our work.
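
    The long-range visual pipeline (local features, Bag of Words encoding, SVM classification) can be sketched as below. ORB stands in for SURF, since SURF requires OpenCV's non-free contrib build, and a Euclidean k-means vocabulary is a simplification for binary ORB descriptors; the vocabulary size is illustrative and real grayscale terrain patches with detectable features are assumed as input.

```python
# Local features -> visual vocabulary -> bag-of-words histograms -> SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

orb = cv2.ORB_create()

def descriptors(image):
    """Local feature descriptors for one grayscale terrain patch."""
    _, desc = orb.detectAndCompute(image, None)
    return desc            # assumed non-None: patches have detectable features

def bow_histogram(desc, vocab):
    words = vocab.predict(desc.astype(np.float32))            # nearest visual word
    return np.bincount(words, minlength=vocab.n_clusters) / len(words)

def train(patches, labels, k=50):
    all_desc = np.vstack([descriptors(p) for p in patches]).astype(np.float32)
    vocab = KMeans(n_clusters=k, n_init=10).fit(all_desc)     # visual vocabulary
    X = [bow_histogram(descriptors(p), vocab) for p in patches]
    return vocab, SVC().fit(X, labels)                        # terrain classifier
```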

    A one decade survey of autonomous mobile robot systems

    Recently, autonomous mobile robots have gained popularity in the modern world due to their relevant technology and applications in real-world situations. The global market for mobile robots is expected to grow significantly over the next 20 years. Autonomous mobile robots are found in many fields, including institutions, industry, business, hospitals, agriculture, and private households, for the purpose of improving day-to-day activities and services. Technological development has raised the requirements for mobile robots because of the services and tasks they provide, such as rescue and research operations, surveillance, and carrying heavy objects. Researchers have conducted many works on the importance of robots, their uses, and their problems. This article aims to analyze the control systems of mobile robots and the way robots achieve their goals by moving in the real world. It should be noted that there are several technological directions in the mobile robot industry that must be observed and integrated so that the robot functions properly: navigation systems, localization systems, detection systems (sensors), along with motion, kinematics, and dynamics systems. All such systems should be united through a control unit; thus, the mission or work of the mobile robot is conducted with reliability.
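
    The integration the article calls for, several subsystems united through one control unit, can be pictured as a simple tick loop. Every class below is a toy stand-in for the corresponding module, not any particular robot's software.

```python
# One control unit composes localization, detection, navigation, and motion.
class Localization:
    def pose(self): return (0.0, 0.0)

class Sensors:
    def obstacles(self): return []

class Navigation:
    def next_waypoint(self, pose, obstacles, goal): return goal

class Motion:
    def drive_towards(self, waypoint): print("driving to", waypoint)

class ControlUnit:
    """Unites the subsystems so the robot's mission is executed reliably."""
    def __init__(self):
        self.loc, self.sense = Localization(), Sensors()
        self.nav, self.motion = Navigation(), Motion()

    def tick(self, goal):
        pose = self.loc.pose()
        waypoint = self.nav.next_waypoint(pose, self.sense.obstacles(), goal)
        self.motion.drive_towards(waypoint)

ControlUnit().tick(goal=(3.0, 4.0))
```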

    Planetary Rover Inertial Navigation Applications: Pseudo Measurements and Wheel Terrain Interactions

    Accurate localization is a critical component of any robotic system. During planetary missions, these systems are often limited by energy sources and slow spacecraft computers. Proprioceptive localization (e.g., using an inertial measurement unit and wheel encoders) without external aiding is insufficient for accurate localization. This is mainly due to the integrated and unbounded errors of the inertial navigation solutions and the drifting position information from wheel encoders caused by wheel slippage. For this reason, planetary rovers often utilize exteroceptive (e.g., vision-based) sensors. On the one hand, localization with proprioceptive sensors is straightforward, computationally efficient, and continuous. On the other hand, using exteroceptive sensors for localization slows rover driving speed, reduces the rover traversal rate, and is sensitive to terrain features. Given the advantages and disadvantages of both methods, this thesis focuses on two objectives: first, improving proprioceptive localization performance without significant changes to rover operations; second, enabling an adaptive traversal rate based on the wheel-terrain interactions while keeping the localization reliable. To achieve the first objective, we utilized zero-velocity updates, zero-angular-rate updates, and the non-holonomicity of a rover to improve localization performance, even with limited available sensors, in a computationally efficient way. Pseudo-measurements generated from proprioceptive sensors while the rover is stationary, together with the non-holonomic constraints while traversing, can be utilized to improve localization performance without any significant changes to rover operations. Through this work, a substantial improvement in localization performance is observed without the aid of additional exteroceptive sensor information. To achieve the second objective, the relationship between the estimated localization uncertainty and the wheel-terrain interactions, through the slip ratio, was investigated. This relationship was modeled with a Gaussian process time-series implementation using slippage estimates gathered while the rover is moving. The method then predicts when to change from moving to stationary conditions by mapping the predicted slippage into a localization uncertainty prediction. Instead of a periodic stopping framework, the method introduced in this work is a slip-aware localization method that makes the rover stop more frequently on high-slip terrains and less frequently on low-slip terrains while keeping the proprioceptive localization reliable.
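
    A zero-velocity update, one of the pseudo-measurements the thesis exploits, can be sketched as a Kalman filter correction in which a stationary rover "measures" zero velocity with high confidence. The one-dimensional state and all matrix values below are toy numbers, not the thesis' actual filter.

```python
# Zero-velocity pseudo-measurement update: when the rover is known to be
# stationary, a velocity measurement of exactly zero corrects the drifting
# inertial state without any exteroceptive sensor.
import numpy as np

x = np.array([10.0, 0.3])                  # drifted state: [position, velocity]
P = np.array([[4.0, 0.9],
              [0.9, 0.5]])                 # covariance with position-velocity correlation
H = np.array([[0.0, 1.0]])                 # the pseudo-measurement observes velocity only
R = np.array([[1e-4]])                     # near-certain: the rover is known to be stopped

z = np.zeros(1)                            # the zero-velocity pseudo-measurement
S = H @ P @ H.T + R                        # innovation covariance
K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
x = x + K @ (z - H @ x)                    # velocity error is pulled toward zero
P = (np.eye(2) - K @ H) @ P                # correlated position uncertainty shrinks too

print(x, np.diag(P))
```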