175 research outputs found

    Applications of artificial intelligence in ship berthing: A review

    Get PDF
Ship berthing operations in restricted waters such as ports require the accurate use of onboard equipment such as rudders, thrusters, and main propulsion. For large ships, the assistance of external support such as tugboats is necessary; however, with the advancement of technology, we may hypothesize that artificial intelligence could support safe ship berthing at ports without dependency on tugboats. In this paper we comprehensively assess and analyze the literature on this topic. Through this review, we seek to present a better understanding of the use of artificial intelligence in ship berthing, especially neural networks and collision avoidance algorithms. We find that the use of global and local path planning combined with an Artificial Neural Network (ANN) may help to achieve collision avoidance while completing ship berthing operations.
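As a rough illustration of the ANN-based controllers this review surveys, the sketch below shows a small feedforward network mapping a ship's state to rudder and propeller commands. It is hypothetical: the state layout, layer sizes, and weights are placeholders (a real controller would be trained on recorded pilot maneuvers), not a method taken from any of the reviewed papers.

```python
# Minimal sketch (not from the paper): a feedforward ANN berthing
# controller of the kind the review surveys. Weights are random
# placeholders standing in for a network trained on expert maneuvers.
import numpy as np

rng = np.random.default_rng(0)

# state: [x_to_berth, y_to_berth, heading_error, surge_speed, yaw_rate]
W1 = rng.normal(scale=0.1, size=(16, 5))   # input -> hidden
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(2, 16))   # hidden -> [rudder, rpm]
b2 = np.zeros(2)

def berthing_command(state):
    """Map ship state to (rudder_angle, propeller_command) in [-1, 1]."""
    h = np.tanh(W1 @ state + b1)
    return np.tanh(W2 @ h + b2)

state = np.array([120.0, 35.0, 0.4, 1.2, 0.01])  # example state
rudder, rpm = berthing_command(state)
```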

    Review of Anthropomorphic Head Stabilisation and Verticality Estimation in Robots

    Get PDF
In many walking, running, flying, and swimming animals, including mammals, reptiles, and birds, the vestibular system plays a central role in verticality estimation and is often associated with a head stabilisation (in rotation) behaviour. Head stabilisation, in turn, subserves gaze stabilisation, postural control, visual-vestibular information fusion, and spatial awareness via the active establishment of a quasi-inertial frame of reference. Head stabilisation helps animals cope with the computational consequences of angular movements that complicate the reliable estimation of the vertical direction. We suggest that this strategy could also benefit free-moving robotic systems, such as locomoting humanoid robots, which are typically equipped with inertial measurement units. Free-moving robotic systems could gain the full benefits of inertial measurements if the measurement units are placed on independently orientable platforms, such as a human-like head. We illustrate these benefits by analysing recent humanoid robot designs and control approaches.
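For context on the verticality-estimation problem the review discusses, one standard way to estimate the gravity direction from an IMU is a complementary filter that fuses gyro integration with the accelerometer's gravity reference. The sketch below is illustrative only, not an algorithm from the paper; the gain alpha and axis conventions are assumptions.

```python
# Illustrative only: a complementary filter is one common way to
# estimate the vertical from an IMU, the problem this review treats.
import math

def complementary_pitch(pitch_prev, gyro_y, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyro integration (smooth, but drifts) with the
    accelerometer's gravity reference (noisy, but drift-free)."""
    pitch_gyro = pitch_prev + gyro_y * dt          # integrate angular rate
    pitch_acc = math.atan2(-accel_x, accel_z)      # gravity-based pitch
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_acc

pitch = 0.0
for gyro_y, ax, az in [(0.02, 0.05, 9.79), (0.01, 0.06, 9.80)]:
    pitch = complementary_pitch(pitch, gyro_y, ax, az, dt=0.01)
```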

    Aerial Vehicles

    Get PDF
This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    Insect inspired visual motion sensing and flying robots

    Get PDF
Flying insects are masters of visual motion sensing. They use dedicated motion processing circuits at low energy and computational cost. Drawing on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic-flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed by a small piezo-bender actuator, was mounted onto a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes, driving the eye's angular position relative to the robot and the robot's body angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on this lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy.
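The local motion sensors mentioned here descend from the Reichardt correlator, the classic elementary motion detector (EMD) model of insect vision. The sketch below implements that generic correlation scheme, not the authors' specific sensor; the time constant and stimulus are illustrative.

```python
# A minimal Reichardt-style elementary motion detector (EMD), the
# classic correlation model behind insect-inspired local motion
# sensors. Parameters and signals are illustrative.
import numpy as np

def emd_response(left, right, tau=5.0, dt=1.0):
    """Correlate each photoreceptor signal with a low-pass-delayed
    copy of its neighbour; the output's sign encodes direction."""
    a = dt / (tau + dt)                       # first-order low-pass gain
    lp_left = np.zeros_like(left)
    lp_right = np.zeros_like(right)
    for t in range(1, len(left)):
        lp_left[t] = lp_left[t-1] + a * (left[t] - lp_left[t-1])
        lp_right[t] = lp_right[t-1] + a * (right[t] - lp_right[t-1])
    return lp_left * right - lp_right * left  # opponent correlation

t = np.arange(200)
left = np.sin(0.2 * t)           # a moving grating reaches the left
right = np.sin(0.2 * (t - 3))    # photoreceptor 3 samples earlier
print(emd_response(left, right).mean())      # > 0: rightward motion
```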

    Towards Target-Driven Visual Navigation in Indoor Scenes via Generative Imitation Learning

    Full text link
We present a target-driven navigation system to improve mapless visual navigation in indoor scenes. Our method takes a multi-view observation of a robot and a target as input at each time step and provides a sequence of actions that move the robot to the target without relying on odometry or GPS at runtime. The system is learned by optimizing a combinational objective encompassing three key designs. First, we propose that an agent conceive the next observation before making an action decision. This is achieved by learning a variational generative module from expert demonstrations. Second, we propose predicting static collision in advance, as an auxiliary task to improve safety during navigation. Moreover, to alleviate the training data imbalance problem of termination action prediction, we introduce a target checking module as an alternative to augmenting the navigation policy with a termination action. The three proposed designs all contribute to improved training data efficiency, static collision avoidance, and navigation generalization performance, resulting in a novel target-driven mapless navigation system. Through experiments on a TurtleBot, we provide evidence that our model can be integrated into a robotic system and navigate in the real world. Videos and models can be found in the supplementary material. Comment: 11 pages, accepted by IEEE Robotics and Automation Letters.
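To make the "combinational objective" concrete, the sketch below combines the three loss terms the abstract names: imitation plus an ELBO-style variational generative term, an auxiliary collision-prediction term, and a target-checking term. Module names, tensor shapes, and weights are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of the combined training objective described
# above. All names and loss weights (w_gen, w_col, w_tgt) are
# placeholders, not taken from the paper's implementation.
import torch
import torch.nn.functional as F

def total_loss(action_logits, expert_action,      # imitation term
               recon, next_obs, mu, logvar,       # generative (ELBO) term
               collision_logit, collided,         # auxiliary safety term
               target_logit, at_target,           # target-checking term
               w_gen=1.0, w_col=0.5, w_tgt=0.5):
    imitation = F.cross_entropy(action_logits, expert_action)
    recon_err = F.mse_loss(recon, next_obs)       # "conceive" next observation
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    collision = F.binary_cross_entropy_with_logits(collision_logit, collided)
    target = F.binary_cross_entropy_with_logits(target_logit, at_target)
    return imitation + w_gen * (recon_err + kl) + w_col * collision + w_tgt * target
```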

    Design and Modeling of Smartphone Controlled Vehicle

    Get PDF
Hybrid aerial vehicles, which integrate two or more operating configurations, have become increasingly widespread due to their expanded flight range and adaptability. When two or more flight modes are present, the transition phases between them are critically important. While many have worked on the transition phases of the more popular hybrid aerial vehicle configurations, in this paper we explore a novel multi-mode hybrid Unmanned Aerial Vehicle (UAV). To fully exploit the vehicle's propulsion equipment and aerodynamic surfaces in both a horizontal cruising configuration and a vertical hovering configuration, we combine a tailless fixed wing with a four-wing monocopter. Because the same structures are used across the whole operational range, the design reduces drag and wasted mass in both modes. The transition between the two flight states can be carried out in midair using only the vehicle's existing flight actuators and sensors. Through a ground controller, the vehicle may be operated from an Android device.
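For a sense of how such midair transitions are supervised, the sketch below is a minimal flight-mode state machine of the kind a multi-mode hybrid UAV needs. The states, guards, and numeric thresholds are purely illustrative assumptions, not values from the vehicle described above.

```python
# Hypothetical sketch: a minimal cruise/hover mode state machine.
# Thresholds (8 m/s, 15 deg, etc.) are illustrative, not the paper's.
from enum import Enum, auto

class Mode(Enum):
    HOVER = auto()        # vertical, monocopter-style flight
    TRANSITION = auto()   # pitch-over maneuver in midair
    CRUISE = auto()       # horizontal fixed-wing flight

def step_mode(mode, airspeed, pitch_deg, cmd):
    """Advance the flight mode given airspeed (m/s), pitch (deg),
    and the operator command ('to_cruise' or 'to_hover')."""
    if mode is Mode.HOVER and cmd == "to_cruise":
        return Mode.TRANSITION
    if mode is Mode.CRUISE and cmd == "to_hover":
        return Mode.TRANSITION
    if mode is Mode.TRANSITION:
        if airspeed > 8.0 and abs(pitch_deg) < 15.0:   # wing is flying
            return Mode.CRUISE
        if airspeed < 2.0 and pitch_deg > 70.0:        # nose-up hover
            return Mode.HOVER
    return mode
```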

    Social robot navigation tasks: combining machine learning techniques and social force model

    Get PDF
© 2021 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Social robot navigation in public spaces, buildings, or private houses is a difficult problem that is not well solved due to environmental constraints (buildings, static objects, etc.), pedestrians, and other mobile vehicles. Moreover, robots have to move in a human-aware manner: that is, they have to navigate in such a way that people feel safe and comfortable. In this work, we present two navigation tasks, social robot navigation and robot accompaniment, which combine machine learning techniques with the Social Force Model (SFM) to allow human-aware social navigation. In both approaches, the robots use data from different sensors to capture knowledge of the environment as well as information about pedestrian motion. Both navigation tasks make use of the SFM, a general framework in which human motion behaviours can be expressed through a set of functions depending on the pedestrians' relative and absolute positions and velocities. Additionally, in both social navigation tasks, the robot's motion behaviour is learned using machine learning techniques: in the first case, supervised deep learning and, in the second, Reinforcement Learning (RL). The machine learning techniques are combined with the SFM to create navigation models that behave in a social manner when the robot is navigating in an environment with pedestrians or accompanying a person. The systems were validated with a large set of simulations and real-life experiments with a new humanoid robot named IVO and with an aerial robot. The experiments show that the combination of SFM and machine learning can solve human-aware robot navigation in complex dynamic environments. This research was supported by the grant MDM-2016-0656 funded by MCIN/AEI/10.13039/501100011033, the grant ROCOTRANSP PID2019-106702RB-C21 funded by MCIN/AEI/10.13039/501100011033, and the grant CANOPIES H2020-ICT-2020-2-101016906 funded by the European Union. Peer reviewed. Postprint (published version).
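Since the abstract describes the SFM as a set of force functions over pedestrians' positions and velocities, the sketch below shows a minimal Helbing-style force computation: relaxation toward a desired velocity plus exponential repulsion from nearby pedestrians. Parameter values (A, B, tau, desired speed) are illustrative defaults, not the values used in this work.

```python
# A minimal Social Force Model force computation in the spirit of
# Helbing's formulation that this work builds on. All parameters
# are illustrative, not the paper's tuned values.
import numpy as np

def social_force(pos, vel, goal, pedestrians,
                 desired_speed=1.0, tau=0.5, A=2.0, B=0.3):
    e = goal - pos
    e /= np.linalg.norm(e)                       # unit vector toward goal
    f_goal = (desired_speed * e - vel) / tau     # relax toward desired velocity
    f_ped = np.zeros(2)
    for p in pedestrians:                        # repulsion from each person
        d = pos - p
        dist = np.linalg.norm(d)
        f_ped += A * np.exp(-dist / B) * (d / dist)
    return f_goal + f_ped

f = social_force(pos=np.array([0.0, 0.0]), vel=np.array([0.5, 0.0]),
                 goal=np.array([5.0, 0.0]),
                 pedestrians=[np.array([1.0, 0.3])])
```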