17 research outputs found

    Social Interaction-Aware Dynamical Models and Decision Making for Autonomous Vehicles

    Full text link
    Interaction-aware Autonomous Driving (IAAD) is a rapidly growing field of research that focuses on developing autonomous vehicles (AVs) capable of interacting safely and efficiently with human road users. This is a challenging task, as it requires the autonomous vehicle to understand and predict the behaviour of human road users. This literature review surveys the current state of IAAD research. Commencing with an examination of terminology, attention is drawn to challenges and existing models employed for modelling the behaviour of drivers and pedestrians. Next, a comprehensive review is conducted of various techniques proposed for interaction modelling, encompassing cognitive methods, machine learning approaches, and game-theoretic methods. The review concludes with a discussion of the potential advantages and risks associated with IAAD, highlighting pivotal research questions that require future exploration.

    Evaluation of Compressor Blade Profiles characteristics at different operating conditions

    Get PDF
    A CFD investigation of blade profile performance was carried out in order to characterize the aerodynamic behaviour of an axial compressor in a 2D approach. The investigation covers two operating conditions. The first is dry working, where the loss coefficient and flow deflection are investigated as functions of Mach number and incidence-angle variations. The second is wet working, where the collision of water droplets on the blade walls is studied.
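The dry-working quantities mentioned in the abstract have standard cascade definitions; a minimal sketch, assuming the conventional total-pressure loss coefficient and flow deflection (the numerical values below are purely illustrative, not results from the study):

```python
def loss_coefficient(p0_in, p0_out, p_in):
    """Total-pressure loss coefficient for a 2D compressor cascade:
    omega = (p0_in - p0_out) / (p0_in - p_in),
    i.e. stagnation-pressure loss normalised by inlet dynamic head."""
    return (p0_in - p0_out) / (p0_in - p_in)

def flow_deflection(beta_in_deg, beta_out_deg):
    """Flow deflection: turning achieved by the blade row, in degrees."""
    return beta_in_deg - beta_out_deg

# Illustrative cascade numbers (Pa and degrees)
omega = loss_coefficient(p0_in=101325.0, p0_out=100825.0, p_in=95000.0)
eps = flow_deflection(beta_in_deg=45.0, beta_out_deg=20.0)
```

Sweeping the inlet Mach number and incidence angle and recording `omega` and `eps` at each point reproduces the kind of characteristic maps the study describes.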

    Deep Learning assisted closed chain inverse dynamics for Biomechanical analysis during object manipulation

    No full text
    To design efficient and safe systems capable of interacting with people, many researchers and engineers are interested in developing accurate human-machine interaction models that could support the better design of such systems. These models often include humans; it is therefore important to have suitable tools for performing a biomechanical analysis of the human body, so that the ergonomic risk associated with a task can be classified. Such an analysis can be tailored to specific applications, ranging from human-robot interaction to the assessment of ergonomic risk using robotic technologies and methodologies. Recent advances in wearable technologies allow for accurate and fast motion tracking. This is an enabling technology that makes it possible to obtain a biomechanical analysis of the human based on an inverse dynamics approach. The remaining critical problems are the determination of external loads and the handling of kinematic loops, which add internal loads to be considered in the analysis. This thesis proposes a method to analyze the biomechanics of a human based on information gathered from wearable sensors, estimating both the kinematic variables and the external loads needed to solve the inverse dynamics of the human. In this thesis, the musculoskeletal system is modelled as a tree in which bones are rigid bodies and the action of muscles is summarized by wrenches applied at the spherical joints that connect the bodies. Based on the data coming from a network of wearable inertial sensors that capture the human motion and on the video stream of an egocentric camera, a procedure for computing the inverse dynamics of the human body was developed. This procedure takes into account four ground support cases, including single, double, and no support, and four typical load conditions, under the assumption that the applied external load is due to an object carried by the user.
For external load recognition, a computer vision approach based on deep learning techniques was used. A camera with a field of view and resolution sufficient for image classification was selected. To recognize the carried object, a set of images was recorded and labeled, and a state-of-the-art convolutional neural network (YOLO) was trained on this set. A dictionary containing the inertial properties of each object class in this set was created to provide the necessary information to the inverse dynamics algorithm. In addition to the information about the carried object, this algorithm takes as inputs the positions, velocities, and accelerations of the human body segments (links). The positions are obtained from the inertial sensors, whereas joint velocities and accelerations are obtained through filtering. To facilitate the integration of the proposed procedure with robot models in human-robot interaction tasks, a modular environment, ROS, was selected. With a view to using this procedure in human-robot interaction tasks that may impose real-time constraints, and after a few tests with a simple robotic arm, Gazebo's dynamics simulator was found to be a suitable choice for solving the inverse dynamics of the human body. An algorithm for the biomechanical analysis of the human was therefore studied. This algorithm is implemented in C++ and exploits the ROS/Orocos KDL package. Since Orocos KDL requires URDF files for the definition of the mechanism and the data coming from the inertial sensors are in BVH format, MATLAB scripts were created to convert BVH files to URDF. Although the implemented algorithms are applied to the human under the assumption of no contact with the external world except for the feet, the proposed method applies to more complex situations.
Indeed, this method allows loads to be applied to any of the bodies composing the mechanism (not necessarily a human), while allowing the user to provide policies for resolving the redundancies that occur when multiple contact points with the environment are present. For the development and validation of the method, three activities were carried out. In the first, images of the objects to be recognized were recorded and labeled to train and test the object detection algorithm. In the second, human motion was simulated by applying a known trajectory to test the inverse dynamics algorithm. In the last, a participant was equipped with the body sensor network, and the gathered data were used as input to the proposed method. Object detection proved robust in the (laboratory) experimental conditions, with correct-association rates higher than 77% for all classes and a false negative rate smaller than 6%. The human motion simulation test showed that the proposed method provides the correct wrenches in the different support conditions while taking the human's dynamics into account. Finally, the participant tests showed that the upper body wrenches are correctly computed, whereas the lower body wrenches suffer from a gait segmentation problem that was not addressed in this prototyping phase.
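The external-load step described in the abstract amounts to looking up the detected object's inertial properties and turning them into a wrench for the inverse dynamics solver. A minimal sketch of that idea; the dictionary contents, class names, and function are illustrative assumptions, not the thesis code:

```python
import numpy as np

# Hypothetical inertial-properties dictionary keyed by detected object
# class, mirroring the lookup the thesis describes (values are made up).
OBJECT_INERTIA = {
    "box":    {"mass": 5.0},   # kg
    "bottle": {"mass": 1.0},
}

def carried_object_force(obj_class, com_accel,
                         g=np.array([0.0, 0.0, -9.81])):
    """Force part of the external wrench exerted by a carried object:
    f = m * (a_com - g), i.e. Newton's second law including weight."""
    m = OBJECT_INERTIA[obj_class]["mass"]
    return m * (com_accel - g)

# Object held statically (zero COM acceleration): the force reduces to
# supporting the object's weight, +49.05 N along z for a 5 kg box.
f = carried_object_force("box", com_accel=np.zeros(3))
```

In the full pipeline this wrench would be applied at the hands and propagated through the kinematic tree by the inverse dynamics solver.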

    The altar of the god Bolgolivs from the Pieve di Santa Maria del Bigolio at Orzivecchi (Bs)

    No full text
    The 2018 archeological survey conducted at the Pieve near Orzivecchi (Brescia) revealed a diverse array of archeological materials dating from several historical periods. In particular, an altar dedicated to the indigenous god Bolgolius by Tertius Donnedo Tertulli f. was discovered. On linguistic grounds, the theonym Bolgolius probably has Celtic origins and perhaps a connection with the god Mercurius.

    Interaction-aware Decision-making for Automated Vehicles using Social Value Orientation

    Get PDF
    Motion control algorithms in the presence of pedestrians are critical for the development of safe and reliable Autonomous Vehicles (AVs). Traditional motion control algorithms rely on manually designed decision-making policies which neglect the mutual interactions between AVs and pedestrians. On the other hand, recent advances in Deep Reinforcement Learning allow for the automatic learning of policies without manual design. To tackle the problem of decision-making in the presence of pedestrians, the authors introduce a framework based on Social Value Orientation and Deep Reinforcement Learning (DRL) that is capable of generating decision-making policies with different driving styles. The policy is trained using state-of-the-art DRL algorithms in a simulated environment. A novel computationally efficient pedestrian model that is suitable for DRL training is also introduced. We perform experiments to validate our framework and conduct a comparative analysis of the policies obtained with two different model-free Deep Reinforcement Learning algorithms. Simulation results show how the developed model exhibits natural driving behaviours, such as short-stopping, to facilitate the pedestrian's crossing.
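Social Value Orientation is commonly formulated as an angular blend of the agent's own reward and the other agent's reward; a minimal sketch under that standard formulation (the paper's exact reward terms are not reproduced here):

```python
import math

def svo_reward(r_av, r_ped, svo_deg):
    """Blend ego and pedestrian rewards by the SVO angle (a common
    formulation): r = cos(theta) * r_self + sin(theta) * r_other.
    svo = 0 deg is purely egoistic; 45 deg weights both equally."""
    theta = math.radians(svo_deg)
    return math.cos(theta) * r_av + math.sin(theta) * r_ped

# Same raw rewards, two driving styles
r_ego = svo_reward(r_av=1.0, r_ped=0.5, svo_deg=0.0)   # egoistic agent
r_pro = svo_reward(r_av=1.0, r_ped=0.5, svo_deg=45.0)  # prosocial agent
```

Training the same DRL algorithm with different SVO angles is one way such a framework can produce the range of driving styles the abstract mentions.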

    Human-centric Autonomous Driving in an AV-Pedestrian Interactive Environment Using SVO

    Get PDF
    As Autonomous Vehicles (AVs) become a reality, the design of efficient motion control algorithms will have to deal with the unpredictable and interactive nature of other road users. Current AV motion planning algorithms suffer from the freezing-robot problem, as they often tend to overestimate collision risks. To tackle this problem and design AVs that behave in a human-like manner, we integrate a concept from psychology called Social Value Orientation into the Reinforcement Learning (RL) framework. The addition of a social term in the reward function design allows us to tune the AV's behaviour towards the pedestrian from reckless to extremely prudent. We train the vehicle agent with a state-of-the-art RL algorithm and show that Social Value Orientation is an effective tool for obtaining pro-social AV behaviour.

    Coupling intention and actions of vehicle-pedestrian interaction: A virtual reality experiment study

    No full text
    The interactions between vehicles and pedestrians are complex due to their interdependence and coupling. Understanding these interactions is crucial for the development of autonomous vehicles, as it enables accurate prediction of pedestrian crossing intentions, more reasonable decision-making, and human-like motion planning at unsignalized intersections. Previous studies have devoted considerable effort to analyzing vehicle and pedestrian behavior and developing models to forecast pedestrian crossing intentions. However, these studies have two limitations. First, they mainly focus on investigating variables that explain pedestrian crossing behavior rather than on predicting pedestrian crossing intentions. Second, some factors used to establish decision-making models in these studies, such as age, sensation seeking, and social value orientation, are not easily accessible in real-world scenarios. In this paper, we explored the critical factors influencing the decision-making processes of human drivers and pedestrians by using virtual reality technology. To do this, we considered readily available kinematic variables and analyzed the internal relationship between motion parameters and pedestrian behavior. The results indicate that longitudinal distance and vehicle acceleration are the most influential factors in pedestrian decision-making, while pedestrian speed and longitudinal distance also play a crucial role in determining whether the vehicle yields. Furthermore, a mathematical relationship between a pedestrian's intention and kinematic variables is established for the first time, which can help dynamically assess when pedestrians desire to cross. Finally, the results obtained in the driver-yielding behavior analysis provide valuable insights for autonomous vehicle decision-making and motion planning.
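A mapping from kinematic variables to crossing intention, of the kind the abstract describes, is often expressed as a logistic function of the influential factors. A minimal sketch; the coefficients below are illustrative placeholders, not the paper's fitted values:

```python
import math

def crossing_intention(distance_m, accel_ms2,
                       w_d=0.15, w_a=-0.8, b=-2.0):
    """Logistic mapping from kinematics to crossing probability.
    Placeholder weights encode the qualitative finding: a larger
    longitudinal gap raises intention, vehicle acceleration lowers it."""
    z = w_d * distance_m + w_a * accel_ms2 + b
    return 1.0 / (1.0 + math.exp(-z))

# Large gap, constant speed vs. small gap, accelerating vehicle
p_far = crossing_intention(distance_m=40.0, accel_ms2=0.0)
p_near = crossing_intention(distance_m=5.0, accel_ms2=1.0)
```

Evaluating such a model online would give the dynamic assessment of when pedestrians desire to cross that the paper targets.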

    A virtual reality framework for human-driver interaction research: safe and cost-effective data collection

    No full text
    The advancement of automated driving technology has led to new challenges in the interaction between automated vehicles and human road users. However, there is currently no complete theory that explains how human road users interact with vehicles, and studying them in real-world settings is often unsafe and time-consuming. This study proposes a 3D Virtual Reality (VR) framework for studying how pedestrians interact with human-driven vehicles. The framework uses VR technology to collect data in a safe and cost-effective way, and deep learning methods are used to predict pedestrian trajectories. Specifically, graph neural networks have been used to model pedestrians' future trajectories and the probability of crossing the road. The results of this study show that the proposed framework can be used to collect high-quality data on pedestrian-vehicle interactions in a safe and efficient manner. These data can then be used to develop new theories of human-vehicle interaction and to aid Autonomous Vehicle research.
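Graph neural networks for interaction-aware prediction typically let each agent aggregate its neighbours' features before decoding a trajectory. A schematic single message-passing layer, not the paper's architecture (shapes and the average-aggregation rule are assumptions for illustration):

```python
import numpy as np

def message_passing_step(node_feats, adjacency, weight):
    """One message-passing step: each node averages the features of its
    neighbours (plus itself), applies a linear map, then a tanh."""
    n = adjacency.shape[0]
    a = adjacency + np.eye(n)            # add self-loops
    a = a / a.sum(axis=1, keepdims=True)  # row-normalise: mean aggregation
    return np.tanh(a @ node_feats @ weight)

# Two pedestrians and one vehicle with 4-D state features,
# fully connected interaction graph
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
adj = np.ones((3, 3)) - np.eye(3)
out = message_passing_step(feats, adj, weight=rng.normal(size=(4, 4)))
```

Stacking such layers and adding a decoder head would yield the kind of trajectory and crossing-probability predictions the study describes.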