A Linear Model of Magnetostrictive Actuators for Active Vibration Control
If there is one actuator technology that is almost exclusively linked to a single application, it is the magnetostrictive actuator, and the application is active structural vibration control (AVC). Almost all the applications described in the literature on magnetostrictive actuators are related, in one way or another, to vibration suppression mechanisms. Magnetostrictive actuators (MAs) deliver high output forces and relatively large displacements (compared to other emerging actuator technologies) and can be driven at high frequencies. These characteristics make them suitable for a variety of vibration control applications. The use of this technology, however, requires accurate knowledge of the dynamics of such actuators. The paper introduces a linear model of magnetostrictive actuators that holds in a range of frequencies below 2 kHz and is useful in real-time applications such as AVC. The hypotheses supporting the linearity of the system are discussed and the theoretical model is presented. Finally, the model is validated by testing two different models of magnetostrictive actuators and comparing the experimental results with the theoretical ones.
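As a point of reference, the linear regime of a magnetostrictive transducer is often described by the standard linearized (piezomagnetic) constitutive equations below; this is a common textbook linearization, not necessarily the exact formulation adopted in the paper.

\begin{aligned}
\varepsilon &= s^{H}\,\sigma + d\,H \\
B &= d\,\sigma + \mu^{\sigma}\,H
\end{aligned}

Here \varepsilon is the strain, \sigma the stress, H the applied magnetic field, B the magnetic flux density, s^{H} the compliance at constant field, d the piezomagnetic coefficient and \mu^{\sigma} the permeability at constant stress.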
A Deep-Learning Framework to Predict the Dynamics of a Human-Driven Vehicle Based on the Road Geometry
Many trajectory forecasting methods, implementing deterministic and stochastic models, have been presented in the last decade for automotive applications. In this work, a deep-learning framework is proposed to model and predict the evolution of the coupled driver-vehicle system dynamics. In particular, we aim to describe how the road geometry affects the actions performed by the driver. Unlike other works, the problem is formulated in such a way that the user may specify the features of interest. Nonetheless, we propose a set of features commonly used in automotive control applications to show the algorithm at work. To solve the prediction problem, a deep recurrent neural network based on Long Short-Term Memory autoencoders is designed. It fuses the information on the road geometry and the past driver-vehicle system dynamics to produce context-aware predictions. In addition, the complexity of the neural network is constrained to favour its use in online control tasks. The efficacy of the proposed approach was verified in a case study centered on motion cueing algorithms, using a dataset collected during test sessions of a non-professional driver on a dynamic driving simulator. A 3D track with complex geometry was employed as the driving environment to render the prediction task challenging. Finally, the robustness of the neural network to changes in the driver and track was investigated to set guidelines for future work.
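For illustration, the sketch below shows one plausible shape of such a context-aware predictor: two LSTM encoders (one over past driver-vehicle states, one over upcoming road-geometry samples) whose final hidden states are fused and decoded into a sequence of future states. All layer sizes, feature dimensions and the fusion scheme are assumptions made for the example, not the architecture described in the paper.

import torch
import torch.nn as nn

class ContextAwarePredictor(nn.Module):
    # Encoder over past driver-vehicle states + encoder over upcoming road
    # geometry; the two latent codes are fused and decoded into future states.
    def __init__(self, state_dim=6, geom_dim=4, hidden=64, horizon=20):
        super().__init__()
        self.horizon = horizon
        self.state_enc = nn.LSTM(state_dim, hidden, batch_first=True)
        self.geom_enc = nn.LSTM(geom_dim, hidden, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.dec = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, past_states, road_geometry):
        _, (h_s, _) = self.state_enc(past_states)      # final hidden state of past dynamics
        _, (h_g, _) = self.geom_enc(road_geometry)     # final hidden state of road geometry
        z = torch.tanh(self.fuse(torch.cat([h_s[-1], h_g[-1]], dim=-1)))
        dec_in = z.unsqueeze(1).repeat(1, self.horizon, 1)   # repeat latent over the horizon
        out, _ = self.dec(dec_in)
        return self.head(out)                           # (batch, horizon, state_dim)

# Example with random tensors: batch of 8, 50 past steps, 30 geometry samples ahead.
model = ContextAwarePredictor()
future = model(torch.randn(8, 50, 6), torch.randn(8, 30, 4))
print(future.shape)  # torch.Size([8, 20, 6])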
Learning to drive via Apprenticeship Learning and Deep Reinforcement Learning
With the implementation of reinforcement learning (RL) algorithms, current state-of-the-art autonomous vehicle technology has the potential to get closer to full automation. However, most applications have been limited to game domains or discrete action spaces, which are far from real-world driving. Moreover, it is very hard to tune the parameters of the reward mechanism, since driving styles vary widely among users. For instance, an aggressive driver may prefer driving with high acceleration, whereas a conservative driver prefers a safer driving style. Therefore, we propose an approach that combines apprenticeship learning with deep reinforcement learning and allows the agent to learn driving and stopping behaviors with continuous actions. We use the gradient inverse reinforcement learning (GIRL) algorithm to recover the unknown reward function and employ REINFORCE as well as the Deep Deterministic Policy Gradient (DDPG) algorithm to learn the optimal policy. The performance of our method is evaluated in a simulation-based scenario, and the results demonstrate that, after training, the agent drives in a human-like manner and even better in some respects.
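To make the reward-recovery idea concrete, the snippet below sketches the feature-based linear reward model that apprenticeship-learning and IRL methods typically assume. The feature names and weights are invented for illustration; the actual GIRL step that recovers the weights from expert demonstrations is only indicated in a comment.

import numpy as np

def features(state, action):
    # Handcrafted driving features (invented for this example): speed error,
    # lateral offset and longitudinal jerk, each penalized quadratically.
    speed_err, lat_offset = state
    jerk = action[0]
    return np.array([-speed_err**2, -lat_offset**2, -jerk**2])

def reward(state, action, w):
    # Linear reward model r(s, a) = w . phi(s, a), as assumed by feature-based IRL.
    return float(w @ features(state, action))

# In the paper's setting the weights w would be recovered by GIRL from expert
# demonstrations; here they are placeholders. The resulting reward would then
# be handed to REINFORCE or DDPG to learn a continuous-action policy.
w = np.array([1.0, 0.5, 0.1])
print(reward(np.array([0.2, 0.05]), np.array([0.3]), w))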
Optimal strategies to steer and control water waves
In this paper, we propose a new method for controlling surface water waves and their interaction with floating bodies. A floating target rigid body is surrounded by a control region where we design three control strategies of increasing complexity: an active strategy based on controlling the pressure at the air–water interface, and two passive strategies where an additional controlled floating device is designed. This device is modeled both as a membrane and as a thin plate, and the effect of this modeling choice on the performance of the overall controlled system is analyzed. We frame this problem as an optimal control problem whose underlying state dynamics is represented by a system of coupled partial differential equations describing the interaction between the surface water waves and the floating target body in the frequency domain. An additional intermediate coupling is then added when considering the control floating device. The optimal control problem aims at minimizing a cost functional which weights the unwanted motions of the floating body. A system of first-order necessary optimality conditions is derived and numerically solved using the finite element method. The efficacy of this new method for reducing hydrodynamic loads on floating objects is shown through numerical simulations.
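In abstract form, a PDE-constrained optimal control problem of this kind and its first-order conditions can be written as below; the notation (state q, control u, adjoint \lambda) is generic and only indicates the structure, not the paper's specific equations.

\begin{aligned}
&\min_{u}\; J(q,u) \quad \text{subject to} \quad e(q,u)=0,\\
&\mathcal{L}(q,\lambda,u)=J(q,u)+\langle\lambda,\,e(q,u)\rangle,\\
&\partial_{q}\mathcal{L}=0\;(\text{adjoint equation}),\quad
 \partial_{\lambda}\mathcal{L}=0\;(\text{state equation}),\quad
 \partial_{u}\mathcal{L}=0\;(\text{optimality condition}).
\end{aligned}

The finite element method mentioned in the abstract would then be used to discretize the resulting coupled state-adjoint system.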
Model Predictive Control Strategies for Electric Endurance Race Cars Accounting for Competitors Interactions
This paper presents model predictive control strategies for battery electric
endurance race cars accounting for interactions with the competitors. In
particular, we devise an optimization framework capturing the impact of the
actions of the ego vehicle when interacting with competitors in a probabilistic
fashion, jointly accounting for the optimal pit stop decision making, the
charge times and the driving style in the course of the race. We showcase our
method for a simulated 1h endurance race at the Zandvoort circuit, using
real-life data of internal combustion engine race cars from a previous event.
Our results show that optimizing both the race strategy and the decision making during the race is very important, resulting in a significant 21 s advantage over an always-overtake approach, whilst revealing the competitiveness of e-race cars with respect to conventional ones.
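As a toy illustration of one ingredient of such a framework, the pace-versus-charging trade-off, the snippet below minimizes a simple race-time model over the energy spent per lap. The lap-time law, battery size and charging power are invented numbers, and the paper's probabilistic modeling of competitor interactions and pit-stop decisions is not reproduced here.

from scipy.optimize import minimize_scalar

LAPS = 30            # race length in laps (invented)
BATTERY_KWH = 40.0   # usable energy before a recharge is needed (invented)
CHARGE_KW = 300.0    # charging power during a pit stop (invented)

def lap_time(e_per_lap):
    # Toy affine pace model: spending more energy per lap makes the lap faster.
    return 110.0 - 4.0 * e_per_lap          # seconds, e_per_lap in kWh

def race_time(e_per_lap):
    driving = LAPS * lap_time(e_per_lap)
    recharge = max(LAPS * e_per_lap - BATTERY_KWH, 0.0)   # energy beyond the first charge
    charging = recharge / CHARGE_KW * 3600.0               # time spent charging, in seconds
    return driving + charging

res = minimize_scalar(race_time, bounds=(0.5, 2.0), method="bounded")
print(res.x, res.fun)   # optimal energy per lap and the resulting race time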
RobustStateNet: Robust ego vehicle state estimation for Autonomous Driving
Control of an ego vehicle for Autonomous Driving (AD) requires accurate knowledge of its state. The implementation of various model-based Kalman Filtering (KF) techniques for state estimation is prevalent in the literature. These algorithms use measurements from an IMU and input signals from steering and wheel encoders for motion prediction with physics-based models, and a Global Navigation Satellite System (GNSS) for global localization. Such methods are widely investigated and mainly focus on increasing the accuracy of the estimation. Ego-motion prediction in these approaches does not model sensor failure modes and assumes completely known dynamics with given motion and measurement noise models. In this work, we propose a novel Recurrent Neural Network (RNN) based motion predictor that models the sensor measurement dynamics in parallel and selectively fuses the features to increase the robustness of the prediction, in particular in scenarios with sensor failures. This motion predictor is integrated into a KF-like framework, RobustStateNet, that takes a global position from the GNSS sensor and updates the predicted state. We demonstrate that the proposed state estimation routine outperforms the model-based KF and the KalmanNet architecture in terms of estimation accuracy and robustness. The proposed algorithms are validated on a modified NuScenes CAN bus dataset designed to simulate various types of sensor failures.
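The sketch below shows the generic KF-style predict/update skeleton such a framework builds on, with the predict step standing in for the learned RNN motion predictor. The constant-velocity model, the noise covariances and the fake GNSS fixes are assumptions for illustration, not the paper's setup.

import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])              # state [x, y, vx, vy], constant-velocity stand-in
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])           # GNSS measures position only
Q = 0.01 * np.eye(4)                       # process noise (assumed)
R = 1.0 * np.eye(2)                        # GNSS measurement noise (assumed)

def predict(x, P):
    # In a RobustStateNet-like scheme this step would come from the learned RNN
    # motion predictor; a plain constant-velocity model stands in for it here.
    return F @ x, F @ P @ F.T + Q

def update(x_pred, P_pred, z):
    # Standard KF-style correction with the GNSS position fix z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(4) - K @ H) @ P_pred
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in (np.array([0.1, 0.0]), np.array([0.2, 0.05])):   # fake GNSS fixes
    x, P = update(*predict(x, P), z)
print(x)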
Brain-computer interface for robot control with eye artifacts for assistive applications
Human-robot interaction is a rapidly developing field and robots have been taking more active roles in our daily lives. Patient care is one of the fields in which robots are becoming more present, especially for people with disabilities. People with neurodegenerative disorders might not consciously or voluntarily produce movements other than those involving the eyes or eyelids. In this context, Brain-Computer Interface (BCI) systems present an alternative way to communicate or interact with the external world. In order to improve the lives of people with disabilities, this paper presents a novel BCI to control an assistive robot with the user's eye artifacts. In this study, eye artifacts that contaminate the electroencephalogram (EEG) signals are considered a valuable source of information thanks to their high signal-to-noise ratio and intentional generation. The proposed methodology detects eye artifacts from EEG signals through the characteristic shapes that occur during these events. Lateral eye movements are distinguished by their ordered peak and valley formation and by the opposite phase of the signals measured at the F7 and F8 channels. To the best of the authors' knowledge, this is the first method that uses this behavior to detect lateral eye movements. For blink detection, a double-thresholding method is proposed to catch weak blinks as well as regular ones, differentiating it from the other algorithms in the literature, which normally use only one threshold. Events detected in real time, with their virtual time stamps, are fed into a second algorithm that further distinguishes double and quadruple blinks from single blinks based on their occurrence frequency. After being tested offline and in real time, the algorithm was implemented on the device. The resulting BCI was used to control an assistive robot through a graphical user interface. Validation experiments with 5 participants show that the developed BCI is able to control the robot.
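As a minimal sketch of the double-thresholding idea, the snippet below labels local maxima of a frontal EEG channel as "weak" or "regular" blinks depending on which of two amplitude thresholds they exceed. The threshold values, the refractory period and the peak criterion are plausible assumptions, not the paper's exact detection rules.

import numpy as np

def detect_blinks(signal, fs, low=60e-6, high=120e-6, min_gap=0.2):
    # Local maxima above the low threshold are blink candidates; those that also
    # exceed the high threshold are labelled "regular", the rest "weak".
    events, last = [], -np.inf
    for i in range(1, len(signal) - 1):
        is_peak = signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]
        if is_peak and signal[i] > low and (i - last) / fs >= min_gap:
            events.append((i / fs, "regular" if signal[i] > high else "weak"))
            last = i
    return events    # list of (time stamp in s, label)

fs = 250
eeg = 20e-6 * np.random.randn(2 * fs)    # 2 s of synthetic frontal-channel noise
eeg[100:110] += 150e-6                   # injected regular blink
eeg[350:360] += 80e-6                    # injected weak blink
print(detect_blinks(eeg, fs))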