Parallel Multi-Hypothesis Algorithm for Criticality Estimation in Traffic and Collision Avoidance
Due to current developments in autonomous driving and vehicle active
safety, there is an increasing need for algorithms that can
perform complex criticality predictions in real time. Being able to process
multi-object traffic scenarios aids the implementation of a variety of
automotive applications such as driver assistance systems for collision
prevention and mitigation as well as fall-back systems for autonomous vehicles.
We present a fully model-based algorithm with a parallelizable architecture.
The proposed algorithm can evaluate the criticality of complex, multi-modal
(vehicles and pedestrians) traffic scenarios by simulating millions of
trajectory combinations and detecting collisions between objects. The algorithm
is able to estimate upcoming criticality at very early stages, demonstrating
its potential for vehicle safety-systems and autonomous driving applications.
An implementation on an embedded system in a test vehicle demonstrates, in a
prototypical manner, the compatibility of the algorithm with the hardware
capabilities of modern cars. For a complex traffic scenario with 11 dynamic
objects, more than 86 million pose combinations are evaluated in 21 ms on the
GPU of a Drive PX 2.
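The exhaustive pose-combination check at the core of such an algorithm can be sketched at a much smaller scale as follows. This is a hedged CPU illustration only: the circular object footprints, radii, and pose hypotheses are invented for the example, and the paper's implementation parallelizes this enumeration on a GPU rather than looping in Python.

```python
# Illustrative sketch of multi-hypothesis criticality estimation:
# enumerate combinations of predicted poses (one hypothesis per object)
# and flag any combination in which two objects collide.
from itertools import product
from math import hypot

def is_critical(hypotheses, radius=1.0):
    """hypotheses: list (one entry per object) of lists of (x, y) poses.
    Returns True if any pose combination puts two circular objects of the
    given radius closer than 2*radius, i.e. in collision."""
    for combo in product(*hypotheses):           # pick one pose per object
        for i in range(len(combo)):
            for j in range(i + 1, len(combo)):
                if hypot(combo[i][0] - combo[j][0],
                         combo[i][1] - combo[j][1]) < 2 * radius:
                    return True                  # collision in some hypothesis
    return False

# Two objects whose hypothesis sets overlap -> critical
print(is_critical([[(0.0, 0.0), (5.0, 0.0)], [(0.5, 0.0)]]))  # True
print(is_critical([[(0.0, 0.0)], [(10.0, 0.0)]]))             # False
```

Because each combination is checked independently, the outer enumeration maps naturally onto parallel hardware, which is what makes the GPU timing quoted above plausible.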
Validating Predictions of Unobserved Quantities
The ultimate purpose of most computational models is to make predictions,
commonly in support of some decision-making process (e.g., for design or
operation of some system). The quantities that need to be predicted (the
quantities of interest or QoIs) are generally not experimentally observable
before the prediction, since otherwise no prediction would be needed. Assessing
the validity of such extrapolative predictions, which is critical to informed
decision-making, is challenging. In classical approaches to validation, model
outputs for observed quantities are compared to observations to determine if
they are consistent. By itself, this consistency only ensures that the model
can predict the observed quantities under the conditions of the observations.
This limitation dramatically reduces the utility of the validation effort for
decision making because it implies nothing about predictions of unobserved QoIs
or for scenarios outside of the range of observations. However, there is no
agreement in the scientific community today regarding best practices for
validation of extrapolative predictions made using computational models. The
purpose of this paper is to propose and explore a validation and predictive
assessment process that supports extrapolative predictions for models with
known sources of error. The process includes stochastic modeling, calibration,
validation, and predictive assessment phases where representations of known
sources of uncertainty and error are built, informed, and tested. The proposed
methodology is applied to an illustrative extrapolation problem involving a
misspecified nonlinear oscillator.
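The core difficulty can be made concrete with a toy stand-in for the paper's example: a linear oscillator calibrated against data from a (mildly nonlinear) pendulum validates perfectly at the observed condition yet extrapolates with a bias. The pendulum, observation setup, and amplitudes below are invented for illustration and are not the paper's actual problem.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def true_period(L, theta0):
    # Pendulum period with the first nonlinear amplitude correction.
    return 2 * np.pi * np.sqrt(L / g) * (1 + theta0**2 / 16)

# "Observations": small-amplitude periods from the true (nonlinear) system, L = 1 m
obs = true_period(1.0, np.full(5, 0.1))

# Calibrate the misspecified linear model T = 2*pi*sqrt(L/g) to the observations
L_cal = g * (np.mean(obs) / (2 * np.pi))**2

# Validation at the observed condition: essentially perfect agreement ...
small_err = abs(2 * np.pi * np.sqrt(L_cal / g) - true_period(1.0, 0.1))
# ... but the extrapolative QoI (period at large amplitude) is biased
large_err = abs(2 * np.pi * np.sqrt(L_cal / g) - true_period(1.0, 1.0))
print(small_err < 1e-9, large_err > 0.1)  # True True
```

Classical validation would accept this model, which is exactly why the paper argues for explicitly representing known error sources before trusting extrapolative predictions.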
Context-Aware Trajectory Prediction
Human motion and behaviour in crowded spaces are influenced by several
factors, such as the dynamics of other moving agents in the scene, as well as
the static elements that might be perceived as points of attraction or
obstacles. In this work, we present a new model for human trajectory prediction
which is able to take advantage of both human-human and human-space
interactions. Future human trajectories are generated by observing
past positions and interactions with the surroundings. To this end, we propose
a "context-aware" recurrent neural network (LSTM) model, which can learn and
predict human motion in crowded spaces such as a sidewalk, a museum or a
shopping mall. We evaluate our model on public pedestrian datasets, and we
contribute a new challenging dataset that collects videos of humans that
navigate in a (real) crowded space such as a big museum. Results show that our
approach can predict human trajectories better when compared to previous
state-of-the-art forecasting models.
Comment: Submitted to BMVC 201
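The recurrent backbone of such a trajectory predictor can be sketched in a few lines: encode a sequence of past (x, y) positions with an LSTM cell, then decode the next position with a linear layer. All weights below are random placeholders, and the paper's context pooling of human-human and human-space interactions is omitted; this is only a structural sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8  # hidden size (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stacked gate weights: (2-d position + hidden state) -> 4 LSTM gates
W = rng.normal(scale=0.1, size=(4 * H, 2 + H))
b = np.zeros(4 * H)
W_out = rng.normal(scale=0.1, size=(2, H))  # hidden -> predicted (x, y)

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g          # update cell state
    h = o * np.tanh(c)         # update hidden state
    return h, c

def predict_next(track):
    """Encode an observed track and decode the next position."""
    h, c = np.zeros(H), np.zeros(H)
    for x in track:
        h, c = lstm_step(np.asarray(x, float), h, c)
    return W_out @ h

track = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.2)]
print(predict_next(track).shape)  # (2,)
```

In the paper's model the input at each step would also carry pooled context features, and the weights would of course be trained on pedestrian data rather than sampled at random.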
A Learning-Based Framework for Two-Dimensional Vehicle Maneuver Prediction over V2V Networks
Situational awareness in vehicular networks could be substantially improved
utilizing reliable trajectory prediction methods. More precise situational
awareness, in turn, results in notably better performance of critical safety
applications, such as Forward Collision Warning (FCW), as well as comfort
applications like Cooperative Adaptive Cruise Control (CACC). Therefore,
the vehicle trajectory prediction problem needs to be investigated in depth in
order to arrive at an end-to-end framework with the precision required by the
safety applications' controllers. This problem has been tackled in the
literature using different methods. However, machine learning, which is a
promising and emerging field with remarkable potential for time series
prediction, has not been explored enough for this purpose. In this paper, a
two-layer neural network-based system is developed which predicts the future
values of vehicle parameters, such as velocity, acceleration, and yaw rate, in
the first layer and then predicts the two-dimensional, i.e. longitudinal and
lateral, trajectory points based on the first layer's outputs. The performance
of the proposed framework has been evaluated in realistic cut-in scenarios from
Safety Pilot Model Deployment (SPMD) dataset and the results show a noticeable
improvement in the prediction accuracy in comparison with the kinematics model
which is the dominant model employed by the automotive industry. Both ideal
and non-ideal communication conditions have been investigated in our system
evaluation. For the non-ideal case, an estimation step is included in the framework
before the parameter prediction block to handle the drawbacks of packet drops
or sensor failures and reconstruct the time series of vehicle parameters at a
desirable frequency.
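The abstract does not specify which kinematics model serves as the industry baseline; a common choice is the constant turn rate and velocity (CTRV) model, sketched below under that assumption: propagate position from the current speed and yaw rate, holding both constant over the horizon.

```python
from math import sin, cos

def ctrv_predict(x, y, yaw, v, w, dt, steps):
    """Constant turn rate and velocity (CTRV) rollout.
    x, y: position [m]; yaw: heading [rad]; v: speed [m/s];
    w: yaw rate [rad/s]; returns predicted (x, y) points."""
    traj = []
    for _ in range(steps):
        if abs(w) > 1e-6:
            # Circular-arc update for a nonzero turn rate
            x += v / w * (sin(yaw + w * dt) - sin(yaw))
            y += v / w * (cos(yaw) - cos(yaw + w * dt))
        else:
            # Straight-line limit as w -> 0
            x += v * cos(yaw) * dt
            y += v * sin(yaw) * dt
        yaw += w * dt
        traj.append((x, y))
    return traj

# Straight motion at 10 m/s for 1 s in 0.1 s steps ends near x = 10
print(ctrv_predict(0, 0, 0, 10, 0, 0.1, 10)[-1])  # (10.0, 0.0)
```

A learned predictor like the two-layer network above can outperform this baseline precisely because real maneuvers (e.g. cut-ins) violate the constant-rate assumption.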
Optimal management of bio-based energy supply chains under parametric uncertainty through a data-driven decision-support framework
This paper addresses the optimal management of a multi-objective bio-based energy supply chain network subjected to multiple sources of uncertainty. The complexity of obtaining an optimal solution with traditional uncertainty management methods increases dramatically with the number of uncertain factors considered; as a result, the problem, if tractable at all, is solved only after a large computational effort. In this work, therefore, a data-driven decision-making framework is proposed to address this issue. The framework exploits machine learning techniques to efficiently approximate the optimal management decisions, taking as input a set of uncertain parameters that continuously influence the process behavior. A design-of-computer-experiments technique is used to combine these parameters and produce a matrix of representative information. These data are used to optimize the deterministic multi-objective bio-based energy network problem through conventional optimization methods, leading to a detailed (but elementary) map of the optimal management decisions as a function of the uncertain parameters. Afterwards, the detailed data-driven relations are identified using an Ordinary Kriging meta-model. The resulting parametric meta-models predict the optimal decision variables with very high accuracy in comparison with the traditional stochastic approach. More importantly, a dramatic reduction of the computational effort required to obtain these optimal values in response to changes in the uncertain parameters is achieved. Thus, the proposed data-driven decision tool enables time-effective optimal decision-making, a step forward toward using data-driven strategies in large-scale, complex industrial problems.
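An Ordinary Kriging meta-model of the kind described above can be sketched as follows: given optimal decisions computed at sampled uncertain-parameter values, predict the decision at a new parameter value as a weighted combination of the samples, with weights from the kriging system. The Gaussian covariance and its length scale are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def cov(a, b, length=1.0):
    # Gaussian (squared-exponential) covariance between point sets
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-(d / length) ** 2)

def kriging_predict(X, y, x_new, length=1.0):
    """Ordinary Kriging prediction at x_new from samples (X, y)."""
    X = np.atleast_2d(np.asarray(X, float))
    x_new = np.atleast_2d(np.asarray(x_new, float))
    n = len(X)
    # Kriging system: sample covariances plus a Lagrange row enforcing
    # the unbiasedness constraint sum(weights) = 1.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(X, X, length)
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = cov(x_new, X, length)[0]
    w = np.linalg.solve(K, k)
    return float(w[:n] @ np.asarray(y, float))

# Optimal decisions sampled at three parameter values (toy data)
X = [[0.0], [1.0], [2.0]]
y = [0.0, 1.0, 4.0]
# Kriging interpolates the samples exactly (no nugget term)
print(round(kriging_predict(X, y, [1.0]), 6))  # 1.0
```

Once fitted, evaluating such a meta-model is a small linear solve, which is the source of the computational savings the abstract reports relative to re-running the stochastic optimization.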
Applying Exclusion Likelihoods from LHC Searches to Extended Higgs Sectors
LHC searches for non-standard Higgs bosons decaying into tau lepton pairs
constitute a sensitive experimental probe for physics beyond the Standard Model
(BSM), such as Supersymmetry (SUSY). Recently, the limits obtained from these
searches have been presented by the CMS collaboration in a nearly
model-independent fashion - as a narrow resonance model - based on the full 8
TeV dataset. In addition to publishing a 95% C.L. exclusion limit, the full
likelihood information for the narrow resonance model has been released. This
provides valuable information that can be incorporated into global BSM fits. We
present a simple algorithm that maps an arbitrary model with multiple neutral
Higgs bosons onto the narrow resonance model and derives the corresponding
value for the exclusion likelihood from the CMS search. This procedure has been
implemented into the public computer code HiggsBounds (version 4.2.0 and
higher). We validate our implementation by cross-checking against the official
CMS exclusion contours in three Higgs benchmark scenarios in the Minimal
Supersymmetric Standard Model (MSSM), and find very good agreement. Going
beyond validation, we discuss the combined constraints of the tau tau search
and the rate measurements of the SM-like Higgs at 125 GeV in a recently
proposed MSSM benchmark scenario, where the lightest Higgs boson obtains
SM-like couplings independently of the decoupling of the heavier Higgs states.
Technical details for how to access the likelihood information within
HiggsBounds are given in the appendix. The program is available at
http://higgsbounds.hepforge.org.
Comment: 24 pages, 6 figures; The code can be downloaded from
http://higgsbounds.hepforge.or
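A schematic version of the mapping described above: each neutral Higgs boson contributes a signal rate at its mass, quasi-degenerate states within the experimental mass resolution are merged, and an exclusion likelihood is read off a limit curve. The toy limit values, the resolution, and the chi-square form below are invented for illustration; they are not the CMS likelihood data or the actual HiggsBounds algorithm.

```python
import numpy as np

def exclusion_chi2(bosons, limit_masses, limit_rates, resolution=0.2):
    """bosons: list of (mass, rate) for the model's neutral Higgs states.
    Toy -2*ln(L): 3.84 * (clustered rate / interpolated 95% CL limit)^2,
    maximised over mass clusters (3.84 = 95% quantile of chi2, 1 dof)."""
    clusters = []
    for m, r in sorted(bosons):
        if clusters and m - clusters[-1][0] < resolution * clusters[-1][0]:
            m0, r0 = clusters[-1]
            clusters[-1] = (m0, r0 + r)   # merge quasi-degenerate states
        else:
            clusters.append((m, r))
    best = 0.0
    for m, r in clusters:
        limit = np.interp(m, limit_masses, limit_rates)  # toy 95% CL limit
        best = max(best, 3.84 * (r / limit) ** 2)
    return best

masses = [100.0, 500.0, 1000.0]
limits = [10.0, 1.0, 0.1]  # toy rate limits, tightening with mass
# Two degenerate states at 500 GeV, each carrying half the limit rate,
# combine to sit exactly on the toy 95% CL boundary:
print(round(exclusion_chi2([(500.0, 0.5), (500.0, 0.5)], masses, limits), 2))  # 3.84
```

The real code instead interpolates the released CMS likelihood grid in mass and signal rate, but the clustering-then-lookup structure is the part the abstract describes.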