Algorithms and Insights for RaceTrack
We discuss algorithmic issues in the well-known paper-and-pencil game RaceTrack. On a very simple track called Indianapolis, we introduce the problem and simple approaches that are gradually refined. We present and experimentally evaluate efficient algorithms for single-player scenarios. We also consider a variant in which parts of the track become known only as they become visible during the race.
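The RaceTrack move rule the abstract refers to can be sketched in a few lines. The track, state encoding, and breadth-first search below are illustrative assumptions, not the paper's algorithms: a state is (position, velocity), each move adds an acceleration from {-1, 0, 1}² to the velocity, and BFS over these states yields the minimum number of moves.

```python
from collections import deque

# A tiny hypothetical track: '.' = free, '#' = wall, 'S' = start, 'F' = finish.
TRACK = [
    "S...F",
    ".....",
    ".....",
]

def free(x, y):
    return 0 <= y < len(TRACK) and 0 <= x < len(TRACK[0]) and TRACK[y][x] != "#"

def successors(state):
    """RaceTrack move: pick an acceleration in {-1,0,1}^2, add it to the
    velocity, then move by the new velocity (straight-line path segments
    are not collision-checked here, for brevity)."""
    x, y, vx, vy = state
    for ax in (-1, 0, 1):
        for ay in (-1, 0, 1):
            nvx, nvy = vx + ax, vy + ay
            nx, ny = x + nvx, y + nvy
            if free(nx, ny):
                yield (nx, ny, nvx, nvy)

def shortest_race(start, finish):
    """Breadth-first search over (position, velocity) states: the BFS
    depth at which the finish cell is first popped is the minimum
    number of moves."""
    init = (*start, 0, 0)
    seen = {init}
    frontier = deque([(init, 0)])
    while frontier:
        state, steps = frontier.popleft()
        if (state[0], state[1]) == finish:
            return steps
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + 1))
    return None
```

On this toy track the car covers four cells in three moves (velocities 1, 2, 1), since velocity can only change by one per axis per move.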
When decision support systems fail: insights for strategic information systems from Formula 1
Decision support systems (DSS) are sophisticated tools that increasingly take advantage of big data and are used to design and implement individual- and organization-level strategic decisions. Yet, when organizations rely excessively on their potential, the outcome may be decision-making failure, particularly when such tools are applied under high-pressure and turbulent conditions. Partial understanding and unidimensional interpretation can prevent learning from failure. Building on a practice perspective, we study an iconic case of strategic failure in Formula 1 racing. Our approach, which integrates the decision maker as well as the organizational and material context, identifies three interrelated sources of strategic failure that are worth investigating for decision-makers using DSS and big data: (1) the situated nature and affordances of decision-making; (2) the distributed nature of cognition in decision-making; and (3) the performativity of the DSS. We outline specific research questions and their implications for firm performance and competitive advantage. Finally, we advance an agenda that can help close timely gaps in strategic IS research.
Time-optimal Control Strategies for Electric Race Cars with Different Transmission Technologies
This paper presents models and optimization methods to rapidly compute the
achievable lap time of a race car equipped with a battery electric powertrain.
Specifically, we first derive a quasi-convex model of the electric powertrain,
including the battery, the electric machine, and two transmission technologies:
a single-speed fixed gear and a continuously variable transmission (CVT).
Second, assuming an expert driver, we formulate the time-optimal control
problem for a given driving path and solve it using an iterative convex
optimization algorithm. Finally, we showcase our framework by comparing the
performance achievable with a single-speed transmission and a CVT on the Le
Mans track. Our results show that a CVT can balance its lower efficiency and
higher weight with a higher-efficiency and more aggressive motor operation, and
significantly outperform a fixed single-gear transmission.
Comment: 5 pages, 4 figures, submitted to the 2020 IEEE Vehicle Power and Propulsion Conference.
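The paper solves a convex time-optimal control problem; as a much simpler illustration of lap-time computation on a fixed path (a classic forward-backward speed-profile pass, not the authors' convex formulation, with made-up numbers), a point-mass model caps speed by lateral grip and then enforces acceleration and braking limits:

```python
import math

def speed_profile(curvatures, ds=1.0, mu=1.0, g=9.81, a_max=5.0, b_max=8.0):
    """Toy point-mass speed profile: cap speed by lateral grip
    (v^2 * kappa <= mu * g), then enforce the longitudinal limits with
    v_next^2 = v^2 + 2 * a * ds in a forward and a backward pass."""
    n = len(curvatures)
    v = [math.sqrt(mu * g / k) if k > 1e-9 else 100.0 for k in curvatures]
    v[0] = 0.0  # standing start
    for i in range(1, n):                 # forward pass: acceleration limit
        v[i] = min(v[i], math.sqrt(v[i - 1] ** 2 + 2 * a_max * ds))
    for i in range(n - 2, -1, -1):        # backward pass: braking limit
        v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2 * b_max * ds))
    return v

def lap_time(v, ds=1.0):
    """Integrate time as segment length over average segment speed."""
    return sum(2 * ds / (v[i] + v[i + 1])
               for i in range(len(v) - 1) if v[i] + v[i + 1] > 0)

# Straight, a corner of curvature 0.5 1/m, then another straight.
v = speed_profile([0.0] * 5 + [0.5] * 3 + [0.0] * 5)
```

The convex approach in the paper optimizes powertrain operation jointly with this kind of speed profile; the sketch above only shows why a corner bounds the achievable speed on either side of it.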
Benchmarking of a software stack for autonomous racing against a professional human race driver
The way to full autonomy of public road vehicles requires the step-by-step
replacement of the human driver, with the ultimate goal of replacing the driver
completely. Eventually, the driving software has to be able to handle all
situations that occur on its own, even emergency situations. These particular
situations require extreme combined braking and steering actions at the limits
of handling to avoid an accident or to diminish its consequences. An average
human driver is not trained to handle such extreme and rarely occurring
situations and therefore often fails to do so. However, professional race
drivers are trained to drive a vehicle utilizing the maximum amount of possible
tire forces. These abilities are of high interest for the development of
autonomous driving software. Here, we compare a professional race driver and
our software stack developed for autonomous racing with data analysis
techniques established in motorsports. The goal of this research is to derive
indications for further improvement of the performance of our software and to
identify areas where it still fails to meet the performance level of the human
race driver. Our results are used to extend our software's capabilities and
also to incorporate our findings into the research and development of public
road autonomous vehicles.
Comment: Accepted at 2020 Fifteenth International Conference on Ecological Vehicles and Renewable Energies (EVER).
On the connection of probabilistic model checking, planning, and learning for system verification
This thesis presents approaches using techniques from the model checking, planning, and learning communities to make systems more reliable and perspicuous. First, two heuristic search and dynamic programming algorithms are adapted to check extremal reachability probabilities, expected accumulated rewards, and their bounded versions on general Markov decision processes (MDPs). Thereby, the problem space originally solvable by these algorithms is enlarged considerably. Correctness and optimality proofs for the adapted algorithms are given, and a comprehensive case study on established benchmarks shows that the implementation, called Modysh, is competitive with state-of-the-art model checkers and even outperforms them on very large state spaces. Second, Deep Statistical Model Checking (DSMC) is introduced, usable for quality assessment and learning-pipeline analysis of systems incorporating trained decision-making agents such as neural networks (NNs). The idea of DSMC is to use statistical model checking to assess NNs that resolve nondeterminism in systems modeled as MDPs. The versatility of DSMC is exemplified in a number of case studies on Racetrack, an MDP benchmark designed for this purpose that flexibly models the autonomous driving challenge. A comprehensive scalability study demonstrates that DSMC is a lightweight technique for tackling the complexity of NN analysis in combination with the state-space explosion problem.
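The core idea behind statistical model checking is small enough to sketch. The MDP, policy, and slip model below are toy assumptions, not the DSMC tooling from the thesis: a trained policy resolves the nondeterminism, and the reachability probability is estimated as the fraction of sampled episodes that reach the goal.

```python
import random

def simulate(policy, start=0, goal=10, slip=0.1, max_steps=50, rng=None):
    """One episode on a toy 1-D 'racetrack' MDP: the policy picks a step
    size, which fails (no movement) with probability `slip`. Returns
    True iff the goal is reached within the step budget."""
    rng = rng or random.Random()
    pos = start
    for _ in range(max_steps):
        if pos >= goal:
            return True
        step = policy(pos)
        if rng.random() >= slip:
            pos += step
    return pos >= goal

def smc_estimate(policy, runs=2000, seed=7, **kw):
    """Statistical model checking in miniature: a Monte Carlo estimate
    of the goal-reachability probability under the given policy."""
    rng = random.Random(seed)
    hits = sum(simulate(policy, rng=rng, **kw) for _ in range(runs))
    return hits / runs

# A stand-in for an NN policy: always advance by one cell.
p = smc_estimate(lambda pos: 1)
```

In DSMC proper the policy would be a trained NN and the system a full Racetrack MDP, but the estimator is the same: sample, count, and bound the error statistically.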
Quantized Non-Volatile Nanomagnetic Synapse based Autoencoder for Efficient Unsupervised Network Anomaly Detection
In the autoencoder based anomaly detection paradigm, implementing the
autoencoder in edge devices capable of learning in real-time is exceedingly
challenging due to limited hardware, energy, and computational resources. We
show that these limitations can be addressed by designing an autoencoder with
low-resolution non-volatile memory-based synapses and employing an effective
quantized neural network learning algorithm. We propose a ferromagnetic
racetrack with engineered notches hosting a magnetic domain wall (DW) as the
autoencoder synapses, where limited state (5-state) synaptic weights are
manipulated by spin-orbit torque (SOT) current pulses. The anomaly detection performance of the proposed autoencoder model is evaluated on the NSL-KDD dataset. The autoencoder is trained with awareness of the limited resolution and DW device stochasticity, yielding anomaly detection performance comparable to that of an autoencoder with floating-point-precision weights. While the limited
number of quantized states and the inherent stochastic nature of DW synaptic
weights in nanoscale devices are known to negatively impact the performance,
our hardware-aware training algorithm is shown to leverage these imperfect
device characteristics to generate an improvement in anomaly detection accuracy
(90.98%) over the accuracy obtained with floating-point-trained weights.
Furthermore, our DW-based approach demonstrates a remarkable reduction of at
least three orders of magnitude in weight updates during training compared to
the floating-point approach, implying substantial energy savings for our
method. This work could stimulate the development of extremely energy efficient
non-volatile multi-state synapse-based processors that can perform real-time
training and inference on the edge with unsupervised data.
Racing Towards Reinforcement Learning based control of an Autonomous Formula SAE Car
With the rising popularity of autonomous navigation research, Formula Student
(FS) events are introducing a Driverless Vehicle (DV) category to their event
list. This paper presents the initial investigation into utilising Deep
Reinforcement Learning (RL) for end-to-end control of an autonomous FS race car
for these competitions. We train two state-of-the-art RL algorithms in
simulation on tracks analogous to the full-scale design on a Turtlebot2
platform. The results demonstrate that our approach can successfully learn to
race in simulation and then transfer to a real-world racetrack on the physical
platform. Finally, we provide insights into the limitations of the presented
approach and guidance into the future directions for applying RL toward
full-scale autonomous FS racing.
Comment: Accepted at the Australasian Conference on Robotics and Automation (ACRA 2022).
ShiftsReduce: Minimizing shifts in racetrack memory 4.0
Racetrack memories (RMs) have significantly evolved since their conception in 2008, making them a serious contender in the field of emerging memory technologies. Despite key technological advancements, the access latency and energy consumption of an RM-based system are still highly influenced by the number of shift operations. These operations are required to move bits to the right positions in the racetracks. This article presents data-placement techniques for RMs that maximize the likelihood that consecutive references access nearby memory locations at runtime, thereby minimizing the number of shifts. We present an integer linear programming (ILP) formulation for optimal data placement in RMs, and we revisit existing offset assignment heuristics, originally proposed for random-access memories. We introduce a novel heuristic tailored to a realistic RM and combine it with a genetic search to further improve the solution. We show a reduction in the number of shifts of up to 52.5%, outperforming the state of the art by up to 16.1%.
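The shift-cost objective that the placement techniques minimize is easy to make concrete. The single-track port model and the example placements below are simplifying assumptions, not the article's ILP: each access costs as many shifts as the distance between the current port position and the accessed cell.

```python
def shift_cost(placement, accesses):
    """Count shift operations for an access sequence on one racetrack.
    `placement` maps each variable to a cell index; the access port
    starts at cell 0 and each access costs |port - cell| shifts."""
    port, shifts = 0, 0
    for var in accesses:
        cell = placement[var]
        shifts += abs(port - cell)
        port = cell
    return shifts

accesses = ["a", "b", "a", "c", "b", "c"]
naive = {"a": 0, "b": 4, "c": 8}    # variables spread far apart
grouped = {"a": 0, "b": 1, "c": 2}  # frequently-alternating vars adjacent
```

Placing variables that are referenced consecutively into nearby cells is exactly what the ILP and the heuristics optimize; here `grouped` needs a quarter of the shifts of `naive` on the same access trace.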
Comprehensive Training and Evaluation on Deep Reinforcement Learning for Automated Driving in Various Simulated Driving Maneuvers
Developing and testing automated driving models in the real world might be
challenging and even dangerous, while simulation can help with this, especially
for challenging maneuvers. Deep reinforcement learning (DRL) has the potential
to tackle complex decision-making and controlling tasks through learning and
interacting with the environment, thus it is suitable for developing automated
driving, though it has not yet been explored in detail. This study implemented, evaluated, and compared two DRL
algorithms, Deep Q-networks (DQN) and Trust Region Policy Optimization (TRPO),
for training automated driving on the highway-env simulation platform.
Effective and customized reward functions were developed and the implemented
algorithms were evaluated in terms of on-lane accuracy (how well the car drives
on the road within the lane), efficiency (how fast the car drives), safety (how
likely the car is to crash into obstacles), and comfort (how much the car
jerks, e.g., suddenly accelerates or brakes). Results show that the TRPO-based
models with modified reward functions delivered the best performance in most
cases. Furthermore, to train a uniform driving model that can tackle various
driving maneuvers besides the specific ones, this study expanded the
highway-env and developed an extra customized training environment, namely,
ComplexRoads, integrating various driving maneuvers and multiple road scenarios
together. Models trained on the designed ComplexRoads environment can adapt
well to other driving maneuvers with promising overall performance. Lastly,
several functionalities were added to the highway-env to implement this work.
The code is open-sourced on GitHub at https://github.com/alaineman/drlcarsim-paper.
Comment: 6 pages, 3 figures, accepted by the 26th IEEE International Conference on Intelligent Transportation Systems (ITSC 2023).
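A composite reward covering the study's four criteria can be sketched as a weighted sum. The exact terms, weights, and signal names below are assumptions for illustration, not the paper's reward functions:

```python
def driving_reward(speed, lane_offset, collided, jerk,
                   v_target=25.0, w=(1.0, 1.0, 5.0, 0.2)):
    """Illustrative per-step reward combining the four evaluation
    criteria named in the abstract: on-lane accuracy, efficiency,
    safety, and comfort (weights are made up)."""
    w_lane, w_eff, w_safe, w_comf = w
    r_lane = -w_lane * abs(lane_offset)         # stay centred in the lane
    r_eff = w_eff * min(speed / v_target, 1.0)  # reward speed up to target
    r_safe = -w_safe if collided else 0.0       # large penalty on crash
    r_comf = -w_comf * abs(jerk)                # penalise abrupt changes
    return r_lane + r_eff + r_safe + r_comf
```

In highway-env such a function would be plugged into the environment's reward computation; tuning the weights trades off, for example, lap speed against ride comfort, which is the kind of customization the study performed.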