HPC Platform for Railway Safety-Critical Functionalities Based on Artificial Intelligence
The automation of railroad operations is a rapidly growing industry. A new European standard for automated Grade of Automation (GoA) 2 driving over the European Train Control System (ETCS) was anticipated in 2023. Meanwhile, railway stakeholders are already planning their research initiatives for driverless and unattended autonomous driving systems. As a result, the industry is particularly active in research on perception technologies based on Computer Vision (CV) and Artificial Intelligence (AI), with outstanding results at the application level. However, executing high-performance, safety-critical applications on embedded systems in real time is a challenge. Few commercially available solutions exist, since High-Performance Computing (HPC) platforms are typically seen as being beyond the business of safety-critical systems. This work proposes a novel safety-critical, high-performance computing platform for executing CV- and AI-enhanced technology used for accurate automatic stopping and safe passenger-transfer railway functionalities. The resulting computing platform is compatible with the majority of widely used AI inference methodologies, AI model architectures, and AI model formats thanks to its design, which enables process separation, redundant execution, and HW acceleration in a transparent manner. The proposed technology increases the portability of railway applications to embedded systems, isolates crucial operations, and manages system resources effectively and securely. The novel approach presented in this work is being developed as a specific railway use case for autonomous train operation within the SELENE European research project. This project has received funding from RIA—Research and Innovation action under grant agreement No. 871467.
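The abstract above mentions process separation and redundant execution as the mechanisms that make HPC acceleration acceptable for safety-critical functions. A minimal sketch of the redundant-execution idea, under the assumption of deterministic inference and a 2-out-of-3 majority vote (all names here are illustrative; the SELENE platform's actual API is not described in the abstract):

```python
from collections import Counter

def run_redundant(task, sample, replicas=3):
    """Execute `task` on several replicas of the input and vote on the result.

    In a real safety-critical platform the replicas would run in isolated
    processes or on separate cores; here they are plain function calls.
    """
    results = [task(sample) for _ in range(replicas)]
    winner, count = Counter(results).most_common(1)[0]
    if count * 2 <= replicas:  # no strict majority: treat as unsafe
        raise RuntimeError("replica disagreement: no majority result")
    return winner

# e.g. voting a (deterministic) stopping-distance estimate across replicas
estimate = run_redundant(lambda d: round(d * 0.5, 2), 12.34)  # -> 6.17
```

The vote detects a faulty replica only when the computation is deterministic, which is one reason transparent, reproducible HW acceleration matters in this setting.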
Real-time sensor data development for smart truck drivetrains
Heavy articulated transport vehicles have a poor reputation, associated with dramatic road accidents and frequent fatalities for those in automobiles. The result of this work is a formal data-flow structure to enhance real-time decision-making in complex mechanical systems, increasing performance capability and responsiveness to human commands. This structure recognizes the multiple layers of highly non-linear mechanical components (actuators, wheel tires and ground surfaces, controllers, power supplies, human/machine interfaces, etc.) that must operate in unison (i.e., reduce conflicts) in real time (within milliseconds) to enhance operator (driver) control and maximize human choice. This work discusses why dependable sensor data is vital in complex systems that rely on a suite of sensors for both control and condition-monitoring purposes, as well as real-time energy distribution analysis in high-momentum mechanical systems. The focus is on class 7 and 8 tractor trucks outfitted with an array of low-cost redundant sensors, leveraging advances in intelligent robotic systems. This work details many topics, including: the most relevant sensor types and their technologies; designing, implementing, and maintaining a multi-sensor system using feasible industry standards; sensor signal integrity and data-flow processing for decision making; asynchronous data-flow methods for operating decision-making schemes in real time; and multiple applications that enhance tractor-truck systems with multi-sensor systems for real-time decision making. Mechanical Engineering
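One of the topics listed above is asynchronous data-flow methods for real-time decision making over a multi-sensor suite. A small sketch of that pattern, assuming (hypothetically) that each sensor publishes timestamped readings to a shared queue and the decision loop always acts on the freshest value per sensor; the sensor names and rates are invented for illustration:

```python
import queue
import threading
import time

def sensor(q, name, read_fn, period=0.01, n=5):
    """Publish `n` timestamped readings of one sensor to the shared queue."""
    for _ in range(n):
        q.put((time.monotonic(), name, read_fn()))
        time.sleep(period)

def latest_snapshot(q):
    """Drain the queue, keeping only the newest reading per sensor."""
    snap = {}
    while True:
        try:
            ts, name, value = q.get_nowait()
        except queue.Empty:
            return snap
        if name not in snap or ts > snap[name][0]:
            snap[name] = (ts, value)

q = queue.Queue()
threads = [threading.Thread(target=sensor, args=(q, name, fn))
           for name, fn in [("wheel_speed", lambda: 42.0),
                            ("brake_pressure", lambda: 5.1)]]
for t in threads:
    t.start()
for t in threads:
    t.join()
snapshot = latest_snapshot(q)  # newest (timestamp, value) per sensor name
```

The point of the asynchronous design is that the decision loop never blocks on a slow or failed sensor; it simply decides on the most recent consistent snapshot.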
Intrinsically Evolvable Artificial Neural Networks
Dedicated hardware implementations of neural networks promise faster, lower-power operation compared to software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. Training is typically done using offline software simulations, and the obtained network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNN), the type of artificial neural network implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates, and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve the interconnections and internal parameters of functional modules in reconfigurable computing (RC) systems such as FPGAs. Functional modules can be any hardware modules, such as multipliers, adders, and trigonometric functions. In the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for RC implementations of BbNNs is also presented.
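The training idea in this abstract is a genetic algorithm optimizing network parameters (and, on the real platform, structure). A minimal software sketch of that idea, evolving the weights of a tiny two-hidden-unit ReLU network toward the XOR function; this is generic GA code for illustration, not the paper's on-chip BbNN implementation:

```python
import random

random.seed(0)

TARGET = [0, 1, 1, 0]  # XOR truth table outputs
INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def predict(w, x):
    """Tiny fixed-topology net: 2 inputs -> 2 ReLU hidden units -> 1 output."""
    h = [max(0.0, x[0] * w[0] + x[1] * w[1] + w[2]),
         max(0.0, x[0] * w[3] + x[1] * w[4] + w[5])]
    return h[0] * w[6] + h[1] * w[7] + w[8]

def fitness(w):
    """Negative sum of squared errors over the truth table (higher is better)."""
    return -sum((predict(w, x) - t) ** 2 for x, t in zip(INPUTS, TARGET))

def evolve(pop_size=60, gens=200, sigma=0.3):
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]  # truncation selection, elitist
        pop = parents + [[g + random.gauss(0, sigma)
                          for g in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
```

The BbNN platform does this with the added twist that the "genome" also encodes the grid's block structure, and evaluation runs intrinsically on the FPGA itself rather than in simulation.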
Edge-Centric Efficient Regression Analytics
We introduce an edge-centric parametric predictive analytics methodology that supports real-time regression model caching and selective forwarding at the network edge. Communication overhead is significantly reduced because only model parameters and sufficient statistics are disseminated instead of raw data, while high analytics quality is maintained. Moreover, sophisticated model selection algorithms are introduced to combine diverse local models for predictive modeling without transferring and processing data at edge gateways. We provide mathematical modeling and a performance and comparative assessment over real data, showing the methodology's benefits in edge computing environments.
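The key property exploited here is that for linear regression the sufficient statistics (XᵀX and Xᵀy) are additive across edge nodes, so a gateway can fit the global model without ever seeing raw observations. A sketch under that assumption (function names are illustrative, not from the paper):

```python
import numpy as np

def local_stats(X, y):
    """All a linear least-squares fit needs from one edge node's data."""
    return X.T @ X, X.T @ y

def merge_and_fit(stats):
    """Merge per-edge statistics (simple sums) and solve the normal equations."""
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
parts = []
for _ in range(3):  # three edge nodes with disjoint local data
    X = rng.normal(size=(100, 2))
    y = X @ w_true + rng.normal(scale=0.01, size=100)
    parts.append(local_stats(X, y))

w_hat = merge_and_fit(parts)  # close to w_true, with no raw data transferred
```

Each node ships a 2x2 matrix and a 2-vector instead of 100 rows, which is the communication saving the abstract refers to; the methodology's caching and model-selection machinery builds on top of summaries like these.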
Vertical Optimizations of Convolutional Neural Networks for Embedded Systems
The abstract is provided in the attachment.
End-to-end deep reinforcement learning in computer systems
Abstract
The growing complexity of data processing systems has long led systems designers to imagine systems (e.g. databases, schedulers) which can self-configure and adapt based on environmental cues. In this context, reinforcement learning (RL) methods have since their inception appealed to systems developers. They promise to acquire complex decision policies from raw feedback signals. Despite their conceptual popularity, RL methods are scarcely found in real-world data processing systems. Recently, RL has seen explosive growth in interest due to high profile successes when utilising large neural networks (deep reinforcement learning). Newly emerging machine learning frameworks and powerful hardware accelerators have given rise to a plethora of new potential applications.
In this dissertation, I first argue that in order to design and execute deep RL algorithms efficiently, novel software abstractions are required which can accommodate the distinct computational patterns of communication-intensive and fast-evolving algorithms. I propose an architecture which decouples logical algorithm construction from local and distributed execution semantics. I further present RLgraph, my proof-of-concept implementation of this architecture. In RLgraph, algorithm developers can explore novel designs by constructing a high-level data flow graph through combination of logical components. This dataflow graph is independent of specific backend frameworks or notions of execution, and is only later mapped to execution semantics via a staged build process. RLgraph enables high-performing algorithm implementations while maintaining flexibility for rapid prototyping.
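The architectural idea in this paragraph, composing a logical component graph first and binding it to execution semantics only in a later build step, can be illustrated with a toy sketch. This is not RLgraph's actual API; the class and function names are invented for illustration:

```python
class Component:
    """A logical node: calling it wires inputs, it does not execute anything."""
    def __init__(self, name, fn):
        self.name, self.fn, self.inputs = name, fn, []

    def __call__(self, *inputs):
        self.inputs = list(inputs)  # inputs are Components or placeholder names
        return self

def build(output, backend="local"):
    """Staged build: map the backend-agnostic graph onto an executor.

    A distributed backend would return a different executor here; the
    logical graph above would be unchanged.
    """
    def execute(**feeds):
        def run(node):
            if isinstance(node, str):       # placeholder: look up the feed
                return feeds[node]
            return node.fn(*(run(i) for i in node.inputs))
        return run(output)
    return execute

# Logical graph: preprocess -> policy, independent of any execution backend
preprocess = Component("preprocess", lambda obs: [o / 255.0 for o in obs])
policy = Component("policy", lambda obs: max(range(len(obs)), key=obs.__getitem__))
graph = policy(preprocess("observation"))

act = build(graph)
action = act(observation=[10, 250, 30])  # greedy pick: index of largest entry
```

The separation means algorithm designers reason only about the component wiring, while execution concerns (device placement, distribution) live entirely in the build step.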
Second, I investigate reasons for the scarcity of RL applications in systems themselves. I argue that progress in applied RL is hindered by a lack of tools for task model design which bridge the gap between systems and algorithms, and by missing shared standards for evaluating model capabilities. I introduce Wield, a first-of-its-kind tool for incremental model design in applied RL. Wield provides a small set of primitives which decouple systems interfaces and deployment-specific configuration from representation. Core to Wield is a novel instructive experiment protocol called progressive randomisation, which helps practitioners incrementally evaluate different dimensions of non-determinism. I demonstrate how Wield and progressive randomisation can be used to reproduce and assess prior work, and to guide the implementation of novel RL applications.
Data Analytics and Machine Learning to Enhance the Operational Visibility and Situation Awareness of Smart Grid High Penetration Photovoltaic Systems
Electric utilities have limited operational visibility and situation awareness over grid-tied distributed photovoltaic (PV) systems. This poses a risk to grid stability when PV penetration into a given feeder exceeds 60% of its peak or minimum daytime load. Third-party service providers offer only real-time monitoring, not accurate insights into system performance or prediction of production. PV systems also increase the attack surface of distribution networks, since they are not under the direct supervision and control of the utility's security analysts.
Six key objectives were successfully achieved to enhance PV operational visibility and situation awareness: (1) conceptual cybersecurity frameworks for PV situation awareness at the device, communications, applications, and cognitive levels; (2) a unique combinatorial approach using LASSO-Elastic Net regularizations and a multilayer perceptron for PV generation forecasting; (3) applying a fixed-point primal-dual log-barrier interior point method to expedite AC optimal power flow convergence; (4) adapting big data standards and capability maturity models to PV systems; (5) using K-nearest neighbors and random forests to impute missing values in PV big data; and (6) a hybrid data-model method that takes PV system derating factors and historical data to estimate generation and evaluate system performance using advanced metrics.
These objectives were validated on three real-world case studies comprising grid-tied commercial PV systems. The results show that the proposed imputation approach improved accuracy by 91%, the estimation method performed 75% and 10% better for two of the PV systems, and the proposed forecasting model improved generalization performance and reduced the likelihood of overfitting. The primal-dual log-barrier interior point method improved the convergence of AC optimal power flow by 0.7 and 0.6 times relative to the currently used deterministic models. Through the use of advanced performance metrics, it is shown how PV systems of different nameplate capacities installed at different geographical locations can be directly evaluated and compared over both instantaneous and extended periods of time. The results of this dissertation will be of particular use to multiple stakeholders of the PV domain including, but not limited to, utility network and security operation centers, standards working groups, utility equipment and service providers, data consultants, system integrators, regulators and public service commissions, government bodies, and end-consumers.
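Objective (5) above imputes missing PV readings with K-nearest neighbors. A pure-NumPy sketch of the KNN part only (the dissertation also uses random forests); the toy data and the choice of Euclidean distance over observed columns are assumptions for illustration:

```python
import numpy as np

def knn_impute(data, k=2):
    """Fill each NaN with the mean of the k complete rows nearest to its row,
    where distance is measured over that row's observed columns."""
    out = data.copy()
    complete = data[~np.isnan(data).any(axis=1)]  # fully observed rows
    for i, row in enumerate(data):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        d = np.linalg.norm(complete[:, obs] - row[obs], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        out[i, miss] = nearest[:, miss].mean(axis=0)
    return out

# e.g. hourly (irradiance, generation) records with one missing generation value
readings = np.array([[1.0, 10.0],
                     [2.0, 20.0],
                     [3.0, np.nan]])
filled = knn_impute(readings)  # the NaN becomes the mean of the 2 nearest rows
```

Real PV big data would add feature scaling and a fallback when too few complete rows exist; the sketch keeps only the core neighbor-averaging step.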
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units, coupled with efficient storage, communication, and visualisation systems, to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
3D People Surveillance on Range Data Sequences of a Rotating Lidar
In this paper, we propose an approach to real-time 3D people surveillance, with probabilistic foreground modeling, multiple-person tracking, and online re-identification. Our principal aim is to demonstrate the capabilities of a special range sensor, the rotating multi-beam (RMB) Lidar, as a possible future surveillance camera. We present methodological contributions on two key issues. First, we introduce a hybrid 2D-3D method for robust foreground-background classification of the recorded RMB-Lidar point clouds, eliminating spurious effects resulting from quantization error of the discretized view angle, non-linear position corrections of sensor calibration, and background flickering, in particular due to the motion of vegetation. Second, we propose a real-time method for moving-pedestrian detection and tracking in RMB-Lidar sequences of dense surveillance scenarios, with short- and long-term object assignment. We introduce a novel person re-identification algorithm based solely on the Lidar measurements, utilizing in parallel the range and intensity channels of the sensor, which provide biometric features. Quantitative evaluation is performed on seven outdoor Lidar sequences containing various multi-target scenarios under challenging outdoor conditions with low point density and multiple occlusions.
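A drastically simplified sketch of the foreground-background idea for a rotating Lidar (not the paper's hybrid probabilistic model): learn a per-cell background range over the sensor's angular grid, then flag cells whose current range is markedly shorter than that background. The margin and the toy scene are assumptions for illustration:

```python
import numpy as np

def background_model(frames):
    """Median range per angular cell over a training sequence of range frames."""
    return np.median(frames, axis=0)

def foreground_mask(frame, bg, margin=0.5):
    """A cell is foreground if its return is `margin` metres closer than background."""
    return frame < bg - margin

# Toy angular grid of 3 cells; 5 training frames of a static scene (ranges in m)
train = np.array([[10.0, 8.0, 12.0]] * 5)
bg = background_model(train)

current = np.array([10.0, 4.0, 12.0])  # a person at 4 m occludes cell 1
mask = foreground_mask(current, bg)    # -> [False, True, False]
```

The paper's actual method additionally models view-angle quantization, calibration corrections, and flickering vegetation probabilistically; the sketch shows only the range-comparison core that those refinements protect.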