SoC-based FPGA architecture for image analysis and other highly demanding applications
Nowadays, the development of algorithms focuses on performance-efficient and energy-efficient computations. Technologies such as the field programmable gate array (FPGA) and the FPGA-based system on chip (FPGA/SoC) have shown their ability to accelerate compute-intensive applications while reducing power consumption, owing to their high parallelism and architectural reconfigurability.
Currently, design cycles for FPGA/SoC are time-consuming owing to the complexity of the architecture. Therefore, to bridge the gap between applications and FPGA/SoC architectures and to obtain efficient hardware designs for image analysis and other highly demanding applications using high-level synthesis (HLS) tools, two complementary strategies are considered: ad-hoc techniques and performance estimation.
Regarding ad-hoc techniques, three highly demanding applications were accelerated through HLS tools: a pulse shape discriminator for cosmic rays, automatic pest classification, and re-ranking for information retrieval, emphasizing the benefits obtained when applications of this kind are combined with compression techniques while targeting FPGA/SoC devices.
Furthermore, a comprehensive performance estimator for hardware acceleration is proposed in this thesis to effectively predict resource utilization and latency on FPGA/SoC, building a bridge between the application and architectural domains. The tool integrates analytical models for performance prediction and a design space explorer (DSE) engine that provides high-level insights to hardware developers, composed of two independent sub-engines: a DSE based on single-objective optimization and a DSE based on evolutionary multi-objective optimization.
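To make the estimator-plus-DSE idea concrete, here is a minimal sketch, assuming a toy analytical model in which loop unrolling trades LUTs for latency; the `estimate` model, its constants, and the resource budget are hypothetical stand-ins, not the thesis's actual analytical models.

```python
# Minimal sketch of estimator-driven design space exploration (DSE).
# The analytical model below is a hypothetical placeholder.

def estimate(trip_count, unroll, lut_per_lane=120, cycles_per_iter=4):
    """Toy analytical model: unrolling trades LUTs for latency."""
    latency = (trip_count // unroll) * cycles_per_iter   # cycles
    luts = unroll * lut_per_lane                         # resource usage
    return latency, luts

def single_objective_dse(trip_count, lut_budget):
    """Exhaustively pick the unroll factor minimizing latency
    subject to a resource budget (single-objective DSE)."""
    best = None
    for unroll in (1, 2, 4, 8, 16, 32):
        latency, luts = estimate(trip_count, unroll)
        if luts <= lut_budget and (best is None or latency < best[1]):
            best = (unroll, latency, luts)
    return best

print(single_objective_dse(trip_count=1024, lut_budget=2000))
# -> (16, 256, 1920): unroll by 16 fits the budget with lowest latency
```

A multi-objective variant would instead return the whole latency/resource Pareto front rather than a single point, which is the role of the evolutionary sub-engine described above.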
An Experimental Evaluation of Machine Learning Training on a Real Processing-in-Memory System
Training machine learning (ML) algorithms is a computationally intensive
process, which is frequently memory-bound due to repeatedly accessing large
training datasets. As a result, processor-centric systems (e.g., CPU, GPU)
suffer from costly data movement between memory units and processing units,
which consumes large amounts of energy and execution cycles. Memory-centric
computing systems, i.e., with processing-in-memory (PIM) capabilities, can
alleviate this data movement bottleneck.
Our goal is to understand the potential of modern general-purpose PIM
architectures to accelerate ML training. To do so, we (1) implement several
representative classic ML algorithms (namely, linear regression, logistic
regression, decision tree, K-Means clustering) on a real-world general-purpose
PIM architecture, (2) rigorously evaluate and characterize them in terms of
accuracy, performance and scaling, and (3) compare to their counterpart
implementations on CPU and GPU. Our evaluation on a real memory-centric
computing system with more than 2500 PIM cores shows that general-purpose PIM
architectures can greatly accelerate memory-bound ML workloads, when the
necessary operations and datatypes are natively supported by PIM hardware. For
example, our PIM implementation of decision tree is faster than a
state-of-the-art CPU version on an 8-core Intel Xeon, and faster
than a state-of-the-art GPU version on an NVIDIA A100. Our K-Means clustering
on PIM is likewise faster than state-of-the-art CPU and GPU versions.
To our knowledge, our work is the first to evaluate ML training on a
real-world PIM architecture. We conclude with key observations, takeaways, and
recommendations that can inspire users of ML workloads, programmers of PIM
architectures, and hardware designers & architects of future memory-centric
computing systems.
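To see why such training workloads are memory-bound, consider a single gradient step of linear regression, one of the four algorithms evaluated: every element of the training matrix is read once per step but used in only a couple of arithmetic operations. The NumPy sketch below is an illustrative stand-in, not the paper's UPMEM DPU code.

```python
# Sketch of the kind of memory-bound kernel offloaded to PIM in such
# studies: one gradient step of linear regression over a large batch.
import numpy as np

def linreg_grad_step(X, y, w, lr=1e-3):
    """Streaming over X dominates runtime: each element is read once per
    step and used in few operations, so arithmetic intensity is low and
    DRAM traffic is high, which is what processing-in-memory reduces."""
    pred = X @ w                        # reads the full training matrix
    grad = X.T @ (pred - y) / len(y)    # reads it again
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 64))
y = X @ rng.standard_normal(64)
w = np.zeros(64)
for _ in range(10):
    w = linreg_grad_step(X, y, w)
```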
Statistical learning of random probability measures
The study of random probability measures is a lively research topic that has
attracted interest from different fields in recent years. In this thesis, we consider
random probability measures in the context of Bayesian nonparametrics,
where the law of a random probability measure is used as a prior distribution,
and in the context of distributional data analysis, where
the goal is to perform inference given a sample from the law of a random probability measure.
The contributions contained in this thesis can be subdivided according to three
different topics: (i) the use of almost surely discrete repulsive random measures
(i.e., whose support points are well separated) for Bayesian model-based
clustering, (ii) the proposal of new laws for collections of random probability
measures for Bayesian density estimation of partially
exchangeable data subdivided into different groups, and (iii) the study
of principal component analysis and regression models for probability distributions
seen as elements of the 2-Wasserstein space. Specifically, for point
(i) above we propose an efficient Markov chain Monte Carlo algorithm for
posterior inference, which sidesteps the need for the split-merge reversible jump
moves typically associated with poor performance; we propose a model for
clustering high-dimensional data by introducing a novel class of anisotropic
determinantal point processes, and study the distributional properties of the
repulsive measures, shedding light on important theoretical results which enable
more principled prior elicitation and more efficient posterior simulation
algorithms. For point (ii) above, we consider several models suitable for clustering
homogeneous populations, inducing spatial dependence across groups of
data, and extracting the characteristic traits common to all the data groups;
we also propose a novel vector autoregressive model to study the growth
curves of Singaporean children. Finally, for point (iii), we propose a novel class of
projected statistical methods for distributional data analysis for measures
on the real line and on the unit circle.
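For measures on the real line, the projected methods of point (iii) can exploit the standard closed form of the 2-Wasserstein distance in terms of quantile functions,

\[
W_2^2(\mu, \nu) = \int_0^1 \bigl( F_\mu^{-1}(t) - F_\nu^{-1}(t) \bigr)^2 \, dt,
\]

where \(F_\mu^{-1}\) and \(F_\nu^{-1}\) denote the quantile functions of \(\mu\) and \(\nu\); this identification with a subset of \(L^2(0,1)\) is what makes principal component analysis and regression in the 2-Wasserstein space tractable on the line.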
Towards Intelligent Runtime Framework for Distributed Heterogeneous Systems
Scientific applications strive for increased memory and computing performance, requiring massive amounts of data and time to produce results. Applications utilize large-scale, parallel computing platforms with advanced architectures to accommodate their needs. However, developing performance-portable applications for modern, heterogeneous platforms requires significant effort and expertise in both the application and systems domains. This is even more relevant for unstructured applications whose workflow is not statically predictable due to their heavily data-dependent nature. One possible solution for this problem is the introduction of an intelligent Domain-Specific Language (iDSL) that transparently helps to maintain correctness, hides the idiosyncrasies of low-level hardware, and scales applications. An iDSL includes domain-specific language constructs, a compilation toolchain, and a runtime providing task scheduling, data placement, and workload balancing across and within heterogeneous nodes. In this work, we focus on the runtime framework. We introduce a novel design and extension of a runtime framework, the Parallel Runtime Environment for Multicore Applications. In response to the ever-increasing intra/inter-node concurrency, the runtime system supports efficient task scheduling and workload balancing at both levels while allowing the development of custom policies. Moreover, the new framework provides abstractions supporting the utilization of heterogeneous distributed nodes consisting of CPUs and GPUs and is extensible to other devices. We demonstrate that by utilizing this work, an application (or the iDSL) can scale its performance on heterogeneous exascale-era supercomputers with minimal effort. A future goal for this framework (out of the scope of this thesis) is to be integrated with machine learning to improve its decision-making and performance further. As a bridge to this goal, since the framework is under development, we experiment with data from nuclear physics particle accelerators and demonstrate the significant improvements achieved by utilizing machine learning in the hit-based track reconstruction process.
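As a sketch of the kind of workload balancing such a runtime performs, the toy scheduler below greedily places each task on the worker that would finish it earliest, with per-worker speeds modeling heterogeneous CPU/GPU nodes. This is a hypothetical illustration of a classic list-scheduling heuristic, not the actual scheduling policy of the runtime described above.

```python
# Toy heterogeneous workload balancing: earliest-finish-time placement.

def schedule(tasks, workers):
    """tasks: list of costs; workers: name -> relative speed.
    Greedily place each task on the worker that finishes it earliest."""
    finish = {name: 0.0 for name in workers}     # per-worker busy time
    placement = {name: [] for name in workers}
    for cost in sorted(tasks, reverse=True):     # largest tasks first
        name = min(workers, key=lambda w: finish[w] + cost / workers[w])
        finish[name] += cost / workers[name]     # faster device, shorter run
        placement[name].append(cost)
    return placement, max(finish.values())       # plan and makespan

plan, makespan = schedule([5.0, 3.0, 8.0, 2.0, 7.0],
                          {"cpu0": 1.0, "gpu0": 4.0})
print(plan, makespan)   # the big tasks land on the 4x-faster gpu0
```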
Structured parallelism discovery with hybrid static-dynamic analysis and evaluation technique
Parallel computer architectures have dominated the computing landscape for the
past two decades, a trend that is only expected to continue and intensify, with increasing specialization and heterogeneity. This creates huge pressure across the software
stack to produce programming languages, libraries, frameworks and tools which will
efficiently exploit the capabilities of parallel computers, not only for new software but
also for revitalizing existing sequential code. Automatic parallelization, despite decades of
research, has had limited success in transforming sequential software to take advantage
of efficient parallel execution. This thesis investigates three approaches that use commutativity analysis as the enabler for parallelization. This has the potential to overcome
limitations of traditional techniques.
We introduce the concept of liveness-based commutativity for sequential loops.
We examine the use of a practical analysis utilizing liveness-based commutativity in a
symbolic execution framework. Symbolic execution represents input values as groups
of constraints, consequently deriving the output as a function of the input and enabling
the identification of further program properties. We employ this feature to develop an
analysis and discern commutativity properties between loop iterations. We study the
application of this approach on loops taken from real-world programs in the OLDEN
and NAS Parallel Benchmark (NPB) suites, and identify its limitations and related
overheads.
Informed by these findings, we develop Dynamic Commutativity Analysis (DCA), a
new technique that leverages profiling information from program execution with specific
input sets. Using profiling information, we track liveness information and detect loop
commutativity by examining the code’s live-out values. We evaluate DCA on almost
1400 loops of the NPB suite, identifying 86% of them as parallelizable. Comparing
our results against dependence-based methods, we match the detection efficacy of two
dynamic approaches and outperform three static ones. Additionally, DCA is
able to automatically detect parallelism in loops which iterate over Pointer-Linked
Data Structures (PLDSs), taken from a wide range of benchmarks used in the literature,
where all other techniques we considered failed. Parallelizing the discovered loops, our
methodology achieves an average speedup of 3.6× across NPB (and up to 55×) and up
to 36.9× for the PLDS-based loops on a 72-core host. We also demonstrate that our
methodology, despite relying on specific input values for profiling each program, is able
to correctly identify parallelism that is valid for all potential input sets.
Lastly, we develop a methodology to utilize liveness-based commutativity, as implemented in DCA, to detect latent loop parallelism in the form of patterns. Our approach
applies a series of transformations which subsequently enable multiple applications
of DCA over the generated multi-loop code section and match its loop commutativity
outcomes against the expected criteria for each pattern. Applying our methodology on
sets of sequential loops, we are able to identify well-known parallel patterns (i.e., maps,
reductions, and scans). This extends the scope of parallelism detection to loops, such
as those performing scan operations, which cannot be determined as parallelizable by
simply evaluating liveness-based commutativity conditions in their original form.
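The core of a dynamic commutativity check can be sketched in a few lines: execute the instrumented loop under the original and permuted iteration orders for a given input, and compare the live-out state. The Python below is a hypothetical simplification for illustration; DCA itself works on profiled executions of real programs, not Python closures.

```python
# Toy dynamic commutativity check: iterations of a loop commute (for
# this input) if permuting their order leaves the live-out state equal.
import copy, random

def commutes_on_input(loop_body, iterations, state, trials=5):
    """loop_body(state, i) mutates state; state holds live-out values."""
    reference = copy.deepcopy(state)
    for i in iterations:                         # original order
        loop_body(reference, i)
    for _ in range(trials):                      # sample permuted orders
        shuffled = random.sample(iterations, len(iterations))
        trial = copy.deepcopy(state)
        for i in shuffled:
            loop_body(trial, i)
        if trial != reference:                   # live-outs diverge
            return False
    return True

# A sum reduction commutes; an order-dependent append does not.
print(commutes_on_input(lambda s, i: s.__setitem__("sum", s["sum"] + i),
                        list(range(10)), {"sum": 0}))   # True
print(commutes_on_input(lambda s, i: s["out"].append(i),
                        list(range(10)), {"out": []}))  # False (almost surely)
```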
Auto-Parallelizing Large Models with Rhino: A Systematic Approach on Production AI Platform
We present Rhino, a system for accelerating tensor programs with automatic
parallelization on an AI platform for real production environments. It transforms a
tensor program written for a single device into an equivalent distributed
program that is capable of scaling up to thousands of devices with no user
configuration. Rhino first works on a semantically independent intermediate
representation of tensor programs, which facilitates its generalization to
unprecedented applications. Additionally, it implements a task-oriented
controller and a distributed runtime for optimal performance. Rhino explores
a complete and systematic parallelization strategy space that comprises all the
paradigms commonly employed in deep learning (DL), in addition to strided
partitioning and pipeline parallelism on non-linear models. To search
efficiently for a near-optimal parallel execution plan, we analyze
production clusters and derive general heuristics that speed up the strategy search.
On top of these heuristics, two optimization levels are designed to offer users flexible
trade-offs between the search time and strategy quality. Our experiments
demonstrate that Rhino can not only re-discover the expert-crafted strategies
of classic, research and production DL models, but also identify novel
parallelization strategies that surpass existing systems for novel models.
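As a toy illustration of what searching a parallelization strategy space involves, the sketch below costs three classic ways of sharding a single matmul across devices and picks the fastest plan that fits a memory budget. The strategy names, cost model, and constants are hypothetical stand-ins, far simpler than Rhino's actual strategy space and cost analysis.

```python
# Toy parallelization strategy search for C[M,N] = A[M,K] @ B[K,N]
# over d devices; the cost model is a hypothetical simplification.

def evaluate(strategy, M, K, N, d, flops=1e12, bw=1e9, bytes_=4):
    compute = 2 * M * K * N / d / flops               # sharded FLOPs (s)
    if strategy == "data":          # shard rows of A; replicate B
        comm, mem = 0.0, (M * K / d + K * N) * bytes_
    elif strategy == "model":       # shard cols of B; replicate A
        comm, mem = 0.0, (M * K + K * N / d) * bytes_
    else:                           # "reduce-k": shard K; all-reduce C
        comm, mem = 2 * M * N * bytes_ / bw, (M * K + K * N) / d * bytes_
    return compute + comm, mem                        # (time, bytes/device)

M = K = N = 8192; d, mem_budget = 8, 512e6
plans = {s: evaluate(s, M, K, N, d) for s in ("data", "model", "reduce-k")}
best = min((s for s in plans if plans[s][1] <= mem_budget),
           key=lambda s: plans[s][0])
print(best, plans)
```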
A Fully Parallelized and Budgeted Multi-level Monte Carlo Framework for Partial Differential Equations: From Mathematical Theory to Automated Large-Scale Computations
All collected data on any physical, technical, or economic process is subject to uncertainty. By incorporating this uncertainty into the model and propagating it through the system, this data error can be controlled. This makes the predictions of the system more trustworthy and reliable. The multi-level Monte Carlo (MLMC) method has proven to be an effective uncertainty quantification tool, requiring little knowledge about the problem while being highly performant.
In this doctoral thesis we analyze, implement, develop, and apply the MLMC method to partial differential equations (PDEs) subject to high-dimensional random input data. We set up a unified framework based on the software M++ to approximate solutions to elliptic and hyperbolic PDEs with a large selection of finite element methods. We combine this setup with a new variant of the MLMC method. In particular, we propose a budgeted MLMC (BMLMC) method which is capable of optimally investing reserved computing resources in order to minimize the model error while exhausting a given computational budget. This is achieved by developing a new parallelism based on a single distributed data structure, employing ideas of the continuation MLMC method, and utilizing dynamic programming techniques. The final method is theoretically motivated, analyzed, and numerically well-tested in an automated benchmarking workflow for highly challenging problems like the approximation of wave equations in randomized media.
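For context, the classical MLMC estimator that the BMLMC method builds on telescopes the expectation of a quantity of interest \(Q\) across discretization levels,

\[
\mathbb{E}[Q_L] \approx \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{i=1}^{N_\ell} \Bigl( Q_\ell^{(i)} - Q_{\ell-1}^{(i)} \Bigr), \qquad Q_{-1} := 0,
\]

so that most of the samples \(N_\ell\) are spent on cheap coarse levels and only a few on expensive fine levels; the budgeted variant's task is then to choose the \(N_\ell\) (and the finest level \(L\)) so as to minimize the error without exceeding the given computational budget.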
TransPimLib: A Library for Efficient Transcendental Functions on Processing-in-Memory Systems
Processing-in-memory (PIM) promises to alleviate the data movement bottleneck
in modern computing systems. However, current real-world PIM systems have the
inherent disadvantage that their hardware is more constrained than in
conventional processors (CPU, GPU), due to the difficulty and cost of building
processing elements near or inside the memory. As a result, general-purpose PIM
architectures support fairly limited instruction sets and struggle to execute
complex operations such as transcendental functions and other hard-to-calculate
operations (e.g., square root). These operations are particularly important for
some modern workloads, e.g., activation functions in machine learning
applications.
In order to provide support for transcendental (and other hard-to-calculate)
functions in general-purpose PIM systems, we present TransPimLib, a
library that provides CORDIC-based and LUT-based methods for trigonometric
functions, hyperbolic functions, exponentiation, logarithm, square root, etc.
We develop an implementation of TransPimLib for the UPMEM PIM architecture and
perform a thorough evaluation of TransPimLib's methods in terms of performance
and accuracy, using microbenchmarks and three full workloads (Blackscholes,
Sigmoid, Softmax). We open-source all our code and datasets
at https://github.com/CMU-SAFARI/transpimlib.
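To illustrate the CORDIC-based approach the library provides, here is a minimal floating-point rotation-mode CORDIC for sine and cosine; it is a sketch of the general shift-and-add technique, not TransPimLib's fixed-point UPMEM implementation.

```python
# Minimal rotation-mode CORDIC for sin/cos: only shifts, adds, and a
# table of arctangents. Float version for clarity; real PIM code would
# use fixed-point arithmetic.
import math

ANGLES = [math.atan(2.0 ** -i) for i in range(32)]
K = 1.0
for i in range(32):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))      # inverse CORDIC gain ~0.6073

def cordic_sincos(theta, iters=32):
    """theta in [-pi/2, pi/2]; rotate (1, 0) toward angle theta."""
    x, y, z = 1.0, 0.0, theta
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0            # rotate toward residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y * K, x * K                        # (sin, cos)

s, c = cordic_sincos(0.5)
print(s, math.sin(0.5), c, math.cos(0.5))      # agree to ~1e-9
```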
OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System
Automated machine learning (AutoML) seeks to build ML models with minimal
human effort. While considerable research has been conducted in the area of
AutoML in general, aiming to take humans out of the loop when building
artificial intelligence (AI) applications, scant literature has focused on how
AutoML works well in open-environment scenarios such as the process of training
and updating large models, industrial supply chains or the industrial
metaverse, where people often face open-loop problems during the search
process: they must continuously collect data, update data and models, satisfy
the requirements of the development and deployment environment, support massive
devices, modify evaluation metrics, etc. Addressing the open-environment issue
with pure data-driven approaches requires considerable data, computing
resources, and effort from dedicated data engineers, making current AutoML
systems and platforms inefficient and computationally intractable.
Human-computer interaction is a practical and feasible way to tackle the
problem of open-environment AI. In this paper, we introduce OmniForce, a
human-centered AutoML (HAML) system that yields both human-assisted ML and
ML-assisted human techniques, to put an AutoML system into practice and build
adaptive AI in open-environment scenarios. Specifically, we present OmniForce
in terms of ML version management; pipeline-driven development and deployment
collaborations; a flexible search strategy framework; and widely provisioned
and crowdsourced application algorithms, including large models. Furthermore,
the (large) models constructed by OmniForce can be automatically turned into
remote services in a few minutes; this process is dubbed model as a service
(MaaS). Experimental results obtained in multiple search spaces and real-world
use cases demonstrate the efficacy and efficiency of OmniForce.
DAPHNE: An Open and Extensible System Infrastructure for Integrated Data Analysis Pipelines
Integrated data analysis (IDA) pipelines, which combine data management (DM) and query processing, high-performance computing (HPC), and machine learning (ML) training and scoring, are becoming increasingly common in practice. Interestingly, systems in these areas share many compilation and runtime techniques, and the increasingly heterogeneous hardware infrastructure they use is converging as well. Yet, the programming paradigms, cluster resource management, data formats and representations, and execution strategies differ substantially. DAPHNE is an open and extensible system infrastructure for such IDA pipelines, including language abstractions, compilation and runtime techniques, multi-level scheduling, hardware (HW) accelerators, and computational storage for increasing productivity and eliminating unnecessary overheads. In this paper, we make a case for IDA pipelines, describe the overall DAPHNE system architecture, its key components, and the design of a vectorized execution engine for computational storage, HW accelerators, and local and distributed operations. Preliminary experiments comparing DAPHNE with MonetDB, Pandas, DuckDB, and TensorFlow show promising results.
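To make the notion of an IDA pipeline concrete, the sketch below strings query-style selection (DM), a dense numerical kernel (HPC), and ML training and scoring into one small program. It is a plain NumPy stand-in for the kind of pipeline DAPHNE targets, not actual DAPHNE code; the data and model are hypothetical.

```python
# A deliberately tiny "integrated data analysis" pipeline:
# DM-style filtering + a dense kernel + ML training/scoring in one program.
import numpy as np

rng = np.random.default_rng(1)
records = rng.standard_normal((10_000, 5))          # synthetic fact table
labels = (records[:, :4].sum(axis=1) > 0).astype(float)

mask = records[:, 4] > 0.0                          # DM: selection/projection
X, y = records[mask, :4], labels[mask]

Xn = (X - X.mean(axis=0)) / X.std(axis=0)           # HPC: dense normalization
w = np.linalg.lstsq(Xn, y, rcond=None)[0]           # ML: train linear model
accuracy = ((Xn @ w > 0.5) == y).mean()             # ML: scoring
print(f"accuracy {accuracy:.2f}")
```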