82 research outputs found
Bio-inspired computation for big data fusion, storage, processing, learning and visualization: state of the art and future directions
This overview focuses on research achievements that have recently emerged from the confluence between Big Data technologies and bio-inspired computation. A manifold of reasons can be identified for the profitable synergy between these two paradigms, all rooted in the adaptability, intelligence and robustness that biologically inspired principles can provide to technologies aimed at managing, retrieving, fusing and processing Big Data efficiently. We delve into this research field by first analyzing the existing literature in depth, with a focus on advances reported in the last few years. This literature analysis is complemented by an identification of the new trends and open challenges in Big Data that remain unsolved to date and that can be effectively addressed by bio-inspired algorithms. As a second contribution, this work elaborates on how bio-inspired algorithms need to be adapted for use in a Big Data context, in which data fusion becomes a crucial preliminary step to allow processing and mining of several, potentially heterogeneous data sources. This analysis allows exploring and comparing the scope and efficiency of existing approaches across different problems and domains, with the purpose of identifying new potential applications and research niches. Finally, this survey highlights open issues that remain unsolved to date in this research avenue, alongside a set of recommendations for future research.
This work has received funding support from the Basque Government (Eusko Jaurlaritza) through the Consolidated Research Group MATHMODE (IT1294-19), EMAITEK and ELKARTEK programs. D. Camacho also acknowledges support from the Spanish Ministry of Science and Education under grant PID2020-117263GB-100 (FightDIS), the Comunidad Autonoma de Madrid under grant S2018/TCS-4566 (CYNAMON), and the CHIST-ERA 2017 BDSI PACMEL project (PCI2019-103623, Spain).
MapReduce network enabled algorithms for classification based on association rules
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. There is growing evidence that integrating classification and association rule mining can produce more efficient and accurate classifiers than traditional techniques. This thesis introduces a new MapReduce-based association rule miner for extracting strong rules from large datasets. This miner is later used to develop a new large-scale classifier. A new MapReduce simulator was also developed to evaluate the scalability of the proposed algorithms on MapReduce clusters.
The developed association rule miner inherits MapReduce's scalability to huge datasets and to thousands of processing nodes. For finding frequent itemsets, it uses a hybrid approach combining miners that use counting methods on horizontal datasets with miners that use set intersections on vertical data formats. The new miner generates the same rules that are usually generated by Apriori-like algorithms because it uses the same confidence and support threshold definitions.
In the last few years, a number of associative classification algorithms have been proposed, e.g. CPAR, CMAR, MCAR, MMAC and others. This thesis also introduces a new MapReduce classifier based on MapReduce associative rule mining. This algorithm employs different approaches in its rule discovery, rule ranking, rule pruning, rule prediction and rule evaluation methods. The new classifier works on multi-class datasets and is able to produce multi-label predictions with probabilities for each predicted label. To evaluate the classifier, 20 different datasets from the UCI data collection were used. Results show that the proposed approach is an accurate and effective classification technique, highly competitive and scalable compared with other traditional and associative classification approaches.
The MapReduce simulator was developed to measure the scalability of MapReduce-based applications easily and quickly, and to capture the behaviour of algorithms on cluster environments. This also allows optimizing the configuration of MapReduce clusters to obtain better execution times and hardware utilization.
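The thesis's hybrid horizontal/vertical miner is not spelled out in the abstract, but the support/confidence filtering it shares with Apriori-like algorithms can be sketched as a toy single-machine illustration (not the thesis's MapReduce implementation; all names here are ours):

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Toy Apriori-style miner: count itemset supports, then keep
    rules A -> B whose support and confidence clear the thresholds."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = frozenset(t)
        for k in range(1, len(items) + 1):
            for sub in combinations(sorted(items), k):
                counts[frozenset(sub)] += 1
    # support(S) = fraction of transactions containing S
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    rules = []
    for itemset, supp in frequent.items():
        if len(itemset) < 2:
            continue
        for k in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(sorted(itemset), k)):
                # every subset of a frequent itemset is itself frequent,
                # so the antecedent's support is always available
                conf = supp / frequent[antecedent]
                if conf >= min_confidence:
                    rules.append((set(antecedent), set(itemset - antecedent),
                                  supp, conf))
    return rules

rules = mine_rules([["bread", "milk"], ["bread", "milk", "eggs"], ["bread"]],
                   min_support=0.6, min_confidence=0.9)
```

In a MapReduce setting the support counting would be distributed: mappers emit candidate itemsets per transaction split and reducers sum their counts, but the threshold definitions above are unchanged.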
Easily parallelizable statistical computing methodology and its application in modern high-performance computing environments
Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Statistics, College of Natural Sciences, August 2020.
Technological advances in the past decade, hardware and software alike, have made access to high-performance computing (HPC) easier than ever. In this dissertation, easily-parallelizable, inversion-free, and variable-separated algorithms and their implementation in statistical computing are discussed. The first part considers statistical estimation problems under structured sparsity, posed as minimization of a sum of two or three convex functions, one of which is a composition of non-smooth and linear functions. Examples include graph-guided sparse fused lasso and overlapping group lasso. Two classes of inversion-free primal-dual algorithms are considered and unified from the perspective of monotone operator theory. From this unification, a continuum of preconditioned forward-backward operator splitting algorithms amenable to parallel and distributed computing is proposed. The unification is further exploited to introduce a continuum of accelerated algorithms for which the theoretically optimal asymptotic rate of convergence is obtained. For the second part, easy-to-use distributed matrix data structures in PyTorch and Julia are presented. They enable users to write code once and run it anywhere from a laptop to a workstation with multiple graphics processing units (GPUs) or a supercomputer in a cloud. With these data structures, various parallelizable statistical applications, including nonnegative matrix factorization, positron emission tomography, multidimensional scaling, and ℓ1-regularized Cox regression, are demonstrated. The examples scale up to an 8-GPU workstation and a 720-CPU-core cluster in a cloud. As a case in point, the onset of type-2 diabetes in the UK Biobank, with 400,000 subjects and about 500,000 single nucleotide polymorphisms, is analyzed using the HPC ℓ1-regularized Cox regression. Fitting a half-million-variate model took about 50 minutes, reconfirming known associations.
To my knowledge, this is the first demonstration of the feasibility of a joint genome-wide association analysis of survival outcomes at this scale.

Chapter 1 Prologue
1.1 Introduction
1.2 Accessible High-Performance Computing Systems
1.2.1 Preliminaries
1.2.2 Multiple CPU nodes: clusters, supercomputers, and clouds
1.2.3 Multi-GPU node
1.3 Highly Parallelizable Algorithms
1.3.1 MM algorithms
1.3.2 Proximal gradient descent
1.3.3 Proximal distance algorithm
1.3.4 Primal-dual methods
Chapter 2 Easily Parallelizable and Distributable Class of Algorithms for Structured Sparsity, with Optimal Acceleration
2.1 Introduction
2.2 Unification of Algorithms LV and CV (g ≡ 0)
2.2.1 Relation between Algorithms LV and CV
2.2.2 Unified algorithm class
2.2.3 Convergence analysis
2.3 Optimal acceleration
2.3.1 Algorithms
2.3.2 Convergence analysis
2.4 Stochastic optimal acceleration
2.4.1 Algorithm
2.4.2 Convergence analysis
2.5 Numerical experiments
2.5.1 Model problems
2.5.2 Convergence behavior
2.5.3 Scalability
2.6 Discussion
Chapter 3 Towards Unified Programming for High-Performance Statistical Computing Environments
3.1 Introduction
3.2 Related Software
3.2.1 Message-passing interface and distributed array interfaces
3.2.2 Unified array interfaces for CPU and GPU
3.3 Easy-to-use Software Libraries for HPC
3.3.1 Deep learning libraries and HPC
3.3.2 Case study: PyTorch versus TensorFlow
3.3.3 A brief introduction to PyTorch
3.3.4 A brief introduction to Julia
3.3.5 Methods and multiple dispatch
3.3.6 Multidimensional arrays
3.3.7 Matrix multiplication
3.3.8 Dot syntax for vectorization
3.4 Distributed matrix data structure
3.4.1 Distributed matrices in PyTorch: distmat
3.4.2 Distributed arrays in Julia: MPIArray
3.5 Examples
3.5.1 Nonnegative matrix factorization
3.5.2 Positron emission tomography
3.5.3 Multidimensional scaling
3.5.4 ℓ1-regularized Cox regression
3.5.5 Genome-wide survival analysis of the UK Biobank dataset
3.6 Discussion
Chapter 4 Conclusion
Appendix A Monotone Operator Theory
Appendix B Proofs for Chapter 2
B.1 Preconditioned forward-backward splitting
B.2 Optimal acceleration
B.3 Optimal stochastic acceleration
Appendix C AWS EC2 and ParallelCluster
C.1 Overview
C.2 Glossary
C.3 Prerequisites
C.4 Installation
C.5 Configuration
C.6 Creating, accessing, and destroying the cluster
C.7 Installation of libraries
C.8 Running a job
C.9 Miscellaneous
Appendix D Code for memory-efficient ℓ1-regularized Cox proportional hazards model
Appendix E Details of SNPs selected in ℓ1-regularized Cox regression
Bibliography
Abstract in Korean
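The proximal gradient descent listed in the contents above (§1.3.2) can be illustrated on the lasso, a textbook instance of the sparse-regression problems the dissertation treats. This is a generic NumPy sketch with illustrative names, not the dissertation's code:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient descent (ISTA) for the lasso problem
    minimize 0.5 * ||A x - b||^2 + lam * ||x||_1.
    Inversion-free: each iteration needs only matrix-vector products."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true                               # noiseless observations
x_hat = ista(A, b, lam=0.1)                  # recovers a sparse x close to x_true
```

The soft-thresholding step is what makes the method both parallelizable and inversion-free: it acts elementwise, so no linear system is ever solved.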
Personal mobile grids with a honeybee inspired resource scheduler
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The overall aim of the thesis has been to introduce Personal Mobile Grids (PM-Grids) as a novel paradigm in grid computing that scales grid infrastructures to mobile devices and extends grid entities to individual personal users. In this thesis, architectural designs as well as simulation models for PM-Grids are developed.
The core of any grid system is its resource scheduler. However, virtually all current conventional grid schedulers do not address the non-clairvoyant scheduling problem, where job information is not available before the end of execution. Therefore, this thesis proposes a honeybee-inspired resource scheduling heuristic for PM-Grids (HoPe), incorporating a radical approach to grid resource scheduling to tackle this problem. A detailed design and implementation of HoPe, with a decentralised self-management and adaptive policy, are presented.
Among the other main contributions are a comprehensive taxonomy of grid systems as well as a detailed analysis of the honeybee colony and its nectar acquisition process (NAP), from the resource scheduling perspective, which have not been presented in any previous work, to the best of our knowledge.
PM-Grid designs and the HoPe implementation were evaluated thoroughly through a strictly controlled empirical evaluation framework, with a well-established heuristic in high-throughput computing, the opportunistic scheduling heuristic (OSH), as a benchmark algorithm. Comparisons with optimal values and worst bounds were conducted to gain a clear insight into HoPe's behaviour, in terms of stability, throughput, turnaround time and speedup, under different running conditions of number of jobs and grid scales.
Experimental results demonstrate the superiority of HoPe's performance: it successfully maintained optimum stability and throughput in more than 95% of the experiments, performing three times better than the OSH under extremely heavy loads. Regarding turnaround time and speedup, HoPe effectively achieved less than 50% of the turnaround time incurred by the OSH, while doubling its speedup in more than 60% of the experiments.
These results indicate the potential of both PM-Grids and HoPe in realising futuristic grid visions. Therefore, the deployment of PM-Grids in real-life scenarios and the utilisation of HoPe in other parallel processing and high-throughput computing systems are recommended.
Functional programming languages in computing clouds: practical and theoretical explorations
Cloud platforms must integrate three pillars: messaging, coordination of workers, and data. This research investigates whether functional programming languages have any special merit when it comes to the implementation of cloud computing platforms. This thesis presents the lightweight message queue CMQ and the DSL CWMWL for the coordination of workers, which we use as artefacts to prove or disprove the special merit of functional programming languages in computing clouds. We detail the design and implementation with the broad aim of matching the notions and requirements of computing clouds. Our evaluation is based on criteria built on a series of comprehensive rationales and specifics that allow the functional programming language (FPL) Haskell to be thoroughly analysed. We find that Haskell is excellent for use cases that do not require the distribution of the application across the boundaries of (physical or virtual) systems, but not appropriate as a whole for the development of distributed, cloud-based workloads that require communication with remote systems and coordination of decoupled workloads. However, Haskell may qualify as a suitable vehicle in the future, given further development of formal mechanisms that embrace non-determinism in the underlying distributed environments, leading to applications that are anti-fragile rather than ones that insist on strict determinism, which can only be guaranteed on the local system or via slow, blocking communication mechanisms.
CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap
After addressing the state of the art during the first year of CHORUS and establishing the existing landscape in multimedia search engines, we identified and analysed gaps in the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements they pose for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and "enablers", which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.
European Language Grid
This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030: to create a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.
Building models from multiple point sets with kernel density estimation
One of the fundamental problems in computer vision is point set registration. Point set registration finds use in many important applications and, in particular, can be considered one of the crucial stages in the reconstruction of models of physical objects and environments from depth sensor data. Globally aligning multiple point sets, representing spatial shape measurements from varying sensor viewpoints, into a common frame of reference is a complex task, and an imperative one given the large number of critical functions to which accurate and reliable model reconstructions contribute.
In this thesis we focus on improving the quality and feasibility of model and environment reconstruction through the enhancement of multi-view point set registration techniques. The thesis makes the following contributions. First, we demonstrate that employing kernel density estimation to reason about the unknown generating surfaces that range sensors measure allows us to express measurement variability and uncertainty, and to separate the problems of model design and viewpoint alignment optimisation. Our surface estimates define novel view-alignment objective functions that inform the registration process, and the surfaces themselves can be estimated from point clouds in a data-driven fashion. Through experiments on a variety of datasets we demonstrate that we have developed a novel and effective solution to the simultaneous multi-view registration problem.
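The density-based alignment idea can be illustrated with a Gaussian kernel density estimate: score a candidate transform by how likely one view's points are under a KDE built from a reference view. This is a generic sketch under our own simplifying assumptions (2-D points, fixed bandwidth, rotation-only misalignment), not the thesis's objective function:

```python
import numpy as np

def kde_log_likelihood(points, model, bandwidth=0.1):
    """Mean log-density of `points` under a Gaussian KDE built from `model`.
    Higher values indicate better overlap between the two point sets."""
    d = points.shape[1]
    # Pairwise squared distances between query points and model points.
    sq = ((points[:, None, :] - model[None, :, :]) ** 2).sum(axis=-1)
    norm = (2 * np.pi * bandwidth ** 2) ** (d / 2) * len(model)
    dens = np.exp(-sq / (2 * bandwidth ** 2)).sum(axis=1) / norm
    return float(np.log(dens + 1e-300).mean())

# Score a candidate alignment of one scan against a reference view.
rng = np.random.default_rng(1)
ref = rng.uniform(size=(200, 2))            # points sampled from a reference view
theta = 0.3                                 # a misalignment of 0.3 radians
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
scan = ref @ R.T                            # the same surface seen from a rotated pose
score_bad = kde_log_likelihood(scan, ref)       # misaligned pose
score_good = kde_log_likelihood(scan @ R, ref)  # rotation undone
```

An optimiser would search over the transform parameters (here, theta) to maximise this score; the thesis's framework does so jointly over all viewpoints.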
We then focus on constructing a distributed computation framework capable of solving generic high-throughput computational problems. We present a novel task-farming model that we call Semi-Synchronised Task Farming (SSTF), capable of modelling and subsequently solving computationally distributable problems that benefit from both independent and dependent distributed components and a level of communication between process elements. We demonstrate that this framework is a novel schema for parallel computer vision algorithms and evaluate its performance to establish computational gains over serial implementations. We couple this framework with an accurate computation-time prediction model to contribute a novel structure appropriate for addressing expensive real-world algorithms with substantial parallel performance and predictable time savings.
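The SSTF schema itself is not reproduced here, but the basic farm-then-combine pattern it extends (independent tasks distributed to workers, a synchronisation point, then a dependent aggregation stage) can be sketched in Python; threads stand in for distributed process elements, and all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def task(chunk):
    """An independent work unit; a toy sum of squares stands in
    for an expensive per-view computation."""
    return sum(x * x for x in chunk)

def semi_sync_farm(chunks, combine):
    """Farm independent chunks to workers, synchronise on completion,
    then run the dependent combine stage on the partial results."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(task, chunks))  # map() gathers results in order
    return combine(partials)                     # dependent stage after the barrier

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]          # split work into 4 independent chunks
total = semi_sync_farm(chunks, sum)
```

The `combine` stage is where dependent components and inter-process communication would live in a full SSTF deployment; a computation-time model for `task` would drive chunk sizing.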
Finally, we focus on a timely instance of the multi-view registration problem: modern range sensors provide large numbers of viewpoint samples that result in an abundance of depth data. The ability to utilise this abundance of depth data in a feasible and principled fashion is important to many emerging application areas making use of spatial information. We develop novel methodology for the registration of depth measurements acquired from many viewpoints capturing physical object surfaces. By defining registration and alignment quality metrics based on our density estimation framework, we construct an optimisation methodology that implicitly considers all viewpoints simultaneously. We use a non-parametric, data-driven approach to accommodate varying object complexity and to guide large view-set spatial transform optimisations. By aligning large numbers of partial, arbitrary-pose views, we evaluate this strategy quantitatively on large view-set range sensor data, finding that we can improve registration accuracy over existing methods and increase registration robustness to the magnitude of the coarse seed alignment. This allows large-scale registration on problem instances exhibiting varying object complexity, with the added advantage of massive parallel efficiency.
- …