Adaptive clustering procedure for continuous gravitational wave searches
In hierarchical searches for continuous gravitational waves, clustering of candidates is an important post-processing step because it reduces the number of noise candidates that are followed up at successive stages [1][7][12]. Previous clustering procedures bundled nearby candidates together, ascribing them to the same root cause (be it a signal or a disturbance), based on a predefined cluster volume. In this paper, we present a procedure that adapts the cluster volume to the data itself and checks that this volume is consistent with what is expected from a signal. This significantly improves the noise rejection capability at a fixed detection threshold and, at fixed computing resources for the follow-up stages, results in an overall more sensitive search. This new procedure was employed in the first Einstein@Home search on data from the first science run of the advanced LIGO detectors (O1) [11].

Comment: 11 pages, 9 figures, 2 tables; v1: initial submission; v2: journal review, copyedited version; v3: fixed typo in Fig
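As a toy illustration of the idea (not the paper's actual algorithm), the sketch below clusters 1-D candidate frequencies with a data-adaptive extent and then vetoes clusters whose extent is inconsistent with an assumed signal width; the `gap` and `max_extent` thresholds are hypothetical:

```python
# Toy sketch, NOT the paper's algorithm: the cluster extent adapts to
# the data (it grows while neighbouring candidates stay close) instead
# of using a predefined cluster volume; clusters broader than what a
# signal could produce are vetoed. All thresholds are hypothetical.

def adaptive_clusters(freqs, gap=0.03):
    """Group sorted frequencies; a new cluster starts whenever the
    gap to the previous candidate exceeds `gap`."""
    clusters, current = [], [freqs[0]]
    for f in freqs[1:]:
        if f - current[-1] <= gap:
            current.append(f)
        else:
            clusters.append(current)
            current = [f]
    clusters.append(current)
    return clusters

def signal_consistent(cluster, max_extent=0.05):
    """Veto clusters broader than the extent expected from a signal."""
    return cluster[-1] - cluster[0] <= max_extent

cands = sorted([100.000, 100.005, 100.012,          # narrow cluster
                100.500, 100.510,                   # narrow cluster
                102.00, 102.02, 102.04, 102.06,     # broad disturbance
                102.08, 102.10, 102.12])
clusters = adaptive_clusters(cands)
kept = [c for c in clusters if signal_consistent(c)]  # disturbance vetoed
```

Here the broad third cluster is rejected even though each of its members is close to a neighbour, which is the kind of noise rejection a fixed cluster volume cannot provide.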
Accelerated Iterative Algorithms with Asynchronous Accumulative Updates on a Heterogeneous Cluster
In recent years, with the exponential growth of web-based applications, the amount of data generated has increased tremendously. Quick and accurate analysis of this 'big data' is indispensable for making better business decisions and reducing operational costs. The challenges faced by modern-day data centers in processing big data are manifold: keeping up the pace of processing with increased data volume and data velocity, dealing with system scalability, and reducing energy costs. Today's data centers employ a variety of distributed computing frameworks running on clusters of commodity hardware, which include general-purpose processors, to process big data. Though better big data processing speeds have been achieved with existing distributed computing frameworks, there is still an opportunity to increase processing speed further. FPGAs, which are designed for computationally intensive tasks, are promising processing elements that can increase processing speed. In this thesis, we discuss how FPGAs can be integrated into a cluster of general-purpose processors running iterative algorithms to obtain high performance.
In this thesis, we designed a heterogeneous cluster comprising FPGAs and CPUs and ran various benchmarks such as PageRank, Katz, and Connected Components to measure the performance of the cluster. Performance improvement in terms of execution time was evaluated against a homogeneous cluster of general-purpose processors and a homogeneous cluster of FPGAs. We built multiple four-node heterogeneous clusters with different configurations by varying the number of CPUs and FPGAs.
We studied the effects of load balancing between CPUs and FPGAs. We obtained speedups of 20X, 11.5X, and 2X for the PageRank, Katz, and Connected Components benchmarks on a cluster configuration of 2 CPUs + 2 FPGAs with an unbalanced load ratio, against a 4-node homogeneous CPU cluster. We studied the effect of input graph partitioning and showed that when the input is a Multilevel-KL partitioned graph, we obtain improvements of 11%, 26%, and 9% over a randomly partitioned graph for the Katz, PageRank, and Connected Components benchmarks on a 2 CPU + 2 FPGA cluster.
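For reference, the PageRank benchmark mentioned above can be sketched as a minimal single-machine power iteration; the thesis runs this class of iterative algorithm distributed across CPU and FPGA workers with asynchronous accumulative updates, so this toy version only shows the iterative structure:

```python
# Minimal single-machine PageRank power iteration, for reference only;
# not the thesis implementation, which distributes the accumulative
# updates across heterogeneous CPU/FPGA worker nodes.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each node to its list of out-neighbours.
    Returns a dict of ranks summing to 1 (no dangling nodes assumed)."""
    n = len(links)
    rank = {v: 1.0 / n for v in links}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in links}
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share  # accumulate contributions from v
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)  # "c" outranks "b": it has more in-links
```

The inner accumulation loop is the part that parallelizes naturally: each worker can own a partition of the nodes and apply incoming rank contributions independently, which is why graph partitioning quality (random vs. Multilevel-KL) affects performance.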
The volume and Chern-Simons invariant of a Dehn-filled manifold
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Natural Sciences, Department of Mathematical Sciences, February 2019.

Based on the work of Neumann, Zickert gave a simplicial formula for computing the volume and Chern-Simons invariant of a boundary-parabolic \psl-representation of a compact 3-manifold with non-empty boundary. The main aim of this thesis is to introduce the notion of deformed Ptolemy assignments (or varieties) and to generalize Zickert's formula to a representation of a Dehn-filled manifold. We also generalize the potential function of Cho and Murakami by applying our formula to an octahedral decomposition of a link complement in the 3-sphere. Motivated by the work of Hikami and Inoue, we also clarify the relation between Ptolemy assignments and cluster variables when a link is given in a braid position. The last part is joint work with Jinseok Cho and Christian Zickert.

1 Introduction
1.1 Deformed Ptolemy assignments
1.1.1 Overview
1.2 Potential functions
1.2.1 Overview
1.3 Cluster variables
1.3.1 Overview
2 Preliminaries
2.1 Cocycles
2.2 Obstruction classes
3 Ptolemy varieties
3.1 Formulas of Neumann
3.2 Deformed Ptolemy varieties
3.2.1 Isomorphisms
3.2.2 Pseudo-developing maps
3.3 Flattenings
3.3.1 Main theorem
4 Potential functions
4.1 Generalized potential functions
4.1.1 Proof of Theorem 4.1.1
4.2 Relation with a Ptolemy assignment
4.2.1 Proof of Theorem 4.2.1
4.3 Complex volume formula
4.3.1 Proof of Theorem 4.3.1
5 Cluster variables
5.1 The Hikami-Inoue cluster variables
5.1.1 The octahedral decomposition
5.1.2 The Hikami-Inoue cluster variables
5.1.3 The obstruction cocycle
5.1.4 Proof of Theorem 1.3.2
5.2 The existence of a non-degenerate solution
5.2.1 Proof of Proposition 5.2.1
5.2.2 Explicit computation from a representation
Performance Evaluation of Apache Spark MLlib Algorithms on an Intrusion Detection Dataset
The increase in the use of the Internet and web services and the advent of the fifth generation of cellular network technology (5G), along with ever-growing Internet of Things (IoT) data traffic, will further increase global internet usage. To ensure the security of future networks, machine learning-based intrusion detection and prevention systems (IDPS) must be implemented to detect new attacks, and big data parallel processing tools can be used to handle the huge collections of training data in these systems. In this paper, Apache Spark, a fast, general-purpose cluster computing platform, is used for processing and training on a large volume of network traffic feature data. In this work, the most important features of the CSE-CIC-IDS2018 dataset are used for constructing machine learning models, and then the most popular machine learning approaches, namely Logistic Regression, Support Vector Machine (SVM), three different Decision Tree classifiers, and the Naive Bayes algorithm, are used to train models using up to eight worker nodes. Our Spark cluster contains seven machines acting as worker nodes and one machine configured as both a master and a worker. We use the CSE-CIC-IDS2018 dataset to evaluate the overall performance of these algorithms on Botnet attacks, and distributed hyperparameter tuning is used to find the best parameters for a single decision tree. We have achieved up to 100% accuracy using the selected features in our experiments.

Comment: Journal of Computing and Security (Isfahan University, Iran), Vol. 9, No. 1, 202
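The distributed hyperparameter tuning described above can be sketched in miniature: farm a parameter grid out to parallel workers and keep the best-scoring configuration. This is an illustrative pattern, not the paper's code; the `evaluate` function is a hypothetical stand-in for fitting and validating one decision tree (e.g. a Spark MLlib DecisionTreeClassifier on the CSE-CIC-IDS2018 features).

```python
# Illustrative pattern only, not the paper's code: a hyperparameter
# grid is scored by parallel workers and the best configuration kept.
# `evaluate` is a hypothetical stand-in for one train/validate run.

from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate(params):
    """Pretend validation score for one (max_depth, max_bins) pair:
    peaks at depth 10, improves marginally with more bins."""
    depth, bins = params
    return params, 1.0 - abs(depth - 10) / 20.0 + bins / 10000.0

grid = list(product([5, 10, 15, 20], [16, 32, 64]))  # 12 configurations

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(evaluate, grid))

best_params, best_score = max(results, key=lambda r: r[1])
```

In a Spark setting, each grid point would be dispatched to a worker node in the same way, so the wall-clock cost of the search shrinks roughly with the number of workers.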
Cross-level Validation of Topological Quantum Circuits
Quantum computing promises a new approach to solving difficult computational problems, and the quest to build a quantum computer has begun. While the first attempts at construction were successful, scalability has not yet been achieved, due to the inherently fragile nature of quantum bits (qubits). Among the multitude of approaches to achieving scalability, topological quantum computing (TQC) is the most promising, being based on a flexible approach to error correction and making use of the straightforward measurement-based computing technique. TQC circuits are defined within a large, uniform, 3-dimensional lattice of physical qubits produced by the hardware, and the physical volume of this lattice directly relates to the resources required for computation. Circuit optimization may result in non-intuitive mismatches between circuit specification and implementation. In this paper we introduce the first method for cross-level validation of TQC circuits. The specification of the circuit is expressed in the stabilizer formalism, and the stabilizer table is checked by mapping the topology onto the physical qubit level, followed by quantum circuit simulation. Simulation results show that cross-level validation of error-corrected circuits is feasible.

Comment: 12 pages, 5 figures. Comments welcome. RC2014, Springer Lecture Notes in Computer Science (LNCS) 8507, pp. 189-200. Springer International Publishing, Switzerland (2014), Y. Shigeru and M. Shin-ichi (Eds.)
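As a toy illustration of one consistency condition behind stabilizer-table checking (this is not the paper's validation pipeline): all generators in a stabilizer table must pairwise commute, and two Pauli strings commute iff they anticommute in an even number of qubit positions.

```python
# Toy consistency check, not the paper's cross-level validation: the
# generators of a stabilizer table must pairwise commute. Two Pauli
# strings commute iff they anticommute in an even number of positions.

def anticommute_1q(p, q):
    """Single-qubit Paulis anticommute iff both are non-identity and differ."""
    return p != "I" and q != "I" and p != q

def commute(s1, s2):
    """Pauli strings commute iff the anticommuting positions are even in number."""
    return sum(anticommute_1q(p, q) for p, q in zip(s1, s2)) % 2 == 0

# Stabilizer generators of the 3-qubit bit-flip code.
stabilizers = ["ZZI", "IZZ"]
consistent = all(commute(a, b) for a in stabilizers for b in stabilizers)
```

A full validation, as in the paper, goes further: it maps the specification's stabilizer table down to the physical qubit lattice and confirms by simulation that the implemented circuit preserves the same stabilizers.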