Realtime processing of LOFAR data for the detection of nano-second pulses from the Moon
The low flux of ultra-high-energy cosmic rays (UHECR) at the highest energies makes it
challenging to answer the long-standing question about their origin and nature. Even lower
fluxes of neutrinos with energies above eV are predicted by certain Grand Unified Theories
(GUTs) and, e.g., models of super-heavy dark matter (SHDM). The significant increase in
detector volume required to detect these particles can be achieved by using current and
future radio telescopes to search for the nano-second radio pulses that are emitted when a
particle interacts in the Earth's Moon.
In this contribution we present the design of an online analysis and trigger
pipeline for the detection of nano-second pulses with the LOFAR radio
telescope. The most important steps of the processing pipeline are digital
focusing of the antennas towards the Moon, correction of the signal for
ionospheric dispersion, and synthesis of the time-domain signal from the
polyphase-filtered signal in the frequency domain. The implementation of the pipeline on a
GPU/CPU cluster will be discussed together with the computing performance of the prototype.
Comment: Proceedings of the 22nd International Conference on Computing in High
Energy and Nuclear Physics (CHEP2016), US
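
As an illustration of the dispersion-correction and time-domain-synthesis steps described in this abstract, the following Python sketch applies an inverse ionospheric phase to channelised voltage data and synthesises a time series. The constant, the STEC value, the band and the array shapes are assumptions for illustration only, not the parameters of the actual LOFAR pipeline.

```python
import numpy as np

# Minimal sketch: remove ionospheric dispersion from a channelised voltage
# stream by applying the inverse dispersive phase per frequency channel,
# then synthesise a time-domain series with an inverse FFT.
K_ION = 8.45e-7          # rad * Hz * m^2 / electron (approximate, assumed)

def dedisperse(spectra, freqs_hz, stec):
    """spectra: complex array (n_blocks, n_channels) from the polyphase filter.
    freqs_hz: centre frequency of each channel.
    stec: slant total electron content towards the Moon (electrons / m^2)."""
    phase = K_ION * stec / freqs_hz           # dispersive phase per channel
    correction = np.exp(-1j * phase)          # inverse of the ionospheric phase
    return spectra * correction

def to_time_domain(spectra):
    """Synthesise a real-valued time series from the corrected channel spectra."""
    return np.fft.irfft(spectra, axis=-1)

# Example with random data standing in for LOFAR subband voltages (assumed shapes).
rng = np.random.default_rng(0)
freqs = np.linspace(110e6, 190e6, 512)        # HBA-like band, assumed
spectra = rng.normal(size=(64, 512)) + 1j * rng.normal(size=(64, 512))
clean = to_time_domain(dedisperse(spectra, freqs, stec=1e17))
print(clean.shape)
```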
HyperTransport Over Ethernet - A Scalable, Commodity Standard for Resource Sharing in the Data Center
Future data center configurations are driven by total cost of ownership (TCO) for specific performance capabilities. Low-latency interconnects are central to performance, while the use of commodity interconnects is central to cost. This paper reports on an effort to combine a very high-performance, commodity interconnect (HyperTransport) with a high-volume interconnect (Ethernet). Previous approaches to extending HyperTransport (HT) over a cluster used custom FPGA cards [5] and proprietary extensions to coherence schemes [22], but these solutions have mainly been adopted in research-oriented clusters. The new HyperShare strategy from the HyperTransport Consortium proposes several new ways to create low-cost, commodity clusters that can support scalable high-performance computing in either clusters or the data center. HyperTransport over Ethernet (HToE) is the newest specification in the HyperShare strategy and aims to combine favorable market trends with a high-bandwidth, low-latency hardware solution for non-coherent sharing of resources in a cluster. This paper illustrates the motivation behind using 10, 40, or 100 Gigabit Ethernet as an encapsulation layer for HyperTransport, the requirements for the HToE specification, and engineering solutions for implementing key portions of the specification.
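
To make the encapsulation idea concrete, the sketch below wraps an opaque HT packet as the payload of a raw Ethernet frame. The EtherType value and the header layout are placeholders chosen for illustration; they are not the fields defined by the HToE specification.

```python
import struct

# Placeholder EtherType from the IEEE local-experimental range; the real HToE
# EtherType and framing are defined by the specification, not reproduced here.
ETHERTYPE_HTOE = 0x88B5

def encapsulate(dst_mac: bytes, src_mac: bytes, ht_packet: bytes) -> bytes:
    """Build a raw Ethernet frame whose payload is an HT packet (carried opaquely)."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ETHERTYPE_HTOE)
    return header + ht_packet

frame = encapsulate(b"\xff" * 6, b"\x02" + b"\x00" * 5, b"\x00" * 64)
print(len(frame))   # 14-byte Ethernet header + 64-byte HT payload
```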
Cross-level Validation of Topological Quantum Circuits
Quantum computing promises a new approach to solving difficult computational
problems, and the quest to build a quantum computer has started. While the first attempts at
construction were successful, scalability has never been achieved, due to the inherently
fragile nature of quantum bits (qubits). Of the multitude of approaches to achieving
scalability, topological quantum computing (TQC) is the most promising, as it is based on a
flexible approach to error correction and makes use of the straightforward measurement-based
computing technique. TQC circuits are defined within a large, uniform, 3-dimensional lattice
of physical qubits produced by the hardware, and the physical volume of this lattice directly
relates to the resources required
for computation. Circuit optimization may result in non-intuitive mismatches
between circuit specification and implementation. In this paper we introduce
the first method for cross-level validation of TQC circuits. The specification
of the circuit is expressed in the stabilizer formalism, and the stabilizer table is checked
by mapping the topology to the physical-qubit level, followed by quantum circuit simulation.
Simulation results show that cross-level validation of error-corrected circuits is feasible.
Comment: 12 pages, 5 figures. Comments welcome. RC2014, Springer Lecture Notes
in Computer Science (LNCS) 8507, pp. 189-200. Springer International
Publishing, Switzerland (2014), Y. Shigeru and M. Shin-ichi (Eds.)
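
The core validation idea, checking that a simulated circuit satisfies its expected stabilizer table, can be sketched on a toy example. The Bell-pair circuit and stabilizers below are illustrative assumptions; the paper applies the same kind of check to error-corrected TQC circuits mapped to the physical-qubit lattice.

```python
import numpy as np

# Single-qubit operators and a two-qubit CNOT (control = left/first qubit).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Circuit: |00> -> H on qubit 0 -> CNOT(0, 1), producing the Bell state.
state = np.zeros(4, dtype=complex)
state[0] = 1.0
state = CNOT @ kron(H, I) @ state

# Expected stabilizer table of the Bell pair: {XX, ZZ}.
stabilizers = [kron(X, X), kron(Z, Z)]
ok = all(np.allclose(S @ state, state) for S in stabilizers)
print("stabilizer table satisfied:", ok)
```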
Cluster Evaluation of Density Based Subspace Clustering
Clustering real-world data is often confronted with the curse of dimensionality, as
real-world data typically consist of many dimensions. Multidimensional data clustering can be
evaluated through a density-based approach. Density approaches are based on the paradigm
introduced by DBSCAN clustering: the density of each object's neighbourhood, defined by
MinPoints, is calculated, and clusters change in accordance with changes in the density of
each object's neighbourhood. The neighbours of each object are typically determined using a
distance function, for example the Euclidean distance. In this paper the SUBCLU, FIRES and
INSCY methods will be applied to clustering 6x1595-dimension synthetic datasets. IO Entropy,
F1 measure, coverage, accuracy and time consumption are used as evaluation performance
parameters. Evaluation results show that the SUBCLU method requires considerable time to
perform subspace clustering; however, its coverage value is better. Meanwhile, the INSCY
method gives better accuracy than the two other methods, although as a consequence its
computation time is longer.
Comment: 6 pages, 15 figures
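
The density notion shared by the compared methods can be sketched as a DBSCAN-style core-object test restricted to a candidate subspace: an object is dense if its eps-neighbourhood, measured with the Euclidean distance over that subspace's dimensions only, contains at least MinPoints objects. The parameter values and the chosen subspace below are illustrative assumptions.

```python
import numpy as np

def eps_neighbourhood(data, idx, subspace, eps):
    """Indices of points within eps of data[idx], using only the subspace dimensions."""
    diffs = data[:, subspace] - data[idx, subspace]
    dists = np.sqrt((diffs ** 2).sum(axis=1))
    return np.flatnonzero(dists <= eps)

def is_core_object(data, idx, subspace, eps, min_points):
    """DBSCAN-style density test: enough neighbours within eps in this subspace."""
    return len(eps_neighbourhood(data, idx, subspace, eps)) >= min_points

rng = np.random.default_rng(1)
data = rng.normal(size=(1595, 6))   # 6 x 1595 synthetic data, as in the paper
subspace = [0, 2]                   # candidate subspace (assumed)
print(is_core_object(data, idx=0, subspace=subspace, eps=0.5, min_points=10))
```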
Efficient Resource Matching in Heterogeneous Grid Using Resource Vector
In this paper, a method for efficient scheduling to obtain optimum job throughput in a
distributed campus grid environment is presented. Traditional job schedulers make scheduling
decisions using user and job resource attributes.
User attributes are related to current usage, historical usage, user priority
and project access. Job resource attributes mainly comprise soft requirements (compilers,
libraries) and hard requirements such as memory, storage
and interconnect. A job scheduler dispatches jobs to a resource if a job's hard
and soft requirements are met by a resource. In the current scenario, if a resource becomes
unavailable during the execution of a job, schedulers are presented with limited options,
namely re-queuing the job or migrating it to a different resource. Both options are expensive
in terms of data and compute time. These situations can be avoided if an often-ignored
factor, the availability time of a resource in a grid environment, is considered. We propose
a resource-rank approach, in which jobs are dispatched to the resource with the highest rank
among all resources that match the job's requirements. The results show that our approach can
increase the throughput of many serial/monolithic jobs.
Comment: 10 pages
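
A minimal sketch of the matching idea follows: among resources that satisfy a job's hard and soft requirements, dispatch to the one with the highest rank, where the rank favours resources whose remaining availability time covers the job's expected runtime. The field names and the rank formula are assumptions for illustration, not the paper's resource-vector definition.

```python
def matches(job, resource):
    # Hard requirements (memory, storage) and soft requirements (software) must be met.
    hard_ok = (resource["memory_gb"] >= job["memory_gb"]
               and resource["storage_gb"] >= job["storage_gb"])
    soft_ok = job["software"].issubset(resource["software"])
    return hard_ok and soft_ok

def rank(job, resource):
    # Prefer resources that stay available well beyond the job's expected runtime.
    return resource["availability_hours"] - job["runtime_hours"]

def dispatch(job, resources):
    candidates = [r for r in resources if matches(job, r)]
    eligible = [r for r in candidates if rank(job, r) >= 0]
    return max(eligible, key=lambda r: rank(job, r)) if eligible else None

job = {"memory_gb": 8, "storage_gb": 50, "runtime_hours": 4,
       "software": {"gcc", "openmpi"}}
resources = [
    {"name": "nodeA", "memory_gb": 16, "storage_gb": 100,
     "availability_hours": 3, "software": {"gcc", "openmpi"}},
    {"name": "nodeB", "memory_gb": 32, "storage_gb": 200,
     "availability_hours": 24, "software": {"gcc", "openmpi", "mkl"}},
]
print(dispatch(job, resources)["name"])   # nodeB: meets requirements, longest availability
```

In this toy example nodeA matches the job's requirements but is excluded because its remaining availability does not cover the runtime, which is exactly the failure case the availability-aware ranking is meant to avoid.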