Spectroscopy of Giant Stars in the Pyxis Globular Cluster
Pyxis is a recently discovered globular cluster that lies in the outer halo (R_{gc} ~ 40 kpc) of the Milky Way. Pyxis lies along one of the proposed orbital planes of the Large Magellanic Cloud (LMC), and it has been proposed to be a detached LMC globular cluster captured by the Milky Way. We present the first measurement of the radial velocity of the Pyxis globular cluster, based on spectra of six Pyxis giant stars. The mean heliocentric radial velocity is ~ 36 km/s, and the corresponding velocity of Pyxis with respect to a stationary observer at the position of the Sun is ~ -191 km/s. Assuming the cluster is bound to the Milky Way, this radial velocity is a large enough fraction of its expected total space velocity to place strict limits on the transverse velocities Pyxis could have if it still shares, or nearly shares, an orbital pole with the LMC. We can rule out that Pyxis is on a near-circular orbit if it is Magellanic debris, but we cannot rule out an eccentric orbit associated with the LMC. We have calculated the range of allowed proper motions for the Pyxis globular cluster that place its orbital pole within 15 degrees of the present orbital pole of the LMC and that are consistent with our measured radial velocity, but verification of the tidal capture hypothesis must await a proper motion measurement from the Space Interferometry Mission or HST. A spectroscopic metallicity estimate of [Fe/H] = -1.4 +/- 0.1 is determined for Pyxis from several spectra of its brightest giant; this is consistent with photometric determinations of the cluster metallicity from isochrone fitting.
Comment: 22 pages, 5 figures, aaspp4 style, accepted for publication in the October 2000 issue of the PASP
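For context on the two quoted velocities: converting a heliocentric radial velocity to the velocity seen by a stationary observer at the Sun's position (the Galactic standard of rest) amounts to adding the line-of-sight projection of the Sun's motion about the Galaxy. The sketch below uses generic notation and commonly adopted solar-motion values, which need not be the values the authors adopted.

```latex
% Heliocentric to Galactic-standard-of-rest radial velocity (generic sketch).
% (l, b) are the Galactic coordinates of the object; U_sun, V_sun, W_sun are the
% solar peculiar-motion components and Theta_0 the LSR rotation speed -- the
% numerical values below are commonly used assumptions, not the paper's.
v_{\mathrm{GSR}} \;=\; v_{\mathrm{helio}}
  \;+\; U_{\odot}\cos b \cos l
  \;+\; \bigl(\Theta_{0} + V_{\odot}\bigr)\cos b \sin l
  \;+\; W_{\odot}\sin b ,
\qquad
(U_{\odot}, V_{\odot}, W_{\odot}) \approx (10, 5, 7)\ \mathrm{km\,s^{-1}},
\quad \Theta_{0} \approx 220\ \mathrm{km\,s^{-1}} .
```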
Analyzing data properties using statistical sampling techniques – illustrated on scientific file formats and compression features
Understanding the characteristics of data stored in data centers helps computer scientists identify the most suitable storage infrastructure to deal with these workloads. For example, knowing the relevance of file formats allows optimizing for the relevant formats, and it also helps during procurement to define benchmarks that cover those formats. Existing studies that investigate performance improvements and data-reduction techniques such as deduplication and compression operate on a small set of data. Some of those studies claim that the selected data is representative and scale their results to the size of the data center. One hurdle to running novel schemes on the complete data is the vast amount of data stored and, thus, the resources required to analyze the complete data set. Even if this were feasible, the cost of running many of those experiments would have to be justified.
This paper investigates stochastic sampling methods to compute and analyze quantities of interest both for file counts and for the occupied storage space. It is demonstrated that on our production system, scanning 1% of files and data volume is sufficient to draw conclusions. This speeds up the analysis process and reduces the cost of such studies significantly. The contributions of this paper are: (1) the systematic investigation of the inherent analysis error when operating only on a subset of data, (2) the demonstration of methods that help future studies mitigate this error, and (3) the illustration of the approach with a study of scientific file types and compression for a data center.
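To make the sampling approach concrete, the sketch below estimates per-type file counts and the total occupied storage from a 1% simple random sample of files, together with a rough standard error for the size estimate. The helper name `sample_file_stats` and the synthetic population are illustrative assumptions, not taken from the paper.

```python
import math
import random
from collections import Counter

def sample_file_stats(files, fraction=0.01, seed=0):
    """Estimate per-type file counts and total size from a random sample.

    `files` is a list of (file_type, size_in_bytes) tuples; `fraction` is the
    sampled share of files (e.g. 0.01 for the 1% discussed in the paper).
    """
    rng = random.Random(seed)
    n = len(files)
    k = max(1, int(n * fraction))
    sample = rng.sample(files, k)

    # Scale sample counts up to population estimates.
    scale = n / k
    type_counts = Counter(ftype for ftype, _ in sample)
    est_counts = {ftype: c * scale for ftype, c in type_counts.items()}

    # Estimate total occupied storage and an approximate standard error
    # (simple random sampling without replacement, rough finite-population correction).
    sizes = [size for _, size in sample]
    mean = sum(sizes) / k
    var = sum((s - mean) ** 2 for s in sizes) / (k - 1) if k > 1 else 0.0
    est_total = n * mean
    se_total = n * math.sqrt(var / k) * math.sqrt(1 - k / n)
    return est_counts, est_total, se_total

# Hypothetical usage: 100,000 files with made-up types and sizes.
population = [(random.choice(["nc", "grib", "hdf5", "txt"]),
               random.randint(1_000, 10_000_000)) for _ in range(100_000)]
counts, total, se = sample_file_stats(population, fraction=0.01)
print(counts, f"total ~ {total / 1e9:.1f} GB +/- {se / 1e9:.1f} GB")
```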
Analyzing data properties using statistical sampling: illustrated on scientific file formats
Understanding the characteristics of data stored in data centers helps computer scientists identify the most suitable storage infrastructure to deal with these workloads. For example, knowing the relevance of file formats allows optimizing for the relevant formats, and it also helps during procurement to define benchmarks that cover those formats. Existing studies that investigate performance improvements and data-reduction techniques such as deduplication and compression operate on a subset of data. Some of those studies claim that the selected data is representative and scale their results to the size of the data center. One hurdle to running novel schemes on the complete data is the vast amount of data stored and, thus, the resources required to analyze the complete data set. Even if this were feasible, the cost of running many of those experiments would have to be justified.
This paper investigates stochastic sampling methods to compute and analyze quantities of interest both for file counts and for the occupied storage space. It is demonstrated that on our production system, scanning 1% of files and data volume is sufficient to draw conclusions. This speeds up the analysis process and reduces the cost of such studies significantly.
Cellular and molecular mechanisms underlying muscular dystrophy
The muscular dystrophies are a group of heterogeneous genetic diseases characterized by progressive degeneration and weakness of skeletal muscle. Since the discovery of the first muscular dystrophy gene, encoding dystrophin, a large number of genes have been identified that are involved in various muscle-wasting and neuromuscular disorders. Human genetic studies complemented by animal model systems have substantially contributed to our understanding of the molecular pathomechanisms underlying muscle degeneration. Moreover, these studies have revealed distinct molecular and cellular mechanisms that link genetic mutations to diverse muscle-wasting phenotypes.
Predicting I/O performance in HPC using artificial neural networks
The prediction of file access times is an important part of modeling a supercomputer's storage system. Such models can be used to develop analysis tools that help users achieve efficient I/O behavior.
In this paper, we analyze and predict the access times of a Lustre file system from the client perspective. To this end, we measured file access times in various test series and developed different models for predicting access times. The evaluation shows that for models utilizing artificial neural networks, the average prediction error is about 30% smaller than for linear models. A phenomenon in the distribution of file access times is of particular interest: file accesses with identical parameters show several typical access times. These typical access times usually differ by orders of magnitude and can be explained by a different processing of the file accesses in the storage system - an alternative I/O path. We investigate a method to automatically determine the alternative I/O path and quantify the significance of knowledge about the internal processing. It is shown that the prediction error is reduced significantly with this approach.
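To illustrate the kind of comparison being made, the sketch below fits a linear regression and a small feed-forward neural network to synthetic (access size, access pattern) to access-time data and reports their mean relative errors. The features, network size, and the synthetic data generator (including the simulated "alternative I/O path") are assumptions for demonstration only, not the configuration or measurements used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for measured file accesses: features are the access size (KiB)
# and a 0/1 flag for random vs. sequential access. The "measured" time mixes two
# regimes to mimic a faster alternative I/O path for a fraction of accesses.
n = 5_000
size_kib = rng.uniform(4, 4096, n)
is_random = rng.integers(0, 2, n)
fast_path = rng.random(n) < 0.3
slow = 0.2 + 0.002 * size_kib + 0.5 * is_random   # slow-path time in ms
fast = 0.05 + 0.0001 * size_kib                   # fast-path time in ms
time_ms = np.where(fast_path, fast, slow)
X = np.column_stack([size_kib, is_random])

X_tr, X_te, y_tr, y_te = train_test_split(X, time_ms, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=2000, random_state=0)).fit(X_tr, y_tr)

def mean_relative_error(model, X, y):
    """Average of |predicted - measured| / measured, used to compare the models."""
    return float(np.mean(np.abs(model.predict(X) - y) / y))

print("linear model:", mean_relative_error(linear, X_te, y_te))
print("neural net:  ", mean_relative_error(mlp, X_te, y_te))
```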
Potential of I/O aware workflows in climate and weather
The efficient, convenient, and robust execution of data-driven workflows and enhanced data management are essential for productivity in scientific computing. In HPC, the concerns of storage and computing are traditionally separated and optimised independently from each other and from the needs of the end-to-end user. However, in complex workflows this is becoming problematic. These problems are particularly acute in climate and weather workflows, which, as well as becoming increasingly complex and exploiting deep storage hierarchies, can involve multiple data centres.
The key contributions of this paper are: 1) a sketch of a vision for an integrated data-driven approach, with a discussion of the associated challenges and implications, and 2) an architecture and roadmap consistent with this vision that would allow a seamless integration into current climate and weather workflows, as it utilises versions of existing tools (ESDM, Cylc, XIOS, and DDN's IME).
The vision proposed here is built on the belief that workflows composed of data, computing, and communication-intensive tasks should drive interfaces and hardware configurations to better support the programming models. When delivered, this work will increase the opportunity for smarter scheduling of computing by considering storage in heterogeneous storage systems. We illustrate the performance impact on an example workload using a model built on measured performance data using ESDM at DKRZ.
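To hint at what storage-aware scheduling in a heterogeneous storage system can look like, the sketch below uses a simple latency-plus-bandwidth cost model to decide from which tier a workflow task should read its input. The tier names, parameters, and task size are illustrative assumptions, not measurements from ESDM, IME, or DKRZ.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_s: float       # per-access latency in seconds
    bandwidth_gbs: float   # sustained read bandwidth in GB/s
    stage_in_gbs: float    # bandwidth for staging data into this tier

def access_time(tier: Tier, size_gb: float, resident: bool) -> float:
    """Estimated time to read `size_gb` from `tier`, staging it in first if needed."""
    t = tier.latency_s + size_gb / tier.bandwidth_gbs
    if not resident:
        t += size_gb / tier.stage_in_gbs
    return t

# Illustrative tiers, loosely resembling burst buffer / parallel file system / archive.
tiers = [
    Tier("burst-buffer", 1e-4, 10.0, 2.0),
    Tier("lustre",       1e-3,  2.0, 2.0),
    Tier("archive",      10.0,  0.5, 0.5),
]
resident_on = {"lustre"}      # where the input data currently lives
input_size_gb = 500.0

for t in tiers:
    print(f"{t.name:12s} {access_time(t, input_size_gb, t.name in resident_on):8.1f} s")
best = min(tiers, key=lambda t: access_time(t, input_size_gb, t.name in resident_on))
print("schedule read from:", best.name)
```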
High-Density Genomewide Linkage Analysis of Exceptional Human Longevity Identifies Multiple Novel Loci
Background: Human lifespan is approximately 25% heritable, and genetic factors may be particularly important for achieving exceptional longevity. Accordingly, siblings of centenarians have a dramatically higher probability of reaching extreme old age than the general population.
Methodology/Principal Findings: To map the loci conferring a survival advantage, we performed the second genomewide linkage scan on human longevity and the first using a high-density marker panel of single nucleotide polymorphisms. By systematically testing a range of minimum age cutoffs in 279 families with multiple long-lived siblings, we identified a locus on chromosome 3p24-22 with a genomewide significant allele-sharing LOD score of 4.02 (empirical P = 0.037) and a locus on chromosome 9q31-34 with a highly suggestive LOD score of 3.89 (empirical P = 0.054). The empirical P value for the combined result was 0.002. A third novel locus with a LOD score of 4.05 on chromosome 12q24 was detected in a subset of the data, and we also obtained modest evidence for a previously reported interval on chromosome 4q22-25.
Conclusions/Significance: Our linkage data should facilitate the discovery of both common and rare variants tha
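For readers unfamiliar with the statistic, a LOD score is the base-10 logarithm of a likelihood ratio comparing linkage against no linkage. The sketch below gives the classical parametric definition in terms of a recombination fraction theta; this is textbook background, not the exact allele-sharing statistic computed in the paper.

```latex
% Classical LOD score at a putative locus: log10 of the ratio between the
% likelihood of the marker data under linkage (recombination fraction theta < 1/2)
% and under no linkage (theta = 1/2). A LOD of 4.02 means the data are about
% 10^{4.02} (roughly 10,000) times more likely under linkage than under no linkage.
\mathrm{LOD}(\theta) \;=\; \log_{10}
  \frac{L(\mathrm{data} \mid \theta)}{L(\mathrm{data} \mid \theta = \tfrac{1}{2})}
```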
The Developing Methodology for Analyzing Privacy Torts
The authors assert the need for a common method of analyzing privacy situations that can be applied consistently by practitioners, juries and courts. They contend that confusion exists as to the legal basis of privacy torts because the right of privacy, as originally conceived by Warren and Brandeis, was never adequately defined. Prosser's analysis of privacy torts departs from the Warren and Brandeis formulation and, according to the authors, also can be criticized for lack of definition. The authors present a new methodology that analyzes privacy torts based upon the scope of consent standard. They maintain that the result will be the protection of the right of privacy as originally conceived by Warren and Brandeis.
Continuous, Semi-discrete, and Fully Discretized Navier-Stokes Equations
The Navier--Stokes equations are commonly used to model and to simulate flow phenomena. We introduce the basic equations and discuss the standard methods for the spatial and temporal discretization. We analyse the semi-discrete equations -- a semi-explicit nonlinear DAE -- in terms of the strangeness index and quantify the numerical difficulties in the fully discrete schemes that are induced by the strangeness of the system. By analyzing the Kronecker index of the difference-algebraic equations that represent commonly and successfully used time-stepping schemes for the Navier--Stokes equations, we show that those time-integration schemes in fact remove the strangeness. The theoretical considerations are backed and illustrated by numerical examples.
Comment: 28 pages, 2 figures, code available under DOI: 10.5281/zenodo.998909, https://doi.org/10.5281/zenodo.998909
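For orientation, the continuous equations and the structure of the semi-discrete system referred to above can be sketched as follows; the notation (mass matrix M, discrete gradient G, term N collecting convection and diffusion) is generic and chosen here for illustration, not necessarily the authors' notation.

```latex
% Incompressible Navier-Stokes equations for velocity v and pressure p
% (kinematic viscosity nu, body force f):
\partial_t v + (v \cdot \nabla) v - \nu \Delta v + \nabla p = f,
\qquad \nabla \cdot v = 0 .

% A standard spatial discretization yields a semi-explicit nonlinear DAE for the
% coefficient vectors u(t) (velocity) and p(t) (pressure):
M \dot{u}(t) = N\bigl(u(t)\bigr) + f(t) - G\, p(t),
\qquad G^{\mathsf{T}} u(t) = 0 ,
% where M is the mass matrix, G the discrete gradient, and N collects the
% discretized convection and diffusion terms. The algebraic constraint
% G^T u = 0 (the discrete divergence-free condition) is what raises the index
% ("strangeness") of the system.
```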