
    Large Scale Structure of the Universe

    Galaxies are not uniformly distributed in space. On large scales the Universe displays coherent structure, with galaxies residing in groups and clusters on scales of ~1-3 Mpc/h, which lie at the intersections of long filaments of galaxies that are >10 Mpc/h in length. Vast regions of relatively empty space, known as voids, contain very few galaxies and span the volume in between these structures. This observed large scale structure depends both on cosmological parameters and on the formation and evolution of galaxies. Using the two-point correlation function, one can trace the dependence of large scale structure on galaxy properties such as luminosity, color, and stellar mass, and track its evolution with redshift. Comparison of the observed galaxy clustering signatures with dark matter simulations allows one to model and understand the clustering of galaxies and their formation and evolution within their parent dark matter halos. Clustering measurements can determine the parent dark matter halo mass of a given galaxy population, connect observed galaxy populations at different epochs, and constrain cosmological parameters and galaxy evolution models. This chapter describes the methods used to measure the two-point correlation function in both redshift and real space, presents current results on how the clustering amplitude depends on various galaxy properties, and discusses quantitative measurements of the structures of voids and filaments. The interpretation of these results with current theoretical models is also presented.
    Comment: Invited contribution to be published in Vol. 8 of the book "Planets, Stars, and Stellar Systems" (Springer; series editor T. D. Oswalt, volume editor W. C. Keel). v2 includes additional references, updated to match the published version.
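
    As a concrete illustration of the pair-counting approach described in the chapter, below is a minimal sketch of the standard Landy & Szalay (1993) estimator. The pair counts and catalogue sizes are assumed to have been computed elsewhere (e.g., with a tree-based pair counter); this is a generic sketch, not the chapter's own code.

```python
import numpy as np

def landy_szalay(dd, dr, rr, n_data, n_rand):
    """Two-point correlation function xi(r) from raw pair counts per radial bin.

    dd, dr, rr     : arrays of data-data, data-random, random-random pair counts
    n_data, n_rand : sizes of the data and random catalogues
    """
    # Normalize each pair count by its total number of possible pairs.
    dd_norm = np.asarray(dd) / (n_data * (n_data - 1) / 2.0)
    dr_norm = np.asarray(dr) / (n_data * n_rand)
    rr_norm = np.asarray(rr) / (n_rand * (n_rand - 1) / 2.0)
    # Landy & Szalay (1993): xi = (DD - 2 DR + RR) / RR
    return (dd_norm - 2.0 * dr_norm + rr_norm) / rr_norm
```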

    On the security of consumer wearable devices in the Internet of Things

    Miniaturization of computer hardware and the demand for network-capable devices have resulted in the emergence of a new class of technology called wearable computing. Wearable devices serve many purposes, including lifestyle support, health monitoring, fitness monitoring, entertainment, industrial uses, and gaming. Wearable devices are being rushed to market in an attempt to capture an emerging segment, and as a result some devices do not adequately address the need for security. To enable virtualization and connectivity, wearable devices sense and transmit data, so it is essential that the device, its data, and the user are protected. This paper proposes the use of novel Integrated Circuit Metric (ICMetric) technology for the provision of security in wearable devices. ICMetric technology uses the features of a device to generate an identification which is then used for the provision of cryptographic services. This paper explores how a device ICMetric can be generated by using the accelerometer and gyroscope sensors. Since wearable devices often operate in a group setting, the work also focuses on generating a group identification which is then used to deliver services like authentication, confidentiality, secure admission, and symmetric key generation. Experiment and simulation results show that the scheme offers high levels of security without imposing excessive resource demands.
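
    The abstract does not spell out the feature-extraction or key-generation details, so the following sketch only illustrates the general ICMetric idea: quantize stable statistics of the accelerometer and gyroscope readings into a device identification, then stretch that basis into symmetric key material with a standard KDF. The choice of features, the quantization step, the salt, and the iteration count here are illustrative assumptions, not the paper's scheme.

```python
import hashlib
import statistics

def icmetric_key(accel_samples, gyro_samples):
    """Illustrative derivation of a symmetric key from sensor feature statistics.

    accel_samples, gyro_samples : lists of calibrated scalar sensor readings
    """
    features = []
    for samples in (accel_samples, gyro_samples):
        # Quantize stable statistical features (mean, variance) so that small
        # measurement noise maps to the same feature values on every run.
        features.append(round(statistics.mean(samples), 1))
        features.append(round(statistics.variance(samples), 1))
    # Concatenate the quantized features into a device identification basis...
    basis = ",".join(str(f) for f in features).encode()
    # ...and stretch it into key material with a hash-based KDF (parameters are
    # placeholders chosen for the example, not values from the paper).
    return hashlib.pbkdf2_hmac("sha256", basis, b"demo-salt", 100_000)
```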

    The expansion field: The value of H_0

    Any calibration of the present value of the Hubble constant requires recession velocities and distances of galaxies. While the conversion of observed velocities into true recession velocities has only a small effect on the result, the derivation of unbiased distances which rest on a solid zero point and cover a useful range of about 4-30 Mpc is crucial. A list of 279 such galaxy distances within v<2000 km/s is given, derived from the tip of the red-giant branch (TRGB), from Cepheids, and from supernovae of type Ia (SNe Ia). Their random errors are not more than 0.15 mag, as shown by intercomparison. They trace a linear expansion field within narrow margins from v=250 to at least 2000 km/s. An additional 62 distant SNe Ia confirm the linearity to at least 20,000 km/s. The dispersion about the Hubble line is dominated by random peculiar velocities, amounting locally to <100 km/s but increasing outwards. Due to the linearity of the expansion field, the Hubble constant H_0 can be found at any distance >4.5 Mpc. RR Lyr star-calibrated TRGB distances of 78 galaxies above this limit give H_0=63.0+/-1.6 at an effective distance of 6 Mpc; their large number compensates for the effect of peculiar motions. Support for this result comes from 28 independently calibrated Cepheids that give H_0=63.4+/-1.7 at 15 Mpc. This also agrees with the large-scale value of H_0=61.2+/-0.5 from the distant, Cepheid-calibrated SNe Ia. A mean value of H_0=62.3+/-1.3 is adopted. Because the value depends on two independent zero points of the distance scale, its systematic error is estimated to be 6%. Typical errors of H_0 come from the use of a universal, yet unjustified, P-L relation of Cepheids, from the neglect of selection bias in magnitude-limited samples, or are inherent to the adopted models.
    Comment: 44 pages, 4 figures, 6 tables; accepted for publication in the Astronomy and Astrophysics Review 15.
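
    Since H_0 is simply the slope of the velocity-distance relation, the underlying fit can be sketched in a few lines: a least-squares slope through the origin, with the residual scatter reflecting peculiar velocities. The numbers in the usage comment are made up for illustration and are not the paper's data.

```python
import numpy as np

def hubble_constant(distance_mpc, velocity_kms):
    """Least-squares slope of the Hubble line v = H0 * d, forced through the origin."""
    d = np.asarray(distance_mpc, dtype=float)
    v = np.asarray(velocity_kms, dtype=float)
    h0 = np.sum(v * d) / np.sum(d * d)  # km/s/Mpc
    # Scatter about the Hubble line, dominated by random peculiar velocities.
    sigma_v = np.std(v - h0 * d)
    return h0, sigma_v

# Toy usage (invented numbers): h0 comes out at 62 km/s/Mpc.
# h0, sigma_v = hubble_constant([5, 10, 20], [310, 620, 1240])
```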

    Shedding Light on the Galaxy Luminosity Function

    From as early as the 1930s, astronomers have tried to quantify the statistical nature of the evolution and large-scale structure of galaxies by studying their luminosity distribution as a function of redshift, known as the galaxy luminosity function (LF). Accurately constructing the LF remains a popular yet tricky pursuit in modern observational cosmology, where observational selection effects due to, e.g., detection thresholds in apparent magnitude, colour, surface brightness, or some combination thereof can render any given galaxy survey incomplete and thus introduce bias into the LF. Over the last seventy years numerous sophisticated statistical approaches have been devised to tackle these issues; all have advantages, but none is perfect. This review takes a broad historical look at the key statistical tools developed over this period, discussing their relative merits and highlighting significant extensions and modifications. In addition, the more generalised methods that have emerged within the last few years are examined. These methods propose a more rigorous statistical framework within which to determine the LF than some of the more traditional methods. I also look at how photometric redshift estimates are being incorporated into the LF methodology, and consider the construction of bivariate LFs. Finally, I review the ongoing development of completeness estimators, which test some of the fundamental assumptions going into LF estimators and can be powerful probes of any residual systematic effects inherent in magnitude-redshift data.
    Comment: 95 pages, 23 figures, 3 tables. Now published in The Astronomy & Astrophysics Review. This version: brought in line with A&AR format requirements; minor typo corrections made, additional citations and higher-resolution images added.
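
    Of the traditional estimators the review covers, Schmidt's (1968) 1/Vmax method is the easiest to sketch. This toy version assumes that each galaxy's maximum visibility volume Vmax, the comoving volume within which it would still pass the survey's apparent-magnitude limit, has already been computed.

```python
import numpy as np

def vmax_luminosity_function(abs_mag, v_max, mag_bins):
    """Schmidt's (1968) 1/Vmax estimator of the galaxy luminosity function.

    abs_mag  : absolute magnitudes of the galaxies in the sample
    v_max    : per-galaxy maximum comoving volume (Mpc^3) within which the
               galaxy would still satisfy the survey's flux limit
    mag_bins : bin edges in absolute magnitude
    """
    # Each galaxy contributes 1/Vmax, which corrects for the fact that
    # intrinsically faint galaxies are visible over a smaller volume.
    weights = 1.0 / np.asarray(v_max, dtype=float)
    phi, _ = np.histogram(np.asarray(abs_mag), bins=mag_bins, weights=weights)
    # Convert counts to a number density per magnitude per Mpc^3.
    return phi / np.diff(mag_bins)
```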

    Dark Matter in the Milky Way's Dwarf Spheroidal Satellites

    The Milky Way's dwarf spheroidal satellites include the nearest, smallest, and least luminous galaxies known. They also exhibit the largest discrepancies between dynamical and luminous masses. This article reviews the development of empirical constraints on the structure and kinematics of dSph stellar populations and discusses how this phenomenology translates into constraints on the amount and distribution of dark matter within dSphs. Some implications for cosmology and the particle nature of dark matter are discussed, and some topics and questions for future study are identified.
    Comment: A version with full-resolution figures is available at http://www.cfa.harvard.edu/~mwalker/mwdsph_review.pdf; 70 pages, 22 figures; invited review article to be published in Vol. 5 of the book "Planets, Stars, and Stellar Systems", published by Springer.
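
    To make the dynamical-vs-luminous mass comparison concrete, here is a sketch of one commonly quoted half-light mass estimator for dispersion-supported systems (Wolf et al. 2010), not necessarily the estimator adopted in this article; the example numbers are merely typical of a classical dSph.

```python
G_ASTRO = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def half_light_mass(sigma_los_kms, r_half_pc):
    """Dynamical mass within the (projected) half-light radius R_e of a
    dispersion-supported system: M_1/2 ~ 4 * sigma_los^2 * R_e / G
    (Wolf et al. 2010)."""
    return 4.0 * sigma_los_kms**2 * r_half_pc / G_ASTRO

# Illustrative numbers only: sigma_los = 10 km/s, R_e = 300 pc gives ~3e7 M_sun,
# far above the mass implied by the luminous stellar population alone.
# mass = half_light_mass(10.0, 300.0)
```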

    Specialized pediatric palliative home care: a prospective evaluation.

    Objectives: In Germany, children with advanced life-limiting diseases have been eligible since 2007 for Pediatric Palliative Home Care (PPHC), which is provided by newly established specialized PPHC teams. The objective of this study was to evaluate the acceptance and effectiveness of PPHC as perceived by the parents. Methods: Parents of children treated by the PPHC team based at the Munich University Hospital were eligible for this prospective nonrandomized study. The main topics of the two surveys (before and after involvement of the PPHC team) were the assessment of symptom control and quality of life (QoL) in children, and the parents' satisfaction with care, burden of patient care (Häusliche Pflegeskala, home care scale, HPS), anxiety and depression (Hospital Anxiety and Depression Scale, HADS), and QoL (Quality of Life in Life-Threatening Illness-Family Carer Version, QOLLTI-F). Results: Of 43 families newly admitted to PPHC between April 2011 and June 2012, 40 were included in the study. The median interval between the first and second interview was 8.0 weeks. The involvement of the PPHC team led to a significant improvement in children's symptoms and QoL (P<0.001) as perceived by the parents; the parents' own QoL and burden relief significantly increased (QOLLTI-F, P<0.001; 7-point change on a 10-point scale), while their psychological distress and burden significantly decreased (HADS, P<0.001; HPS, P<0.001). Conclusions: The involvement of specialized PPHC appears to lead to a substantial improvement in the QoL of children and their parents, as experienced by the parents, and to lower the burden of home care for the parents of severely ill children.