
    Double lenses

    The analysis of the shear induced by a single cluster on the images of a large number of background galaxies centers on the curl-free character of a well-known vector field that can be derived from the data. This basic property breaks down when the source galaxies happen to be observed through two clusters at different redshifts, partially aligned along the line of sight. In this paper we address the study of double lenses and obtain five main results. (i) First, we generalize the procedure for extracting the information contained in the observed shear field from the case of a single lens to that of a double lens. (ii) We then evaluate the possibility of detecting the signature of double lensing, given the known properties of the distribution of clusters of galaxies. (iii) As a different astrophysical application, we demonstrate how the method can be used to detect the presence of a dark cluster that happens to be partially aligned with a bright cluster studied in terms of statistical lensing. (iv) In addition, we show that the redshift distribution of the source galaxies, which in principle could also break the curl-free character of the shear field, actually produces systematic effects typically two orders of magnitude smaller than the double-lensing effects we focus on. (v) Remarkably, a discussion of the relevant contributions to the noise of the shear measurement brings out an intrinsic limitation of weak lensing analyses: one specific contribution, associated with the presence of a non-vanishing two-galaxy correlation function, turns out not to decrease with the density of source galaxies (and thus with the depth of the observations).
    Comment: 40 pages, 15 figures. Accepted for publication in ApJ main journal.
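
    For reference, the "well-known vector field" is presumably the standard single-lens construction (Kaiser 1995), in which a combination of shear derivatives reproduces the gradient of the convergence and is therefore curl-free. A minimal statement, with $\gamma_{i,j} \equiv \partial_j \gamma_i$ and convergence $\kappa$:

    \[
      \boldsymbol{u}(\boldsymbol{\theta}) \equiv
      \begin{pmatrix}
        \gamma_{1,1} + \gamma_{2,2} \\
        \gamma_{2,1} - \gamma_{1,2}
      \end{pmatrix}
      = \nabla \kappa ,
      \qquad
      \nabla \times \boldsymbol{u} = 0 .
    \]

    Since a second deflector at a different redshift contributes a term that is no longer a pure gradient, the curl component of such a field can serve as a diagnostic of double lensing, which is the effect the paper exploits.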

    Smooth maps from clumpy data: Covariance analysis

    Interpolation techniques play a central role in astronomy, where one often needs to turn irregularly sampled data into a smooth map. In a previous article (Lombardi & Schneider 2001) we considered a widely used smoothing technique and evaluated the expectation value of the smoothed map under a number of natural hypotheses. Here we carry this analysis further and consider the variance of the smoothed map, represented by a two-point correlation function. We show that two main sources of noise contribute to the total error budget and derive several interesting properties of these two noise terms. The expressions obtained are also specialized to the limiting cases of low and high densities of measurements. A number of examples illustrate the results in practice.
    Comment: 23 pages, 10 figures, final version, A&A in press.
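
    As an illustration of the kind of estimator analysed here, a weighted average of irregularly sampled measurements with a normalized kernel, below is a minimal Python sketch. The Gaussian kernel, its width, and the synthetic data are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Minimal sketch of a weighted-average interpolation of clumpy data:
    # map(x) = sum_i w_i(x) f_i / sum_i w_i(x), with a Gaussian weight.
    rng = np.random.default_rng(0)
    x_data = rng.uniform(0.0, 1.0, size=(500, 2))                      # sample positions
    f_data = np.sin(4.0 * x_data[:, 0]) + 0.1 * rng.normal(size=500)   # noisy measurements

    sigma = 0.05   # smoothing length (illustrative assumption)

    def smooth_map(x_grid):
        """Evaluate the smoothed map at the points x_grid (shape (M, 2))."""
        d2 = np.sum((x_grid[:, None, :] - x_data[None, :, :]) ** 2, axis=-1)
        w = np.exp(-0.5 * d2 / sigma**2)
        return (w @ f_data) / w.sum(axis=1)

    # Evaluate the smoothed map on a 50 x 50 grid.
    grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                                np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
    print(smooth_map(grid).reshape(50, 50).shape)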

    The noise of cluster mass reconstructions from a source redshift distribution

    The parameter-free reconstruction of the surface mass density of clusters of galaxies is one of the principal applications of weak gravitational lensing. From the observable ellipticities of images of background galaxies, the tidal gravitational field (shear) of the mass distribution is estimated, and the corresponding surface mass density is constructed. The noise of the resulting mass map is investigated here, generalizing previous work that mainly included the noise due to the intrinsic galaxy ellipticities. Whereas this dominates the noise budget if the lens is very weak, other sources of noise become important, or even dominant, in the medium-strong lensing regime close to the centers of clusters. In particular, shot noise due to the Poisson distribution of galaxy images, and increased shot noise owing to the correlation of galaxies in angular position and redshift, can yield significantly larger noise levels than the intrinsic ellipticities alone. We estimate the contributions from these various effects for two widely used smoothing operations, showing that one of them effectively removes the Poisson and correlation noise related to the angular positions of galaxies. Noise sources due to the spread in redshift of the galaxies are still present in the optimized estimator and are shown to be relevant in many cases. We show how (even approximate) redshift information can be profitably used to reduce the noise in the mass map. The dependence of the various noise terms on the relevant parameters (lens redshift, strength, smoothing length, redshift distribution of background galaxies) is explicitly calculated, and simple estimates are provided.
    Comment: 18 pages, A&A in press.
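
    A toy Monte-Carlo estimate of the baseline noise discussed above (intrinsic ellipticities combined with Poisson-distributed galaxy positions) can be sketched as follows. The galaxy density, ellipticity dispersion, and smoothing scale are illustrative assumptions, and the simple weighted-average estimator is only a stand-in for the smoothing operations the paper actually compares.

    import numpy as np

    # Toy estimate of the noise in a smoothed shear/convergence map coming
    # from intrinsic ellipticities and Poisson galaxy positions alone.
    rng = np.random.default_rng(1)
    n_gal_per_arcmin2 = 30.0   # source density (assumption)
    sigma_eps = 0.3            # intrinsic ellipticity dispersion per component (assumption)
    theta_s = 1.0              # Gaussian smoothing length in arcmin (assumption)
    field = 20.0               # side of the simulated field in arcmin

    def one_realization(rng):
        """Weighted average of pure-noise ellipticities at the field centre."""
        n_gal = rng.poisson(n_gal_per_arcmin2 * field**2)
        pos = rng.uniform(0.0, field, size=(n_gal, 2))
        eps = sigma_eps * rng.normal(size=n_gal)
        d2 = np.sum((pos - field / 2.0) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / theta_s**2)
        return np.sum(w * eps) / np.sum(w)

    samples = np.array([one_realization(rng) for _ in range(500)])
    print("rms noise at the field centre:", samples.std())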

    Smooth maps from clumpy data: generalizations

    In a series of papers (Lombardi & Schneider 2001, 2002) we studied in detail the statistical properties of an interpolation technique widely used in astronomy. In particular, we considered the average interpolated map and its covariance under the hypotheses that the map is obtained by smoothing unbiased measurements of an unknown field and that the measurements are uniformly distributed on the sky. In this paper we generalize these results to observations carried out only on a finite field and distributed over it with non-uniform density. These generalizations, required in many astronomically relevant cases, still allow an exact, analytical solution of the problem. We also consider a number of properties of the interpolated map and provide asymptotic expressions for the average map and the two-point correlation function that are valid at high densities.
    Comment: 9 pages, 3 figures, A&A in press.

    Variational Estimates using a Discrete Variable Representation

    The advantage of using a Discrete Variable Representation (DVR) is that the Hamiltonian of two interacting particles can be constructed in a very simple form. However, the DVR Hamiltonian is approximate and, as a consequence, the results cannot be considered variational. We show that the variational character of the results can be restored by performing a reduced number of integrals. In practice, for a variational description of the lowest n bound states only n(n+1)/2 integrals are necessary, whereas D(D+1)/2 integrals suffice for the scattering states (D being the dimension of the S matrix). Applications of the method to the study of dimers of He, Ne and Ar, for both bound and scattering states, are presented.
    Comment: 30 pages, 7 figures. Minor changes (title modified, typos corrected, 1 reference added). To be published in PR
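
    For context, the simplicity mentioned above comes from the fact that in a DVR the potential is diagonal on the grid and the kinetic energy has a closed form. Below is a minimal Python sketch of a sinc-DVR (Colbert-Miller) Hamiltonian for one dimension; the grid, mass, and the stand-in Morse potential are illustrative assumptions, not the pair potentials used in the paper.

    import numpy as np

    # Sinc-DVR (Colbert & Miller 1992) Hamiltonian on a uniform grid.
    hbar, mass = 1.0, 1.0                     # illustrative units (assumption)
    n_pts, x_min, x_max = 200, -2.0, 10.0
    x = np.linspace(x_min, x_max, n_pts)
    dx = x[1] - x[0]

    # Kinetic-energy matrix: pi^2/3 on the diagonal, 2(-1)^(i-j)/(i-j)^2 off it.
    idx = np.arange(n_pts)
    diff = idx[:, None] - idx[None, :]
    T = np.where(diff == 0, np.pi**2 / 3.0,
                 2.0 * (-1.0) ** diff / np.where(diff == 0, 1, diff) ** 2)
    T *= hbar**2 / (2.0 * mass * dx**2)

    # Potential is diagonal in the DVR: here a Morse well as a stand-in.
    D_e, a, r_e = 10.0, 1.0, 1.5              # illustrative parameters (assumption)
    V = D_e * (1.0 - np.exp(-a * (x - r_e))) ** 2 - D_e
    H = T + np.diag(V)

    # The DVR eigenvalues approximate the bound-state energies; the paper's
    # point is that a modest number of extra integrals restores a true
    # variational bound on the lowest levels.
    energies = np.linalg.eigvalsh(H)
    print(energies[:5])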

    CMOS array design automation techniques

    A low cost, quick turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology and required performance considerations, particularly high circuit speed.

    The hairless dog of Peru (El perro sin pelo del Perú)


    Medial opening wedge high tibial osteotomy: A retrospective review of patient outcomes over 10 years

    Objectives: High tibial osteotomy (HTO) has become a well-established treatment for unicompartmental osteoarthritis of the knee. Over the last 30 years, various techniques have been introduced to advance this procedure. The purpose of this study is to review the outcomes of patients who received medial opening wedge HTO over the last ten years (2002-2012) using a modern, low profile, medially based fixation device. In addition, we sought to determine whether obese patients had a less favorable outcome than their non-obese counterparts.
    Methods: Ninety-three patients were identified from a surgical database as having undergone HTO for medial compartment osteoarthritis of the knee with varus mal-alignment. All procedures were performed by one of two fellowship-trained orthopedic surgeons between 2002 and 2012 using a low profile fixation device and an identical surgical technique. Minimum follow-up was one year for inclusion in the study. Outcomes were measured using Lysholm and WOMAC scores. Radiographs were evaluated to determine delayed union or non-union at the osteotomy site, and surveillance was undertaken to evaluate postoperative complications.
    Results: Of the 93 patients identified from the database, 63 (70%) were available for follow-up and are included in this analysis. Average follow-up time was 48 months (range 17 to 137). There were 44 males and 19 females, and the average age was 45 years. The average final Lysholm and WOMAC scores were 66.4 (range: 13-100) and 18.6 (range: 0-86), respectively. There was no significant difference in reported Lysholm or WOMAC scores between obese (BMI > 30) and non-obese patients (p = 0.31; p = 0.69). Complications were as follows: 3 patients required surgical lysis of adhesions, 2 patients developed an infection, and 1 patient experienced a delayed union. At final follow-up, 18 patients had received additional treatment on the affected knee: 11 required removal of symptomatic hardware, 5 received viscosupplementation, and 2 underwent total knee replacement.
    Conclusion: HTO using low profile, medially based fixation devices is an accepted treatment for unicompartmental osteoarthritis of the knee. At final follow-up, a majority of patients reported positive outcomes and few complications; 18 patients required additional treatment for osteoarthritis. In our analysis, obese patients fared as well as their non-obese counterparts, with no significant difference in outcome scores or complication rate. Survivorship of high tibial osteotomy was excellent in this series, with only 2 patients having undergone total knee replacement at last follow-up. © The Author(s) 2015

    Is the decoherence of a system the result of its interaction with the environment?

    According to a usual reading, decoherence is a process resulting from the interaction between a small system and its large environment, in which information and energy are dissipated. The particular models treated in the literature reinforce this idea since, in general, they study the behavior of a particle immersed in a large "bath" composed of many particles. The aim of this letter is to warn against this common simplified reading. By means of the analysis of a well-known model, we show that decoherence may occur in a system interacting with an environment consisting of only one particle.
    Comment: 4 pages, 5 figures
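
    As an illustrative toy, not necessarily the model analysed in the letter, one can see how a single environment particle can suppress the system's coherence provided it has many internal levels: each branch of the system imprints a different set of phases on the environment, and the overlap of the two environment states decays. The level number and frequencies below are assumptions chosen for illustration.

    import numpy as np

    # Qubit whose two branches imprint different phases on ONE environment
    # particle with many internal levels.  |rho_01(t)| is proportional to the
    # overlap of the two environment branches, computed here directly.
    rng = np.random.default_rng(2)
    n_levels = 400                                  # internal levels of the single env. particle (assumption)
    omega = rng.uniform(0.0, 1.0, n_levels)         # level-dependent coupling frequencies (assumption)
    c = np.full(n_levels, 1.0 / np.sqrt(n_levels))  # environment starts in a flat superposition

    times = np.linspace(0.0, 50.0, 200)
    coherence = np.array([np.abs(np.sum(np.abs(c) ** 2 * np.exp(-1j * omega * t)))
                          for t in times])

    # The coherence drops from 1 to roughly 1/sqrt(n_levels): decoherence
    # with a one-particle environment.
    print(coherence[0], coherence[-1])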