
    The Spatial Distribution of Satellite Galaxies Selected from Redshift Space

    We investigate the spatial distribution of satellite galaxies using a mock redshift survey of the first Millennium Run simulation. The satellites were identified using common redshift-space criteria, and the sample therefore includes a large percentage of interlopers. The satellite locations are well fitted by a combination of a Navarro, Frenk & White (NFW) density profile and a power law. At fixed stellar mass, the NFW scale parameter, r_s, for the satellite distribution of red hosts exceeds r_s for the satellite distribution of blue hosts. In both cases the dependence of r_s on host stellar mass is well fitted by a power law. For the satellites of red hosts, r_s^{red} \propto (M_\ast / M_\sun)^{0.71 \pm 0.05}, while for the satellites of blue hosts, r_s^{blue} \propto (M_\ast / M_\sun)^{0.48 \pm 0.07}. For hosts with stellar masses greater than 4.0E+10 M_sun, the satellite distribution around blue hosts is more concentrated than the satellite distribution around red hosts. The spatial distribution of the satellites of red hosts traces that of the hosts' halos; however, the spatial distribution of the satellites of blue hosts is more concentrated than that of the hosts' halos by a factor of ~2. Our methodology is general and applies to any analysis of satellites in a mock redshift survey. However, our conclusions necessarily depend upon the semi-analytic galaxy formation model that was adopted, and different galaxy formation models may yield different results.
    Comment: 25 pages, 5 figures, accepted for publication in The Astrophysical Journal
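The fitted form described above, an NFW density profile for the true satellites plus a power law for the interloper contribution, can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the exact power-law parameterization (amplitude `a`, slope `alpha`) are assumptions for the example.

```python
import numpy as np

def nfw_profile(r, rho0, r_s):
    """NFW density profile: rho(r) = rho0 / [(r/r_s) * (1 + r/r_s)^2]."""
    x = r / r_s
    return rho0 / (x * (1.0 + x) ** 2)

def satellite_profile(r, rho0, r_s, a, alpha):
    """NFW term for the true satellites plus a power-law interloper term."""
    return nfw_profile(r, rho0, r_s) + a * r ** (-alpha)
```

In practice such a model would be fitted to the binned satellite number density (e.g. with `scipy.optimize.curve_fit`), yielding the scale parameter r_s whose dependence on host stellar mass is quoted in the abstract.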

    Locations of Satellite Galaxies in the Two-Degree Field Galaxy Redshift Survey

    We compute the locations of satellite galaxies in the Two-Degree Field Galaxy Redshift Survey using two sets of selection criteria and three sources of photometric data. Using the SuperCOSMOS r_F photometry, we find that the satellites are located preferentially near the major axes of their hosts, and the anisotropy is detected at a highly significant level (confidence levels of 99.6% to 99.9%). The locations of satellites that have high velocities relative to their hosts are statistically indistinguishable from the locations of satellites that have low velocities relative to their hosts. Additionally, satellites with passive star formation are distributed anisotropically about their hosts (99% confidence level), while the locations of star-forming satellites are consistent with an isotropic distribution. These two distributions are, however, statistically indistinguishable from each other. It is therefore not correct to interpret this as evidence that the locations of the star-forming satellites are intrinsically different from those of the passive satellites.
    Comment: 21 pages, 3 figures
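A test of the kind reported above, whether satellite position angles relative to the host major axis are consistent with isotropy, can be sketched with a one-sample Kolmogorov-Smirnov test. This is an illustrative stand-in, not the paper's actual statistical procedure; the function name and the folding of angles into [0, 90] degrees are assumptions for the example.

```python
import numpy as np
from scipy import stats

def anisotropy_test(angles_deg):
    """KS test of satellite position angles against isotropy (sketch).

    angles_deg: angles between each satellite and its host's major axis,
    folded into [0, 90] degrees. Isotropy predicts a uniform distribution
    on this interval; a small p-value indicates anisotropy.
    """
    u = np.asarray(angles_deg, dtype=float) / 90.0  # rescale to [0, 1]
    return stats.kstest(u, "uniform")
```

An excess of satellites near 0 degrees (the major axis) would drive the p-value down, corresponding to the planar anisotropy reported in the abstract.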

    Neighborhood Selection for Thresholding-based Subspace Clustering

    Subspace clustering refers to the problem of clustering high-dimensional data points into a union of low-dimensional linear subspaces, where the number of subspaces, their dimensions, and their orientations are all unknown. In this paper, we propose a variation of the recently introduced thresholding-based subspace clustering (TSC) algorithm, which applies spectral clustering to an adjacency matrix constructed from the nearest neighbors of each data point with respect to the spherical distance measure. The new element resides in an individual and data-driven choice of the number of nearest neighbors. Previous performance results for TSC, as well as for other subspace clustering algorithms based on spectral clustering, come in terms of an intermediate performance measure, which does not address the clustering error directly. Our main analytical contribution is a performance analysis of the modified TSC algorithm (as well as the original TSC algorithm) in terms of the clustering error directly.
    Comment: ICASSP 201
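The adjacency-matrix construction at the heart of TSC can be sketched as follows: normalize the points onto the unit sphere, take the q nearest neighbors of each point under spherical distance (equivalently, the largest values of |cos(angle)|), and symmetrize. This is a simplified illustration with a fixed q per point; the paper's contribution is precisely a data-driven, per-point choice of q, which this sketch does not implement.

```python
import numpy as np

def tsc_adjacency(X, q):
    """Thresholding-based subspace clustering adjacency matrix (sketch).

    X: (d, N) data matrix with points as columns; q: neighbors per point.
    Spherical distance is the angle between points on the unit sphere, so
    the q nearest neighbors are those maximizing |<x_i, x_j>| after
    normalization.
    """
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)  # project onto sphere
    C = np.abs(Xn.T @ Xn)                              # |cos(angle)| matrix
    np.fill_diagonal(C, 0.0)                           # exclude self-matches
    A = np.zeros_like(C)
    for i in range(C.shape[0]):
        nn = np.argsort(C[i])[-q:]                     # q nearest neighbors
        A[i, nn] = C[i, nn]
    return A + A.T                                     # symmetrize
```

Spectral clustering is then applied to the resulting matrix A (e.g. via the graph Laplacian) to recover the subspace labels.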

    Diagnostic Musculoskeletal Imaging: How Physical Therapists Utilize Imaging in Clinical Decision-Making

    This qualitative study describes how physical therapist experts in musculoskeletal disorders evaluate and interpret imaging studies and how they employ imaging in clinical decision-making. The informants are physical therapists who are certified orthopedic clinical specialists (OCS) and/or fellows of the American Academy of Orthopaedic Manual Physical Therapists (AAOMPT). The study employed web conferencing to display patient cases, record screen-capture videos, and conduct interviews. Informants were observed and their activity video-captured as they evaluated imaging studies; afterwards, interviews were used to explore the processes they employed to evaluate and interpret the images and to discuss imaging-related clinical decision-making, including possible functional consequences of changes seen in the images, contraindications to treatment, and indications for referral. The interviews were transcribed and analyzed in the tradition of grounded theory. This study found that the informants' evaluation of imaging studies was contextual and non-systematic, guided by the clinical presentation. The informants used imaging studies to provide a deeper understanding of clinical findings and to widen their perspectives, arriving at clinical decisions through the synthesis of imaging, clinical findings, and didactic knowledge. They tended to look for imaging evidence of interference with normal motion, rather than evidence of pathology. Overall, the informants expressed conservative views on the use of imaging, noting they would rather use clinical findings and treatment response than imaging findings as a basis for referral to other health care professionals. Using imaging studies to support clinical decision-making can provide physical therapists with a wider perspective when planning treatment interventions.
    By showing physical therapists' approach to interpreting imaging studies and how this relates to their clinical decision-making, the findings of this study could contribute to discussions of the place of imaging in physical therapist practice, as well as help set objectives for imaging curricula in professional-level and continuing education.

    Arrays of gold nanoparticles as a platform for molecular electronics

    The field of molecular electronics has grown rapidly in the past few years. Researchers in the field face great technological challenges in the search for interesting physics and chemistry. The greatest challenge is how to contact individual molecules with metallic electrodes in a controlled manner: because molecules are so small, it is difficult to address or handle a single molecule. In order to prepare junctions consisting of a molecule contacted by two metal electrodes, a range of devices has been developed. These devices can be split into roughly two categories: those utilizing a monolayer of molecules, and those capturing a single molecule in a metal-molecule-metal junction. In the first category, the molecules are usually encapsulated and cannot be influenced from the outside, which prevents those devices from being used to investigate how the electrical properties of molecules change with their environment. The devices which capture a single molecule between two electrodes suffer from instability and from the fact that a molecule can bind to the electrodes in more than one way. In this thesis we use an array of gold nanoparticles as a platform to perform electrical transport measurements on molecular junctions. The diameter of the gold nanoparticles, 10 nm, is comparable with the length of the molecules used. The array can easily be contacted with large electrodes, and the nanoparticles in the array are mechanically and chemically stable. Once the nanoparticle array has been prepared, the molecules of interest can be inserted into it. The distance between the nanoparticles in the array and the organic ligands covering them control how a molecule can bridge a pair of neighbouring nanoparticles. Using the nanoparticle arrays we were able to investigate the influence of the oxidation state of a redox-active molecule on the electrical transport through it.
    We see clear evidence of this influence in our data: changing the oxidation state of the molecule produced an order-of-magnitude change in conductance. By bringing the electrodes contacting the nanoparticle array close together we were able to make small arrays, consisting of only 20 x 20 nanoparticles. By performing electrical transport measurements on those devices at a temperature of 4 K we could detect the phonon modes of the molecules bridging the nanoparticles.

    Practical Full Resolution Learned Lossless Image Compression

    We propose the first practical learned lossless image compression system, L3C, and show that it outperforms the popular engineered codecs PNG, WebP, and JPEG 2000. At the core of our method is a fully parallelizable hierarchical probabilistic model for adaptive entropy coding which is optimized end-to-end for the compression task. In contrast to recent autoregressive discrete probabilistic models such as PixelCNN, our method i) models the image distribution jointly with learned auxiliary representations instead of exclusively modeling the image distribution in RGB space, and ii) requires only three forward passes to predict all pixel probabilities instead of one per pixel. As a result, L3C obtains a sampling speedup of over two orders of magnitude compared to the fastest PixelCNN variant (Multiscale-PixelCNN). Furthermore, we find that learning the auxiliary representation is crucial and significantly outperforms predefined auxiliary representations such as an RGB pyramid.
    Comment: Updated preprocessing and Table 1, see A.1 in supplementary. Code and models: https://github.com/fab-jul/L3C-PyTorc
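The link between the learned probabilistic model and the compressed file size can be illustrated with the ideal entropy-coding cost: a symbol assigned probability p by the model costs -log2 p bits, and an arithmetic coder achieves essentially this total. This is a generic sketch of the principle, not L3C's implementation; the function name is ours.

```python
import numpy as np

def ideal_codelength_bits(probs):
    """Ideal adaptive entropy-coding cost: sum of -log2 p over symbols.

    probs: the model probability assigned to each actual symbol (pixel
    value) in the stream. An arithmetic coder comes within a few bits of
    this total, so a better probability model directly means a smaller
    compressed file.
    """
    p = np.asarray(probs, dtype=float)
    return float(-np.log2(p).sum())
```

For example, a model that assigns each 8-bit pixel value probability 1/256 yields exactly 8 bits per pixel (no compression); the gains of a learned model come from assigning the actual pixel values much higher probabilities.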