
    "TNOs are Cool": A survey of the trans-Neptunian region X. Analysis of classical Kuiper belt objects from Herschel and Spitzer observations

    The classical Kuiper belt contains objects both from a low-inclination, presumably primordial, distribution and from a high-inclination, dynamically excited population. Based on a sample of classical TNOs with observations at thermal wavelengths, we determine radiometric sizes, geometric albedos and thermal beaming factors, and study sample properties of dynamically hot and cold classicals. Observations near the thermal peak of TNOs with infrared space telescopes are combined with optical magnitudes using the radiometric technique with the near-Earth asteroid thermal model (NEATM). We have determined three-band flux densities from Herschel/PACS observations at 70.0, 100.0 and 160.0 μm and from Spitzer/MIPS at 23.68 and 71.42 μm when available. We have analysed 18 classical TNOs with previously unpublished data and re-analysed previously published targets with updated data reduction to determine their sizes and geometric albedos, as well as beaming factors when data quality allows. We have combined these samples with classical TNOs with radiometric results in the literature for the analysis of sample properties of a total of 44 objects. We find a median geometric albedo of 0.14 for cold classical TNOs and of 0.085 for dynamically hot classical TNOs, excluding the Haumea family and dwarf planets. We have determined the bulk densities of Borasisi-Pabu (2.1 g/cm^3), Varda-Ilmarë (1.25 g/cm^3) and 2001 QC298 (1.14 g/cm^3), and updated previous density estimates of four targets. We have determined the slope parameter of the debiased cumulative size distribution of dynamically hot classical TNOs as q = 2.3 ± 0.1 in the diameter range 100 < D < 500 km. For dynamically cold classical TNOs we determine q = 5.1 ± 1.1 in the diameter range 160 < D < 280 km, as the cold classical TNOs have a smaller maximum size. (Comment: 22 pages, 7 figures; accepted for publication in Astronomy and Astrophysics.)
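    In the radiometric technique, the thermally derived diameter is tied to the optical data through the standard relation between diameter, geometric albedo and absolute magnitude, D = 1329 km · 10^(−H_V/5) / √p_V. The sketch below simply evaluates that relation; the H_V and p_V values are placeholders for illustration, not results from the paper.

```python
import math

def diameter_km(abs_mag_v: float, geometric_albedo: float) -> float:
    """Standard optical constraint used in radiometric modelling:
    D [km] = 1329 * 10^(-H_V / 5) / sqrt(p_V)."""
    return 1329.0 * 10.0 ** (-abs_mag_v / 5.0) / math.sqrt(geometric_albedo)

# Illustrative placeholder values (not results from the paper):
# a hypothetical cold classical with H_V = 6.0 and p_V = 0.14.
print(f"D ≈ {diameter_km(6.0, 0.14):.0f} km")
```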

    Differentially Private Publication of Sparse Data

    The problem of privately releasing data is to provide a version of a dataset without revealing sensitive information about the individuals who contribute to the data. The model of differential privacy allows such private release while providing strong guarantees on the output. A basic mechanism achieves differential privacy by adding noise to the frequency counts in the contingency tables (or a subset of the count data cube) derived from the dataset. However, when the dataset is sparse in its underlying space, as is the case for most multi-attribute relations, the effect of adding noise is to vastly increase the size of the published data: it implicitly creates a huge number of dummy data points to mask the true data, making it almost impossible to work with. We present techniques to overcome this roadblock and allow efficient private release of sparse data, while maintaining the guarantees of differential privacy. Our approach is to release a compact summary of the noisy data. Generating the noisy data and then summarizing it would still be very costly, so we show how to shortcut this step and instead directly generate the summary from the input data, without materializing the vast intermediate noisy data. We instantiate this outline for a variety of sampling and filtering methods, and show how to use the resulting summary for approximate, private query answering. Our experimental study shows that this is an effective, practical solution, with comparable and occasionally improved utility over the costly materialization approach.
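    As a point of reference for the approach described above, the baseline "materialize then summarize" release can be sketched as follows: add Laplace noise to every cell of a (tiny, toy) contingency table, then keep only the cells whose noisy count clears a threshold. This is the costly baseline the paper shortcuts, not the paper's direct-summary construction; the function name and parameter values are illustrative.

```python
import numpy as np

def noisy_filtered_summary(counts, epsilon, threshold):
    """Baseline 'materialize then summarize' release:
    add Laplace(1/epsilon) noise to every cell of the contingency table,
    then keep only cells whose noisy count exceeds a threshold."""
    noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=counts.shape)
    kept = {idx: val for idx, val in np.ndenumerate(noisy) if val > threshold}
    return kept  # compact summary: surviving cells only

# Tiny 2-attribute example; real sparse domains are far too large to materialize.
table = np.zeros((4, 4))
table[0, 1] = 12
table[2, 3] = 7
print(noisy_filtered_summary(table, epsilon=0.5, threshold=3.0))
```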

    "TNOs are Cool": A survey of the trans-Neptunian region VI. Herschel/PACS observations and thermal modeling of 19 classical Kuiper belt objects

    Trans-Neptunian objects (TNOs) represent the leftovers of the formation of the Solar System. Their physical properties provide constraints on models of the formation and evolution of the various dynamical classes of objects in the outer Solar System. Based on a sample of 19 classical TNOs we determine radiometric sizes, geometric albedos and beaming parameters. Our sample is composed of both dynamically hot and cold classicals. We study the correlations of diameter and albedo of these two subsamples with each other and with orbital parameters, spectral slopes and colors. We have performed three-band photometric observations with Herschel/PACS and use a consistent method for data reduction and aperture photometry of this sample to obtain monochromatic flux densities at 70.0, 100.0 and 160.0 μm. Additionally, we use Spitzer/MIPS flux densities at 23.68 and 71.42 μm when available, and we present new Spitzer flux densities of eight targets. We derive diameters and albedos with the near-Earth asteroid thermal model (NEATM). As auxiliary data we use re-examined absolute visual magnitudes from the literature and databases, part of which have been obtained by ground-based programs in support of our Herschel key program. We have determined for the first time radiometric sizes and albedos of eight classical TNOs, and refined previous size and albedo estimates or limits of 11 other classicals. The new size estimates of 2002 MS4 and 120347 Salacia indicate that they are among the 10 largest TNOs known. Our new results confirm the recent findings that there are very diverse albedos among the classical TNOs and that cold classicals possess a high average albedo (0.17 ± 0.04). Diameters of classical TNOs strongly correlate with orbital inclination in our sample. We also determine the bulk densities of six binary TNOs. (Comment: 21 pages, 9 figures; accepted for publication in Astronomy and Astrophysics.)
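    For the binary TNOs, bulk density follows from the system mass, given by Kepler's third law applied to the mutual orbit, divided by the total volume implied by the radiometric diameters. The sketch below walks through that chain with placeholder orbital and size values; it illustrates the general calculation, not the paper's fitted numbers.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def system_mass_kg(semi_major_axis_m, period_s):
    """Kepler's third law for the binary's mutual orbit: M = 4*pi^2*a^3 / (G*P^2)."""
    return 4.0 * math.pi ** 2 * semi_major_axis_m ** 3 / (G * period_s ** 2)

def bulk_density(mass_kg, diameters_m):
    """System mass divided by the summed volumes of spherical components."""
    volume = sum(math.pi / 6.0 * d ** 3 for d in diameters_m)
    return mass_kg / volume  # kg/m^3

# Placeholder values for illustration only (not results from the paper):
m = system_mass_kg(semi_major_axis_m=4.5e6, period_s=20.0 * 86400.0)
rho = bulk_density(m, diameters_m=[300e3, 200e3])
print(f"system mass ≈ {m:.2e} kg, bulk density ≈ {rho / 1000:.2f} g/cm^3")
```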

    A new approach to measure reduction intensity on cores and tools on cobbles: the Volumetric Reconstruction Method

    Knowing to what extent lithic cores have been reduced through knapping is an important step toward understanding the technological variability of lithic assemblages and disentangling the formation processes of archaeological assemblages. It also complements the more developed studies of reduction intensity in retouched tools, and can provide information on raw material management and site occupation dynamics. This paper presents a new methodology for estimating the intensity of reduction in cores and tools on cobbles, the Volumetric Reconstruction Method (VRM). The method corrects the dimensions (length, width, and thickness) of each core in an assemblage, using the mean thickness and mean platform thickness of the assemblage's flakes as corrections for the cores' original dimensions after diacritic analysis of each core. From these corrected dimensions, the volume or mass of the original blank is reconstructed using the ellipsoid volume formula. The accuracy of this method was tested experimentally, reproducing a variety of possible archaeological scenarios. The experimental results demonstrate the high inferential potential of the VRM, both in estimating the original volume or mass of the original blanks and in inferring the individual percentage of reduction for each core. The results of random resampling demonstrate the applicability of the VRM to non-size-biased archaeological contexts.
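    The reconstruction step can be sketched as follows: enlarge the measured core dimensions using the assemblage's mean flake thickness and mean platform thickness, then apply the ellipsoid volume formula to the corrected dimensions. The particular way the corrections are distributed over the three axes below is an assumption for illustration; the paper defines the exact correction after diacritic analysis of each core, and all measurements shown are placeholders.

```python
import math

def ellipsoid_volume_cm3(length_cm, width_cm, thickness_cm):
    """Ellipsoid volume formula used by the VRM: V = (4/3) * pi * (L/2) * (W/2) * (T/2)."""
    return (4.0 / 3.0) * math.pi * (length_cm / 2) * (width_cm / 2) * (thickness_cm / 2)

def reconstruct_blank_volume(core_dims_cm, mean_flake_thickness_cm, mean_platform_thickness_cm):
    """Hypothetical correction scheme (illustration only): each core dimension is
    enlarged by the assemblage's mean platform/flake thickness before applying
    the ellipsoid formula."""
    length, width, thickness = core_dims_cm
    corrected = (length + mean_platform_thickness_cm,
                 width + mean_flake_thickness_cm,
                 thickness + mean_flake_thickness_cm)
    return ellipsoid_volume_cm3(*corrected)

# Placeholder measurements in cm: an 8.0 x 6.0 x 4.5 cm core, mean flake
# thickness 0.8 cm, mean platform thickness 1.1 cm.
print(f"{reconstruct_blank_volume((8.0, 6.0, 4.5), 0.8, 1.1):.1f} cm^3")
```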

    Statistical Geometry of Packing Defects of Lattice Chain Polymer from Enumeration and Sequential Monte Carlo Method

    Voids exist in proteins as packing defects and are often associated with protein functions. We study the statistical geometry of voids in two-dimensional lattice chain polymers. We define voids as topological features and develop a simple algorithm for their detection. For short chains, void geometry is examined by enumerating all conformations. For long chains, the space of void geometry is explored using sequential Monte Carlo importance sampling and resampling techniques. We characterize the relationship of geometric properties of voids with chain length, including the probability of void formation, the expected number of voids, void size, and the wall size of voids. We formalize the concept of packing density for lattice polymers, and further study the relationship between packing density and compactness, two parameters frequently used to describe protein packing. We find that both fully extended and maximally compact polymers have the highest packing density, while polymers of intermediate compactness have low packing density. To study the conformational entropic effects of void formation, we characterize the conformational reduction factor of void formation and find that there is a strong end effect: voids are more likely to form at the chain ends. The critical exponent of this end effect is twice as large as that of self-contacting loop formation when the existence of voids is not required. We also briefly discuss the sequential Monte Carlo sampling and resampling techniques used in this study. (Comment: 29 pages, including 12 figures.)
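    The sequential Monte Carlo part of the study grows chains monomer by monomer and weights each partial conformation by the number of available lattice sites (Rosenbluth-style importance sampling), resampling when the weights degenerate. The minimal sketch below shows the growth-and-weighting idea for 2D self-avoiding lattice chains; it omits resampling and void detection, and is an illustrative reading of the technique rather than the authors' implementation.

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_chain(n_monomers):
    """Grow one self-avoiding chain on the square lattice, returning the
    conformation and its Rosenbluth weight (product of the number of free
    neighbours at each growth step). Weight 0 means the chain got trapped."""
    chain = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n_monomers - 1):
        x, y = chain[-1]
        free = [(x + dx, y + dy) for dx, dy in MOVES if (x + dx, y + dy) not in occupied]
        if not free:
            return chain, 0.0  # dead end: trapped conformation
        weight *= len(free)
        nxt = random.choice(free)
        chain.append(nxt)
        occupied.add(nxt)
    return chain, weight

# Weighted average of a chain property (here, squared end-to-end distance).
samples = [grow_chain(20) for _ in range(2000)]
total_w = sum(w for _, w in samples)
r2 = sum(w * ((c[-1][0] - c[0][0]) ** 2 + (c[-1][1] - c[0][1]) ** 2)
         for c, w in samples) / total_w
print(f"<R^2> ≈ {r2:.2f}")
```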

    Airborne LiDAR for DEM generation: some critical issues

    Airborne LiDAR is one of the most effective and reliable means of terrain data collection. Using LiDAR data for DEM generation is becoming standard practice in spatially related fields. However, effective processing of the raw LiDAR data and generation of an efficient, high-quality DEM remain big challenges. This paper reviews recent advances in airborne LiDAR systems and the use of LiDAR data for DEM generation, with special focus on LiDAR data filters, interpolation methods, DEM resolution, and LiDAR data reduction. Separating LiDAR points into ground and non-ground is the most critical and difficult step in DEM generation from LiDAR data. Commonly used and recently developed LiDAR filtering methods are presented. Interpolation methods and the choice of a suitable interpolator and DEM resolution for LiDAR DEM generation are discussed in detail. To reduce data redundancy and increase efficiency in terms of storage and manipulation, LiDAR data reduction is required in the process of DEM generation. Feature-specific elements such as breaklines contribute significantly to DEM quality; data reduction should therefore be conducted so that critical elements are kept while less important elements are removed. Given the high-density characteristic of LiDAR data, breaklines can be extracted directly from LiDAR data. Extraction of breaklines and their integration into DEM generation are presented.
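    The two core steps the review focuses on, ground/non-ground separation and interpolation onto a regular grid, can be illustrated with a deliberately simple sketch: a block-minimum ground filter followed by inverse-distance-weighted interpolation. This is not one of the production filters discussed in the review; the function names, parameters and toy point cloud are placeholders.

```python
import numpy as np

def block_minimum_ground(points, cell_size):
    """Naive ground filter: keep the lowest return in each grid cell.
    points: (N, 3) array of x, y, z."""
    keys = np.floor(points[:, :2] / cell_size).astype(int)
    ground = {}
    for key, p in zip(map(tuple, keys), points):
        if key not in ground or p[2] < ground[key][2]:
            ground[key] = p
    return np.array(list(ground.values()))

def idw_elevation(ground_points, x, y, power=2.0, eps=1e-6):
    """Inverse-distance-weighted elevation at a single DEM node."""
    d = np.hypot(ground_points[:, 0] - x, ground_points[:, 1] - y) + eps
    w = 1.0 / d ** power
    return float(np.sum(w * ground_points[:, 2]) / np.sum(w))

# Toy cloud: random returns, with some raised to mimic vegetation/buildings.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(5000, 3))
pts[:, 2] = rng.uniform(0, 1, 5000)                    # near-flat ground
pts[::7, 2] += rng.uniform(2, 15, len(pts[::7]))       # non-ground returns
gp = block_minimum_ground(pts, cell_size=10.0)
print(f"DEM node elevation at (50, 50): {idw_elevation(gp, 50.0, 50.0):.2f} m")
```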