
    A framework for evaluating 3D topological relations based on a vector data model

    3D topological relations are commonly used for testing or imposing desired properties between the objects of a dataset, such as a city model. Currently available GIS systems usually provide only limited 3D support, typically a set of 3D spatial data types together with a few operations and predicates, while little or no support is provided for 3D topological relations. An important problem is therefore how such relations can actually be implemented using the constructs already provided by available systems. In this paper, we introduce a generic 3D vector model which includes an abstract and formal description of the 3D spatial data types and of the related basic operations and predicates commonly provided by GIS systems. Based on this model, we formally demonstrate how these limited sets of operations and predicates can be combined with 2D topological relations to implement 3D topological relations.
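    A minimal sketch of the general idea, with axis-aligned boxes standing in for 3D solids (all names are illustrative, not the paper's model): a 3D predicate such as "intersects" can be composed from a 2D footprint predicate of the kind GIS systems already provide, plus a 1D interval test along the z axis.

```python
# Sketch: composing a 3D topological predicate from a 2D predicate
# plus a 1D z-interval test. Box3D and the function names are
# illustrative stand-ins, not the paper's formal model.
from dataclasses import dataclass

@dataclass
class Box3D:
    xmin: float
    ymin: float
    zmin: float
    xmax: float
    ymax: float
    zmax: float

def intersects_2d(a: Box3D, b: Box3D) -> bool:
    """2D footprint predicate, as commonly provided by GIS systems."""
    return (a.xmin <= b.xmax and b.xmin <= a.xmax
            and a.ymin <= b.ymax and b.ymin <= a.ymax)

def overlaps_1d(lo1, hi1, lo2, hi2) -> bool:
    """Interval overlap along the z axis."""
    return lo1 <= hi2 and lo2 <= hi1

def intersects_3d(a: Box3D, b: Box3D) -> bool:
    """3D 'intersects' composed from the 2D predicate and a z test."""
    return intersects_2d(a, b) and overlaps_1d(a.zmin, a.zmax, b.zmin, b.zmax)

a = Box3D(0, 0, 0, 10, 10, 10)
b = Box3D(5, 5, 5, 15, 15, 15)   # overlaps a along all three axes
c = Box3D(5, 5, 20, 15, 15, 30)  # same footprint as b, disjoint in z
print(intersects_3d(a, b))  # True
print(intersects_3d(a, c))  # False: footprints meet, z intervals do not
```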

    Cost estimation of spatial join in spatialhadoop

    Spatial join is an important operation in geo-spatial applications, since it is frequently used for performing data analysis involving geographical information. Many efforts have been made in past decades to provide efficient algorithms for spatial join, and this becomes particularly important as the amount of spatial data to be processed increases. In recent years, the MapReduce approach has become a de-facto standard for processing large amounts of data (big data), and some attempts have been made to extend existing frameworks to the processing of spatial data. In this context, several different MapReduce implementations of spatial join have been defined, which mainly differ in the use of a spatial index and in the way this index is built and used. In general, none of these algorithms can be considered better than the others; rather, the choice may depend on the characteristics of the involved datasets. The aim of this work is to analyse them in depth and to define a cost model for ranking them based on the characteristics of the dataset at hand (e.g., selectivity or spatial properties). This cost model has been extensively tested on a set of synthetic datasets in order to prove its effectiveness.
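    The ranking idea can be sketched as follows; the cost formulas and algorithm names below are purely illustrative placeholders, not the model defined in the paper. Each candidate join algorithm gets a cost estimated from dataset characteristics (cardinalities, selectivity), and the algorithms are ranked by that estimate.

```python
# Sketch of a cost model used to rank spatial-join algorithms by
# estimated cost. The two cost formulas below are hypothetical
# examples, not the paper's actual model.

def cost_no_index(n1, n2, sel):
    # nested-loop style join: every pair of objects is a candidate
    return n1 * n2

def cost_grid_index(n1, n2, sel, cells=100):
    # build a grid index on one side, probe with the other;
    # selectivity scales the expected number of candidate pairs
    return n1 + n2 + sel * n1 * n2 / cells

def rank_algorithms(n1, n2, sel):
    """Return algorithm names ordered from cheapest to most expensive."""
    costs = {
        "no-index": cost_no_index(n1, n2, sel),
        "grid-index": cost_grid_index(n1, n2, sel),
    }
    return sorted(costs, key=costs.get)

# For large, selective datasets the indexed variant is estimated cheaper;
# for tiny datasets the index-building overhead dominates.
print(rank_algorithms(1_000_000, 1_000_000, sel=0.001))
print(rank_algorithms(2, 2, sel=1.0))
```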

    Promoting data provenance tracking in the archaeological interpretation process

    In this paper we propose a model and a set of derivation rules for tracking data provenance during the archaeological interpretation process. The interpretation process is the main task performed by an archaeologist who, starting from ground data about evidence and findings, tries to derive knowledge about an ancient object or event. In particular, in this work we concentrate on the dating process used by archaeologists to assign one or more time intervals to a finding in order to define its lifespan on the temporal axis, and we propose a framework to represent such information and infer new knowledge, including the provenance of data. Archaeological data, and in particular their temporal dimension, are typically vague, since many different interpretations can coexist; we therefore use Fuzzy Logic to assign a degree of confidence to values and Fuzzy Temporal Constraint Networks to model relationships between the datings of different findings.
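    One common way to attach a degree of confidence to a vague date is a trapezoidal fuzzy membership over the temporal axis; the sketch below illustrates that general idea only, with hypothetical function names and example years (it is not the paper's framework or its constraint networks).

```python
# Sketch: a trapezoidal fuzzy membership for a vague dating, and the
# fuzzy intersection (minimum) of two coexisting interpretations.
# Negative years denote BC; all values are made up for illustration.

def trapezoid(a, b, c, d):
    """Membership function: 0 outside (a, d), 1 on [b, c], linear in between."""
    def mu(t):
        if t <= a or t >= d:
            return 0.0
        if b <= t <= c:
            return 1.0
        if t < b:
            return (t - a) / (b - a)
        return (d - t) / (d - c)
    return mu

# Two coexisting interpretations of the same finding's dating
mu1 = trapezoid(-120, -100, -60, -40)  # e.g. "roughly 100-60 BC"
mu2 = trapezoid(-90, -80, -50, -30)    # a second, later interpretation

# Degree to which a year is supported by both interpretations at once
def both(t):
    return min(mu1(t), mu2(t))

print(both(-85))  # 0.5: fully supported by mu1, half-supported by mu2
```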

    The Fully Frustrated Hypercubic Model is Glassy and Aging at Large D

    We discuss the behavior of the fully frustrated hypercubic cell in the infinite-dimensional mean-field limit. In the Ising case the system undergoes a glass transition, well described by the random orthogonal model. Below the glass temperature, aging effects show up clearly. In the XY case there is no sign of a phase transition, and the system is always a paramagnet.

    Spectroscopic survey of M-type asteroids

    M-type asteroids, as defined in the Tholen taxonomy (Tholen, 1984), are medium-albedo bodies supposed to have a metallic composition and to be the progenitors of both differentiated iron-nickel meteorites and enstatite chondrites. We carried out a spectroscopic survey in the visible and near-infrared wavelength range (0.4-2.5 micron) of 30 asteroids chosen from the population of asteroids initially classified as Tholen M-types, aiming to investigate their surface composition. The data were obtained during several observing runs in the years 2004-2007 at the TNG, NTT, and IRTF telescopes. We computed the spectral slopes in several wavelength ranges for each observed asteroid, and we searched for diagnostic spectral features. We confirm a large variety of spectral behaviors for these objects as their spectra are extended into the near-infrared, including the identification of weak absorption bands, mainly the 0.9 micron band tentatively attributed to orthopyroxene, and the 0.43 micron band that may be associated with chlorites and Mg-rich serpentines or with pyroxene minerals such as pigeonite or augite. A comparison with previously published data indicates that the surfaces of several asteroids belonging to the M-class may vary significantly. We attempt to constrain the surface compositions of our sample by looking for meteorite spectral analogues in the RELAB database and by modelling with geographical mixtures of selected meteorites/minerals. We confirm that iron meteorites, pallasites, and enstatite chondrites are the best matches to most objects in our sample, as suggested for M-type asteroids. The presence of subtle absorption features on several asteroids confirms that not all objects defined by the Tholen M-class have a purely metallic composition. (10 figures, 6 tables; Icarus, in press.)

    Acculturation process and life domains: Different perceptions of native and immigrant adults in Italy

    Background: The acculturation process has taken up a relevant place in cross-cultural psychology by demonstrating the strong relationship between cultural context and individual behavioral development. Aim: The purpose of this study is to analyse the acculturation strategies and attitudes in different life domains of native and immigrant adults living in Italy, following the Relative Acculturation Extended Model (RAEM). Methods: The participants were 250 Italian native and 100 immigrant adults who completed a questionnaire with items measuring their acculturation strategies (real plane) and attitudes (ideal plane), both in general and in relation to different life domains (peripheral and central). Results: Results revealed that the acculturation attitude of immigrants is integration, whereas Italians prefer their assimilation. Conclusion: However, when different life domains are taken into account, immigrants claim to put into practice and to prefer integration in most of the domains, whereas Italians perceive immigrants as separated but prefer their assimilation or integration, depending on the specific domain.

    The blockchain role in ethical data acquisition and provisioning

    The collection of personal data through mobile applications and IoT devices represents the core business of many corporations. On the one hand, users are losing control over the ownership of their data and are rarely aware of what they are sharing with whom; on the other hand, laws like the European General Data Protection Regulation try to bring data control and ownership back to users. In this paper we discuss the possible impact of blockchain technology in building independent and resilient data management systems able to ensure data ownership and traceability. The use of this technology could play a major role in creating a transparent global market of aggregated personal data in which voluntary acquisition is subject to clear rules and some form of incentive, not only making the process ethical but also encouraging the sharing of high-quality sensitive data.

    A context-based approach for partitioning big data

    In recent years, the amount of available data has kept growing at a fast rate, and it is therefore crucial to be able to process it efficiently. The level of parallelism in tools such as Hadoop or Spark is determined, among other things, by the partitioning applied to the dataset. A common method is to split the data into chunks based on the number of bytes. While this approach may work well for text-based batch processing, there are a number of cases where the dataset contains structured information, such as time or spatial coordinates, and one may be interested in exploiting such structure to improve the partitioning. This could have an impact on the processing time and increase the overall efficiency of resource usage. This paper explores an approach based on the notion of context, such as temporal or spatial information, for partitioning the data. We design a context-based multi-dimensional partitioning technique that divides an n-dimensional space into splits by considering the distribution of each contextual dimension in the dataset. We tested our approach on a dataset from a touristic scenario, and our experiments show that we are able to improve the efficiency of resource usage.
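    A toy sketch of distribution-aware multi-dimensional splitting, assuming quantile cut points per dimension (the grid scheme and all names below are illustrative, not the paper's technique): each contextual dimension is cut at quantiles of its observed values, so that splits stay balanced even when one dimension, such as time, is heavily skewed.

```python
# Sketch: context-based partitioning that cuts each contextual
# dimension at quantiles of its distribution, then maps every
# n-dimensional point to a split id. Illustrative only.

def quantile_boundaries(values, parts):
    """Cut points dividing the sorted values into `parts` equal groups."""
    s = sorted(values)
    return [s[len(s) * i // parts] for i in range(1, parts)]

def split_id(point, boundaries_per_dim):
    """Map an n-dimensional point to its split: a tuple of per-dimension bins."""
    bins = []
    for value, bounds in zip(point, boundaries_per_dim):
        bins.append(sum(value >= b for b in bounds))
    return tuple(bins)

# Toy 2D dataset: a skewed time dimension and a spatial dimension
data = [(t, x) for t in [1, 1, 2, 2, 3, 50, 60, 90] for x in [0.1, 0.9]]
bounds = [quantile_boundaries([p[0] for p in data], 2),
          quantile_boundaries([p[1] for p in data], 2)]
print(bounds)                    # one list of cut points per dimension
print(split_id((2, 0.1), bounds))
print(split_id((60, 0.9), bounds))
```

    With byte-based chunking the dense early times would crowd into one worker's split; quantile cuts put half of the points on each side of every boundary regardless of skew.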