
    NOSQL design for analytical workloads: Variability matters

    Big Data has recently gained popularity and has strongly called into question the role of relational databases as universal storage systems, especially in the presence of analytical workloads. As a result, co-relational alternatives, commonly known as NOSQL (Not Only SQL) databases, are extensively used for Big Data. Because the primary focus of NOSQL is performance, NOSQL databases are designed directly at the physical level, and the resulting schema is consequently tailored to the dataset and access patterns of the problem at hand. However, we believe that NOSQL design can also benefit from traditional design approaches. In this paper we present a method to design databases for analytical workloads. Starting from the conceptual model and adopting the classical three-phase design used for relational databases, we propose a novel design method that accounts for the new features brought by NOSQL and encompasses relational and co-relational design altogether.
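
    A minimal sketch of the kind of workload-driven physical design the abstract describes: a conceptual Customer/Order model (hypothetical names) denormalized into a document-store schema shaped by one analytical access pattern. This illustrates the general read-optimization trade-off, not the paper's actual method.

        # Sketch: deriving a document-oriented physical schema from a
        # conceptual model, driven by one analytical access pattern
        # ("orders per customer per month"). Names are hypothetical.

        conceptual_model = {
            "Customer": ["customer_id", "name", "region"],
            "Order": ["order_id", "customer_id", "date", "total"],
        }

        def group_by_month(orders):
            grouped = {}
            for o in orders:
                grouped.setdefault(o["date"][:7], []).append(
                    {"order_id": o["order_id"], "total": o["total"]})
            return grouped

        def design_for_access_pattern(customer, orders):
            """Denormalize Customer 1-N Order into a single document so
            the target query is answered with one read, trading
            redundancy for read performance (the core NOSQL trade-off)."""
            return {
                "_id": customer["customer_id"],
                "region": customer["region"],
                # Orders embedded, pre-grouped by month for the query.
                "orders_by_month": group_by_month(orders),
            }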

    Query processing of spatial objects: Complexity versus Redundancy

    The management of complex spatial objects in applications such as geography and cartography imposes stringent new requirements on spatial database systems, in particular on efficient query processing. As shown in previous work, the performance of spatial query processing can be improved by decomposing complex spatial objects into simple components. Up to now, only decomposition techniques generating a linear number of very simple components, e.g. triangles or trapezoids, have been considered. In this paper, we investigate the natural trade-off between the complexity of the components and the redundancy, i.e. the number of components, with respect to its effect on efficient query processing. In particular, we present two new decomposition methods that strike a better balance between the complexity and the number of components than previously known techniques. We compare these new decomposition methods to the traditional undecomposed representation as well as to the well-known decomposition into convex polygons with respect to their performance in spatial query processing. This comparison shows that for a wide range of query selectivity the new decomposition techniques clearly outperform both the undecomposed representation and the convex decomposition method. More important than the absolute performance gain of up to an order of magnitude is the robust performance of our new decomposition techniques over the whole range of query selectivity.
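
    The trade-off discussed above can be illustrated with a toy example (not the paper's decomposition techniques): a convex polygon queried as one complex component versus as a fan of triangles, where redundancy grows but each per-component test becomes trivial.

        # Sketch of the complexity-vs-redundancy trade-off: a convex
        # polygon is either queried directly (one complex component) or
        # decomposed into a fan of triangles (many simple components).

        def sign(o, a, b):
            # 2D cross product of (a - o) and (b - o).
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def point_in_convex_polygon(poly, p):
            # One component, O(n) edge tests per query (CCW vertex order).
            return all(sign(poly[i], poly[(i + 1) % len(poly)], p) >= 0
                       for i in range(len(poly)))

        def fan_triangulate(poly):
            # n-gon -> n-2 triangles: more redundancy, trivially simple parts.
            return [(poly[0], poly[i], poly[i + 1])
                    for i in range(1, len(poly) - 1)]

        def point_in_triangles(triangles, p):
            # Each test touches only 3 edges; a spatial index over the many
            # small components is what pays off for selective queries.
            return any(point_in_convex_polygon(t, p) for t in triangles)

        square = [(0, 0), (4, 0), (4, 4), (0, 4)]
        assert point_in_convex_polygon(square, (1, 1))
        assert point_in_triangles(fan_triangulate(square), (1, 1))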

    Cultural heritage and sustainable development targets : a possible harmonisation? Insights from the European Perspective

    The 2030 Agenda includes a set of targets to be achieved by 2030. Although none of the 17 Sustainable Development Goals (SDGs) focuses exclusively on cultural heritage, the Agenda makes explicit reference to heritage in SDG 11.4 and indirect reference in other Goals. Achievement of international targets must happen at the local and national levels, and it is therefore crucial to understand how interventions on local heritage are monitored nationally and thus feed into the sustainable development framework. This paper gauges the implementation of the Sustainable Development Goals with reference to cultural heritage by interrogating the current way of classifying (and consequently monitoring) it. There is no common dataset associated with monitoring the SDGs, and the field of heritage is extremely complex and diversified. The purpose of the paper is to understand whether the taxonomies used by different national databases allow consistency in the classification and valuing of the different asset categories. The European case was chosen as the field of investigation in order to pilot a methodology that can be expanded in further research. A cross-comparison of a selected sample of publicly accessible national cultural heritage databases was conducted. This study confirms a general harmonisation of data towards the achievement of the SDGs, with broad agreement between the conceptualisation of cultural heritage and international frameworks, thus confirming that consistency exists in the classification and valuing of the different asset categories. However, achieving a consistent and coherent approach to integrating culture in sustainability remains challenging. The findings suggest that it could be possible to mainstream across different databases those indicators which could depict the overall level of attainment of the 2030 Agenda targets on heritage. However, more research is needed to develop a robust correlation between national datasets and international targets.

    Analyzing peptides and proteins by mass spectrometry: principles and applications in proteomics

    The complete book is available at: http://hdl.handle.net/2445/32166. The study of proteins has been a key element of biomedicine and biotechnology because of their important role in cell functions and enzymatic activity. Cells are the basic unit of living organisms and are governed by a vast range of chemical reactions, which must be highly regulated in order to achieve homeostasis. Proteins are polymeric molecules that, over the course of evolution, have taken on the role, along with other factors, of controlling these chemical reactions. Learning how proteins interact and how their up- and down-regulation is controlled can teach us how living cells regulate their functions, as well as the cause of certain anomalies that occur in diseases where proteins are involved. Mass spectrometry (MS) is a widely used analytical technique for studying the protein content of cells, providing biomarkers that describe dysfunction in disease and increasing our knowledge of how proteins work. The methodologies involved in these analyses are integrated in the field called proteomics.
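
    As a concrete illustration of the mass-spectrometry principle involved, the following generic sketch computes a peptide's monoisotopic mass and the m/z of its charged ion from standard residue masses; it is a textbook calculation, not code from the chapter.

        # Monoisotopic residue masses (Da) for the 20 standard amino
        # acids; a peptide's neutral mass is the residue sum plus one
        # water molecule (18.010565 Da).
        RESIDUE_MASS = {
            "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
            "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
            "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
            "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
            "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
        }
        WATER = 18.010565
        PROTON = 1.007276

        def peptide_mass(sequence):
            """Neutral monoisotopic mass of an unmodified peptide."""
            return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

        def mz(sequence, charge):
            """m/z observed in the mass spectrometer for a charge state."""
            return (peptide_mass(sequence) + charge * PROTON) / charge

        print(round(mz("SAMPLER", 2), 4))  # doubly protonated peptide ion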

    Apex Peptide Elution Chain Selection: A New Strategy for Selecting Precursors in 2D-LC-MALDI-TOF/TOF Experiments on Complex Biological Samples

    LC-MALDI provides an often overlooked opportunity to exploit the separation between the LC-MS and MS/MS stages of a 2D-LC-MS-based proteomics experiment, namely by making a smarter selection of precursors for fragmentation. Apex Peptide Elution Chain Selection (APECS) is a simple and powerful method for intensity-based peptide selection in a complex sample separated by 2D-LC, using a MALDI-TOF/TOF instrument. It removes the peptide redundancy present in adjacent first-dimension (typically strong cation exchange, SCX) fractions by constructing peptide elution profiles that link the precursor ions of the same peptide across SCX fractions. Subsequently, the precursor ion most likely to fragment successfully in a given profile is selected for fragmentation analysis, based on precursor intensity and the absence of adjacent ions that may cofragment. To make the method independent of experiment-specific tolerance criteria, we introduce the concept of the branching factor, which measures the likelihood of false clustering of precursor ions based on past experiments. In a validation with a complex proteome sample of Arabidopsis thaliana, APECS identified a number of peptides equivalent to that of a conventional data-dependent acquisition method, but with a 35% smaller workload. Consequently, reduced sample depletion allowed further selection of lower signal-to-noise ratio precursor ions, leading to a larger number of identified unique peptides.
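
    The elution-profile idea can be sketched as follows, as a simplified illustration rather than the authors' implementation: the branching factor and cofragmentation checks are omitted, and the tolerance value is made up. Precursors with matching m/z in adjacent SCX fractions are chained into one profile, and only the apex precursor of each profile is kept for fragmentation.

        # Simplified sketch of apex selection across SCX fractions.
        MZ_TOL = 0.2  # Da, illustrative only

        def build_profiles(fractions):
            """fractions: list (in SCX order) of lists of (mz, intensity)."""
            profiles = []
            open_profiles = []  # profiles extendable from the previous fraction
            for peaks in fractions:
                extended = []
                for mz_val, inten in peaks:
                    for prof in open_profiles:
                        if abs(prof[-1][0] - mz_val) <= MZ_TOL:
                            prof.append((mz_val, inten))  # chain across fractions
                            extended.append(prof)
                            break
                    else:
                        prof = [(mz_val, inten)]  # start a new elution profile
                        profiles.append(prof)
                        extended.append(prof)
                open_profiles = extended
            return profiles

        def apex_precursors(fractions):
            # One fragmentation target per profile: its most intense member.
            return [max(prof, key=lambda p: p[1])
                    for prof in build_profiles(fractions)]

        fractions = [[(500.3, 1e4)], [(500.4, 5e4), (622.8, 2e4)], [(500.3, 2e4)]]
        print(apex_precursors(fractions))  # one apex replaces three redundant picks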

    Software Tools and Approaches for Compound Identification of LC-MS/MS Data in Metabolomics.

    The annotation of small molecules remains a major challenge in untargeted mass spectrometry-based metabolomics. Here we critically discuss structure elucidation approaches and software designed to help with the annotation of unknown compounds. Only by elucidating unknown metabolites first is it possible to biologically interpret complex systems, map compounds to pathways, and create reliable predictive metabolic models for translational and clinical research. These strategies include the construction and quality of tandem mass spectral databases such as the coalition of MassBank repositories, and investigations of MS/MS matching confidence. We present in silico fragmentation tools such as MS-FINDER, CFM-ID, MetFrag, ChemDistiller and CSI:FingerID that can annotate compounds from existing structure databases and that have been used in the CASMI (critical assessment of small molecule identification) contests. Furthermore, the use of retention time models from liquid chromatography and the utility of collision cross-section modelling from ion mobility experiments are covered. Workflows and published examples of successfully annotated unknown compounds are included.
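
    As an illustration of the MS/MS matching confidence discussed above, the sketch below computes a generic binned cosine (dot-product) similarity between a query spectrum and a library spectrum, the kind of score used when searching repositories such as MassBank. Real tools differ in peak weighting and tolerance handling; the bin width here is illustrative.

        # Generic cosine similarity between two MS/MS spectra after
        # binning fragment m/z values.
        import math

        def binned(spectrum, width=0.5):
            """spectrum: list of (mz, intensity) -> dict of bin -> intensity."""
            bins = {}
            for mz_val, inten in spectrum:
                b = round(mz_val / width)
                bins[b] = bins.get(b, 0.0) + inten
            return bins

        def cosine_score(query, library_entry, width=0.5):
            q, l = binned(query, width), binned(library_entry, width)
            dot = sum(q[b] * l.get(b, 0.0) for b in q)
            norm = (math.sqrt(sum(v * v for v in q.values()))
                    * math.sqrt(sum(v * v for v in l.values())))
            return dot / norm if norm else 0.0

        query = [(91.05, 40.0), (119.08, 100.0), (167.07, 10.0)]
        library_entry = [(91.05, 35.0), (119.08, 100.0)]
        print(round(cosine_score(query, library_entry), 3))  # near 1 = good match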

    Solutions for decision support in university management

    The paper proposes an overview of decision support systems in order to define the role of a system to assist decision making in university management. The authors present new technologies and the basic concepts of multidimensional data analysis, using models of business processes within universities. Based on information provided by the scientific literature and on the authors' experience, the study aims to define selection criteria for choosing a development environment for designing a support system dedicated to university management. The contributions consist of the design of a data warehouse model and of OLAP analysis models to assist decision making in university management.
    Keywords: university management, decision support, multidimensional analysis, data warehouse, OLAP
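
    A minimal sketch of the kind of multidimensional analysis the paper describes: a toy star-schema fact table rolled up along a dimension with standard SQL. Table and column names are hypothetical, not the authors' warehouse model.

        # OLAP-style roll-up over a toy star schema for university
        # management, using the stdlib sqlite3 module.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE dim_faculty (faculty_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE fact_enrolment (
                faculty_id INTEGER REFERENCES dim_faculty(faculty_id),
                year INTEGER,
                students INTEGER
            );
            INSERT INTO dim_faculty VALUES (1, 'Economics'), (2, 'Engineering');
            INSERT INTO fact_enrolment VALUES
                (1, 2022, 410), (1, 2023, 455), (2, 2022, 390), (2, 2023, 430);
        """)

        # Roll-up along the faculty dimension: total enrolment per year,
        # the kind of aggregate a management dashboard would surface.
        for row in conn.execute("""
                SELECT year, SUM(students) AS total
                FROM fact_enrolment GROUP BY year ORDER BY year"""):
            print(row)  # (2022, 800), (2023, 885)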