8,373 research outputs found

    The LIGO Open Science Center

    The LIGO Open Science Center (LOSC) fulfills LIGO's commitment to release, archive, and serve LIGO data in a broadly accessible way to the scientific community and to the public, and to provide the information and tools necessary to understand and use the data. In August 2014, the LOSC published the full dataset from Initial LIGO's "S5" run at design sensitivity, the first such large-scale release and a valuable testbed for exploring the use of LIGO data by non-LIGO researchers and by the public, and for teaching gravitational-wave data analysis to students across the world. In addition to serving the S5 data, the LOSC web portal (losc.ligo.org) now offers documentation, data-location and data-quality queries, tutorials and example code, and more. We review the mission and plans of the LOSC, focusing on the S5 data release.
    Comment: 8 pages, 1 figure, proceedings of the 10th LISA Symposium, University of Florida, Gainesville, May 18-23, 2014; final published version; see losc.ligo.org for the S5 data release and more information about the LIGO Open Science Center.
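    As a taste of what the portal's tutorials and example code cover, here is a minimal sketch of reading one S5 strain file with h5py. The file name is hypothetical, and the strain/Strain dataset with its Xstart/Xspacing attributes reflects the HDF5 layout documented on losc.ligo.org; treat both as assumptions rather than the portal's own tutorial code.

```python
# Minimal sketch: reading one LOSC strain-data HDF5 file with h5py.
# File name is hypothetical; dataset and attribute names follow the
# layout documented on losc.ligo.org (assumptions, verify per release).
import h5py
import numpy as np

with h5py.File("H-H1_LOSC_4_V1-815411200-4096.hdf5", "r") as f:
    dset = f["strain/Strain"]
    strain = dset[:]                      # detector strain time series
    dt = dset.attrs["Xspacing"]           # sample spacing in seconds
    t0 = dset.attrs["Xstart"]             # GPS start time of the segment
    t = t0 + dt * np.arange(len(strain))  # per-sample GPS timestamps

print(f"{len(strain)} samples from GPS {t0}, fs = {1.0 / dt:.0f} Hz")
```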

    Stigmergic hyperlinks' contributions to web search

    Stigmergic hyperlinks are hyperlinks with a "heart beat": if used, they stay healthy and online; if neglected, they fade, eventually getting replaced. Their life attribute is a relative usage measure that regular hyperlinks do not provide; hence, PageRank-like measures have historically been well informed about the structure of webs of documents, but unaware of what users actually do with the links. This paper elaborates on how to input the users' perspective into Google's original, structure-centric PageRank metric. The discussion then bridges to the Deep Web, some search challenges, and how stigmergic hyperlinks could help decentralize the search experience, facilitating user-generated search solutions and supporting new related business models.
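    To make the idea concrete, the sketch below blends the classic structural out-link weight with a normalized click count before running PageRank's power iteration. The blending parameter beta and the click-normalization rule are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of folding link usage into PageRank. Mixing uniform out-link
# weights with normalized click counts via `beta` is an assumption made
# for illustration, not the paper's published formula.
import numpy as np

def usage_pagerank(links, clicks, beta=0.5, d=0.85, iters=100):
    """links[i] = pages that i links to; clicks[(i, j)] = observed uses."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - d) / n)
        for i, outs in enumerate(links):
            if not outs:                      # dangling page: spread evenly
                new += d * rank[i] / n
                continue
            total = sum(clicks.get((i, j), 0) for j in outs)
            for j in outs:
                w_struct = 1.0 / len(outs)    # classic structural weight
                w_usage = clicks.get((i, j), 0) / total if total else w_struct
                new[j] += d * rank[i] * ((1 - beta) * w_struct + beta * w_usage)
        rank = new
    return rank

# Example: page 2 receives most of page 0's clicks, so it gains rank
# relative to the purely structural (beta = 0) result.
print(usage_pagerank([[1, 2], [2], [0]], {(0, 2): 9, (0, 1): 1}))
```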

    Selection of third party software in Off-The-Shelf-based software development: an interview study with industrial practitioners

    The success of software development using third party components depends heavily on the ability to select a suitable component for the intended application. The evidence shows that there is limited knowledge about current industrial OTS selection practices. As a result, there is often a gap between theory and practice, and the methods proposed for supporting selection are rarely adopted in industrial practice. This paper's goal is to investigate the actual industrial practice of component selection in order to provide an initial empirical basis that allows the reconciliation of research and industrial endeavors. The study consisted of semi-structured interviews with 23 employees from 20 different software-intensive companies that mostly develop web information system applications. It provides qualitative information that helps to further understand these practices and emphasizes some aspects that have been overlooked by researchers. For instance, although the literature claims that component repositories are important for locating reusable components, these are hardly used in industrial practice. Instead, other resources that have not received considerable attention are used with this aim. Practices and potential market niches for software-intensive companies have also been identified. The results are valuable from both the research and the industrial perspectives, as they provide a basis for formulating well-substantiated hypotheses and more effective improvement strategies.

    Legal Information on the Web: the Case of Italy


    Mathematical ontological navigator - NOMAT

    The query algorithms in search engines use indexing, contextual analysis, and ontologies, among other techniques, for text search. However, they do not use equations, due to the complexity of writing them. NOMAT is a prototype mathematical-expression search engine that seeks information both in a thesaurus and on the internet, using an ontological tool for filtering and contextualizing information and a LaTeX editor for the symbols in these expressions. This search engine was created to support mathematical research. Compared to other internet search engines, NOMAT does not require prior knowledge of LaTeX, because it has an editing tool that enables writing directly the symbols that make up the mathematical expression of interest. The results obtained were accurate and contextualized compared to those of other commercial and non-commercial search engines.
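    One preprocessing step such an engine needs is normalizing LaTeX input so that visually different spellings of the same expression index to the same key. The token rules below are assumptions made for illustration; the paper does not publish NOMAT's algorithm.

```python
# Illustrative sketch of normalizing a LaTeX expression into a canonical
# token stream for search indexing. Tokenization rules are assumptions,
# not NOMAT's actual implementation.
import re

# A token is a LaTeX command, a single letter/digit, or any other
# non-space, non-brace symbol; braces and whitespace are layout-only.
TOKEN = re.compile(r"\\[a-zA-Z]+|[a-zA-Z0-9]|[^\s{}]")

def normalize(latex: str) -> tuple:
    """Tokenize LaTeX math, dropping layout-only braces and whitespace."""
    return tuple(TOKEN.findall(latex))

# Two spellings of the quadratic formula index to the same key.
a = normalize(r"x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}")
b = normalize(r"x=\frac{ -b\pm\sqrt{ b^{2}-4ac } }{ 2a }")
print(a == b)  # True
```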

    Building Data-Driven Pathways From Routinely Collected Hospital Data: A Case Study on Prostate Cancer

    Background: Routinely collected data in hospitals is complex, typically heterogeneous, and scattered across multiple Hospital Information Systems (HIS). This big data, created as a byproduct of health care activities, has the potential to provide a better understanding of diseases, unearth hidden patterns, and improve services and costs. The extent and uses of such data rely on its quality, which is not consistently checked nor fully understood. Nevertheless, using routine data for the construction of data-driven clinical pathways, describing processes and trends, is a key topic receiving increasing attention in the literature. Traditional algorithms do not cope well with unstructured processes or data, and do not produce clinically meaningful visualizations. Supporting systems that provide additional information, context, and quality assurance inspection are needed.
    Objective: The objective of the study is to explore how routine hospital data can be used to develop data-driven pathways that describe the journeys that patients take through care, and their potential uses in biomedical research; it proposes a framework for the construction, quality assessment, and visualization of patient pathways for clinical studies and decision support, using a case study on prostate cancer.
    Methods: Data pertaining to prostate cancer patients were extracted from eight different HIS at a large UK hospital, validated, and complemented with information from the local cancer registry. Data-driven pathways were built for each of the 1904 patients, and an expert knowledge base, containing rules on the prostate cancer biomarker, was used to assess the completeness and utility of the pathways for a specific clinical study. Software components were built to provide meaningful visualizations of the constructed pathways.
    Results: The proposed framework and pathway formalism enable the summarization, visualization, and querying of complex patient-centric clinical information, as well as the computation of quality indicators and dimensions. A novel graphical representation of the pathways allows the synthesis of such information.
    Conclusions: Clinical pathways built from routinely collected hospital data can unearth information about patients and diseases that may otherwise be unavailable or overlooked in hospitals. Data-driven clinical pathways allow heterogeneous data (i.e., semi-structured and unstructured data) to be collated over a unified data model and data quality dimensions to be assessed. This work has enabled further research on prostate cancer and its biomarkers, and on the development and application of methods to mine, compare, analyze, and visualize pathways constructed from routine data. This is an important development for the reuse of big data in hospitals.
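    A minimal sketch of the pathway formalism described above: a patient's journey as a time-ordered event sequence collated from several HIS, checked against one expert rule. The event kinds and the biomarker-before-treatment rule are illustrative assumptions; the paper's knowledge base covers the prostate cancer biomarker in more detail.

```python
# Sketch: patient pathway as a time-ordered event list plus one
# completeness rule. Event names and the rule are assumptions made
# for illustration, not the paper's knowledge base.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Event:
    when: date
    source: str   # which HIS the record came from
    kind: str     # e.g. "biomarker_test", "biopsy", "treatment"

def build_pathway(events):
    """Collate events from heterogeneous HIS extracts into one timeline."""
    return sorted(events, key=lambda e: e.when)

def complete_for_study(pathway):
    """Expert-rule check: a biomarker test must precede any treatment."""
    kinds = [e.kind for e in pathway]
    if "treatment" not in kinds:
        return True
    return ("biomarker_test" in kinds
            and kinds.index("biomarker_test") < kinds.index("treatment"))

pathway = build_pathway([
    Event(date(2014, 5, 2), "oncology_HIS", "treatment"),
    Event(date(2014, 1, 10), "pathology_HIS", "biomarker_test"),
])
print(complete_for_study(pathway))  # True: the test precedes the treatment
```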

    Km4City Ontology Building vs Data Harvesting and Cleaning for Smart-city Services

    Presently, a very large number of public and private data sets are available from local governments. In most cases, they are not semantically interoperable, and a huge human effort would be needed to create integrated ontologies and a knowledge base for a smart city. The smart-city ontology is not yet standardized, and much research is still needed to identify models that can easily support data reconciliation and the management of complexity, and that allow reasoning over the data. In this paper, a system for the ingestion and reconciliation of data on smart-city aspects such as the road graph, services available on the roads, and traffic sensors is proposed. The system manages a large volume of data coming from a variety of sources, considering both static and dynamic data. These data are mapped to a smart-city ontology, called Km4City (Knowledge Model for City), and stored into an RDF store, where they are available to applications via SPARQL queries, providing new services to users through specific applications of public administrations and enterprises. The paper presents the process adopted to produce the ontology and the big-data architecture for feeding the knowledge base from open and private data, as well as the mechanisms adopted for data verification, reconciliation, and validation. Some examples of possible uses of the resulting coherent big-data knowledge base, accessible from the RDF store and related services, are also offered. The article also presents the work performed on reconciliation algorithms and their comparative assessment and selection.
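    A hedged sketch of how an application might consume such a knowledge base over SPARQL. The endpoint URL, the km4c: prefix IRI, and the class and property names are assumptions made for illustration; the actual vocabulary is defined by the Km4City ontology itself.

```python
# Sketch: querying an RDF store over SPARQL with the SPARQLWrapper
# library. Endpoint URL, prefix IRI, and km4c: names are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://servicemap.example.org/sparql")  # hypothetical
endpoint.setQuery("""
    PREFIX km4c: <http://www.disit.org/km4city/schema#>
    SELECT ?sensor ?road WHERE {
        ?sensor a km4c:TrafficSensor ;   # assumed class name
                km4c:placedOn ?road .    # assumed property name
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

# Print each traffic sensor with the road it is placed on.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["sensor"]["value"], row["road"]["value"])
```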

    Proceedings of the 15th Conference on Knowledge Organization WissOrg'17 of the German Chapter of the International Society for Knowledge Organization (ISKO), 30th November - 1st December 2017, Freie Universität Berlin

    Wissensorganisation is the name of a series of biennial conferences/workshops with a long tradition, organized by the German chapter of the International Society for Knowledge Organization (ISKO). The 15th conference in this series, held at Freie Universität Berlin, focused on knowledge organization for the digital humanities. Structuring, and interacting with, large data collections has become a major issue in the digital humanities. In these proceedings, various aspects of knowledge organization in the digital humanities are discussed, and the authors of the papers show how projects in the digital humanities deal with knowledge organization.