
    Semantic Modeling of Analytic-based Relationships with Direct Qualification

    Successfully modeling state- and analytics-based semantic relationships of documents enhances the representation, importance, relevancy, provenance, and priority of the document. These attributes are the core elements that form the machine-based knowledge representation for documents. However, modeling document relationships that can change over time can be inelegant, limited, complex or overly burdensome for semantic technologies. In this paper, we present Direct Qualification (DQ), an approach for modeling any semantically referenced document, concept, or named graph with results from associated applied analytics. The proposed approach supplements the traditional subject-object relationship by providing a third leg to the relationship: the qualification of how and why the relationship exists. To illustrate, we show a prototype of an event-based system with a realistic use case for applying DQ to relevancy analytics of PageRank and Hyperlink-Induced Topic Search (HITS). Comment: Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015).
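
    The abstract does not reproduce the paper's DQ vocabulary, so the sketch below is only a rough illustration of the general idea: keep the plain subject-object link, then attach a third "leg" recording which analytic justifies it and with what score. The ex: namespace, the property names (linksTo, qualifiedBy, target, analytic, score), and the toy link graph are assumptions made here for illustration, not the authors' ontology; PageRank and HITS are computed with networkx and the statements stored with rdflib.

```python
# Illustrative sketch only: the ex: vocabulary is hypothetical, not the DQ ontology.
import networkx as nx
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/dq/")

# Toy link graph between three documents.
links = [("docA", "docB"), ("docB", "docC"), ("docC", "docA"), ("docA", "docC")]
link_graph = nx.DiGraph(links)

pagerank = nx.pagerank(link_graph)        # relevancy analytic 1
hubs, authorities = nx.hits(link_graph)   # relevancy analytic 2

kg = Graph()
kg.bind("ex", EX)

for src, dst in links:
    s, o = EX[src], EX[dst]
    kg.add((s, EX.linksTo, o))            # traditional subject-object relationship

    # Third "leg": qualification nodes recording how and why the link is relevant.
    for name, scores in (("PageRank", pagerank), ("HITS-authority", authorities)):
        q = BNode()
        kg.add((s, EX.qualifiedBy, q))
        kg.add((q, EX.target, o))
        kg.add((q, EX.analytic, Literal(name)))
        kg.add((q, EX.score, Literal(scores[dst], datatype=XSD.double)))

print(kg.serialize(format="turtle"))
```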

    Logics and practices of transparency and opacity in real-world applications of public sector machine learning

    Machine learning systems are increasingly used to support public sector decision-making across a variety of sectors. Given concerns around accountability in these domains, and amidst accusations of intentional or unintentional bias, there have been increased calls for transparency of these technologies. Few, however, have considered how logics and practices concerning transparency have been understood by those involved in the machine learning systems already being piloted and deployed in public bodies today. This short paper distils insights about transparency on the ground from interviews with 27 such actors, largely public servants and relevant contractors, across 5 OECD countries. Considering transparency and opacity in relation to trust and buy-in, better decision-making, and the avoidance of gaming, it seeks to provide useful insights for those hoping to develop socio-technical approaches to transparency that might be useful to practitioners on the ground. An extended, archival version of this paper is available as Veale M., Van Kleek M., & Binns R. (2018), 'Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making', Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18), http://doi.org/10.1145/3173574.3174014. Comment: 5 pages, 0 figures, presented as a talk at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017), Halifax, Canada, August 14, 2017.

    Geospatial big data and cartography : research challenges and opportunities for making maps that matter

    Geospatial big data present a new set of challenges and opportunities for cartographic researchers in technical, methodological, and artistic realms. New computational and technical paradigms for cartography are accompanying the rise of geospatial big data. Additionally, the art and science of cartography needs to focus its contemporary efforts on work that connects to outside disciplines and is grounded in problems that are important to humankind and its sustainability. Following the development of position papers and a collaborative workshop to craft consensus around key topics, this article presents a new cartographic research agenda focused on making maps that matter using geospatial big data. This agenda provides both long-term challenges that require significant attention and short-term opportunities that we believe could be addressed in more concentrated studies.

    Future of Big Earth Data Analytics

    The state of the art of Big Earth Data Analytics can be expected to evolve rapidly in the coming years. The forces driving evolution come from both growth in the data and advancement in the field of data analytics. In the data area, advances in sensor instrumentation and platform miniaturization are increasing both data resolution and coverage, resulting in enormous growth in data Volume. Increases in temporal resolution in particular also generate demands for higher data Velocity. At the same time, the proliferation of instruments and the platforms on which they reside is increasing the Variety of datasets. The Variety increase in turn leads to questions about the Veracity of the data. In the algorithm area, powerful machine learning methods are coming to the fore, particularly Deep Neural Networks. These are powerful at detecting interesting features in the data, integrating many different measurements (i.e., data fusion), and solving classification problems. However, they remain challenging to use when the goal is to explain how natural or socio-economic phenomena work using Earth Observations. Thus, classical analysis techniques will remain relevant when the emphasis is on forming or testing explanations, as well as on supporting interactive data exploration.
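
    As a minimal, purely illustrative contrast (not drawn from the paper), the sketch below fits a small neural network and a logistic regression to the same synthetic "fused" features: the network acts as a flexible black-box classifier, while the linear model's coefficients can at least be inspected when forming or testing an explanation. The dataset, feature count, and model settings are arbitrary assumptions.

```python
# Toy illustration of the prediction-vs-explanation trade-off; data are synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Pretend these 6 features are fused measurements from different instruments.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Flexible predictor: strong at classification, hard to interpret.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

# Classical, interpretable model: coefficients support forming/testing explanations.
linear = LogisticRegression(max_iter=1000)
linear.fit(X_train, y_train)

print("Neural net accuracy:", net.score(X_test, y_test))
print("Logistic accuracy:  ", linear.score(X_test, y_test))
print("Logistic coefficients per fused feature:", linear.coef_.round(2))
```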

    The Analysis of Big Data on Cities and Regions - Some Computational and Statistical Challenges

    Big Data on cities and regions bring new opportunities and challenges to data analysts and city planners. On the one hand, they hold great promise to combine increasingly detailed data for each citizen with critical infrastructures to plan, govern and manage cities and regions, improve their sustainability, optimize processes and maximize the provision of public and private services. On the other hand, the massive sample size and high dimensionality of Big Data and their geo-temporal character introduce unique computational and statistical challenges. This chapter provides an overview of the salient characteristics of Big Data and of how these features drive a paradigm change in data management, data analysis, and the computing environment. Series: Working Papers in Regional Science.

    Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services

    One of the most widely implemented service standards provided by the Open Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS). WMS is widely employed globally, but there is limited knowledge of the global distribution, adoption status or the service quality of these online WMS resources. To fill this void, we investigated global WMS resources and performed distributed performance monitoring of these services. This paper explicates a crawling method used to discover these WMSs and a distributed monitoring framework that monitored 46,296 WMSs continuously for over one year. We analyzed server locations, provider types, themes, the spatiotemporal coverage of map layers and the service versions for 41,703 valid WMSs. Furthermore, we appraised the stability and performance of basic operations (i.e., GetCapabilities and GetMap) for 1,210 selected WMSs. We discuss the major reasons for request errors and performance issues, as well as the relationship between service response times and the spatiotemporal distribution of client monitoring sites. This paper will help service providers, end users and developers of standards to grasp the status of global WMS resources, as well as to understand the adoption status of OGC standards. The conclusions drawn in this paper can benefit geospatial resource discovery and service performance evaluation, and guide service performance improvements. Comment: 24 pages; 15 figures.
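
    The monitoring framework itself is not described in code here; as a hedged sketch of the kind of per-service probing the abstract mentions, the snippet below times one GetCapabilities and one GetMap request with the requests library. The endpoint URL, layer name, bounding box, and timeout are placeholder assumptions; the GetMap parameters follow the standard WMS 1.3.0 request syntax.

```python
# Sketch of per-service probing, not the paper's monitoring framework.
# The endpoint and layer are placeholders; substitute a real WMS to run it.
import time
import requests

WMS_URL = "https://example.org/wms"   # hypothetical endpoint
LAYER = "example_layer"               # hypothetical layer name

def timed_get(url, params, timeout=30):
    """Issue one WMS request and return (status_code, elapsed_seconds)."""
    start = time.perf_counter()
    resp = requests.get(url, params=params, timeout=timeout)
    return resp.status_code, time.perf_counter() - start

# GetCapabilities: advertises layers, supported versions and coverage.
caps_status, caps_time = timed_get(WMS_URL, {
    "SERVICE": "WMS",
    "REQUEST": "GetCapabilities",
})

# GetMap: renders one small tile of a layer (WMS 1.3.0 parameters;
# for EPSG:4326 the 1.3.0 BBOX axis order is lat,lon).
map_status, map_time = timed_get(WMS_URL, {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": LAYER,
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",
    "WIDTH": "256",
    "HEIGHT": "256",
    "FORMAT": "image/png",
})

print(f"GetCapabilities: HTTP {caps_status} in {caps_time:.2f}s")
print(f"GetMap:          HTTP {map_status} in {map_time:.2f}s")
```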