520 research outputs found

    Privacy Issues of the W3C Geolocation API

    Full text link
    The W3C's Geolocation API may rapidly standardize the transmission of location information on the Web, but, in dealing with such sensitive information, it also raises serious privacy concerns. We analyze the manner and extent to which the current W3C Geolocation API provides mechanisms to support privacy. We propose a privacy framework for the consideration of location information and use it to evaluate the W3C Geolocation API, both the specification and its use in the wild, and recommend some modifications to the API as a result of our analysis
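    One privacy mechanism such a framework can evaluate is limiting the precision of coordinates before they are disclosed to a requesting site. Below is a minimal Python sketch of this idea, purely illustrative: the Geolocation API itself is a JavaScript interface, and the function name and coordinates here are invented, not taken from the paper or the specification.

```python
import math

def coarsen(lat: float, lon: float, decimals: int = 2) -> tuple[float, float]:
    """Reduce coordinate precision before disclosure by snapping to a
    fixed decimal grid (2 decimals is roughly 1.1 km of latitude)."""
    factor = 10 ** decimals
    return (math.floor(lat * factor) / factor,
            math.floor(lon * factor) / factor)

# A precise fix inside a city block (hypothetical coordinates)...
precise = (48.858370, 2.294481)
# ...disclosed only at roughly 1 km granularity:
print(coarsen(*precise))  # -> (48.85, 2.29)
```

A real user agent would apply such a transformation between the sensor reading and the script callback, trading service quality for disclosure risk.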

    Design and Development of a Collaborative Mobile Web and Location-Based Service Application for Yogyakarta Cultural Tourism Information

    Get PDF
    Yogyakarta is a city with diverse tourism potential, including natural beauty, historical heritage, culinary traditions, and culture. If this potential can be optimised, it can help improve the economy of the areas surrounding these attractions. So far, however, Yogyakarta's cultural tourism has not been widely promoted. Tourism promotion can take advantage of technological developments, in particular mobile device technology. The application developed here for collecting and delivering information on Yogyakarta's cultural tourism is a mobile-web application, so that it can be accessed from a variety of devices, especially mobile devices, regardless of platform. Information about cultural tourism is updated collaboratively so that it accumulates more quickly. The application also provides a location-based service so that people interested in learning about Yogyakarta's culture can easily be guided to each location. With this system, it is hoped that more people will become involved in sharing information and more interested in getting to know Yogyakarta's culture.
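    The guidance step of such a location-based service reduces to computing the distance from the user's position to each cultural site and pointing the user toward the nearest one. A minimal sketch of that core computation follows; the coordinates and site list are illustrative assumptions, not data from the application.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical user position in central Yogyakarta and two cultural sites:
user = (-7.7956, 110.3695)
sites = {"Kraton Yogyakarta": (-7.8056, 110.3642),
         "Prambanan": (-7.7520, 110.4915)}
nearest = min(sites, key=lambda s: haversine_km(*user, *sites[s]))
print(nearest)  # -> Kraton Yogyakarta
```

In the deployed service the site list would come from the collaboratively maintained database rather than a hard-coded dictionary.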

    Geodetic monitoring of complex shaped infrastructures using Ground-Based InSAR

    Get PDF
    In the context of climate change, alternatives to fossil energies must be used as much as possible to produce electricity, and hydroelectric power generation with dams is one of the most effective options. To ensure their safe operation, various monitoring sensors can be installed, with different characteristics with respect to spatial resolution, temporal resolution and accuracy. Among the available techniques, ground-based synthetic aperture radar (GB-SAR) has not yet been widely adopted for this purpose. Despite its good balance between the attributes above, its sensitivity to atmospheric disturbances, its specific acquisition geometry and the need for phase unwrapping have constrained its use. Several processing strategies are developed in this thesis to exploit the full potential of GB-SAR systems, such as continuous, flexible and autonomous observation combined with high resolution and accuracy. The first challenge is to accurately localise the GB-SAR and estimate its azimuth in order to improve the geocoding of the image in the subsequent step. A ray-tracing algorithm and tomographic techniques are used to recover these external parameters of the sensor, and validation with corner reflectors confirms a significant error reduction. For the subsequent geocoding, however, challenges persist for vertical structures because of foreshortening and layover, which notably compromise the geocoding quality of the observed points. These issues arise when multiple points at varying elevations fall within a single resolution cell, making it difficult to pinpoint the scattering point responsible for the signal return.
To overcome these difficulties, a Bayesian approach based on intensity models is formulated, providing a tool to improve the accuracy of the geocoding process. The approach is validated on a dam in the Black Forest in Germany with a very particular structure. The second part of the thesis addresses the feasibility of using GB-SAR systems for long-term geodetic monitoring of large structures. A first assessment tests large temporal baselines between acquisitions for epoch-wise monitoring. Because of large displacements, phase unwrapping cannot recover all the information; an improvement is obtained by adapting the geometry of the signal processing with principal component analysis. The main case study consists of several campaigns from different stations at Enguri Dam in Georgia. The consistency of the estimated displacement map is assessed by comparison with a numerical model calibrated on plumb-line data. The two results agree closely, which supports the use of GB-SAR for epoch-wise monitoring, as it can measure several thousand points on the dam; the comparison also shows that local anomalies in the numerical model can be detected. Finally, the instrument has been installed for continuous monitoring at Enguri Dam for over two years. A dedicated processing flowchart is developed to eliminate the drift of classical interferometric algorithms and achieve the accuracy required for geodetic monitoring. Analysis of the obtained time series yields results consistent with classical parametric models of dam deformation, with the numerical model, and with the output of four GNSS stations installed on the dam crest.
The developed algorithms and methods extend the capabilities of GB-SAR for dam monitoring in different configurations. It can be a valuable supplement to other classical sensors for long-term geodetic observation as well as for short-term monitoring during particular dam operations.
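    The role of phase unwrapping mentioned above can be sketched numerically: an interferometric radar measures line-of-sight displacement through the phase difference, d = -λ·Δφ/(4π), but only the wrapped phase in (-π, π] is observed, so displacements larger than a quarter wavelength between acquisitions become ambiguous. The following is a minimal illustration, not the thesis's processing chain; the Ku-band wavelength of 17.4 mm is an assumed typical GB-SAR value, not a figure from the thesis.

```python
import numpy as np

WAVELENGTH_M = 0.0174  # assumed Ku-band GB-SAR wavelength; not from the thesis

def wrap(phi):
    """Wrap a phase into (-pi, pi], as the instrument observes it."""
    return np.angle(np.exp(1j * phi))

def phase_to_displacement(delta_phi):
    """Line-of-sight displacement from an interferometric phase difference."""
    return -WAVELENGTH_M * delta_phi / (4 * np.pi)

def phase_of(displacement):
    """Inverse mapping: the (unwrapped) phase a displacement produces."""
    return -4 * np.pi * displacement / WAVELENGTH_M

# A 2 mm displacement stays within one phase cycle and is recovered exactly:
small = phase_to_displacement(wrap(phase_of(-0.002)))
# A 6 mm displacement exceeds lambda/4 (~4.35 mm): the wrapped phase aliases,
# and the naive conversion returns the wrong sign and magnitude:
aliased = phase_to_displacement(wrap(phase_of(-0.006)))
print(round(small * 1000, 2), round(aliased * 1000, 2))  # -> -2.0 2.7
```

This ambiguity is why large temporal baselines between epochs are problematic and why the thesis resorts to strategies beyond plain interferometric unwrapping.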

    Open budget data: mapping the landscape

    Get PDF
    This report offers an analysis of the emerging issue of open budget data, which has begun to gain traction amongst advocates and practitioners of financial transparency. It charts the issues and initiatives associated with open budget data across different forms of digital media. The objective is to enable practitioners, in particular civil society organisations, intergovernmental organisations, governments, multilaterals and funders, to navigate this developing field and to identify trends, gaps and opportunities for supporting it. How public money is collected and distributed is one of the most pressing political questions of our time, influencing the health, well-being and prospects of billions of people. Decisions about fiscal policy affect everyone, determining everything from the resourcing of essential public services to the capacity of public institutions to take action on global challenges such as poverty, inequality or climate change. Digital technologies have the potential to transform the way that information about public money is organised, circulated and utilised in society, which in turn could shape the character of public debate, democratic engagement, governmental accountability and public participation in decision-making about public funds. Data could play a vital role in tackling the democratic deficit in fiscal policy and in supporting better outcomes for citizens.

    Localizing the media, locating ourselves: a critical comparative analysis of socio-spatial sorting in locative media platforms (Google and Flickr 2009-2011)

    Get PDF
    In this thesis I explore media geocoding (i.e., geotagging or georeferencing), the process of inscribing media with geographic information, a process that enables distinct forms of producing, storing, and distributing information based on location. Historically, geographic information technologies have served a biopolitical function, producing knowledge of populations. In their current guise as locative media platforms, these systems build rich databases of places facilitated by user-generated geocoded media. These geoindexes render places, and the users of these services, this thesis argues, subject to novel forms of computational modelling and economic capture. Thus, the possibility of tying information, people and objects to location sets the conditions for the emergence of new communicative practices as well as new forms of governmentality (the management of populations). This project is an attempt to develop an understanding of the socio-economic forces and media regimes structuring contemporary forms of location-aware communication by carrying out a comparative analysis of two of the main location-enabled platforms: Google and Flickr. Drawing on the medium-specific approach to media analysis characteristic of the subfield of Software Studies, together with the methodological apparatus of Cultural Analytics (data mining and visualization methods), the thesis focuses on examining how social space is coded and computed in these systems. In particular, it looks at the databases' underlying ontologies supporting the platforms' geocoding capabilities and their respective algorithmic logics.
In the final analysis, the thesis argues that the way social space is translated in the form of POIs (Points of Interest) and business-biased categorizations, as well as the geodemographic ordering underpinning the way it is computed, is pivotal to understanding what kinds of socio-spatial relations are actualized in these systems and what modalities of governing urban mobility are enabled.

    SCOPE: Building and Testing an Integrated Manual-Automated Event Extraction Tool for Online Text-Based Media Sources

    Get PDF
    Building on insights from two years of manually extracting event information from online news media, an interactive information extraction environment (IIEE) was developed. SCOPE, the Scientific Collection of Open-source Policy Evidence, is a Python Django-based tool divided across specialized modules for extracting structured events data from unstructured text. These modules are grouped into a flexible framework which enables users to tailor the tool to their needs. Following principles of user-oriented learning for information extraction (IE), SCOPE offers an alternative approach to developing AI-assisted IE systems. In this piece, we detail the ongoing development of the SCOPE tool, present methods and results of tests of the efficacy of SCOPE relative to past methods, and provide a novel framework for future tests of AI-assisted IE tasks. Information gathered from a four-week period of use was analyzed to evaluate the initial utility of the tool and establish baseline accuracy metrics. Using the SCOPE tool, 15 users extracted 529 summaries and 362 structured events from 207 news articles, achieving an accuracy of 31.8% while holding time constant at 4 minutes per source. To demonstrate how fully or partially automated AI processes can be integrated into SCOPE, a baseline AI was implemented; it achieved 4.8% accuracy at 3.25 seconds per source. These results illustrate the ability of SCOPE to expose the relative strengths and weaknesses of manual users and AI, and they establish a precedent and methods for integrating the two.
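    The reported figures permit a simple throughput-adjusted comparison. "Correct extractions per hour" is a derived metric of our own, not one the authors report, but it makes the manual/AI trade-off concrete: humans are far more accurate per source, while the baseline AI processes sources orders of magnitude faster.

```python
def correct_per_hour(accuracy: float, seconds_per_source: float) -> float:
    """Expected number of correctly extracted sources per hour of work."""
    return accuracy * 3600 / seconds_per_source

manual = correct_per_hour(0.318, 4 * 60)     # human users: 31.8% at 4 min/source
baseline_ai = correct_per_hour(0.048, 3.25)  # baseline AI: 4.8% at 3.25 s/source
print(round(manual, 1), round(baseline_ai, 1))  # -> 4.8 53.2
```

On this crude metric the low-accuracy AI still yields more correct extractions per hour, which is precisely the kind of strength/weakness profile an integrated manual-automated workflow can exploit.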

    DARIAH and the Benelux

    Get PDF

    Guide to Social Science Data Preparation and Archiving: Best Practice Throughout the Data Life Cycle

    Full text link
    http://deepblue.lib.umich.edu/bitstream/2027.42/134032/1/dataprep.pdf