
    Exploring the effectiveness of geomasking techniques for protecting the geoprivacy of Twitter users

    With the ubiquitous use of location-based services, large-scale individual-level location data has been widely collected through location-aware devices. Geoprivacy concerns arise over user identity de-anonymization and location exposure. In this work, we investigate the effectiveness of geomasking techniques for protecting the geoprivacy of active Twitter users who frequently share geotagged tweets from their home and work locations. By analyzing over 38,000 geotagged tweets of 93 active Twitter users in three U.S. cities, the two-dimensional Gaussian masking technique with proper standard deviation settings is found to be more effective at protecting users' location privacy, while sacrificing geospatial analytical resolution, than the random perturbation masking method and aggregation on traffic analysis zones. Furthermore, a three-dimensional theoretical framework that considers privacy, analytics, and uncertainty factors simultaneously is proposed to assess geomasking techniques. Our research offers insights into the geoprivacy concerns of social media users' georeferenced data sharing for the future development of location-based applications and services.
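    As a rough illustration of the two point-displacement techniques compared above, the sketch below perturbs a latitude/longitude pair with two-dimensional Gaussian noise and, for contrast, with random perturbation within a fixed radius. It is a minimal sketch, not the paper's implementation; the standard deviation, radius, metre-to-degree conversion, and example coordinates are illustrative assumptions.

```python
import math
import random

METERS_PER_DEG_LAT = 111_320.0  # rough equirectangular approximation

def gaussian_mask(lat, lon, sigma_m=150.0):
    """Two-dimensional Gaussian masking: displace a point by independent
    normal noise with standard deviation sigma_m metres (assumed value)."""
    dx = random.gauss(0.0, sigma_m)                 # east-west offset, metres
    dy = random.gauss(0.0, sigma_m)                 # north-south offset, metres
    meters_per_deg_lon = METERS_PER_DEG_LAT * math.cos(math.radians(lat))
    return lat + dy / METERS_PER_DEG_LAT, lon + dx / meters_per_deg_lon

def random_perturbation_mask(lat, lon, max_radius_m=300.0):
    """Random perturbation masking: move the point to a uniformly random
    location within a disc of radius max_radius_m metres (assumed value)."""
    r = max_radius_m * math.sqrt(random.random())   # sqrt gives uniform density over the disc
    theta = random.uniform(0.0, 2.0 * math.pi)
    dx, dy = r * math.cos(theta), r * math.sin(theta)
    meters_per_deg_lon = METERS_PER_DEG_LAT * math.cos(math.radians(lat))
    return lat + dy / METERS_PER_DEG_LAT, lon + dx / meters_per_deg_lon

if __name__ == "__main__":
    home = (40.7128, -74.0060)                      # hypothetical home location
    print("Gaussian-masked:    ", gaussian_mask(*home))
    print("Perturbation-masked:", random_perturbation_mask(*home))
```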

    HERMESv3, a stand-alone multi-scale atmospheric emission modelling framework – Part 1: global and regional module

    We present the High-Elective Resolution Modelling Emission System version 3 (HERMESv3), an open-source, parallel and stand-alone multi-scale atmospheric emission modelling framework that computes gaseous and aerosol emissions for use in atmospheric chemistry models. HERMESv3 is coded in Python and consists of a global_regional module and a bottom_up module that can be either combined or executed separately. In this contribution (Part 1) we describe the global_regional module, a customizable emission processing system that calculates emissions from different sources, regions and pollutants on a user-specified global or regional grid. The user can flexibly define combinations of existing up-to-date global and regional emission inventories and apply country-specific scaling factors and masks. Each emission inventory is individually processed using user-defined vertical, temporal and speciation profiles that allow obtaining emission outputs compatible with multiple chemical mechanisms (e.g. Carbon Bond 05). The selection and combination of emission inventories and databases is done through detailed configuration files, providing the user with a widely applicable framework for designing, choosing and adjusting the emission modelling experiment without modifying the HERMESv3 source code. The generated emission fields have been successfully tested in different atmospheric chemistry models (i.e. CMAQ, WRF-Chem and NMMB-MONARCH) at multiple spatial and temporal resolutions. In a companion article (Part 2; Guevara et al., 2019) we describe the bottom_up module, which estimates emissions at the source level (e.g. road link) combining state-of-the-art bottom-up methods with local activity and emission factors.

    The research leading to these results has received funding from the Ministerio de Economía y Competitividad (MINECO) as part of the PAISA project CGL2016-75725-R and the NUTRIENT project CGL2017-88911-R. The authors acknowledge PRACE for awarding access to MareNostrum4, based in Spain at the Barcelona Supercomputing Center, through the Tier-0 HHRNTCP and Tier-0 EEDMC projects. Carlos Pérez García-Pando acknowledges long-term support from the AXA Research Fund, as well as the support received through the Ramón y Cajal programme (grant RYC-2015-18690) of the Spanish Ministry of Economy and Competitiveness. The authors would also like to thank the two anonymous referees for their thorough comments, which helped improve the quality of the paper.
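    The configuration-driven workflow described above (a gridded inventory adjusted with country-specific scaling factors and masks) can be pictured with the following minimal NumPy sketch. The grid, country codes, and configuration dictionaries are illustrative assumptions and do not reflect the actual HERMESv3 configuration files or API.

```python
import numpy as np

# Hypothetical 4x4 emission grid (one pollutant, one inventory), arbitrary units.
emissions = np.full((4, 4), 2.0)

# Hypothetical country-index grid: each cell belongs to one country code.
country_grid = np.array([
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 2, 2],
    [3, 3, 3, 2],
])

# Illustrative "configuration": per-country scaling factors and a set of
# countries to keep, standing in for the country-specific options above.
scale_factors = {1: 1.2, 2: 0.8, 3: 1.0}
keep_countries = {1, 3}

scaled = emissions.copy()
for code, factor in scale_factors.items():
    scaled[country_grid == code] *= factor           # country-specific scaling

mask = np.isin(country_grid, list(keep_countries))   # country mask
masked_emissions = np.where(mask, scaled, 0.0)        # zero out masked-out countries

print(masked_emissions)
```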

    Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning

    Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, with the resulting models used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data for scale-dependent domains, such as remote sensing. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low/high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average of a 2.4–5.6% non-parametric kNN classification improvement across eight remote sensing datasets compared to the current state of the art and obtains a 0.9 mIoU to 1.7 mIoU improvement on the SpaceNet building segmentation transfer task for a range of evaluation scales.
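    The key design choice above is that the positional encoding reflects the ground area covered by the image (its ground sample distance, GSD) rather than its pixel count. The following sketch scales a standard 1-D sinusoidal encoding by the ratio of the image's GSD to a reference GSD; it is an assumed simplification in NumPy, not the released Scale-MAE code, and the dimensions and GSD values are illustrative.

```python
import numpy as np

def gsd_positional_encoding(num_positions, dim, gsd, reference_gsd=1.0):
    """Sinusoidal positional encoding whose positions are scaled by
    gsd / reference_gsd, so patches covering the same ground distance get
    comparable encodings regardless of pixel resolution (illustrative)."""
    positions = np.arange(num_positions, dtype=np.float64) * (gsd / reference_gsd)
    div_term = np.power(10000.0, np.arange(0, dim, 2, dtype=np.float64) / dim)
    pe = np.zeros((num_positions, dim))
    pe[:, 0::2] = np.sin(positions[:, None] / div_term)   # even channels
    pe[:, 1::2] = np.cos(positions[:, None] / div_term)   # odd channels
    return pe

# A coarse image (0.25 m/pixel) and a finer image (0.1 m/pixel) of the same ground
# extent yield different numbers of positions but share a ground-distance axis.
coarse = gsd_positional_encoding(num_positions=16, dim=8, gsd=0.25)
fine = gsd_positional_encoding(num_positions=40, dim=8, gsd=0.1)
print(coarse.shape, fine.shape)
```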

    SpatioTemporal Feature Integration and Model Fusion for Full Reference Video Quality Assessment

    Perceptual video quality assessment models are either frame-based or video-based; video-based models apply spatiotemporal filtering or motion estimation to capture temporal video distortions. Despite their good performance on video quality databases, video-based approaches are time-consuming and harder to deploy efficiently. To balance high performance against computational efficiency, Netflix developed the Video Multi-method Assessment Fusion (VMAF) framework, which integrates multiple quality-aware features to predict video quality. Nevertheless, this fusion framework does not fully exploit temporal video quality measurements that are relevant to temporal video distortions. To this end, we propose two improvements to the VMAF framework: SpatioTemporal VMAF and Ensemble VMAF. Both algorithms exploit efficient temporal video features which are fed into a single regression model or multiple regression models. To train our models, we designed a large subjective database and evaluated the proposed models against state-of-the-art approaches. The compared algorithms will be made available as part of the open-source package at https://github.com/Netflix/vmaf.
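    As a rough picture of the fusion idea above (several quality-aware spatial and temporal features fed into a regression model trained on subjective scores), the sketch below fits a support vector regressor on synthetic features and labels. The feature set, SVR choice, and data are illustrative assumptions, not the actual VMAF, SpatioTemporal VMAF, or Ensemble VMAF implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic per-video feature vectors standing in for quality-aware features
# (e.g. a detail-loss measure, a fidelity measure, temporal-difference statistics).
n_videos, n_features = 200, 4
features = rng.uniform(0.0, 1.0, size=(n_videos, n_features))

# Synthetic subjective scores (0-100 scale) loosely tied to the features.
mos = 100.0 * features.mean(axis=1) + rng.normal(0.0, 3.0, size=n_videos)

# Single-regressor fusion: scale the features and regress them onto the
# subjective scores, mirroring the "many features -> one quality score" design.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(features, mos)

# Predict a quality score for a new, hypothetical distorted video.
new_video = rng.uniform(0.0, 1.0, size=(1, n_features))
print("Predicted quality score:", float(model.predict(new_video)[0]))
```

    An ensemble variant, in the spirit of Ensemble VMAF, would train several such regressors (for example on different feature subsets or bootstrap samples) and average their predictions.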