
    When and where do you want to hide? Recommendation of location privacy preferences with local differential privacy

    In recent years, it has become easy to obtain precise location information. Acquiring such information, however, carries risks such as individual identification and leakage of sensitive information, so the privacy of location data must be protected. To do this, people need to know their location privacy preferences, that is, whether they are willing to release their location at each place and time. Making such decisions is not easy for each user, and setting the preference every time is burdensome. We therefore propose a method that recommends location privacy preferences to support this decision making. Compared with the existing method, our method improves recommendation accuracy by using matrix factorization and strictly preserves privacy through local differential privacy, whereas the existing method offers no formal privacy guarantee. In addition, we found the best granularity of a location privacy preference, that is, how the information should be expressed for location privacy protection. To evaluate the utility of our method, we integrated two existing datasets to obtain a dataset that is rich in the number of users. The evaluation on this dataset confirmed that our method predicts location privacy preferences accurately and provides a suitable way to define them.
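
    The abstract gives no implementation details, but the rough idea it describes (each user locally perturbing a binary release/withhold preference matrix under local differential privacy before the noisy matrix is factorized for recommendation) can be sketched as follows. This is a minimal illustration under assumed choices: randomized response as the LDP mechanism, plain SGD matrix factorization, and invented dimensions and privacy budget; it is not the authors' algorithm.

```python
import numpy as np

def randomized_response(prefs, epsilon):
    """Perturb a binary preference vector with local differential privacy.

    Each entry (1 = willing to release location, 0 = not willing) is kept
    with probability e^eps / (e^eps + 1) and flipped otherwise.
    """
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = np.random.rand(*prefs.shape) >= p_keep
    return np.where(flip, 1 - prefs, prefs)

def factorize(R, rank=8, lr=0.01, reg=0.05, epochs=50):
    """Plain SGD matrix factorization of the (perturbed) preference matrix R."""
    n_users, n_items = R.shape
    U = 0.1 * np.random.randn(n_users, rank)
    V = 0.1 * np.random.randn(n_items, rank)
    for _ in range(epochs):
        for u in range(n_users):
            for i in range(n_items):
                err = R[u, i] - U[u] @ V[i]
                U[u] += lr * (err * V[i] - reg * U[u])
                V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Each user perturbs their own row locally; only the noisy rows are shared.
true_prefs = (np.random.rand(50, 30) > 0.5).astype(int)   # users x (place, time) cells
noisy_prefs = np.vstack([randomized_response(row, epsilon=1.0) for row in true_prefs])
U, V = factorize(noisy_prefs.astype(float))
predicted = U @ V.T   # recommended release/withhold scores
```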

    Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding

    Privacy policies are verbose, difficult to understand, take too long to read, and may be the least-read items on most websites even as users express growing concerns about information collection practices. For all their faults, though, privacy policies remain the single most important source of information for users to attempt to learn how companies collect, use, and share data. Likewise, these policies form the basis for the self-regulatory notice and choice framework that is designed and promoted as a replacement for regulation. The underlying value and legitimacy of notice and choice depends, however, on the ability of users to understand privacy policies. This paper investigates the differences in interpretation among expert, knowledgeable, and typical users and explores whether those groups can understand the practices described in privacy policies at a level sufficient to support rational decision-making. The paper seeks to fill an important gap in the understanding of privacy policies through primary research on user interpretation and to inform the development of technologies combining natural language processing, machine learning and crowdsourcing for policy interpretation and summarization. For this research, we recruited a group of law and public policy graduate students at Fordham University, Carnegie Mellon University, and the University of Pittsburgh (“knowledgeable users”) and presented these law and policy researchers with a set of privacy policies from companies in the e-commerce and news & entertainment industries. We asked them nine basic questions about the policies’ statements regarding data collection, data use, and retention. We then presented the same set of policies to a group of privacy experts and to a group of non-expert users. The findings show areas of common understanding across all groups for certain data collection and deletion practices, but also demonstrate very important discrepancies in the interpretation of privacy policy language, particularly with respect to data sharing. The discordant interpretations arose both within groups and between the experts and the two other groups. The presence of these significant discrepancies has critical implications. First, the common understandings of some attributes of described data practices mean that semi-automated extraction of meaning from website privacy policies may be able to assist typical users and improve the effectiveness of notice by conveying the true meaning to users. However, the disagreements among experts and disagreement between experts and the other groups reflect that ambiguous wording in typical privacy policies undermines the ability of privacy policies to effectively convey notice of data practices to the general public. The results of this research will, consequently, have significant policy implications for the construction of the notice and choice framework and for the US reliance on this approach. The gap in interpretation indicates that privacy policies may be misleading the general public and that those policies could be considered legally unfair and deceptive. And, where websites are not effectively conveying privacy policies to consumers in a way that a “reasonable person” could, in fact, understand the policies, “notice and choice” fails as a framework. Such a failure has broad international implications since websites extend their reach beyond the United States

    A framework for applying natural language processing in digital health interventions

    BACKGROUND: Digital health interventions (DHIs) are poised to reduce target symptoms in a scalable, affordable, and empirically supported way. DHIs that involve coaching or clinical support often collect text data from 2 sources: (1) open correspondence between users and the trained practitioners supporting them through a messaging system and (2) text data recorded during the intervention by users, such as diary entries. Natural language processing (NLP) offers methods for analyzing text, augmenting the understanding of intervention effects, and informing therapeutic decision making.
    OBJECTIVE: This study aimed to present a technical framework that supports the automated analysis of both types of text data often present in DHIs. This framework generates text features and helps to build statistical models to predict target variables, including user engagement, symptom change, and therapeutic outcomes.
    METHODS: We first discussed various NLP techniques and demonstrated how they are implemented in the presented framework. We then applied the framework in a case study of the Healthy Body Image Program, a Web-based intervention trial for eating disorders (EDs). A total of 372 participants who screened positive for an ED received a DHI aimed at reducing ED psychopathology (including binge eating and purging behaviors) and improving body image. These users generated 37,228 intervention text snippets and exchanged 4285 user-coach messages, which were analyzed using the proposed model.
    RESULTS: We applied the framework to predict binge eating behavior, resulting in an area under the curve between 0.57 (when applied to new users) and 0.72 (when applied to new symptom reports of known users). In addition, initial evidence indicated that specific text features predicted the therapeutic outcome of reducing ED symptoms.
    CONCLUSIONS: The case study demonstrates the usefulness of a structured approach to text data analytics. NLP techniques improve the prediction of symptom changes in DHIs. We present a technical framework that can be easily applied in other clinical trials and clinical presentations and encourage other groups to apply the framework in similar contexts.
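
    As a rough illustration of the kind of pipeline such a framework implies, the sketch below turns a handful of made-up text snippets into TF-IDF features and fits a logistic regression scored by area under the curve, the metric quoted in the abstract. The snippets, labels, and model choice are placeholders, not the authors' implementation; a real evaluation would also split by user to reproduce the "new users" versus "known users" distinction.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical stand-in data: in the study these would be intervention snippets
# and user-coach messages, labelled with a binary symptom report (e.g. binge eating).
texts = [
    "felt out of control after dinner and kept eating",
    "went for a walk and felt calm about food today",
    "skipped breakfast and binged late at night",
    "had a normal lunch with friends, no urges",
    "another difficult evening, lost control again",
    "body image felt okay today, ate regular meals",
]
labels = np.array([1, 0, 1, 0, 1, 0])

# Bag-of-words / TF-IDF features over unigrams and bigrams.
vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)

# A simple linear classifier; AUC is the metric reported in the abstract (0.57-0.72).
# Here it is computed on the training data only, purely to show the workflow.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
scores = clf.predict_proba(X)[:, 1]
print("training AUC:", roc_auc_score(labels, scores))
```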

    Spectrometric method to detect exoplanets as another test to verify the invariance of the velocity of light

    We consider hypothetical effects of a variability of the velocity of light, caused by parameters of the radiating source, on the results of spectral measurements of stars used to search for exoplanets. The accelerations of stars relative to the barycenter of the star-planet (or star-planets) system are taken into account. A dependence of the velocity of light on the barycentric radial velocity and on the barycentric radial acceleration component of the star would lead to a substantial increase (by up to an order of magnitude) in the semi-major axes of the orbits inferred for exoplanet candidates. Consequently, by a careful comparison of the results of the spectral method with those of other well-known modern methods of detecting exoplanets, the results obtained in this paper can serve as a reliable test of the invariance of the velocity of light. Comment: 11 pages, 5 figures
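
    For context, the standard spectroscopic (radial velocity) relations under an invariant speed of light are the textbook formulas below; the paper's argument is that a dependence of c on the source's radial velocity or acceleration would bias the velocities inferred from the Doppler shift and hence the orbital elements derived from them. These are standard results, not taken from the paper.

\[
\frac{\Delta\lambda}{\lambda} \simeq \frac{v_r}{c},
\qquad
K = \left(\frac{2\pi G}{P}\right)^{1/3}\frac{m_p \sin i}{(M_\star + m_p)^{2/3}}\frac{1}{\sqrt{1-e^2}},
\qquad
a^3 = \frac{G\,(M_\star + m_p)\,P^2}{4\pi^2},
\]

    so a systematic error in the radial velocity v_r propagates into the semi-amplitude K and, through Kepler's third law, into the inferred semi-major axis a.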

    Multimodal Earth observation data fusion: Graph-based approach in shared latent space

    Multiple and heterogeneous Earth observation (EO) platforms are broadly used for a wide array of applications, and the integration of these diverse modalities facilitates better extraction of information than using them individually. The detection capability of multispectral unmanned aerial vehicle (UAV) and satellite imagery can be significantly improved by fusing it with ground hyperspectral data. However, variability in spatial and spectral resolution can affect the efficiency of fusing such datasets. In this study, to address the modality bias, the input data were projected to a shared latent space using cross-modal generative approaches or guided unsupervised transformation. The proposed adversarial networks and variational encoder-based strategies used bi-directional transformations to model the cross-domain correlation without using cross-domain correspondence. It may be noted that an interpolation-based convolution was adopted instead of the normal convolution for learning the features of the point spectral data (ground spectra). The proposed generative adversarial network-based approach employed dynamic time warping-based layers along with a cyclic consistency constraint to use the minimal number of unlabeled samples, having cross-domain correlation, to compute a cross-modal generative latent space. The proposed variational encoder-based transformation also addressed the cross-modal resolution differences and limited availability of cross-domain samples by using a mixture-of-experts-based strategy, cross-domain constraints, and adversarial learning. In addition, the latent space was modelled to be composed of modality-independent and modality-dependent spaces, thereby further reducing the requirement of training samples and addressing the cross-modality biases. An unsupervised covariance-guided transformation was also proposed to transform the labelled samples without using a cross-domain correlation prior. The proposed latent space transformation approaches resolved the requirement of cross-domain samples, which has been a critical issue with the fusion of multi-modal Earth observation data. This study also proposed a latent graph generation and graph convolutional approach to predict the labels, resolving the domain discrepancy and cross-modality biases. Based on the experiments over different standard benchmark airborne datasets and real-world UAV datasets, the developed approaches outperformed the prominent hyperspectral panchromatic sharpening, image fusion, and domain adaptation approaches. By using specific constraints and regularizations, the developed network was less sensitive to network parameters, unlike similar implementations. The proposed approach illustrated improved generalizability in comparison with the prominent existing approaches. In addition to the fusion-based classification of the multispectral and hyperspectral datasets, the proposed approach was extended to the classification of hyperspectral airborne datasets, where the latent graph generation and convolution were employed to resolve the domain bias with a small number of training samples.
    Overall, the developed transformations and architectures will be useful for the semantic interpretation and analysis of multimodal data and are applicable to signal processing, manifold learning, video analysis, data mining, and time series analysis, to name a few. This research was partly supported by the Hebrew University of Jerusalem Intramural Research Fund (Career Development), the Association of Field Crop Farmers in Israel, and the Chief Scientist of the Israeli Ministry of Agriculture and Rural Development (projects 20-02-0087 and 12-01-0041).
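
    At a very high level, the shared-latent-space idea described above (modality-specific encoders and decoders tied through one latent space, with reconstruction and a cycle-style consistency term, and no paired cross-domain samples) can be caricatured as in the sketch below. Layer sizes, loss weights, and the toy data are invented, and the adversarial, mixture-of-experts, and graph-convolutional components are omitted; this is an assumption-laden illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))

class SharedLatentFusion(nn.Module):
    """Two modality-specific encoder/decoder pairs tied through one latent space."""
    def __init__(self, d_ms=4, d_hs=200, d_z=16):
        super().__init__()
        self.enc_ms, self.dec_ms = mlp(d_ms, d_z), mlp(d_z, d_ms)   # multispectral branch
        self.enc_hs, self.dec_hs = mlp(d_hs, d_z), mlp(d_z, d_hs)   # hyperspectral branch

    def loss(self, x_ms, x_hs):
        z_ms, z_hs = self.enc_ms(x_ms), self.enc_hs(x_hs)
        mse = nn.functional.mse_loss
        # Within-modality reconstruction from the shared latent space.
        recon = mse(self.dec_ms(z_ms), x_ms) + mse(self.dec_hs(z_hs), x_hs)
        # Decode each latent code with the other modality's decoder and re-encode:
        # a crude stand-in for a cycle-consistency constraint without paired samples.
        cycle = mse(self.enc_hs(self.dec_hs(z_ms)), z_ms) + mse(self.enc_ms(self.dec_ms(z_hs)), z_hs)
        return recon + 0.1 * cycle

model = SharedLatentFusion()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_ms, x_hs = torch.rand(32, 4), torch.rand(32, 200)   # unpaired toy batches
for _ in range(100):
    opt.zero_grad()
    loss = model.loss(x_ms, x_hs)
    loss.backward()
    opt.step()
```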

    Data Deluge in Astrophysics: Photometric Redshifts as a Template Use Case

    Astronomy has entered the big data era and machine-learning-based methods have found widespread use in a large variety of astronomical applications. This is demonstrated by the recent huge increase in the number of publications making use of this new approach. The usage of machine learning methods, however, is still far from trivial, and many problems still need to be solved. Using the evaluation of photometric redshifts as a case study, we outline the main problems and some ongoing efforts to solve them. Comment: 13 pages, 3 figures, Springer's Communications in Computer and Information Science (CCIS), Vol. 82
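
    As a concrete, heavily simplified example of the photometric-redshift use case, a common machine-learning baseline regresses spectroscopic redshifts on broadband magnitudes with an ensemble method and reports the normalised residuals. The sketch below does this on synthetic numbers with scikit-learn; it illustrates the workflow only and is not a method from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a photometric catalogue: five band magnitudes per galaxy
# and a "spectroscopic" redshift loosely correlated with one colour.
n = 2000
mags = rng.normal(22.0, 1.5, size=(n, 5))
z_spec = np.clip(0.3 * (mags[:, 0] - mags[:, 4]) + rng.normal(0, 0.05, n) + 0.5, 0, None)

X_train, X_test, y_train, y_test = train_test_split(mags, z_spec, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
z_phot = model.predict(X_test)

# The usual quality metrics: normalised residuals dz = (z_phot - z_spec) / (1 + z_spec).
dz = (z_phot - y_test) / (1.0 + y_test)
print("bias:", dz.mean(), " scatter:", dz.std(), " outlier fraction:", (np.abs(dz) > 0.15).mean())
```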

    Local and macroscopic tunneling spectroscopy of Y(1-x)CaxBa2Cu3O(7-d) films: evidence for a doping dependent is or idxy component in the order parameter

    Tunneling spectroscopy of epitaxial (110) Y1-xCaxBa2Cu3O7-d films reveals a doping-dependent transition from a pure d(x2-y2) to a d(x2-y2)+is or d(x2-y2)+idxy order parameter. The subdominant (is or idxy) component manifests itself in a splitting of the zero-bias conductance peak and the appearance of subgap structures. The splitting is seen in the overdoped samples, increases systematically with doping, and is found to be an inherent property of the overdoped films. It was observed both in local tunnel junctions, using scanning tunneling microscopy (STM), and in macroscopic planar junctions, for films prepared by either RF sputtering or laser ablation. The STM measurements exhibit a fairly uniform splitting size within [110]-oriented areas on the order of 10 nm², but the size varies from area to area, indicating some doping inhomogeneity. U- and V-shaped gaps were also observed, with good correspondence to the local faceting, a manifestation of the dominant d-wave order parameter.
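
    For readers unfamiliar with the notation, the order parameters discussed correspond to the standard gap functions below (textbook forms on a square lattice, not taken from the paper). The imaginary subdominant component removes the nodes of the dominant d-wave gap, which is what produces the splitting of the zero-bias conductance peak.

\[
\Delta_{d+is}(\mathbf{k}) = \Delta_d\,(\cos k_x a - \cos k_y a) + i\,\Delta_s,
\qquad
\Delta_{d+id_{xy}}(\mathbf{k}) = \Delta_d\,(\cos k_x a - \cos k_y a) + i\,\Delta_{xy}\,\sin k_x a\,\sin k_y a .
\]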

    Developmental effects on sleep–wake patterns in infants receiving a cow’s milk-based infant formula with an added prebiotic blend: A Randomized Controlled Trial

    Background: Few studies have evaluated nutritive effects of prebiotics on infant behavior state, physiology, or metabolic status.
    Methods: In this double-blind randomized study, infants (n = 161) received cow’s milk-based infant formula (Control) or similar formula with an added prebiotic blend (polydextrose and galactooligosaccharides [PDX/GOS]) from 14–35 to 112 days of age. Infant wake behavior (crying/fussing, awake/content) and 24-h sleep–wake actograms were analyzed (Baseline, Days 70 and 112). Salivary cortisol was immunoassayed (Days 70 and 112). In a subset, exploratory stool 16S ribosomal RNA-sequencing was analyzed (Baseline, Day 112).
    Results: One hundred and thirty-one infants completed the study. Average duration of crying/fussing episodes was similar at Baseline, significantly shorter for PDX/GOS vs. Control at Day 70, and the trajectory continued at Day 112. Latency to first and second nap was significantly longer for PDX/GOS vs. Control at Day 112. Cortisol awakening response was demonstrated at Days 70 and 112. Significant stool microbiome beta-diversity and individual taxa abundance differences were observed in the PDX/GOS group.
    Conclusions: Results indicate faster consolidation of daytime waking state in infants receiving prebiotics and support home-based actigraphy to assess early sleep–wake patterns. A prebiotic effect on wake organization is consistent with influence on the gut–brain axis and warrants further investigation.
    Impact: Few studies have evaluated nutritive effects of prebiotics on infant behavior state, cortisol awakening response, sleep–wake entrainment, and gut microbiome. Faster consolidation of daytime waking state was demonstrated in infants receiving a prebiotic blend in infant formula through ~4 months of age. Shorter episodes of crying were demonstrated at ~2 months of age (time point corresponding to age/developmental range associated with peak crying) in infants receiving formula with added prebiotics. Results support home-based actigraphy as a suitable method to assess early sleep–wake patterns. Prebiotic effect on wake organization is consistent with influence on the gut–brain axis and warrants further investigation.

    Direct measurement of stellar angular diameters by the VERITAS Cherenkov Telescopes

    The angular size of a star is a critical factor in determining its basic properties. Direct measurement of stellar angular diameters is difficult: at interstellar distances stars are generally too small to resolve by any individual imaging telescope. This fundamental limitation can be overcome by studying the diffraction pattern in the shadow cast when an asteroid occults a star, but only when the photometric uncertainty is smaller than the noise added by atmospheric scintillation. Atmospheric Cherenkov telescopes used for particle astrophysics observations have not generally been exploited for optical astronomy due to the modest optical quality of the mirror surface. However, their large mirror area makes them well suited for such high-time-resolution precision photometry measurements. Here we report two occultations of stars observed by the VERITAS Cherenkov telescopes with millisecond sampling, from which we are able to provide a direct measurement of the occulted stars' angular diameter at the ≤0.1 milliarcsecond scale. This is a resolution never achieved before with optical measurements and represents an order of magnitude improvement over the equivalent lunar occultation method. We compare the resulting stellar radius with empirically derived estimates from temperature and brightness measurements, confirming the latter can be biased for stars with ambiguous stellar classifications. Comment: Accepted for publication in Nature Astronomy
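
    The measurement principle, namely that the diffraction fringes at the edge of an asteroid's shadow are smeared by the finite angular size of the star, can be illustrated with a toy model. The sketch below computes the classic straight-edge Fresnel pattern for a point source with SciPy and smooths it with the projected stellar disk; the wavelength, distance, and stellar diameter are arbitrary illustrative values, not VERITAS numbers.

```python
import numpy as np
from scipy.special import fresnel

# Illustrative geometry (assumed values, not from the paper):
lam = 550e-9                            # observing wavelength [m]
D = 3.0e11                              # distance to the occulting asteroid [m] (~2 au)
x = np.linspace(-500.0, 1500.0, 4000)   # distance from the geometric shadow edge [m]
dx = x[1] - x[0]

# Point-source intensity behind a straight edge (Fresnel knife-edge diffraction).
w = x * np.sqrt(2.0 / (lam * D))
S, C = fresnel(w)                       # scipy returns (S, C)
I_point = 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

# A star of angular diameter theta projects to a strip of width theta * D at the
# asteroid's distance; convolving with the 1D-projected uniform disk smears the
# fringes, which is what the light-curve fit is sensitive to.
theta = 0.1e-3 / 206265.0               # 0.1 milliarcseconds, in radians
width = theta * D
xs = np.arange(-width / 2, width / 2, dx)
disk = np.sqrt(np.clip(1.0 - (2.0 * xs / width) ** 2, 0.0, None))
disk /= disk.sum()
I_star = np.convolve(I_point, disk, mode="same")

print("first-fringe amplitude, point source:", I_point.max() - 1.0)
print("first-fringe amplitude, 0.1 mas star:", I_star.max() - 1.0)
```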

    Gamma-ray observations of Tycho's SNR with VERITAS and Fermi

    High-energy gamma-ray emission from supernova remnants (SNRs) has provided a unique perspective for studies of Galactic cosmic-ray acceleration. Tycho's SNR is a particularly good target because it is a young, type Ia SNR that is well-studied over a wide range of energies and located in a relatively clean environment. Since the detection of gamma-ray emission from Tycho's SNR by VERITAS and Fermi-LAT, there have been several theoretical models proposed to explain its broadband emission and high-energy morphology. We report on an update to the gamma-ray measurements of Tycho's SNR with 147 hours of VERITAS and 84 months of Fermi-LAT observations, which represents about a factor of two increase in exposure over previously published data. About half of the VERITAS data benefited from a camera upgrade, which has made it possible to extend the TeV measurements toward lower energies. The TeV spectral index measured by VERITAS is consistent with previous results, but the expanded energy range softens a straight power-law fit. At energies higher than 400 GeV, the power-law index is 2.92 ± 0.42 (stat) ± 0.20 (sys). It is also softer than the spectral index in the GeV energy range, 2.14 ± 0.09 (stat) ± 0.02 (sys), measured by this study using Fermi-LAT data. The centroid position of the gamma-ray emission is coincident with the center of the remnant, as well as with the centroid measurement of Fermi-LAT above 1 GeV. The results are consistent with an SNR shell origin of the emission, as many models assume. The updated spectrum points to a lower maximum particle energy than has been suggested previously. Comment: Accepted for publication in The Astrophysical Journal
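
    As a small illustration of what softening a straight power-law fit means in practice, the snippet below fits dN/dE proportional to E^(-Gamma) to a few invented flux points; the energies, fluxes, and uncertainties are made up and are not the VERITAS or Fermi-LAT measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented spectral points (E in TeV, flux in arbitrary units); not real Tycho data.
E = np.array([0.4, 0.63, 1.0, 1.6, 2.5, 4.0])
flux = 3e-13 * E ** -2.9 * (1 + 0.1 * np.random.default_rng(1).normal(size=E.size))
flux_err = 0.2 * flux

def power_law(E, norm, gamma):
    """dN/dE = norm * (E / 1 TeV)**(-gamma)"""
    return norm * E ** (-gamma)

popt, pcov = curve_fit(power_law, E, flux, p0=(3e-13, 2.5), sigma=flux_err, absolute_sigma=True)
norm, gamma = popt
gamma_err = np.sqrt(pcov[1, 1])
print(f"photon index Gamma = {gamma:.2f} +/- {gamma_err:.2f}")
```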