
    XRay: Enhancing the Web's Transparency with Differential Correlation

    Today's Web services - such as Google, Amazon, and Facebook - leverage user data for varied purposes, including personalizing recommendations, targeting advertisements, and adjusting prices. At present, users have little insight into how their data is being used, and hence cannot make informed choices about the services they use. To increase transparency, we developed XRay, the first fine-grained, robust, and scalable personal data tracking system for the Web. XRay predicts which data in an arbitrary Web account (such as emails, searches, or viewed products) is being used to target which outputs (such as ads, recommended products, or prices). XRay's core functions are service agnostic and easy to instantiate for new services, and they can track data within and across services. To make predictions independent of the audited service, XRay relies on the following insight: by comparing outputs from different accounts with similar, but not identical, subsets of data, one can pinpoint targeting through correlation. We show both theoretically, and through experiments on Gmail, Amazon, and YouTube, that XRay achieves high precision and recall by correlating data from a surprisingly small number of extra accounts. Comment: Extended version of a paper presented at the 23rd USENIX Security Symposium (USENIX Security 14).
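    The differential-correlation insight described in the abstract can be illustrated with a toy sketch. All account, input, and output names below are hypothetical illustrative values, not XRay's actual data model or API: each audit account holds a different subset of inputs (e.g. emails), and an output (e.g. an ad) is attributed to the input whose presence best predicts the output's appearance across accounts.

    ```python
    # Hypothetical toy data: each audit account holds a subset of inputs,
    # and we record which outputs each account was shown.
    accounts = {
        "acct1": {"inputs": {"email_travel", "email_sports"}, "outputs": {"ad_flights"}},
        "acct2": {"inputs": {"email_travel"},                 "outputs": {"ad_flights"}},
        "acct3": {"inputs": {"email_sports"},                 "outputs": set()},
    }

    def correlation_score(input_item, output_item, accounts):
        """Fraction of accounts where the input's presence matches the output's presence."""
        matches = sum(
            (input_item in a["inputs"]) == (output_item in a["outputs"])
            for a in accounts.values()
        )
        return matches / len(accounts)

    # The input whose presence best tracks the ad across accounts is the
    # likely targeting cause: here "email_travel" scores 3/3, "email_sports" 1/3.
    best = max({"email_travel", "email_sports"},
               key=lambda x: correlation_score(x, "ad_flights", accounts))
    ```

    The paper's contribution is making this idea robust and scalable (handling noise, overlapping causes, and doing so with few extra accounts); the sketch above only conveys the core correlation step.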

    Making AI Meaningful Again

    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to materialize. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of deep neural networks, or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then present an alternative approach to language-centric AI, in which we identify a role for philosophy.

    Crop Insurance Premium Recommendation System Using Artificial Intelligence Techniques

    Purpose: The objective of this study is to build a crop insurance premium recommender model that is fair to both crop insurance policyholders and crop insurance service providers.
    Theoretical Framework: The proposed model is a modified version of the Nonparametric Bayesian Model of Maulidi et al. (2021). It consists of six variables: regional risk, cultivation time period, land area, claim frequency, discount eligibility (a local variable), and premium. The discount-eligibility variable is introduced to encourage good farming practices among farmers.
    Design/methodology/approach: A descriptive research method is used, as it accurately represents the characteristics of a group of items. The population for this study is 943 respondents, and the entire dataset is used for in-depth and accurate analysis. Five artificial intelligence (machine learning) models are proposed for crop insurance premium prediction: AdaBoost Regressor, Gradient Boosting Regressor, Extra Trees Regressor, Support Vector Regressor, and K-Neighbors Regressor. Among them, the Gradient Boosting Regressor achieved the highest accuracy and is therefore the most suitable model to recommend for crop insurance premium prediction.
    Findings and Suggestions: Ranked by regression coefficient from highest to lowest, the independent variables are regional risk, land area, claim frequency, and cultivation time period. This relative importance helps Non-Banking Financial Companies (NBFCs) advise farmers to concentrate most on regional risk - the chance of crop failure in the region where they farm - and least on the cultivation time period, i.e., the season in which a crop is cultivated. Two suggestions for future researchers are to extend this work to other parts of Tamil Nadu and to apply hybrid machine learning techniques to the proposed model.
    Practical Implication: Unlike the existing formula-based traditional method of calculating crop insurance premiums, machine learning models can automatically learn changes in the behaviour of the model's variables and improve their accuracy on new data. Hence, the premium suggested by the most accurate of the models used in this study will be fair - here, fair means moderate - to both NBFCs and farmers. By contrast, the premium suggested by the existing formula-based method may not remain fair in the long term, since it cannot automatically learn such changes and improve.
    Originality/value: This article determines the relative importance of the independent variables in the proposed model, which helps NBFCs advise farmers to concentrate most on the region in which they farm and least on the cultivation time period of a crop. Additionally, a machine learning model that can automatically learn and improve itself is used, so the premium it predicts will be fair. Finally, the entire population of 943 respondents is analysed.
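    The model-comparison step described in the abstract can be sketched with scikit-learn. The data below is synthetic, fabricated only to stand in for the study's 943-respondent dataset; the feature names mirror the proposed model's variables, and the five regressors are the ones the study names. The scores it produces are illustrative, not the study's results:

    ```python
    import numpy as np
    from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                                  ExtraTreesRegressor)
    from sklearn.svm import SVR
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 943  # population size from the study
    X = np.column_stack([
        rng.uniform(0, 1, n),      # regional risk
        rng.integers(60, 180, n),  # cultivation time period (days)
        rng.uniform(0.5, 10, n),   # land area
        rng.poisson(1.0, n),       # claim frequency
        rng.integers(0, 2, n),     # discount eligibility
    ])
    # Synthetic premium, dominated by regional risk to mirror the reported
    # ordering of variable importance.
    y = (5000 * X[:, 0] + 100 * X[:, 2] + 300 * X[:, 3]
         - 200 * X[:, 4] + 2 * X[:, 1] + rng.normal(0, 50, n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    models = {
        "AdaBoost": AdaBoostRegressor(random_state=0),
        "GradientBoosting": GradientBoostingRegressor(random_state=0),
        "ExtraTrees": ExtraTreesRegressor(random_state=0),
        "SVR": SVR(),
        "KNeighbors": KNeighborsRegressor(),
    }
    # Fit each regressor and score it on held-out data.
    scores = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
              for name, m in models.items()}
    ```

    On real data the relative ranking would depend on preprocessing (SVR and K-Neighbors in particular benefit from feature scaling) and on hyperparameter tuning, which the sketch omits.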