12,273 research outputs found

    Effect of Values and Technology Use on Exercise: Implications for Personalized Behavior Change Interventions

    Technology has recently been recruited in the war against the ongoing obesity crisis; however, the adoption of Health & Fitness applications for regular exercise remains a struggle. In this study, we present a unique demographically representative dataset of 15k US residents that combines technology use logs with surveys on moral views, human values, and emotional contagion. Combining these data, we provide a holistic view of individuals to model their physical exercise behavior. First, we show which values determine the adoption of Health & Fitness mobile applications, finding that users who prioritize the value of purity and de-emphasize the values of conformity, hedonism, and security are more likely to use such apps. Further, we achieve a weighted AUROC of .673 in predicting whether an individual exercises, and we show that application usage data allows for substantially better classification performance (.608) than basic demographics (.513) or internet browsing data (.546). We also find a strong link between exercise and respondent socioeconomic status, as well as the value of happiness. Using these insights, we propose actionable design guidelines for persuasive technologies targeting health behavior modification.
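
    As an illustration of the kind of feature-set comparison this abstract reports, the sketch below trains the same classifier on two different feature sets and compares AUROC scores. The synthetic data, feature dimensions, and logistic regression model are assumptions for illustration only, not the study's actual data or pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        X_demo = rng.normal(size=(n, 4))    # stand-in for basic demographics
        X_apps = rng.normal(size=(n, 20))   # stand-in for app-usage features
        # synthetic label correlated more strongly with the app-usage features
        y = (X_apps[:, 0] + 0.2 * X_demo[:, 0] + rng.normal(size=n) > 0).astype(int)

        for name, X in [("demographics", X_demo), ("app usage", X_apps)]:
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
            clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
            print(f"{name}: AUROC = {auc:.3f}")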

    Genetic Programming for Smart Phone Personalisation

    Personalisation in smart phones requires adaptability to dynamic context based on user mobility, application usage, and sensor inputs. Current personalisation approaches, which rely on static logic developed a priori, do not provide sufficient adaptability to dynamic and unexpected context. This paper proposes genetic programming (GP), which can evolve program logic in real time, as an online learning method for dealing with the highly dynamic context of smart phone personalisation. We introduce the concept of collaborative smart phone personalisation through the GP Island Model, in order to exploit shared context among co-located phone users and reduce convergence time. We implement these concepts on real smartphones to demonstrate the capability of personalisation through GP and to explore the benefits of the Island Model. Our empirical evaluations on two example applications confirm that the Island Model can reduce convergence time by up to two-thirds over standalone GP personalisation.
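
    The island-model mechanism this abstract describes can be illustrated with a deliberately simplified evolutionary loop: several populations evolve independently and periodically exchange their best individuals. The sketch below uses bit-string individuals and a toy fitness function in place of evolved program trees; all parameters are illustrative assumptions.

        import random

        GENES, POP, ISLANDS, GENERATIONS, MIGRATE_EVERY = 32, 30, 4, 100, 10

        def fitness(ind):
            return sum(ind)          # toy objective: maximize the number of 1-bits

        def mutate(ind):
            child = ind[:]
            child[random.randrange(GENES)] ^= 1   # flip one random bit
            return child

        islands = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
                   for _ in range(ISLANDS)]

        for gen in range(GENERATIONS):
            for pop in islands:
                pop.sort(key=fitness, reverse=True)
                # replace the weaker half with mutated copies of the stronger half
                pop[POP // 2:] = [mutate(random.choice(pop[:POP // 2]))
                                  for _ in range(POP - POP // 2)]
            if gen % MIGRATE_EVERY == 0:
                # ring migration: each island's best replaces a neighbour's worst
                bests = [max(pop, key=fitness) for pop in islands]
                for i, pop in enumerate(islands):
                    pop[-1] = bests[(i - 1) % ISLANDS][:]

        print("best fitness:", max(fitness(max(pop, key=fitness)) for pop in islands))

    In the collaborative setting the abstract proposes, migration plays the role of shared context: good program logic discovered on one phone seeds the populations of co-located neighbours, which is what reduces convergence time.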

    Learning Location from Shared Elevation Profiles in Fitness Apps: A Privacy Perspective

    The extensive use of smartphones and wearable devices has enabled many useful applications. For example, with Global Positioning System (GPS)-equipped smart and wearable devices, applications can gather, process, and share rich metadata such as geolocation, trajectories, elevation, and time. Fitness applications such as Runkeeper and Strava utilize this information for activity tracking and have recently witnessed a boom in popularity. These fitness tracker applications have their own web platforms and allow users to share activities on those platforms or even on other social networks. To preserve the privacy of users while still allowing sharing, several of those platforms let users disclose only partial information, such as the elevation profile of an activity, which supposedly would not leak the location of the users. In this work, and as a cautionary tale, we create a proof of concept examining the extent to which elevation profiles can be used to predict the location of users. To tackle this problem, we devise three plausible threat settings under which the city or borough of the targets can be predicted; these settings define the amount of information available to the adversary for launching the prediction attacks. After establishing that simple features of elevation profiles, e.g., spectral features, are insufficient, we devise both a natural language processing (NLP)-inspired text-like representation and a computer vision-inspired image-like representation of elevation profiles, converting the problem at hand into text and image classification problems. Using both traditional machine learning- and deep learning-based techniques, we achieve prediction success rates ranging from 59.59% to 99.80%. The findings are alarming, highlighting that sharing elevation information may pose significant location privacy risks. Accepted for publication in IEEE Transactions on Mobile Computing (October 2022).
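
    A hedged sketch of the NLP-inspired representation the abstract mentions: quantize an elevation profile's step-to-step changes into discrete tokens, then treat the token sequence as text for an n-gram classifier. The bin edges, synthetic profiles, and city labels below are illustrative assumptions, not the paper's actual features or data.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        def tokenize_profile(elev, bins=(-5, -1, 1, 5)):
            deltas = np.diff(elev)                     # per-step elevation change (m)
            symbols = np.digitize(deltas, bins)        # quantize into discrete bins
            return " ".join(f"t{s}" for s in symbols)  # text-like token stream

        rng = np.random.default_rng(1)
        hilly = [np.cumsum(rng.normal(0, 4, 200)) for _ in range(50)]  # "city A"
        flat = [np.cumsum(rng.normal(0, 1, 200)) for _ in range(50)]   # "city B"
        docs = [tokenize_profile(e) for e in hilly + flat]
        labels = ["A"] * 50 + ["B"] * 50

        model = make_pipeline(CountVectorizer(ngram_range=(1, 3)), MultinomialNB())
        model.fit(docs, labels)
        print(model.predict([tokenize_profile(np.cumsum(rng.normal(0, 4, 200)))]))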

    Fitness First or Safety First? Examining Adverse Consequences of Privacy Seals in the Event of a Data Breach.

    Data breaches are increasing, and fitness trackers have proven to be an ideal target, as they collect highly sensitive personal health data and are not governed by strict security guidelines. Nevertheless, companies encourage their customers to share data with the fitness tracker by using privacy seals, gaining their trust without ensuring security. Since companies cannot guarantee security, the question arises of how privacy seals function once the security promise is broken. This study examines ways to mitigate the consequences of data breaches in advance so as to maintain continuance intention. Expectation-confirmation theory (ECT) and privacy assurance statements, as one form of privacy seal, are used to influence customer expectations regarding the data security of fitness trackers in the run-up to a data breach. Results show that the use of privacy assurance statements leads to high security expectations, and that failure to meet these expectations has a negative impact on satisfaction and thus on continuance intention.

    Preserving Differential Privacy in Convolutional Deep Belief Networks

    The remarkable development of deep learning in the medicine and healthcare domain presents obvious privacy issues when deep neural networks are built on users' personal and highly sensitive data, e.g., clinical records, user profiles, and biomedical images. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which is essentially a convolutional deep belief network (CDBN) under differential privacy. Our main idea for enforcing epsilon-differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results. One key contribution of this work is the use of Chebyshev expansion to derive an approximate polynomial representation of the objective functions. Our theoretical analysis shows that we can further derive sensitivity and error bounds for this approximate polynomial representation. As a result, preserving differential privacy in CDBNs is feasible. We applied our model to a health social network (the YesiWell data) and to a handwritten digit dataset (MNIST) for human behavior prediction, human behavior classification, and handwritten digit recognition tasks. Theoretical analysis and rigorous experimental evaluations show that the pCDBN is highly effective and significantly outperforms existing solutions.
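
    To make the core idea concrete, the sketch below approximates a one-dimensional objective by a low-order Chebyshev polynomial and perturbs the fitted coefficients with Laplace noise, so that privacy is enforced on the objective function rather than on its output. The polynomial order, epsilon, and the sensitivity constant are placeholder assumptions; the paper derives proper bounds for the CDBN energy objective.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def private_objective(f, order=6, epsilon=1.0, sensitivity=1.0, seed=0):
            # fit a degree-`order` Chebyshev approximation of f on [-1, 1]
            xs = np.cos(np.pi * (np.arange(order + 1) + 0.5) / (order + 1))
            coeffs = C.chebfit(xs, f(xs), order)
            # functional mechanism: perturb the coefficients, not the output
            scale = sensitivity / epsilon   # placeholder sensitivity bound
            noisy = coeffs + np.random.default_rng(seed).laplace(0, scale, coeffs.shape)
            return lambda x: C.chebval(x, noisy)

        loss = lambda x: np.log1p(np.exp(-x))   # a logistic-style loss on [-1, 1]
        priv_loss = private_objective(loss)
        print(priv_loss(np.array([0.0, 0.5])))

    Because the noise is injected once into the polynomial coefficients, the perturbed objective can then be optimized freely without further privacy cost, which is the appeal of the functional mechanism.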

    Conceptualizing human resilience in the face of the global epidemiology of cyber attacks

    Computer security is a complex global phenomenon in which different populations interact and the infection of one person creates risk for another. Given the dynamics and scope of cyber campaigns, studies of local resilience without reference to global populations are inadequate. In this paper, we describe a set of minimal requirements for implementing a global epidemiological infrastructure to understand and respond to large-scale computer security outbreaks. We enumerate the relevant dimensions and the applicable measurement tools, and define a systematic approach to evaluating cyber security resilience. Drawing on our experience conceptualizing and designing a cross-national coordinated phishing resilience evaluation, we describe the cultural, logistic, and regulatory challenges to this proposed public health approach to global cyber attack resilience. We conclude that mechanisms for systematic evaluations of global attacks, and of resilience against those attacks, exist. Coordinated global science is needed to address organised global e-crime.

    Crowd-ML: A Privacy-Preserving Learning Framework for a Crowd of Smart Devices

    Smart devices with built-in sensors, computational capabilities, and network connectivity have become increasingly pervasive. Crowds of smart devices offer opportunities to collectively sense and perform computing tasks at an unprecedented scale. This paper presents Crowd-ML, a privacy-preserving machine learning framework for a crowd of smart devices, which can solve a wide range of learning problems for crowdsensing data with differential privacy guarantees. Crowd-ML endows a crowdsensing system with the ability to learn classifiers or predictors online from crowdsensing data privately, with minimal computational overhead on devices and servers, making it suitable for practical, large-scale deployment. We analyze the performance and scalability of Crowd-ML and implement the system with off-the-shelf smartphones as a proof of concept. We demonstrate the advantages of Crowd-ML with real and simulated experiments under various conditions.
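
    The general pattern the abstract describes, devices learning a shared model privately from crowdsensed data, can be sketched as follows: each device computes a gradient on its local data, clips and perturbs it for differential privacy, and a server averages the noisy updates. The model, clipping bound, and noise scale are illustrative assumptions, not Crowd-ML's actual protocol.

        import numpy as np

        rng = np.random.default_rng(2)
        DIM, DEVICES, ROUNDS, LR, CLIP, EPS = 5, 50, 200, 0.5, 1.0, 1.0

        true_w = rng.normal(size=DIM)
        Xs = [rng.normal(size=(20, DIM)) for _ in range(DEVICES)]
        local = [(X, (X @ true_w > 0).astype(float)) for X in Xs]  # per-device data

        w = np.zeros(DIM)
        for _ in range(ROUNDS):
            updates = []
            for X, y in local:                       # on-device computation
                p = 1 / (1 + np.exp(-X @ w))         # logistic regression gradient
                g = X.T @ (p - y) / len(y)
                g *= min(1.0, CLIP / (np.linalg.norm(g) + 1e-12))  # bound sensitivity
                g += rng.laplace(0, CLIP / EPS, DIM)               # local DP noise
                updates.append(g)
            w -= LR * np.mean(updates, axis=0)       # server averages noisy updates

        acc = np.mean([((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean() for X, y in local])
        print(f"training accuracy: {acc:.2f}")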

    Expecting the Unexpected in Security Violations in Mobile Apps

    Mobile apps are being granted increasing access to and control over users' personal data. This increased access and control may raise users' perception of heightened privacy leakage and security issues, especially if users' awareness and expectations of this external access and control are not accurately shaped by proper security declarations. This proposal therefore puts forth an investigation of the effect of mobile users' privacy expectation disconfirmation on their continued usage intention for mobile apps sourced from app distribution stores. Drawing upon the APCO framework, the security awareness literature, and the expectation-disconfirmation perspective, two key types of security awareness information are identified: access annotations and modification annotations. These types of information can be emphasized in app distribution stores to reduce subsequent privacy expectation disconfirmation. Hence, this study plans to examine the downstream effect of privacy expectation disconfirmation on users' continued usage intention. To operationalize this research, a laboratory experiment will be conducted.

    Detection of Lying Electrical Vehicles in Charging Coordination Application Using Deep Learning

    The simultaneous charging of many electric vehicles (EVs) stresses the distribution system and may, in severe cases, cause grid instability. The best way to avoid this problem is charging coordination: EVs report data (such as the battery's state-of-charge (SoC)) to a mechanism that prioritizes charging requests, selects the EVs that should charge in the current time slot, and defers other requests to future time slots. However, EVs may lie and send false data to illegitimately receive high charging priority. In this paper, we first study this attack to evaluate the gains of the lying EVs and how their behavior impacts the honest EVs and the performance of the charging coordination mechanism. Our evaluations indicate that lying EVs have a greater chance of getting charged compared to honest EVs and that they degrade the performance of the charging coordination mechanism. Then, an anomaly-based detector using deep neural networks (DNNs) is devised to identify the lying EVs. To do so, we first create an honest dataset for the charging coordination application using real driving traces and information released by EV manufacturers, and we then propose a number of attacks to create malicious data. We train and evaluate two models, a multi-layer perceptron (MLP) and a gated recurrent unit (GRU), on this dataset; the GRU detector gives better results. Our evaluations indicate that our detector can detect lying EVs with high accuracy and a low false positive rate.
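
    A hedged sketch of a detector in the spirit of the paper's GRU model: classify a sequence of reported SoC values as honest or lying. The architecture sizes, training setup, and the synthetic "under-reporting" attack below are assumptions for illustration, not the paper's dataset or attack models.

        import torch
        import torch.nn as nn

        class SoCDetector(nn.Module):
            def __init__(self, hidden=32):
                super().__init__()
                self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):          # x: (batch, time, 1) SoC reports
                _, h = self.gru(x)         # final hidden state summarizes the sequence
                return self.head(h[-1]).squeeze(-1)

        # synthetic stand-in data: honest SoC drifts smoothly upward, while
        # "lying" EVs systematically under-report to gain charging priority
        torch.manual_seed(0)
        honest = torch.cumsum(torch.rand(64, 24, 1) * 0.02, dim=1) + 0.3
        lying = honest * 0.5
        x = torch.cat([honest, lying])
        y = torch.cat([torch.zeros(64), torch.ones(64)])

        model = SoCDetector()
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        loss_fn = nn.BCEWithLogitsLoss()
        for epoch in range(50):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        print("final training loss:", float(loss))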