1,440 research outputs found

    Usability, Efficiency and Security of Personal Computing Technologies

    New personal computing technologies such as smartphones and personal fitness trackers are widely integrated into user lifestyles. Users possess a wide range of skills, attributes and backgrounds. It is important to understand user technology practices to ensure that new designs are usable and productive. Conversely, it is important to leverage our understanding of user characteristics to optimize the efficiency and effectiveness of new technology. Our work initially focused on studying older users and personal fitness tracker users. We applied the insights from these investigations to develop new techniques that improve user security protections and computational efficiency while also enhancing the user experience. We contend that by increasing the usability, efficiency and security of personal computing technology, users will gain stronger privacy protections along with greater enjoyment of their personal computing devices. Our first project resulted in an improved authentication system for older users based on familiar facial images. Our investigation revealed that older users are often challenged by traditional text passwords, resulting in decreased technology use or less than optimal password practices. Our graphical password-based system relies on memorable images from the user's personal history. Our usability study demonstrated that this system was easy to use, enjoyable, and fast. We show that this technique is extendable to smartphones. Personal fitness trackers are very popular devices, often worn by users all day. Our personal fitness tracker investigation provides the first quantitative baseline of usage patterns with this device. By exploring public data, we discerned real-world user motivations, reliability concerns, activity levels, and fitness-related socialization patterns. This knowledge lends insight into active user practices. Personal user movement data is captured by sensors, then analyzed to provide benefits to the user. 
The dynamic time warping technique enables comparison of unequal data sequences, and sequences containing events at offset times. Existing techniques target short data sequences. Our Phase-aware Dynamic Time Warping algorithm focuses on a class of sinusoidal user movement patterns, resulting in improved efficiency over existing methods. Lastly, we address user data privacy concerns in an environment where user data is increasingly flowing to manufacturer remote cloud servers for analysis. Our secure computation technique protects the user's privacy while data is in transit and while resident on cloud computing resources. Our technique also protects important data on cloud servers from exposure to individual users
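The Phase-aware variant itself is not described in the abstract; as a point of reference, the classic dynamic-programming DTW that existing techniques build on can be sketched in a few lines of pure Python (illustrative only, with invented sample sequences):

```python
def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic-programming DTW:
    # cost[i][j] = local distance + cheapest of the three predecessor cells.
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Two sequences of unequal length containing the same event at offset times:
s1 = [0, 1, 2, 3, 2, 1, 0]
s2 = [0, 0, 1, 2, 3, 2, 1, 0]
print(dtw_distance(s1, s2))  # 0.0 -- DTW aligns the time-shifted peak exactly
```

This quadratic formulation is precisely what becomes expensive on long sensor streams, which motivates specialized variants such as the phase-aware algorithm described above.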

    Secure and Privacy-Preserving Automated Machine Learning Operations into End-to-End Integrated IoT-Edge-Artificial Intelligence-Blockchain Monitoring System for Diabetes Mellitus Prediction

    Diabetes Mellitus, one of the leading causes of death worldwide, has no cure to date and, if left untreated, can lead to severe health complications such as retinopathy, limb amputation, cardiovascular disease, and neuronal disorders. Consequently, it becomes crucial to take precautionary measures to predict and avoid the occurrence of diabetes. Machine learning approaches have been proposed and evaluated in the literature for diabetes prediction. This paper proposes an IoT-edge-Artificial Intelligence (AI)-blockchain system for diabetes prediction based on risk factors. The proposed system is underpinned by the blockchain to obtain a cohesive view of the risk factor data from patients across different hospitals and to ensure the security and privacy of user data. Furthermore, we provide a comparative analysis of different medical sensors, devices, and methods to measure and collect the risk factor values in the system. Numerical experiments and a comparative analysis were carried out between our proposed system, using the most accurate random forest (RF) model, and the two most widely used state-of-the-art machine learning approaches, Logistic Regression (LR) and Support Vector Machine (SVM), on three real-life diabetes datasets. The results show that the proposed system using RF predicts diabetes with 4.57% more accuracy on average compared to LR and SVM, at 2.87 times the execution time. Data balancing without feature selection does not show significant improvement. The performance improves by 1.14% and 0.02% after feature selection for the PIMA Indian and Sylhet datasets respectively, while it drops by 0.89% for MIMIC III
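The abstract does not specify the internals of the blockchain layer; as a rough illustration of how hash chaining can give a tamper-evident, cohesive view of risk-factor records contributed by different hospitals, consider this minimal sketch (all field names and values are hypothetical):

```python
import hashlib
import json

def block_hash(prev_hash, record):
    # Deterministic digest over the previous hash and the record payload
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_record(chain, record):
    # Each block commits to its predecessor, so edits break all later links
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "record": record,
                  "hash": block_hash(prev, record)})

def verify(chain):
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(b["prev"], b["record"]):
            return False
        prev = b["hash"]
    return True

chain = []
append_record(chain, {"hospital": "A", "glucose": 148, "bmi": 33.6})
append_record(chain, {"hospital": "B", "glucose": 85, "bmi": 26.6})
assert verify(chain)

chain[0]["record"]["glucose"] = 999  # tampering with an early record...
assert not verify(chain)             # ...invalidates the chain
```

A production system would of course add consensus, signatures, and access control; the sketch only shows why chained hashes make cross-hospital records tamper-evident.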

    Internalizing Data Collection: Personal Analytics as an Investigation of the Self

    Personal analytics, also known as self-tracking, is the practice of using a digital device to track aspects of behavior, such as exercise habits, heart rate, sleep patterns, location, diet, and a host of other data points. This dissertation is an exploration of the “self” in self-tracking, informed by theories of subjectivity, autonomy, power and knowledge. As a technological intervention, self-tracking devices change how we experience our own body and behavior. They also serve as methods to digitize human behavior. This data is combined with other data and processed using computational methods. Self-tracking devices are both personal and bureaucratic. They are devices used for self-care and for institutional processes. As mediating objects, they occupy a multifaceted position that they share with other forms of mediated experience. Like social media, which is both a form of personal expression and a way to track users’ behavior, self-tracking participates in changing attitudes about surveillance. People are willing to subject themselves to surveillance and are largely unaware of, or unconcerned by, the ways in which self-surveillance is the same thing as institutional surveillance. This study positions self-tracking as a practice of institutional population management, not simply a personal exercise tool. A Fitbit might seem to simply measure a “step,” an identifiable metric that exists regardless of whether it is counted. Yet how can this metric be considered neutral and objective when its institutional purpose guides its development? Thinking of measurement as neutral ignores the process by which anything comes to be measured. All kinds of decisions—about what to count, how to count it, and what to do with the data—are made prior to the end user’s experience. Measurement is a cultural activity, and thus the outcome of this data collection is never neutral with respect to power. 
By looking at fitness-tracker privacy policies, workplace wellness programs, data sharing practices, and advertising materials, I trace the discursive practices surrounding self-tracking. As we surveil our bodies and behavior, we enact a focused attention upon the self. Understanding the consequence of this focus is crucial to understanding how data operates in today’s economy. My overall critique of data in this dissertation concerns how the focus on self obscures the institutional uses and abuses of data. The epistemic affordances of data flow in multiple directions. Self-tracking devices offer the promise to reveal hidden data about the self. They accomplish something different—they create the means to recraft the self into something else entirely. They make the self into an entity that is knowable and therefore able to be the subject of market transactions and manipulated by institutions

    Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning

    CCTV cameras produce a large amount of video surveillance data per day, and analysing it requires significant computing resources that often need to be scalable. The emergence of the Hadoop distributed processing framework has had a significant impact on various data-intensive applications, as distributed processing increases the processing capability of the applications it serves. Hadoop is an open source implementation of the MapReduce programming model. It automates the creation of tasks for each function, distributes data, parallelizes execution and handles machine failures, relieving users of the complexity of managing the underlying processing so that they can focus on building their applications. In a practical deployment, the challenge of a Hadoop-based architecture is that it requires several scalable machines for effective processing, which in turn adds hardware investment cost to the infrastructure. Although a cloud infrastructure offers scalable and elastic utilization of resources, where users can scale the number of Virtual Machines (VMs) up or down as required, a user such as a CCTV system operator intending to use a public cloud would aspire to know what cloud resources (i.e. the number of VMs) need to be deployed so that the processing can be done in the fastest (or within a known time constraint) and most cost-effective manner. Often such resources will also have to satisfy practical, procedural and legal requirements. The capability to model a distributed processing architecture in which the resource requirements can be effectively and optimally predicted would thus be a useful tool, if available. In the literature there is no clear and comprehensive modelling framework that provides proactive resource allocation mechanisms to satisfy a user's target requirements, especially for a processing-intensive application such as video analytics. 
In this thesis, with the aim of closing the above research gap, novel research is first initiated by understanding the current legal practices and requirements of implementing a video surveillance system within a distributed processing and data storage environment, since the legal validity of data gathered or processed within such a system is vital for a distributed system's applicability in such domains. Subsequently, the thesis presents a comprehensive framework for the performance modelling and optimization of resource allocation in deploying a scalable distributed video analytic application in a Hadoop-based framework, running on a virtualized cluster of machines. The proposed modelling framework investigates the use of several machine learning algorithms, such as decision trees (M5P, RepTree), Linear Regression, Multi-Layer Perceptron (MLP) and the Ensemble Classifier Bagging model, to model and predict the execution time of video analytic jobs based on infrastructure-level as well as job-level parameters. Further, in order to allocate resources under constraints to obtain optimal performance in terms of job execution time, we propose a Genetic Algorithm (GA) based optimization technique. Experimental results are provided to demonstrate the proposed framework's capability to successfully predict the job execution time of a given video analytic task based on infrastructure and input data related parameters, and its ability to determine the minimum job execution time given constraints on these parameters. Given the above, the thesis contributes to the state of the art in distributed video analytics design, implementation, performance analysis and optimisation
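As a deliberately simplified illustration of the overall idea — fit a model to profiling runs, then search for the cheapest allocation meeting a deadline — the sketch below uses ordinary least squares on hypothetical measurements and exhaustive search in place of the thesis's M5P/MLP models and GA optimizer:

```python
def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b (simple one-feature regression)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical profiling runs of one video-analytic job:
# execution time scales roughly with 1/(number of VMs).
vms = [2, 4, 8, 16]
times = [120.0, 60.0, 30.0, 15.0]          # seconds, invented data
a, b = fit_linear([1.0 / v for v in vms], times)

def predict(n_vms):
    # Predicted job execution time for a given VM count
    return a / n_vms + b

# Constrained allocation: smallest VM count meeting a 25-second deadline.
# (A GA would explore this space when many parameters interact.)
deadline = 25.0
best = min(v for v in range(1, 33) if predict(v) <= deadline)
print(best)  # 10
```

With many interacting parameters (VM count, data size, codec, job type), the search space is no longer a one-dimensional scan, which is where the GA-based optimizer earns its keep.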

    Marketing Research 2.0

    Authors: Krisztián Szűcs; Erika Lázár; Péter Németh | Title: Marketing Research 2.0 | Publisher: University of Pécs Faculty of Business and Economics, Department of Marketing and Tourism, Pécs, 2020 | ISBN (pdf) 978-963-429-630-0 --- Marketing research always follows the trends and improves its methods according to the ever-changing demands of companies. Thus, consecutive periods, alternating between growth and decline, enliven the days of researchers. We can already see that the mid-‘90s and the second half of the decade marked the end of an era, or rather the beginning of a new chapter that has been evolving steadily to this day, changing everything we had learned. The changes were hard to detect in the Hungary of the late ‘90s and the turn of the millennium, yet they had already started in the field of applied marketing research in economically developed countries. The changes presented themselves mainly due to technological development and were later amplified by the global economic crisis in the first decade of the new millennium. These two effects triggered fundamental changes in the industry and its research methods. First, the efficiency of traditional techniques and the novelty of their results were questioned; then, by the years of the crisis, even the value-creating potential of research firms was disputed. The past decade witnessed a kind of renewal that entails a significant transformation of methodologies on the one hand, while on the other it has forced research companies on the market to identify and develop new skills and competencies. Our book summarises this process with its main stages and turning points, while noting some limitations to set our frame of reference. Consequently, we will not discuss the methodological transformation of fundamental research, the changes in mathematical and statistical tools, or the developments in B2B and other fields of research. 
We will try, however, to provide insights into a wide range of topics, such as the current status of consumer research and the trends shaping the near future, and we will also sketch a generic model that has taken the place of the former cooperation among the actors of the industry, ultimately changing the points of reference for researchers who want to appear on the market with competitive services

    Security Issues of Mobile and Smart Wearable Devices

    Mobile and smart devices (ranging from popular smartphones and tablets to wearable fitness trackers equipped with sensing, computing and networking capabilities) have proliferated lately and redefined the way users carry out their day-to-day activities. These devices bring immense benefits to society and promise improved quality of life for users. As mobile and smart technologies become increasingly ubiquitous, the security of these devices becomes more urgent, and users should take precautions to keep their personal information secure. Privacy has also been called into question, as many mobile and smart devices collect and process huge quantities of data and routinely store them in the cloud. Ensuring the confidentiality, integrity, and authenticity of this information is a cybersecurity challenge with no easy solution. Unfortunately, current security controls have not kept pace with the risks posed by mobile and smart devices, and have proven patently insufficient so far. Thwarting attacks is also a thriving research area with a substantial number of still unsolved problems. The pervasiveness of smart devices, the growing attack vectors, and the current lack of security call for an effective and efficient way of protecting mobile and smart devices. This thesis deals with the security problems of mobile and smart devices, providing specific methods for improving current security solutions. Our contributions are grouped into two related areas which present natural intersections and correspond to the two central parts of this document: (1) Tackling Mobile Malware, and (2) Security Analysis of Wearable and Smart Devices. In the first part of this thesis, we study methods and techniques to assist security analysts in tackling mobile malware and automating the identification of malicious applications. 
We provide threefold contributions in tackling mobile malware. First, we introduce a Secure Message Delivery (SMD) protocol for Device-to-Device (D2D) networks, whose primary objective is to choose the most secure path for delivering a message from a sender to a destination in a multi-hop D2D network. Second, we present a survey investigating concrete and relevant questions concerning Android code obfuscation and protection techniques, the purpose of which is to review code obfuscation and code protection practices. We evaluate the efficacy of existing code de-obfuscation tools in tackling obfuscated Android malware (which provides attackers with the ability to evade detection mechanisms). Finally, we propose a Machine Learning-based detection framework to hunt malicious Android apps, introducing a system that detects and classifies newly-discovered malware through application analysis. The proposed system distinguishes different types of malware from each other and helps to better understand how malware can infect devices, the threat level it poses and how to protect against it. Our designed system leverages more complete coverage of apps’ behavioral characteristics than the state of the art, integrates the most performant classifier, and exploits the robustness of the extracted features. The second part of this dissertation conducts an in-depth security analysis of the most popular wearable fitness trackers on the market. Our contributions in this domain are grouped into four central parts. First, we analyze the primitives governing the communication between fitness trackers and cloud-based services. In addition, we investigate communication requirements in this setting, such as: (i) Data Confidentiality, (ii) Data Integrity, and (iii) Data Authenticity. Second, we present real-world demonstrations of how modern wearable devices are vulnerable to false data injection attacks. 
Also, we document the successful injection of falsified data that appears legitimate to cloud-based services in order to obtain personal benefits. Third, we circumvent the end-to-end protocol encryption implemented in the most advanced and secure fitness trackers (e.g., Fitbit, the market leader) through hardware-based reverse engineering. Last but not least, we provide guidelines for avoiding similar vulnerabilities in future system designs
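The SMD protocol's path-selection mechanics are not given in the abstract; one common way to realize "most secure path" selection is a shortest-path search over per-hop risk weights, sketched here with a hypothetical D2D topology and invented integer risk scores:

```python
import heapq

def most_secure_path(graph, src, dst):
    # Dijkstra over "risk" weights: the lowest-total-risk route is
    # treated as the most secure delivery path.
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, risk in graph.get(u, []):
            nd = d + risk
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the chosen route back from the destination
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Hypothetical multi-hop D2D network; edge weights are per-hop risk scores
g = {
    "sender": [("a", 1), ("b", 5)],
    "a": [("b", 1), ("dest", 6)],
    "b": [("dest", 1)],
}
path, risk = most_secure_path(g, "sender", "dest")
print(path, risk)  # ['sender', 'a', 'b', 'dest'] 3
```

How per-hop risk is scored (device reputation, patch level, observed behavior) is exactly the kind of policy the actual protocol would define; the graph search itself is standard.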

    Understanding and Enriching Randomness Within Resource-Constrained Devices

    Random Number Generators (RNGs) find use throughout all applications of computing, from high-level statistical modeling all the way down to essential security primitives. A significant amount of prior work has investigated this space, as a poorly performing generator can have significant impacts on the algorithms that rely on it. However, the recent explosive growth of the Internet of Things (IoT) has brought forth a class of devices for which common RNG algorithms may not provide an optimal solution. Furthermore, new hardware creates opportunities that have not yet been explored with these devices. In this dissertation, we present research fostering a deeper understanding and enrichment of the state of randomness within the context of resource-constrained devices. First, we present an exploratory study into methods of generating random numbers on devices with sensors. We perform a data collection study across 37 Android devices to determine how much random data is consumed, and which sensors are capable of producing sufficiently entropic data. We use the results of our analysis to create an experimental framework called SensoRNG, which serves as a prototype to test the efficacy of a sensor-based RNG. SensoRNG employs opportunistic collection of data from on-board sensors and applies a light-weight mixing algorithm to produce random numbers. We evaluate SensoRNG with the National Institute of Standards and Technology (NIST) statistical testing suite and demonstrate that a sensor-based RNG can provide high-quality random numbers with only little additional overhead. Second, we explore the design, implementation, and efficacy of a Collaborative and Distributed Entropy Transfer protocol (CADET), which moves random number generation from an individual task to a collaborative one. Through the sharing of excess random data, devices that are unable to meet their own needs can be aided by contributions from other devices. 
We implement and test a proof-of-concept version of CADET on a testbed of 49 Raspberry Pi 3B single-board computers, which have been underclocked to emulate resource-constrained devices. Through this, we evaluate and demonstrate the efficacy and baseline performance of remote entropy protocols of this type, as well as highlight remaining research questions and challenges. Finally, we design and implement a system called RightNoise, which automatically profiles the RNG activity of a device by using techniques adapted from language modeling. First, by performing offline analysis, RightNoise is able to mine and reconstruct, in the context of a resource-constrained device, the structure of different activities from raw RNG access logs. After recovering these patterns, the device is able to profile its own behavior in real time. We give a thorough evaluation of the algorithms used in RightNoise and show that, with only five instances of each activity type per log, RightNoise is able to reconstruct the full set of activities with over 90% accuracy. Furthermore, classification is very quick, with an average speed of 0.1 seconds per block. We finish this work by discussing real-world application scenarios for RightNoise
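SensoRNG's actual light-weight mixing algorithm is not reproduced here; the toy below merely illustrates the general pattern of opportunistically absorbing noisy sensor samples into a hash-based pool and squeezing output bytes from it (sample values are invented, and this sketch is not a substitute for a vetted CSPRNG):

```python
import hashlib
import struct

class SensorPool:
    """Toy entropy pool: absorb noisy sensor readings, squeeze bytes via SHA-256."""

    def __init__(self):
        self.state = b"\x00" * 32
        self.counter = 0

    def absorb(self, reading: float):
        # Fold a raw sensor sample into the pool state
        sample = struct.pack("<d", reading)
        self.state = hashlib.sha256(self.state + sample).digest()

    def random_bytes(self, n: int) -> bytes:
        # Derive output blocks from the state plus a counter,
        # so repeated calls yield fresh bytes
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(self.state
                                  + struct.pack("<Q", self.counter)).digest()
        return out[:n]

pool = SensorPool()
for accel in [0.0123, 9.8071, 9.8132, 0.0119]:  # hypothetical accelerometer samples
    pool.absorb(accel)
r = pool.random_bytes(16)
print(r.hex())
```

The output quality obviously depends entirely on how entropic the absorbed samples are, which is exactly what the NIST statistical testing mentioned above is used to evaluate.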

    Machine Learning Enabled Vital Sign Monitoring System

    Internet of Things (IoT)-based remote health monitoring systems have enormous potential to become an integral part of the future medical system. In particular, these systems can play life-saving roles in treating or monitoring patients with critical health issues. They can also reduce pressure on the health-care system by reducing unnecessary hospital visits. Any health care monitoring system must be free from erroneous data, which may arise because of instrument failure or communication errors. In this thesis, machine-learning techniques are implemented to assess the reliability and accuracy of data obtained by IoT-based remote health monitoring. A system is set up in which vital health signs, namely blood pressure, respiratory rate, and pulse rate, are collected using Spire Stone and iHealth Sense devices. This data is then sent to an intermediate device and on to the cloud. In this system, it is assumed that the channel for transmitting data (vital signs) from users to the cloud server is error-free. Afterward, the information is extracted from the cloud, and two machine learning techniques, i.e., Support Vector Machine and K-Nearest Neighbor, are applied to compare their accuracy in distinguishing correct from erroneous data. The thesis undertakes two different approaches to erroneous data detection. In the first approach, an unsupervised model called an Auto Encoder (AE) is used to label the data using its latent features. The labeled data from the AE is then used as ground truth for comparing the accuracy of supervised learning models. In the second approach, the raw data is labeled based on the correlation between various features. The accuracy comparison is performed between strongly correlated and weakly correlated features. Finally, an accuracy comparison between the two approaches is performed to check which method performs better at detecting erroneous data for the given dataset
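As a toy illustration of the supervised side of such a pipeline, a k-nearest-neighbor classifier can flag implausible vital-sign readings once labels exist (all sample values and labels below are invented; in the thesis the labels come from the AE or correlation-based approaches):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # k-nearest-neighbor vote on Euclidean distance in feature space
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (systolic BP, respiratory rate, pulse rate) samples labeled
# "ok" vs "err" (e.g. a sensor glitch producing implausible combinations)
train = [
    ((118, 14, 72), "ok"), ((124, 16, 80), "ok"), ((110, 12, 65), "ok"),
    ((250, 2, 200), "err"), ((0, 40, 180), "err"), ((300, 1, 30), "err"),
]
print(knn_predict(train, (120, 15, 75)))   # expected "ok"
print(knn_predict(train, (280, 3, 190)))   # expected "err"
```

Swapping in an SVM changes the decision boundary but not the workflow: labeled vitals in, a correct/erroneous verdict out, with accuracy compared across the two labeling strategies.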