2,981 research outputs found

    Anticipatory Mobile Computing: A Survey of the State of the Art and Research Challenges

    Today's mobile phones are far from the mere communication devices they were ten years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then reasoning and acting upon the predicted context. This article provides an overview of the current state of the art in mobile sensing and context prediction, paving the way for full-fledged anticipatory mobile computing. We present a survey of phenomena that mobile phones can infer and predict, and offer a description of machine learning techniques used for such predictions. We then discuss proactive decision making and decision delivery via the user-device feedback loop. Finally, we discuss the challenges and opportunities of anticipatory mobile computing. Comment: 29 pages, 5 figures.

    Group Activity Recognition Using Wearable Sensing Devices

    Understanding the behavior of groups in real time can help prevent tragedy in crowd emergencies. Wearable devices allow sensing of human behavior, but the infrastructure required to communicate data is often the first casualty in emergency situations. Peer-to-peer (P2P) methods for recognizing group behavior are therefore necessary, yet the behavior of the group cannot be observed at any single location. The contribution of this work is the set of methods required for recognizing group behavior using only wearable devices.

    A Novel Energy-Efficient Approach for Human Activity Recognition

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is lower than the activity frequency, i.e., when the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and six females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low-sampling-rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper.
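The two-stage idea behind a hierarchical classifier such as the HSVMCC described above can be sketched minimally: a coarse first stage separates static from dynamic activities, and a second stage refines the decision within the chosen group. The sketch below is an illustration, not the paper's classifier: it uses nearest-centroid models as dependency-free stand-ins for the SVMs, and the activity names and synthetic (mean acceleration, variance) features are invented for the example.

```python
# Minimal two-stage (hierarchical) activity classifier. Stage 1 separates
# static from dynamic activities; stage 2 picks the concrete activity within
# the chosen group. Nearest-centroid models stand in for the SVMs of the
# HSVMCC approach described above.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest(x, centroids):
    """Return the label whose centroid is closest to x (squared Euclidean)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lab])))

# Synthetic training features: (mean acceleration, variance) -- illustrative only.
train = {
    "sitting":  [[1.0, 0.01], [1.0, 0.02]],
    "standing": [[1.0, 0.05], [1.1, 0.04]],
    "walking":  [[1.2, 0.60], [1.3, 0.55]],
    "running":  [[1.8, 1.50], [1.9, 1.40]],
}
groups = {"static": ["sitting", "standing"], "dynamic": ["walking", "running"]}

# Stage-1 centroids (one per group) and stage-2 centroids (one per activity).
stage1 = {g: centroid([v for a in acts for v in train[a]])
          for g, acts in groups.items()}
stage2 = {g: {a: centroid(train[a]) for a in acts}
          for g, acts in groups.items()}

def classify(x):
    group = nearest(x, stage1)        # coarse decision first...
    return nearest(x, stage2[group])  # ...then refine within the group

print(classify([1.85, 1.45]))  # -> running
print(classify([1.0, 0.015]))  # -> sitting
```

The hierarchy is what makes low sampling rates workable: the coarse static/dynamic split needs far less signal detail than the fine-grained decision.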

    Privacy-preserving human mobility and activity modelling

    The exponential proliferation of digital trends and worldwide responses to the COVID-19 pandemic have thrust the world into digitalization and interconnectedness, pushing ever more new technologies, devices, and applications into the market. Increasingly intimate user data are collected for beneficial purposes, such as improving well-being, but are shared with or without the user's consent, underscoring the importance of making human mobility and activity models inclusive, private, and fair. In this thesis, I develop and implement advanced methods and algorithms to model human mobility and activity in terms of temporal-context dynamics, multi-occupancy impacts, privacy protection, and fair analysis. The following research questions have been thoroughly investigated: i) whether temporal information integrated into deep learning networks can improve prediction accuracy for both the next activity and its timing; ii) what the trade-off is between cost and performance when optimizing the sensor network for multi-occupancy smart homes; iii) whether malicious uses such as user re-identification in human mobility modelling can be mitigated by adversarial learning; iv) what the fairness implications of mobility models are, and whether privacy-preserving techniques perform equally well for different groups of users. To answer these research questions, I develop different architectures to model human activity and mobility. I first clarify the temporal-context dynamics in human activity modelling and achieve better prediction accuracy by appropriately using the temporal information. I then design a framework, MoSen, to simulate the interaction dynamics among residents and intelligent environments and to generate an effective sensor network strategy. To relieve users' privacy concerns, I design Mo-PAE and show that the privacy of mobility traces attains decent protection at a marginal utility cost.
Last but not least, I investigate the relations between fairness and privacy and conclude that while the privacy-aware model guarantees group fairness, it violates the individual fairness criteria.
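The privacy-utility trade-off that Mo-PAE is reported to negotiate can be illustrated with a far simpler stand-in: perturbing a mobility trace with random noise lowers a linking adversary's re-identification rate at the cost of location error. This sketch is not the adversarial-learning architecture from the thesis; the users, home locations, Gaussian noise mechanism, and nearest-home adversary are all illustrative assumptions.

```python
import random

random.seed(0)

# Five illustrative users, each with a fixed "home" location; a mobility
# trace is a set of noisy GPS-like samples scattered around that home.
homes = {u: (10.0 * u, 5.0 * u) for u in range(5)}

def trace(user, n=50, spread=0.5):
    hx, hy = homes[user]
    return [(hx + random.gauss(0, spread), hy + random.gauss(0, spread))
            for _ in range(n)]

def perturb(points, scale):
    """Privacy mechanism stand-in: add Gaussian noise to every point."""
    return [(x + random.gauss(0, scale), y + random.gauss(0, scale))
            for x, y in points]

def reidentify(point):
    """Linking adversary: attribute a point to the user with the nearest home."""
    return min(homes, key=lambda u: (point[0] - homes[u][0]) ** 2
                                    + (point[1] - homes[u][1]) ** 2)

def evaluate(scale):
    """Return (re-identification rate, mean location error) at a noise scale."""
    hits, err, total = 0, 0.0, 0
    for user in homes:
        original = trace(user)
        released = perturb(original, scale)
        for (x, y), (px, py) in zip(original, released):
            hits += reidentify((px, py)) == user
            err += ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            total += 1
    return hits / total, err / total

for scale in (0.0, 2.0, 8.0):
    rate, err = evaluate(scale)
    print(f"noise scale {scale:4.1f}: re-id rate {rate:.2f}, mean error {err:.2f}")
```

Running it shows the expected monotone trade-off: as the noise scale grows, the re-identification rate falls while the mean location error rises, which is the tension an adversarial model like Mo-PAE tries to resolve more gracefully than blunt noise.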

    Agency

    "There is agency in all we do: thinking, doing, or making. We invent a tune, play, or use it to celebrate an occasion. Or we make a conceptual leap and ask more abstract questions about the conditions for agency. They include autonomy and self-appraisal, each contested by arguments immersing us in circumstances we don’t control. But can it be true that we have no personal responsibility for all we think and do? Agency: Moral Identity and Free Will proposes that deliberation, choice, and free will emerged within the evolutionary history of animals with a physical advantage: organisms having cell walls or exoskeletons had an internal space within which to protect themselves from external threats or encounters. This defense was both structural and active: such organisms could ignore intrusions or inhibit risky behavior. Their capacities evolved with time: inhibition became the power to deliberate and choose the manner of one’s responses. Hence the ability of humans and some other animals to determine their reactions to problematic situations or to information that alters values and choices. This is free will as a material power, not as the conclusion to a conceptual argument. Having it makes us morally responsible for much we do. It prefigures moral identity. Closely argued but plainly written, Agency: Moral Identity and Free Will speaks for autonomy and responsibility when both are eclipsed by ideas that embed us in history or tradition. Our sense of moral choice and freedom is accurate. We are not altogether the creatures of our circumstances."

    Situation inference and context recognition for intelligent mobile sensing applications

    The usage of smart devices is an integral element of our daily life. With the richness of data streaming from sensors embedded in these smart devices, the applications of ubiquitous computing are limitless for future intelligent systems. Situation inference is a non-trivial issue in ubiquitous computing research due to the challenges of mobile sensing in unrestricted environments. There are various advantages to having robust and intelligent situation inference from data streamed by mobile sensors. For instance, we would be able to gain a deeper understanding of human behaviours in certain situations via a mobile sensing paradigm. This understanding can then be used to recommend resources or actions for enhanced cognitive augmentation, such as improved productivity and better human decision making. Sensor data can be streamed continuously from heterogeneous sources with different frequencies in a pervasive sensing environment (e.g., a smart home). It is difficult and time-consuming to build a model capable of recognising multiple activities, which can be performed simultaneously and with different granularities. We investigate the separability of multiple activities in time-series data and develop OPTWIN, a technique to determine the optimal time window size to be used in a segmentation process. This novel technique reduces the need for sensitivity analysis, which is an inherently time-consuming task. To achieve an effective outcome, OPTWIN leverages multi-objective optimisation, minimising the impurity (the number of windows that overlap more than one activity label in the time-series data) while maximising class separability. The next issue is to effectively model and recognise multiple activities based on the user's contexts. Hence, an intelligent system should address the problem of multi-activity and context recognition prior to the situation inference process in mobile sensing applications.
The performance of simultaneous recognition of human activities and contexts can easily be affected by the choice of modelling approach. We investigate the associations between these activities and contexts at multiple levels of the mobile sensing perspective to reveal the dependency property of the multi-context recognition problem. We design a Mobile Context Recognition System, which incorporates a Context-based Activity Recognition (CBAR) modelling approach, to produce effective outcomes from both multi-stage and multi-target inference processes and to recognise human activities and their contexts simultaneously. In our empirical evaluation on real-world datasets, the CBAR modelling approach significantly improved the overall accuracy of simultaneous inference of the transportation mode and human activity of mobile users. The accuracy of activity and context recognition is also progressively influenced by how reliable user annotations are: reliable annotation is essential for activity and context recognition, and these annotations are usually acquired during data capture in the wild. We investigate how to effectively reduce user burden during mobile sensor data collection through experience sampling of these annotations in the wild. To this end, we design CoAct-nnotate, a technique that aims to improve the sampling of human activities and contexts by providing accurate annotation prediction and facilitating interactive user feedback acquisition for ubiquitous sensing. CoAct-nnotate incorporates a novel multi-view multi-instance learning mechanism to perform more accurate annotation prediction. It also includes a progressive learning process (i.e., model retraining based on co-training and active learning) to improve its predictive performance over time. Moving beyond context recognition of mobile users, human activities can be related to essential tasks that users perform in daily life.
However, the boundaries between types of tasks are inherently difficult to establish, as they can be defined differently from individuals' perspectives. Consequently, we investigate the implications of contextual signals for user tasks in mobile sensing applications. To define the boundaries of tasks and hence recognise them, we incorporate this situation inference process (i.e., task recognition) into the proposed Intelligent Task Recognition (ITR) framework, which learns users' Cyber-Physical-Social activities from their mobile sensing data. By accurately recognising the engaged tasks at a given time via mobile sensing, an intelligent system can offer proactive support to its users to progress and complete their tasks. Finally, for robust and effective learning of mobile sensing data from heterogeneous sources (e.g., the Internet of Things in a mobile crowdsensing scenario), we investigate the utility of sensor data in provisioning its storage and design QDaS, an application-agnostic framework for quality-driven data summarisation. QDaS performs effective data summarisation via density-based clustering on multivariate time-series data from a selected source (i.e., data provider), where the source selection process is determined by a measure of data quality. This framework allows intelligent systems to retain comparable predictive results by learning effectively from compact representations of mobile sensing data, while achieving a higher space-saving ratio. This thesis contains novel contributions in terms of techniques that can be employed for mobile situation inference and context recognition, especially in the domains of ubiquitous computing and intelligent assistive technologies. This research implements and extends the capabilities of machine learning techniques to solve real-world problems in multi-context recognition, mobile data summarisation and situation inference from mobile sensing.
We firmly believe that the contributions of this research will help future studies move forward in building more intelligent systems and applications.
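The impurity component of OPTWIN's window-size selection can be sketched as follows. This is a deliberate simplification: OPTWIN is multi-objective, trading impurity against class separability, whereas the sketch keeps only the impurity term, and the label stream is invented for the example.

```python
# Window-size selection in the spirit of OPTWIN: for each candidate window
# length, measure the fraction of sliding windows that straddle an activity
# label boundary ("impure" windows containing more than one label).

labels = ["walk"] * 40 + ["sit"] * 40 + ["walk"] * 40  # illustrative label stream

def impurity(labels, width, step=1):
    """Fraction of windows containing more than one activity label."""
    windows = [labels[i:i + width]
               for i in range(0, len(labels) - width + 1, step)]
    mixed = sum(1 for w in windows if len(set(w)) > 1)
    return mixed / len(windows)

candidates = [5, 10, 20, 40]
scores = {w: round(impurity(labels, w), 3) for w in candidates}
best = min(scores, key=scores.get)
print(scores)                  # impurity grows with window length here
print("chosen window:", best)  # -> 5
```

Minimising impurity alone always favours the shortest window; OPTWIN's second objective, class separability, pushes back toward longer windows, which is why the full method needs the multi-objective trade-off rather than this single score.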

    Development of a privacy-preserving computer vision method for automatic monitoring of physical activity in schools

    The electronic version of this thesis does not include the publications.

How to observe people without seeing them? They say it's not polite to stare. The right to privacy is considered a human right. However, there is much in human behavior that scientists would like to study via observation. For example, we want to know whether children start moving more during recess if smartphones are banned at school. To find out, scientists would have to ask for parental consent to carry out the observation. Assuming parents grant permission, a huge amount of labour would be needed for classical observation: several observers in the schoolhouse every day for a sufficiently long period before and after the smartphone ban. With my doctoral thesis, I tried to solve both the problem of privacy and the problem of labour by replacing the human observer with artificial intelligence (AI). Modern machine learning methods allow training models that automatically detect objects and their properties in images or video. If we want an AI that recognizes people in images, we need to form a machine learning dataset with pictures of people and pictures without people. If we want an AI that differentiates between low and high physical activity in video, we need a corresponding video dataset. In my doctoral thesis, I collected a dataset in which video of children's movement is synchronized with hip-worn accelerometers, in order to train a model that differentiates between lower and higher levels of physical activity in video. In collaboration with the iCV lab at the Institute of Technology, we developed a prototype video analysis sensor that can estimate, at real-time speed, the level of physical activity of people in the camera's field of view.
The fact that AI can derive physical activity information from video without recording the footage or showing it to anyone at all makes it possible to observe people without seeing them. The method is designed for measuring physical activity in school-based research and therefore highly prioritizes privacy protection and research ethics. More broadly, however, the thesis illustrates the potential of computer vision technologies for processing visual information in urban spaces and workplaces, and not only for measuring physical activity under strict research-ethics standards. This warrants wider public discussion: under what conditions, if at all, is it OK to have a robot staring at you? https://www.ester.ee/record=b555972
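As a toy illustration of the privacy property described above (deriving an activity signal from video without retaining footage), the sketch below uses mean absolute frame difference as a crude motion-intensity proxy. It is not the thesis's learned model, which is trained against accelerometer ground truth; the frames here are tiny invented grey-level grids.

```python
# Crude stand-in for video-based physical-activity estimation: mean absolute
# per-pixel change between consecutive frames as a motion-intensity proxy.
# The privacy-relevant property: each frame is discarded right after one
# comparison, so no footage accumulates anywhere.

def motion_intensity(frames):
    """Average per-pixel change between consecutive frames; only the most
    recent frame is held in memory at any time."""
    prev, total, count = None, 0, 0
    for frame in frames:
        if prev is not None:
            total += sum(abs(a - b)
                         for row_a, row_b in zip(frame, prev)
                         for a, b in zip(row_a, row_b))
            count += sum(len(row) for row in frame)
        prev = frame  # previous frame is dropped here
    return total / count if count else 0.0

still = [[[5, 5], [5, 5]]] * 4                 # static scene: nothing moves
moving = [[[0, 0], [0, 0]], [[9, 9], [9, 9]],  # alternating bright/dark grid
          [[0, 0], [0, 0]], [[9, 9], [9, 9]]]

print(motion_intensity(still))   # -> 0.0
print(motion_intensity(moving))  # -> 9.0
```

A real deployment would map such a per-region signal, via a trained model, onto accelerometer-calibrated intensity levels rather than using raw pixel differences.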

    A Novel Approach to Complex Human Activity Recognition

    Human activity recognition is a technology that offers automatic recognition of what a person is doing with respect to body motion and function. The main goal is to recognize a person's activity using different technologies such as cameras, motion sensors, location sensors, and time. Human activity recognition is important in many areas such as pervasive computing, artificial intelligence, human-computer interaction, health care, health outcomes, rehabilitation engineering, occupational science, and the social sciences. There are numerous ubiquitous and pervasive computing systems in which users' activities play an important role. Human activity carries a lot of information about context and helps systems achieve context-awareness. In the rehabilitation area, it helps with functional diagnosis and assessing health outcomes. Human activity recognition is an important indicator of participation, quality of life and lifestyle. There are two classes of human activities based on body motion and function. The first class, simple human activity, involves human body motion and posture, such as walking, running, and sitting. The second class, complex human activity, includes function along with simple human activity, such as cooking, reading, and watching TV. Human activity recognition is an interdisciplinary research area that has been active for more than a decade. Substantial research has been conducted to recognize human activities, but many major issues still need to be addressed. Addressing them would significantly improve applications of human activity recognition across these areas. Considerable research has been conducted on simple human activity recognition, whereas little research has been carried out on complex human activity recognition.
However, there are many key aspects (recognition accuracy, computational cost, energy consumption, mobility) that need to be addressed in both areas to improve their viability. This dissertation aims to address these key aspects in both areas of human activity recognition and ultimately focuses on recognition of complex activity. It also addresses indoor and outdoor localization, an important parameter, along with time, in complex activity recognition. This work studies accelerometer sensor data to recognize simple human activity, and uses time, location and the recognized simple activity to recognize complex activity.
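The layered formulation described above (simple activity from accelerometer data; complex activity from simple activity plus time and location) can be sketched as a rule table over context triples. The rules, locations, and activity labels below are illustrative assumptions, not the dissertation's learned model.

```python
# Sketch of the two-layer idea: a complex activity is inferred from an
# already-recognized simple activity plus time and location context.

RULES = [
    # (simple activity, location, [lo, hi) hour range) -> complex activity
    ("sitting",  "kitchen",     (7, 9),   "having breakfast"),
    ("standing", "kitchen",     (17, 20), "cooking"),
    ("sitting",  "living room", (19, 23), "watching TV"),
    ("walking",  "outdoors",    (0, 24),  "commuting or exercising"),
]

def complex_activity(simple, location, hour):
    """Return the first matching complex-activity label, else 'unknown'."""
    for s, loc, (lo, hi), label in RULES:
        if simple == s and location == loc and lo <= hour < hi:
            return label
    return "unknown"

print(complex_activity("sitting", "living room", 21))  # -> watching TV
print(complex_activity("standing", "kitchen", 18))     # -> cooking
print(complex_activity("running", "gym", 10))          # -> unknown
```

A learned model would replace this hand-written table, but the sketch shows why localization and time are treated as first-class inputs: the same simple activity (sitting) maps to different complex activities depending on where and when it occurs.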

    A compatibilist computational theory of mind

    This thesis defends the idea that the mind is essentially computational, a position that has in recent decades come under attack from theories that focus on bodily action and that view the mind as a product of interaction with the world, not as a set of secluded processes in the brain. The most prominent of these is the contemporary criticism coming from enactivism, a theory that argues that cognition is born not from internal processes but from dynamic interactions between brain, body and world. The radical version of enactivism in particular seeks to reject the idea of representational content, a key part of the computational theory of mind. To this end I propose a Compatibilist Computational Theory of Mind. This compatibilist theory incorporates embodied and embedded elements of cognition and also supports a predictive theory of perception, while maintaining the core beliefs of brain-centric computationalism: that our cognition takes place in our brain, not in bonds between brain and world, and that cognition involves manipulation of mental representational content. While maintaining the position that a computational theory of mind is the best model we have for understanding how the mind works, this thesis also reviews the various flaws and problems that the position has faced since its inception. Seeking to overcome these problems, as well as showing that computationalism is still perfectly compatible with contemporary action- and prediction-based research in cognitive science, the thesis argues that by revising the theory so that it can incorporate these new elements of cognition we arrive at a theory that is much stronger and more versatile than contemporary non-computational alternatives.