
    WoX+: A Meta-Model-Driven Approach to Mine User Habits and Provide Continuous Authentication in the Smart City

    The literature is rich in techniques and methods to perform Continuous Authentication (CA) using biometric data, both physiological and behavioral. As a recent trend, less invasive methods such as those based on context-aware recognition allow the continuous identification of the user by retrieving device and app usage patterns. However, a still uncovered research topic is to extend the concepts of behavioral and context-aware biometrics to take into account all the sensing data provided by the Internet of Things (IoT) and the smart city, in the shape of user habits. In this paper, we propose a meta-model-driven approach to mine user habits by means of a combination of IoT data incoming from several sources such as smart mobility, smart metering, smart home, wearables, and so on. Then, we use those habits to seamlessly authenticate users in real time throughout the smart city whenever the same behavior occurs in different contexts and with different sensing technologies. Our model, which we call WoX+, allows the automatic extraction of user habits using a novel Artificial Intelligence (AI) technique focused on high-level concepts. The aim is to continuously authenticate users using their habits as a behavioral biometric, independently of the sensing hardware involved. To prove the effectiveness of WoX+, we organized a quantitative and qualitative evaluation in which 10 participants told us about a spending habit they have involving the use of IoT. We chose the financial domain because it is ubiquitous, it is inherently multi-device, it is rich in time patterns, and, most of all, it requires secure authentication. With the aim of extracting the requirements of such a system, we also asked the cohort how they expect WoX+ to use such habits to securely automate payments and identify them in the smart city. We discovered that WoX+ satisfies most of the expected requirements, particularly in terms of the unobtrusiveness of the solution, in contrast with the limitations observed in existing studies. Finally, we used the responses given by the cohort to generate synthetic data and train our novel AI block. Results show that the error in reconstructing the habits is acceptable: a Mean Squared Error Percentage (MSEP) of 0.04%.
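
    The abstract reports the reconstruction error as an MSEP of 0.04% without stating the formula. As an illustration only, the Python sketch below computes one plausible reading of MSEP (the mean squared error normalized by the mean squared magnitude of the target, expressed as a percentage) on a hypothetical 7-bin weekly spending-habit vector; the habit encoding, the function, and all numbers are assumptions for illustration, not WoX+'s actual pipeline.

    import numpy as np

    def msep(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        # One plausible reading of Mean Squared Error Percentage:
        # MSE normalized by the mean squared magnitude of the target,
        # expressed as a percentage. (Assumed definition; the abstract
        # does not give the formula.)
        mse = np.mean((y_true - y_pred) ** 2)
        return float(100.0 * mse / np.mean(y_true ** 2))

    # Hypothetical example: a weekly spending habit encoded as a 7-bin
    # vector (fraction of weekly spend per weekday) and a slightly noisy
    # reconstruction of it, standing in for the model's output.
    habit_true = np.array([0.10, 0.05, 0.05, 0.10, 0.30, 0.25, 0.15])
    noise = np.random.default_rng(0).normal(0.0, 0.002, 7)
    habit_reconstructed = habit_true + noise

    print(f"MSEP: {msep(habit_true, habit_reconstructed):.4f}%")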

    Data Behind Mobile Behavioural Biometrics – a Survey

    Behavioural biometrics are becoming more and more popular. It is hard to find a sensor embedded in a mobile/wearable device that cannot be exploited to extract behavioural biometric data. In this paper, we investigate the data behind behavioural biometrics and how this data is used in experiments, especially examining papers that introduce new datasets. We do not examine the performance achieved by the algorithms used, since a system's performance is enormously affected by the data used, its amount, and its quality. Altogether, 32 papers are examined, assessing how often they are cited, whether their databases are published, what modality of data is collected, and how the data is used. We offer a roadmap that should be taken into account when designing behavioural data collection and using collected data. We further look at the General Data Protection Regulation and its significance to scientific research in the field of biometrics. We conclude that there is a need for publicly available datasets with comprehensive experimental protocols, similar to those established in facial recognition.

    Practical, appropriate, empirically-validated guidelines for designing educational games

    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and game design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper will both focus educational game designers on the features of games that are genuinely useful for education and introduce a successful form of teaching with which this audience may not yet be familiar.

    Code offloading in opportunistic computing

    With the advent of cloud computing, applications are no longer tied to a single device: they can be migrated to a high-performance machine located in a distant data center. The key advantage is the enhancement of performance and, consequently, of the user experience. This activity is commonly referred to as computational offloading, and it has been strenuously investigated in past years. The natural candidate for computational offloading is the cloud, but recent results point out the hidden costs of cloud reliance in terms of latency and energy; Cuervo et al. illustrate the limitations of cloud-based computational offloading due to WAN latency times. This dissertation confirms the results of Cuervo et al. and illustrates more use cases where the cloud may not be the right choice. The dissertation addresses the following question: is it possible to build a novel approach to offloading computation that overcomes the limitations of the state of the art? In other words, is it possible to create a computational offloading solution that is able to use local resources when the cloud is not usable, and to remove the strong bond with the local infrastructure? To this end, I propose a novel paradigm for computation offloading named anyrun computing, whose goal is to use any piece of higher-end hardware (locally or remotely accessible) to offload a portion of the application. With anyrun computing I removed the boundaries that tie the solution to an infrastructure by adding locally available devices to increase the chances of successful offloading. To achieve the goals of the dissertation, it is fundamental to have a clear view of all the steps that take part in the offloading process. To this end, I first provide a categorization of these activities, combined with their interactions, and assess their impact on the system. The outcome of the analysis is the mapping of the problem to a combinatorial optimization problem, which is notoriously NP-hard. There is a set of well-known approaches to solving such problems, but in this scenario they cannot be used, because they require a global view that can only be maintained by a centralized infrastructure. Thus, local solutions are needed. Moving further, to empirically tackle the anyrun computing paradigm, I propose the anyrun computing framework (ARC), a novel software framework whose objective is to decide whether offloading to any resource-rich device willing to lend assistance is advantageous compared to local execution, with respect to a rich array of performance dimensions. The core of ARC is the inference model, which receives a rich set of information about the available remote devices from the SCAMPI opportunistic computing framework (developed within the European project SCAMPI) and employs that information to profile a given device; in other words, it decides whether offloading is advantageous compared to local execution, i.e., whether it can reduce the local footprint in the dimensions of interest (CPU and RAM usage, execution time, and energy consumption). To empirically evaluate ARC, I present a set of experimental results in the cloud, cloudlet, and opportunistic domains. In the cloud domain, I used the state of the art in cloud solutions over a set of significant benchmark problems and with three WAN access technologies (i.e., 3G, 4G, and high-speed WAN). The main outcome is that the cloud is an appealing solution for a wide variety of problems, but there is a set of circumstances where the cloud performs poorly. Moreover, I have empirically shown the limitations of cloud-based approaches: in some circumstances, problems with high transmission costs tend to perform poorly unless they have high computational needs. The second part of the evaluation is done in opportunistic/cloudlet scenarios, where I used my custom-made testbed to compare ARC with MAUI, the state of the art in computation offloading. To this end, I performed two distinct experiments: the first in a cloudlet environment and the second in an opportunistic environment. The key outcome is that ARC virtually matches the performance of MAUI (in terms of energy savings) in the cloudlet environment, but improves on it by 50% to 60% in the opportunistic domain.
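
    The decision at the core of ARC, whether offloading to a given device reduces the local footprint once transmission costs are included, can be illustrated with a toy cost comparison. The Python sketch below is a minimal illustration under assumed names and numbers (DeviceProfile, should_offload, the weights, and the 3G figures are all hypothetical); it is not ARC's inference model or API.

    from dataclasses import dataclass

    @dataclass
    class DeviceProfile:
        # Estimated costs of running a given task via this device.
        # All fields are hypothetical stand-ins for the kind of
        # profiling information the dissertation describes.
        exec_time_s: float        # expected execution time
        energy_j: float           # expected energy drawn from the local battery
        tx_time_s: float = 0.0    # time to ship code/state over the link
        tx_energy_j: float = 0.0  # energy spent on transmission

    def should_offload(local: DeviceProfile, remote: DeviceProfile,
                       w_time: float = 0.5, w_energy: float = 0.5) -> bool:
        # Offload only if the remote total cost (including transmission)
        # beats local execution under a weighted time/energy objective.
        # This mirrors the observation above: high transmission costs pay
        # off only when computational needs are high.
        local_cost = w_time * local.exec_time_s + w_energy * local.energy_j
        remote_cost = (w_time * (remote.exec_time_s + remote.tx_time_s)
                       + w_energy * (remote.energy_j + remote.tx_energy_j))
        return remote_cost < local_cost

    # Hypothetical numbers: a compute-heavy task over a slow 3G link.
    local = DeviceProfile(exec_time_s=12.0, energy_j=40.0)
    cloud_3g = DeviceProfile(exec_time_s=1.5, energy_j=2.0,
                             tx_time_s=6.0, tx_energy_j=15.0)
    print(should_offload(local, cloud_3g))  # True: compute gain outweighs link cost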

    SHELDON: Smart habitat for the elderly

    An insightful document on active and assisted living from different perspectives: furniture and habitat, ICT solutions, and healthcare.