Usability of a telehealth solution based on TV interaction for the elderly: the VITASENIOR-MT case study
Remote monitoring of biometric data is an important asset for improving the quality of life and level of independence of elderly people living alone. However, the design and implementation of health technology solutions often disregard the physiological and psychological abilities of the elderly, leading to low adoption of these technologies. We evaluate the usability of VITASENIOR-MT, a remote patient monitoring solution based on interaction with a television set. Twenty senior participants (over 64 years) and a control group of 20 participants underwent systematic tests with the health platform and assessed its usability through several questionnaires. The elderly participants rated the usability of the platform highly, very close to the evaluation given by the control group. Sensory, motor and cognitive limitations were the issues that contributed most to the difference in usability assessment between the elderly group and the control group. The solution showed high usability and acceptance regardless of age, digital literacy, education and impairments (sensory, motor and cognitive), which demonstrates its viability for use and implementation as a consumer product in the senior market. This work was financially supported by the Portuguese Foundation for Science and Technology (FCT) and European funds through project VITASENIOR-MT under grant CENTRO-01-0145-FEDER-023659.
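The abstract does not name the usability questionnaires used, but a widely used instrument for studies like this is the System Usability Scale (SUS). As a purely illustrative sketch (the responses below are invented, not the study's data), SUS scoring works as follows:

```python
# Illustrative scoring of the System Usability Scale (SUS), a common
# ten-item usability questionnaire. The study's actual questionnaires
# and responses are not specified; this example is hypothetical.

def sus_score(responses):
    """Compute the SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and
    contribute (5 - response). The sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# A participant answering 4 to every positive item and 2 to every
# negative item scores 75, generally read as above-average usability.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Comparing mean scores between the senior group and the control group, as the study does, then reduces to comparing two such per-group averages.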
Design and Optimization of Mobile Cloud Computing Systems with Networked Virtual Platforms
A Mobile Cloud Computing (MCC) system is a cloud-based system that is accessed by the users through their own mobile devices. MCC systems are emerging as the product of two technology trends: 1) the migration of personal computing from desktop to mobile devices and 2) the growing integration of large-scale computing environments into cloud systems. Designers are developing a variety of new mobile cloud computing systems. Each of these systems is developed with different goals and under the influence of different design constraints, such as high network latency or limited energy supply.
Current MCC systems rely heavily on Computation Offloading, which, however, incurs new problems such as limited scalability of the cloud, privacy concerns due to storing personal information in the cloud, and high energy consumption in cloud data centers. In this dissertation, I address these problems by exploring different options for distributing computation across the computing nodes of MCC systems. My thesis is that "the use of design and simulation tools optimized for design space exploration of MCC systems is the key to optimizing the distribution of computation in MCC."
For a quantitative analysis of mobile cloud computing systems through design space exploration, I have developed netShip, the first generation of an innovative design and simulation tool, that offers large scalability and heterogeneity support. With this tool system designers and software programmers can efficiently develop, optimize, and validate large-scale, heterogeneous MCC systems. I have enhanced netShip to support the development of ever-evolving MCC applications with a variety of emerging needs including the fast simulation of new devices, e.g., Internet-of-Things devices, and accelerators, e.g., mobile GPUs. Leveraging netShip, I developed three new MCC systems where I applied three variations of a new computation distributing technique, called Reverse Offloading. By more actively leveraging the computational power on mobile devices, the MCC systems can reduce the total execution times, the burden of concentrated computations on the cloud, and the privacy concerns about storing personal information available in the cloud. This approach also creates opportunities for new services by utilizing the information available on the mobile device instead of accessing the cloud.
Throughout my research I have enabled the design optimization of mobile applications and cloud-computing platforms. In particular, my design tool for MCC systems becomes a vehicle to optimize not only the performance but also the energy dissipation, an aspect of critical importance for any computing system.
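The core idea behind Reverse Offloading, keeping work on the mobile device when that is cheaper overall, can be sketched as a placement decision driven by a simple cost model. The model and all parameter values below are illustrative assumptions, not netShip's actual API or calibration:

```python
# Hypothetical cost model for deciding where to run a task in an MCC
# system. Reverse Offloading favors the mobile device when network
# costs outweigh the cloud's compute advantage; numbers are invented.

def execution_time(work_cycles, cpu_hz, transfer_bytes=0,
                   bandwidth_bps=0, latency_s=0.0):
    """Total time = compute time + (optional) network transfer time."""
    t = work_cycles / cpu_hz
    if transfer_bytes:
        t += latency_s + transfer_bytes / bandwidth_bps
    return t

def choose_placement(work_cycles, input_bytes,
                     mobile_hz=2e9, cloud_hz=3e10,
                     bandwidth_bps=1e6, latency_s=0.05):
    """Return 'mobile' or 'cloud', whichever finishes the task sooner."""
    t_mobile = execution_time(work_cycles, mobile_hz)
    t_cloud = execution_time(work_cycles, cloud_hz,
                             input_bytes, bandwidth_bps, latency_s)
    return "mobile" if t_mobile <= t_cloud else "cloud"

# A light task over a large input stays on the device; a heavy task
# over a tiny input is worth shipping to the much faster cloud.
print(choose_placement(work_cycles=1e8, input_bytes=5_000_000))  # mobile
print(choose_placement(work_cycles=1e11, input_bytes=10_000))    # cloud
```

A simulator such as netShip lets designers sweep exactly these parameters (CPU speeds, bandwidth, latency) across many nodes rather than evaluating one closed-form model.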
Machine learning based botnet identification traffic
The continued growth of the Internet has been accompanied by increasingly sophisticated toolkits and methods for conducting computer attacks and intrusions, which are easy to use and publicly available to download, such as the Zeus botnet toolkit. Botnets are responsible for many cyber-attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most existing botnet toolkits release updates with new features, development and support, which presents challenges for the detection and prevention of bots. Current botnet detection approaches are mostly ineffective because botnets change their Command and Control (C&C) server structures, whether centralized (e.g., IRC, HTTP) or distributed (e.g., P2P), and use encryption to deter analysis. In this paper, based on real-world data sets, we present our preliminary research on predicting new bots before they launch their attacks. We propose a rich set of network traffic features within a Classification of Network Information Flow Analysis (CONIFA) framework to capture regularities in C&C communication channels and malicious traffic. We present a case study applying the approach to a popular botnet toolkit, Zeus. The experimental evaluation suggests that botnets can be detected effectively during the C&C communication generated by an updated Zeus toolkit, before they launch their attacks, by building a machine-learning classifier from the traffic behaviors of an earlier version. We also show that the C&C structures of various toolkit versions are similar, and that the network characteristics of botnet C&C traffic differ from those of legitimate network traffic. Such methods could reduce the resources needed to identify C&C communication channels and malicious traffic.
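The pipeline described above, extracting features from network flows and classifying new traffic with a model trained on labelled flows from an earlier toolkit version, can be sketched minimally. The actual CONIFA feature set is much richer; the three features, the nearest-centroid rule, and all flow data below are illustrative assumptions:

```python
# Illustrative sketch of flow-based botnet detection: extract simple
# per-flow statistics and classify with a nearest-centroid rule trained
# on labelled traffic. Features and flows are made-up examples, not
# CONIFA's feature set or real Zeus traffic.
from statistics import mean

def flow_features(pkt_sizes, duration_s):
    """A tiny feature vector: packet count, mean size, bytes/second."""
    total = sum(pkt_sizes)
    return (len(pkt_sizes), mean(pkt_sizes), total / duration_s)

def centroid(vectors):
    return tuple(mean(col) for col in zip(*vectors))

def classify(features, benign_c, botnet_c):
    """Label a flow by the closer class centroid (squared distance)."""
    d = lambda c: sum((f - x) ** 2 for f, x in zip(features, c))
    return "botnet" if d(botnet_c) < d(benign_c) else "benign"

# C&C beaconing tends toward small, regular packets at low rates,
# unlike bulk benign transfers (hypothetical training flows).
botnet_train = [flow_features([60, 62, 61, 60], 40.0),
                flow_features([58, 60, 59], 30.0)]
benign_train = [flow_features([1400, 1350, 900, 1420], 2.0),
                flow_features([1500, 1200, 1380], 1.5)]
bot_c, ben_c = centroid(botnet_train), centroid(benign_train)

new_flow = flow_features([61, 59, 60], 35.0)
print(classify(new_flow, ben_c, bot_c))  # botnet
```

The paper's key observation, that C&C structure is stable across toolkit versions, is what makes training on an old version and testing on a new one viable.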
Fenrir: Blockchain-based Inter-company App-Store for the Automotive Industry
From a software evolution perspective, more actors are joining the in-vehicle software development cycle. In this process, software deployment mechanisms must adopt more complex techniques to meet the software verification and traceability levels required by industry safety and security constraints. In this context, we propose Fenrir, a public inter-automaker blockchain-based application store framework in which each automaker retains control over software installability. This application store also aims to ensure traceability and security while keeping the solution light in terms of both energy consumption and computing requirements, so that it can be used in constrained environments. We implemented Fenrir on a heterogeneous architecture composed of both on-board nodes (bearing an ARM Cortex-A53 chipset already deployed in cars) and off-board nodes (Amazon EC2) for a realistic automotive use-case scenario, in which we evaluated its performance and energy consumption. We demonstrate that the overhead added by our solution to an entire software deployment pipeline, comprising both deployment and use of already-deployed software packages, depends mainly on the verification mechanism, whose impact is not significant: 3.8% in the worst-case scenario and 0.3% in a typical scenario.
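The traceability property a blockchain-based store provides can be illustrated with a minimal hash chain: each deployment record links to the previous one by hash, so any later modification is detectable. Fenrir's actual ledger, consensus mechanism, and record format are not described here; this structure is an illustrative assumption:

```python
# Minimal tamper-evident deployment log: each record embeds the hash
# of the previous record, so altering any entry breaks verification.
# Record fields and automaker names are invented for illustration.
import hashlib
import json

def add_record(chain, automaker, package, version, payload_digest):
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"automaker": automaker, "package": package,
              "version": version, "payload": payload_digest,
              "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    """Re-hash every record and check that the prev-links are intact."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "automakerA", "nav-app", "1.0", "sha256:abc...")
add_record(chain, "automakerB", "media-app", "2.1", "sha256:def...")
print(verify(chain))          # True
chain[0]["version"] = "9.9"   # tampering breaks verification
print(verify(chain))          # False
```

Verification of this kind is exactly the step the paper measures: recomputing hashes is cheap, which is consistent with the small reported overheads.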
Cyber-offenders versus traditional offenders: An empirical comparison
Bernasco, W. [Promotor]; Ruiter, S. [Promotor]; Gelder, J.-L. van [Copromotor]
Deliverable D8.2 First market analysis
This deliverable provides an overview of a first analysis of the IPTV market. It points out possible customers and competitors, and the differences between LinkedTV and its competitors.
From the Smart City to the Smart Community, Model and Architecture of a Real Project: SensorNet
This article presents a conceptual, architectural and organizational model for the realization of a smart city, based on a holistic paradigm as its cornerstone and on new technologies as its enabling tools. The model rests on the integration of data belonging to different systems through a middleware that retrieves data from various sources and stores them in a standard format in a new centralized database. The article also illustrates a real project, the integration of different sensor networks for environmental monitoring, that exemplifies and implements the main topics discussed. The issues related to its "governance" are also highlighted, not only from a strategic point of view but also, and above all, from the perspective of its maintenance, an important and crucial feature for its "survival" over time.
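The middleware idea described above, readings arriving from heterogeneous sensor networks in different shapes being normalized into one standard record before central storage, can be sketched as follows. The source names, payload fields, and standard schema are invented for illustration; the article does not specify SensorNet's actual formats:

```python
# Sketch of a normalization middleware: each heterogeneous source
# payload is mapped to one standard record shape before storage.
# All field names and source formats below are hypothetical.

def normalize(source, raw):
    """Map a source-specific payload to the standard record format."""
    if source == "air_quality":   # hypothetical vendor payload
        return {"sensor_id": raw["id"], "quantity": "pm10",
                "value": raw["pm10_ugm3"], "unit": "ug/m3",
                "timestamp": raw["ts"]}
    if source == "weather":       # hypothetical vendor payload
        return {"sensor_id": raw["station"], "quantity": "temperature",
                "value": raw["temp_c"], "unit": "degC",
                "timestamp": raw["time"]}
    raise ValueError(f"unknown source: {source}")

db = []  # stand-in for the centralized database
db.append(normalize("air_quality",
                    {"id": "AQ-7", "pm10_ugm3": 31.5,
                     "ts": "2020-01-01T10:00Z"}))
db.append(normalize("weather",
                    {"station": "WS-2", "temp_c": 18.2,
                     "time": "2020-01-01T10:00Z"}))
print(sorted(db[0]) == sorted(db[1]))  # True: one shared schema
```

Keeping the per-source logic in one adapter layer is what lets new sensor networks be added without changing the central database schema, the integration property the article emphasizes.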