
    Opportunistic human activity and context recognition

    Although the Internet of Things allows seamless access to billions of sensors readily deployed throughout the world, current context- and activity-recognition approaches restrict ambient intelligence to domains where dedicated sensors are deployed. The big data delivered by the Internet of Things calls for a new, opportunistic recognition paradigm: instead of setting up information sources for a specific recognition goal, the methods themselves adapt to the data available at any time. We present enabling methods that allow for opportunistic recognition in dynamic sensor configurations. This could be the missing link to fulfill the promise of ambient intelligence anywhere.
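The opportunistic idea, recognizers that use whatever sensors happen to be available rather than a fixed setup, can be illustrated with a minimal sketch. The sensor names, activity labels, and probability values below are invented for illustration; this is not the authors' method, just a naive product-of-experts fusion over currently reporting sensors:

```python
import numpy as np

ACTIVITIES = ["walk", "sit", "stand"]

def fuse(predictions):
    """predictions: dict sensor_id -> probability vector over ACTIVITIES.
    Absent sensors simply contribute nothing; the ensemble renormalizes
    over whatever evidence is available right now."""
    if not predictions:
        # No sensors at all: fall back to an uninformed prior.
        return np.ones(len(ACTIVITIES)) / len(ACTIVITIES)
    combined = np.ones(len(ACTIVITIES))
    for probs in predictions.values():
        combined *= np.asarray(probs)   # naive-Bayes style product
    return combined / combined.sum()

# A wrist IMU and a phone accelerometer report; a third sensor dropped out.
available = {
    "wrist_imu": [0.7, 0.2, 0.1],
    "phone_acc": [0.6, 0.3, 0.1],
}
posterior = fuse(available)
print(ACTIVITIES[int(np.argmax(posterior))])  # prints: walk
```

The same code keeps working as sensors appear and disappear, which is the essence of adapting the method to the data available at any time.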

    Modeling IoT-Based Solutions Using Human-Centric Wireless Sensor Networks

    The Internet of Things (IoT) has inspired solutions that are already available for addressing problems in various application scenarios, such as healthcare, security, emergency support and tourism. However, there is no clear approach to modeling these systems and envisioning their capabilities at design time. Therefore, the process of designing these systems is ad hoc, and their real impact is evaluated only once the solution has been implemented, which is risky and expensive. This paper proposes a modeling approach that uses human-centric wireless sensor networks to specify and evaluate models of IoT-based systems at design time, avoiding the need to spend time and effort on early implementations of immature designs. It allows designers to focus on the system design, leaving implementation decisions for a later phase. The article illustrates the usefulness of this proposal through a running example showing the design of an IoT-based solution to support the first response during medium-sized or large urban incidents. The case study used in the proposal evaluation is based on a real train crash. The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes.
    Monares, Álvaro (Universidad de Chile, Chile); Ochoa, Sergio F. (Universidad de Chile, Chile); Santos, Rodrigo Martin (CONICET, Instituto de Investigación en Ingeniería Eléctrica / Universidad Nacional del Sur, Argentina); Orozco, Javier Dario (CONICET, Instituto de Investigación en Ingeniería Eléctrica / Universidad Nacional del Sur, Argentina); Meseguer, Roc (Universitat Politècnica de Catalunya, Spain)

    Engineering Pervasive Service Ecosystems: The SAPERE approach

    Emerging pervasive computing services will typically involve a large number of devices and service components cooperating together in an open and dynamic environment. This calls for suitable models and infrastructures promoting spontaneous, situated, and self-adaptive interactions between components. SAPERE (Self-Aware Pervasive Service Ecosystems) is a general coordination framework aimed at facilitating the decentralized and situated execution of self-organizing and self-adaptive pervasive computing services. SAPERE adopts a nature-inspired approach, in which pervasive services are modeled and deployed as autonomous individuals in an ecosystem of other services and devices, all of which interact in accordance with a limited set of coordination laws, or eco-laws. In this article, we present the overall rationale underlying SAPERE and its reference architecture. We introduce the eco-law-based coordination model and show how it can be used to express and easily enforce general-purpose self-organizing coordination patterns. The middleware infrastructure supporting the SAPERE model is presented and evaluated, and the overall advantages of SAPERE are discussed in the context of exemplary use cases.
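The eco-law idea, components coordinating only through rewrite rules applied to annotations in a shared space, can be sketched in a few lines. The annotation schema, the `decay` and `aggregate` rules, and all values below are invented for illustration and are not the SAPERE middleware API:

```python
# A shared space of annotations, as published by sensors/services.
space = [
    {"type": "temp", "value": 21.0, "ttl": 3},
    {"type": "temp", "value": 23.0, "ttl": 3},
    {"type": "temp", "value": 22.0, "ttl": 0},   # expired annotation
]

def decay(space):
    """Eco-law: drop annotations whose time-to-live has reached zero."""
    return [a for a in space if a["ttl"] > 0]

def aggregate(space, kind):
    """Eco-law: merge all annotations of one kind into a single mean value."""
    same = [a for a in space if a["type"] == kind]
    rest = [a for a in space if a["type"] != kind]
    if len(same) < 2:
        return space
    mean = sum(a["value"] for a in same) / len(same)
    ttl = max(a["ttl"] for a in same)
    return rest + [{"type": kind, "value": mean, "ttl": ttl}]

# One coordination step: the laws fire against the space, not against
# each other, so components never need to address peers directly.
space = aggregate(decay(space), "temp")
print(space)  # a single aggregated temperature annotation
```

No component calls another; self-organization emerges from repeatedly applying the laws, which is the coordination style the abstract describes.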

    IoT-based Architectures for Sensing and Local Data Processing in Ambient Intelligence: Research and Industrial Trends

    This paper presents an overview of new-generation technologies based on the Internet of Things (IoT) and Ambient Intelligence (AmI), which create smart environments that respond intelligently to the presence of people by collecting data from sensors, aggregating measurements, and extracting knowledge to support daily activities, perform proactive actions, and improve the quality of life. Recent advances in miniaturized instrumentation, general-purpose computing architectures, advanced communication networks, and non-intrusive measurement procedures are enabling the introduction of IoT and AmI technologies in a wider range of applications. To efficiently process the large quantities of data collected in recent AmI applications, many architectures use remote cloud computing, either for data storage or for faster computation. However, local data processing architectures are often preferred over cloud computing in the case of privacy-sensitive or time-critical applications. To highlight recent advances of AmI environments for these applications, in this paper we focus on the technologies, challenges, and research trends in new-generation IoT-based architectures requiring local data processing techniques, with specific attention to smart homes, intelligent vehicles, and healthcare.

    Decoding Neural Correlates of Cognitive States to Enhance Driving Experience

    Modern cars can support their drivers by assessing and autonomously performing different driving maneuvers based on information gathered by in-car sensors. We propose that brain-machine interfaces (BMIs) can provide complementary information that can ease the interaction with intelligent cars in order to enhance the driving experience. In our approach, the human remains in control, while a BMI is used to monitor the driver's cognitive state and use that information to modulate the assistance provided by the intelligent car. In this paper, we gather our proof-of-concept studies demonstrating the feasibility of decoding electroencephalography correlates of upcoming actions, and of decoding those reflecting whether the decisions of driving assistant systems are in line with the driver's intentions. Experimental results while driving both simulated and real cars consistently showed neural signatures of anticipation, movement preparation, and error processing. Remarkably, despite the increased noise inherent to real scenarios, these signals can be decoded on a single-trial basis, reflecting some of the cognitive processes that take place while driving. However, moderate decoding performance compared to controlled experimental BMI paradigms indicates there is room for improvement in the machine learning methods typically used in state-of-the-art BMIs. We foresee that fusing neural correlates with information extracted from other physiological measures (e.g., eye movements or electromyography), as well as with contextual information gathered by in-car sensors, will allow intelligent cars to provide timely and tailored assistance only when it is required, thus keeping the user in the loop and allowing them to fully enjoy the driving experience.
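Single-trial decoding of the kind described above amounts to classifying each EEG epoch on its own, without averaging over trials. The sketch below is not the authors' pipeline: it uses simulated trials, a simple log-variance (band-power style) feature, and a nearest-class-mean rule, all chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature(trial):
    """Log-variance of the epoch, a common band-power style EEG feature."""
    return np.log(np.var(trial))

# Simulated training epochs: error-related trials carry more signal power
# (a stand-in assumption; real error potentials differ in shape, not just power).
correct = [rng.normal(0, 1.0, 256) for _ in range(50)]
errors = [rng.normal(0, 2.0, 256) for _ in range(50)]
mu_c = np.mean([feature(t) for t in correct])
mu_e = np.mean([feature(t) for t in errors])

def decode(trial):
    """Classify one epoch by the nearest class mean in feature space."""
    f = feature(trial)
    return "error" if abs(f - mu_e) < abs(f - mu_c) else "correct"

print(decode(rng.normal(0, 2.0, 256)))  # a high-power epoch
```

Real pipelines replace the toy feature and classifier with spatial filtering and regularized linear models, but the per-epoch decision structure is the same.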

    Mobile Activity Recognition for a Whole Day: Recognizing Real Nursing Activity with a Big Dataset

    In this paper, we provide a real nursing dataset for mobile activity recognition that can be used for supervised machine learning, together with big data combining patient medical records and sensor data collected over two years, and we also propose a method for recognizing activities over a whole day by utilizing prior knowledge about the activity segments in a day. Furthermore, we demonstrate data mining by applying our method to the bigger data with additional hospital data. In the proposed method, we 1) convert a set of segment timestamps into a prior probability of the activity segment by exploiting the concept of importance sampling, 2) obtain the likelihood of traditional recognition methods for each local time window within the segment range, and 3) apply Bayesian estimation by marginalizing the conditional probability of estimating the activities over the segment samples. Evaluated on the dataset, the proposed method outperformed the traditional method without prior knowledge by up to 25.81% in balanced classification rate. Moreover, the proposed method significantly reduces the duration error of activity segments, from 324.2 seconds with the traditional method to at most 74.6 seconds. We also demonstrate data mining by applying our method to bigger data in a hospital.
    2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2015), Sep. 7-11, Grand Front Osaka, Umeda, Osaka, Japan
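The three-step procedure can be sketched as a Bayes update per time window. This is a simplification, not the authors' code: the activity labels, the prior, and the likelihood values are invented, and the importance-sampling step is reduced to a fixed prior vector over activities for the current segment:

```python
import numpy as np

ACTIVITIES = ["vital_check", "medication", "documentation"]

# Step 1 (stand-in): a prior over activities for this time-of-day segment,
# e.g. estimated from historical segment timestamps.
prior = np.array([0.6, 0.3, 0.1])

# Step 2: likelihoods from a conventional window-level recognizer,
# one row per local time window within the segment range.
likelihood = np.array([
    [0.3, 0.5, 0.2],
    [0.4, 0.4, 0.2],
    [0.2, 0.6, 0.2],
])

# Step 3: Bayes' rule per window, P(a | x) proportional to P(x | a) * P(a).
posterior = likelihood * prior
posterior /= posterior.sum(axis=1, keepdims=True)

labels = [ACTIVITIES[i] for i in posterior.argmax(axis=1)]
print(labels)
```

Note how the segment prior overturns the recognizer's raw vote in the first two windows, which is exactly how prior knowledge about daily activity segments can correct a weak window-level classifier.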

    Human Activity Recognition Using Deep Learning on Video Images and a Multimodal Dataset

    Human Activity Recognition (HAR) has garnered a lot of attention due to the growing demand for video analysis applied to the medical field. However, predicting activities in a video sequence is not trivial, since numerous factors, such as lighting or the viewpoint, affect recognition. The purpose of this work is to carry out Human Activity Recognition using Deep Learning, more specifically, through a neural network that classifies image sequences: 3D convolutional layers extract image features, and residual blocks mitigate the vanishing-gradient problem observed in networks with a large number of layers. Previous works have estimated poses in the same video sequences, and have also carried out HAR through Deep Learning using data acquired from sensors. Due to the growing popularity of optical capture systems for data acquisition, a large number of benchmark datasets have emerged. Nevertheless, this work focuses on the recognition of activities relevant to the medical field; consequently, the dataset employed is the one acquired by the research group. Accordingly, 13 activities carried out by 37 different subjects have been classified. The network was trained both from scratch and via transfer learning from a previously trained model; using a pre-trained model allows the network to reach convergence faster, saving computational cost. In addition, the results exhibit the limitations of recognizing data from optical capture systems, such as the difficulty of classifying activities with reduced movement or bimanual activities.
    Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática. Grado en Ingeniería de Tecnologías de Telecomunicación
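The residual-block idea the abstract relies on, computing F(x) + x so that every block has an identity path for gradients, can be shown without a deep-learning framework. The sketch below is a NumPy stand-in, not the thesis's 3D CNN: a single linear layer plays the role of the convolutions:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, weight):
    """Output is relu(F(x) + x), with one linear layer standing in
    for the block's stack of 3D convolutions."""
    fx = relu(weight @ x)        # the learned transformation F(x)
    return relu(fx + x)          # skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.normal(size=4)
w_zero = np.zeros((4, 4))        # an untrained (all-zero) layer
y = residual_block(x, w_zero)

# With F(x) == 0 the block reduces to relu(x): signal still flows
# through the identity path even before the layer has learned anything.
print(np.allclose(y, relu(x)))   # True
```

This identity path is what keeps gradients from vanishing in very deep stacks, which is why the thesis uses residual blocks for its many-layer 3D network.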

    Adaptive Augmented Reality (Realidad Aumentada Adaptativa)

    Recent advances presented by companies such as Google, with proposals like Google Glass that make use of Adaptive Augmented Reality, are considered emerging technologies. The existence of ubiquitous and mobile devices, together with the large number of available sensors, endows software with the ability to perceive its environment and adapt to it and to the application user in real time. Adaptive Augmented Reality can take advantage of these mechanisms, so this work surveys the state of the art in user-adaptive systems, paying special attention to Web adaptability. The central problem addressed is to answer the following questions: What is Adaptive Augmented Reality? How do user-adaptive systems work, and which ones exist? Which user characteristics are relevant for adaptation? Which models does adaptability require? What do current adaptive web systems need in order to adjust to the user's needs? What else is required for an intelligently adaptive reality? Which research projects exist, and what are their architectures and models? In light of the answers obtained, a series of possible research lines is proposed at the end of the work.