8 research outputs found

    A mobile virtual character with emotion-aware strategies for human-robot interaction

    Emotions may play an important role in human-robot interaction, especially with social robots. Although the emotion recognition problem has been studied extensively, little research has investigated interaction strategies produced in response to inferred emotional states. The work described in this paper consists of conceiving and evaluating a dynamic in which, according to the user's emotional state inferred through facial expression analysis, two distinct interaction strategies are associated with a virtual character. An Android app, whose development is in progress, aggregates the user interface and interactive features. We performed user experiments to evaluate whether the proposed dynamic is effective in producing more natural and empathic interaction. FAPESP (São Paulo State Research Support Foundation) (grant 2014/16862-4)
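The core dynamic described above, switching between two pre-defined interaction strategies according to the inferred emotional state, can be sketched as a simple mapping. The emotion labels and strategy names below are illustrative assumptions, not the ones used in the paper.

```python
# Hedged sketch: select one of two pre-defined interaction strategies from an
# inferred emotional state. Labels and strategy names are hypothetical.
NEGATIVE_EMOTIONS = {"sad", "angry", "fearful"}

def choose_strategy(inferred_emotion):
    """Return the interaction strategy associated with the inferred emotion."""
    if inferred_emotion in NEGATIVE_EMOTIONS:
        return "empathic"   # comfort the user when a negative state is inferred
    return "playful"        # default strategy for neutral/positive states
```

In practice the input would come from a facial-expression classifier rather than a string, but the arbitration step reduces to a lookup of this kind.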

    Reconhecimento de atividades e abordagens bioinspiradas para robótica em ambientes inteligentes

    No full text
    Home automation projects have been developed for some time, having evolved into the so-called smart environments. These environments are characterised by the presence of sets of sensors and actuators, connected in order to respond appropriately and proactively to different situations. The integration of intelligent environments with robots allows for the introduction of additional sensing capabilities, besides performing tasks with greater flexibility and less mechanical complexity than traditional monolithic robots. To endow such environments with truly autonomous behaviours, algorithms must extract semantically meaningful information from whichever sensor data is available. Human activity recognition is one of the most active fields of research within this context. This project addressed the design and evaluation of learning techniques for human activity recognition, considering different sensor modalities. Two types of neural networks, based on combinations of Convolutional Neural Networks with either Recurrent Networks with Long Short-Term Memory or Temporal Convolutional Networks, were proposed and evaluated on two public datasets for multimodal activity recognition from videos and inertial sensors. The resulting framework was then applied to a new dataset, the HWU-USP activities dataset, collected as part of this work in an actual environment endowed with videos, inertial units, and ambient sensors. This design allowed for assessing the influence of ambient sensors, synchronised with the inertial and video data, on the accuracy of the results, which proved to be a promising approach. Also, the new dataset provided complex activities with long-term dependencies, evaluated through segment-wise classifiers simulating the results for real-time applications. Subsequently, studies were conducted on neurophysiological data from primates with induced Parkinson's disease. These studies ranged from data analysis and classification, using neural networks, to the construction of a computational model of the affected structures within the brain. Although different from the studies on activity recognition and assistive technologies, which were the focus of this thesis, these works were related in the nature of the techniques used, and their results were part of the application scenario developed next. Finally, an application scenario was designed and implemented as a robot simulation, so that the developed module could be evaluated in practical situations. For the behaviour selection mechanism, a bioinspired approach based on computational models of the basal ganglia-thalamus-cortex circuit was evaluated and compared to non-bioinspired approaches based on simple heuristics.
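The temporal-convolution side of the architectures mentioned above can be sketched in a few lines: per-frame features pass through a causal 1-D convolution over time (the basic TCN building block), are pooled, and feed a softmax classifier. This NumPy sketch is illustrative only; the kernel, feature dimensions and class count are assumptions, not the thesis' actual networks.

```python
import numpy as np

def causal_temporal_conv(frames, kernel):
    """Causal 1-D convolution over time, the building block of a TCN:
    the output at step t depends only on steps <= t."""
    T, D = frames.shape
    out = np.zeros((T, D))
    for t in range(T):
        for i, w in enumerate(kernel):
            if t - i >= 0:
                out[t] += w * frames[t - i]
    return out

def classify_sequence(frames, class_weights):
    """Temporal conv -> mean pooling over time -> linear layer -> softmax."""
    pooled = causal_temporal_conv(frames, kernel=[0.5, 0.3, 0.2]).mean(axis=0)
    logits = class_weights @ pooled
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 8))   # 30 time steps of 8-D per-frame features
W = rng.normal(size=(5, 8))         # linear classifier over 5 activity classes
probs = classify_sequence(frames, W)
```

In the real models the per-frame features would come from a CNN backbone and the temporal block would be learned, but the data flow is the same.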

    Environment based on emotion recognition for human-robot interaction

    No full text
    In computer science, the development of interactive environments has motivated the study of emotions, especially in the context of mobile devices. Research in human-robot interaction has explored emotions to create natural interaction experiences with social robots. A fertile line of work consists of practical approaches in which changes in the personality of an artificial system are driven by modifications in the user's inferred emotional state. This project proposes an environment for emotion-based human-robot interaction on the Android platform, with a dedicated module responsible for recognizing emotions by analyzing facial expressions. The system consisted of a virtual agent aggregated to an application, which used information from the emotion recognizer to adapt its interaction strategy, alternating between two pre-defined discrete paradigms. In the experiments performed, the proposed approach tended to produce more empathy than a control condition; however, this result was observed only in sufficiently long interactions.

    Sistema para navegação de robôs móveis e interação humano-robô baseada em comandos de voz

    No full text
    This project comprises an interactive mobile robotics environment focused on human-robot interaction. The system was developed to run on a smartphone, with the Android operating system, embedded in a small mobile robot. Information provided by the smartphone's camera and microphone, as well as by proximity sensors embedded in the robot, is used as input to a control architecture implemented in software. It is a behavior-based control architecture, receptive to human commands, that assists the robot's navigation: the robot is controlled by its own behaviors or by commands emitted by humans.
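The arbitration between autonomous behaviors and human voice commands described above can be sketched as a fixed-priority selector. The sensor threshold and action names are hypothetical, chosen only to illustrate the priority ordering.

```python
# Hedged sketch of behavior-based arbitration with human command override.
# The 20 cm threshold and the action names are illustrative assumptions.
def select_action(proximity_cm, voice_command=None):
    """Pick the robot's next action from sensor input and an optional command."""
    if voice_command is not None:   # a recognized human command takes priority
        return voice_command
    if proximity_cm < 20:           # obstacle-avoidance behavior
        return "turn"
    return "forward"                # default navigation behavior
```

A real behavior-based architecture would run several such behaviors concurrently, but the priority ordering (human command, then safety, then default motion) reduces to this kind of selection.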

    Memory-Based Pruning of Deep Neural Networks for IoT Devices Applied to Flood Detection

    No full text
    Automatic flood detection may be an important component for triggering damage control systems and minimizing the risk of social or economic impacts caused by flooding. Riverside images from regular cameras are a widely available resource that can be used for tackling this problem. Nevertheless, state-of-the-art neural networks, the most suitable approach for this type of computer vision task, are usually resource-consuming, which poses a challenge for deploying these models within low-capability Internet of Things (IoT) devices with unstable internet connections. In this work, we propose a deep neural network (DNN) architecture pruning algorithm capable of finding a pruned version of a given DNN within a user-specified memory footprint. Our results demonstrate that our proposed algorithm can find a pruned DNN model with the specified memory footprint with little to no degradation of its segmentation performance. Finally, we show that our algorithm can be used in a memory-constrained wireless sensor network (WSN) employed to detect flooding events of urban rivers, and the resulting pruned models have competitive results compared with the original models.
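A greedy, magnitude-based variant of budget-constrained pruning can be sketched as follows: rank the filters of a layer by a saliency proxy and keep only as many as fit within the user-specified byte budget. The L1-norm criterion, shapes and 4-byte weights are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def prune_to_budget(weights, budget_bytes, bytes_per_weight=4):
    """Keep the highest-magnitude filters that fit within budget_bytes.

    weights: (n_filters, weights_per_filter) array for one layer.
    """
    per_filter_bytes = weights.shape[1] * bytes_per_weight
    max_filters = budget_bytes // per_filter_bytes
    # Rank filters by L1 norm, strongest first (a common saliency proxy).
    order = np.argsort(-np.abs(weights).sum(axis=1))
    kept = np.sort(order[:max_filters])        # preserve original filter order
    return weights[kept]

rng = np.random.default_rng(1)
layer = rng.normal(size=(64, 128))             # 64 filters, 128 weights each
pruned = prune_to_budget(layer, budget_bytes=16 * 128 * 4)
```

After pruning, the model would normally be fine-tuned to recover any lost segmentation accuracy; the sketch only shows how a hard memory budget maps to a filter count.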

    Activity Recognition for Ambient Assisted Living with Videos, Inertial Units and Ambient Sensors

    No full text
    Worldwide demographic projections point to a progressively older population. This fact has fostered research on Ambient Assisted Living, which includes developments in smart homes and social robots. To endow such environments with truly autonomous behaviours, algorithms must extract semantically meaningful information from whichever sensor data is available. Human activity recognition is one of the most active fields of research within this context. Proposed approaches vary according to the input modality and the environments considered. Unlike previous work, this paper addresses the problem of recognising heterogeneous activities of daily living centred in home environments, considering simultaneously data from videos, wearable IMUs and ambient sensors. For this, two contributions are presented. The first is the creation of the Heriot-Watt University/University of Sao Paulo (HWU-USP) activities dataset, which was recorded at the Robotic Assisted Living Testbed at Heriot-Watt University. This dataset differs from other multimodal datasets in that it consists of daily living activities with either periodical patterns or long-term dependencies, captured in a very rich and heterogeneous sensing environment. In particular, it combines data from a humanoid robot's RGBD (RGB + depth) camera with inertial sensors from wearable devices and ambient sensors from a smart home. The second contribution is a Deep Learning (DL) framework that provides multimodal activity recognition based on videos, inertial sensors and ambient sensors, on their own or fused with each other. The classification framework was also validated on our dataset and on the University of Texas at Dallas Multimodal Human Activities Dataset (UTD-MHAD), a widely used benchmark for activity recognition based on videos and inertial sensors, providing a comparative analysis between the results on the two datasets considered. Results demonstrate that the introduction of data from ambient sensors markedly improved the accuracy results.
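One common way to fuse modalities of the kind described above is late (decision-level) fusion, where each modality's classifier outputs class probabilities that are averaged into a single decision. The sketch below is illustrative, with made-up probability vectors; it is not the paper's exact fusion scheme.

```python
import numpy as np

def late_fusion(per_modality_probs, weights=None):
    """Weighted average of per-modality class-probability vectors."""
    mods = list(per_modality_probs)
    if weights is None:                      # default: equal modality weights
        weights = {m: 1.0 / len(mods) for m in mods}
    fused = sum(weights[m] * np.asarray(per_modality_probs[m]) for m in mods)
    return fused / fused.sum()               # renormalise for safety

# Illustrative 3-class outputs; ambient evidence shifts the final decision.
fused = late_fusion({
    "video":    [0.6, 0.3, 0.1],
    "inertial": [0.5, 0.4, 0.1],
    "ambient":  [0.2, 0.7, 0.1],
})
```

In this toy example video and inertial streams alone would favour class 0, but adding the ambient stream tips the fused decision to class 1, which mirrors how an extra modality can change the outcome.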

    MADCS: A Middleware for Anomaly Detection and Content Sharing for Blockchain-Based Systems

    No full text
    The massive growth in data generation experienced throughout the current century has enabled the design of data-driven solutions for various applications. On the other hand, privacy concerns have been raised, especially considering the problems that the leakage of personal data can cause. To address privacy and security issues when dealing with sensitive content, works in the literature have focused on improving protocols for content sharing, primarily by endowing them with anomaly detection modules. However, in Blockchain-based systems, the aggregation of anomaly detection modules into middleware environments is still an under-explored research direction. This paper introduces the Middleware for Anomaly Detection and Content Sharing (MADCS), a new middleware based on a layered structure composed of the application, preprocessing, data analysis and business layers, in addition to the Blockchain platform. For validation, we built a synthetic dataset of medical prescriptions following an international standard and applied a clustering-based technique for anomaly detection. Experiments demonstrated 85% precision and 78% accuracy in identifying abnormalities in the content-sharing process. The results show that a Blockchain combined with MADCS may contribute to a safer content-sharing network environment.
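A minimal instance of clustering-based anomaly detection is the single-cluster case: summarise normal behaviour by a centroid and flag samples whose distance from it exceeds a threshold. The 2-D prescription features and the threshold below are hypothetical; the paper's technique is not specified at this level of detail.

```python
import numpy as np

def fit_centroid(normal_feats):
    """Summarise the cluster of normal behaviour by its centroid."""
    return np.asarray(normal_feats).mean(axis=0)

def is_anomaly(sample, centroid, threshold):
    """Flag a sample whose distance to the normal centroid exceeds threshold."""
    return float(np.linalg.norm(np.asarray(sample) - centroid)) > threshold

# Illustrative 2-D features, e.g. (dose, frequency) of a prescription.
normal = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]]
c = fit_centroid(normal)
```

A full clustering approach would fit several centroids (e.g. with k-means) and use per-cluster thresholds, but the detection step is the same distance test.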

    Privacy-Enhancing Technologies in Federated Learning for the Internet of Healthcare Things: A Survey

    No full text
    Advancements in wearable medical devices using IoT technology are shaping the modern healthcare system. With the emergence of the Internet of Healthcare Things (IoHT), efficient healthcare services can be provided to patients. Healthcare professionals have effectively used AI-based models to analyze the data collected from IoHT devices to treat various diseases. Data must be processed and analyzed while avoiding privacy breaches, in compliance with legal rules and regulations such as the HIPAA and GDPR. Federated learning (FL) is a machine learning-based approach allowing multiple entities to train an ML model collaboratively without sharing their data. It is particularly beneficial in healthcare, where data privacy and security are substantial concerns. Even though FL addresses some privacy concerns, there is still no formal proof of privacy guarantees for IoHT data. Privacy-enhancing technologies (PETs) are tools and techniques designed to enhance the privacy and security of online communications and data sharing. PETs provide a range of features that help protect users' personal information and sensitive data from unauthorized access and tracking. This paper comprehensively reviews PETs concerning FL in the IoHT scenario and identifies several key challenges for future research.
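The collaborative training step that FL relies on is typically federated averaging (FedAvg): each client trains locally and only model parameters, never raw patient records, are sent to the server, which averages them weighted by local dataset size. The sketch below shows just that aggregation step under assumed toy parameters.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """FedAvg aggregation: size-weighted average of client model parameters.

    Only parameters are shared; raw patient data stays on each device.
    """
    total = float(sum(client_sizes))
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Two hypothetical hospitals with 1 and 3 local samples respectively.
global_params = fed_avg(
    [np.array([1.0, 1.0]), np.array([3.0, 3.0])],
    [1, 3],
)
```

PETs surveyed in the paper (e.g. differential privacy or secure aggregation) would harden exactly this exchange, since plain parameter updates can still leak information about the local data.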