
    Cheating off your neighbors: Improving activity recognition through corroboration

    Understanding the complexity of human activities solely through an individual's data can be challenging. However, in many situations, surrounding individuals are likely performing similar activities, yet existing human activity recognition approaches focus almost exclusively on individual measurements and largely ignore the context of the activity. Consider two activities: attending a small group meeting and working at an office desk. From an individual's perspective alone, these activities can be difficult to differentiate because they may appear very similar, even though they are markedly different. Yet, by observing others nearby, it is possible to distinguish between them. In this paper, we propose an approach that enhances the prediction accuracy of an individual's activities by incorporating insights from surrounding individuals. We collected a real-world dataset from 20 participants comprising over 58 hours of data, including activities such as attending lectures, having meetings, working in the office, and eating together. Compared to observing a single person in isolation, our proposed approach significantly improves accuracy. We regard this work as a first step in collaborative activity recognition, opening new possibilities for understanding human activity in group settings.
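    The corroboration idea lends itself to a simple fusion rule. Below is a minimal, hypothetical sketch (the paper does not publish its code; the fusion rule, the alpha parameter, and the activity labels are illustrative assumptions): each person's classifier outputs a class-probability vector, and the mean of the neighbors' vectors re-weights ambiguous individual predictions.

```python
# Hypothetical sketch of corroborating one person's activity prediction with
# neighbors' predictions; the fusion rule and labels are illustrative, not
# the paper's published method.
import numpy as np

ACTIVITIES = ["meeting", "desk_work", "eating"]

def corroborate(own_probs, neighbor_probs, alpha=0.5):
    """Blend an individual's class probabilities with the mean of the
    neighbors' probabilities; alpha sets how much the context counts."""
    if neighbor_probs:
        context = np.mean(neighbor_probs, axis=0)
        own_probs = (1 - alpha) * own_probs + alpha * context
    return ACTIVITIES[int(np.argmax(own_probs))]

# Alone, "meeting" vs. "desk_work" is ambiguous; nearby people tip the balance.
own = np.array([0.44, 0.45, 0.11])
neighbors = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]
print(corroborate(own, neighbors))  # -> "meeting"
```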

    Optimized Collaborative Brain-Computer Interfaces for Enhancing Face Recognition

    The aim of this study is to maximize group decision performance by optimally adapting EEG confidence decoders to the group composition. We train linear support vector machines to estimate the decision confidence of human participants from their EEG activity. We then simulate groups of different sizes and memberships by combining individual decisions using a weighted majority rule. The weights assigned to each participant in the group are chosen by solving a small mixed-integer linear programming problem that maximizes the group performance on the training set. We therefore introduce optimized collaborative brain-computer interfaces (BCIs), where the decisions of each team member are weighted according to both the individual neural activity and the group composition. We validate this approach on a face recognition task undertaken by 10 human participants. The results show that optimal collaborative BCIs significantly enhance team performance over other BCIs, while improving fairness within the group. This research paves the way for practical applications of collaborative BCIs to realistic scenarios characterized by stable teams, where optimizing the decision policy of a single group may lead to significant long-term benefits for team dynamics.
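    The weighted majority rule with training-set-optimized weights can be sketched as follows. This is a hedged illustration: the paper solves a small mixed-integer linear program, whereas here an exhaustive search over small integer weights stands in for the solver, and simulated votes stand in for real EEG-decoded decisions.

```python
# Sketch of a weighted majority vote whose integer weights are fit to
# maximize training accuracy. The paper formulates this as a small
# mixed-integer linear program; exhaustive search over small integer
# weights stands in for the solver here, and simulated +/-1 votes stand
# in for EEG-decoded decisions.
import itertools
import numpy as np

def fit_weights(votes, labels, max_w=3):
    """votes: (n_trials, n_members) of +/-1; labels: (n_trials,) of +/-1."""
    n_members = votes.shape[1]
    best_w, best_acc = None, -1.0
    for w in itertools.product(range(max_w + 1), repeat=n_members):
        preds = np.sign(votes @ np.array(w))   # weighted majority decision
        acc = float(np.mean(preds == labels))
        if acc > best_acc:
            best_w, best_acc = np.array(w), acc
    return best_w, best_acc

rng = np.random.default_rng(0)
labels = rng.choice([-1, 1], size=200)
# Four simulated members whose votes match the truth with given probability.
reliability = [0.9, 0.75, 0.65, 0.55]
votes = np.stack([np.where(rng.random(200) < p, labels, -labels)
                  for p in reliability], axis=1)
weights, train_acc = fit_weights(votes, labels)
print(weights, train_acc)  # more reliable members receive larger weights
```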

    Robust human activity recognition using lesser number of wearable sensors

    In recent years, research on recognizing human physical activities solely from wearable sensors has received increasing attention. Compared to other sensory devices such as surveillance cameras, wearable sensors are preferred in most activity recognition applications, mainly due to their non-intrusiveness and pervasiveness. However, many existing activity recognition applications and experiments using wearable sensors were conducted in confined laboratory settings using specifically developed gadgets. These gadgets may be useful for a small group of people in certain specific scenarios, but they are unlikely to gain wide popularity because they introduce additional costs and are unusual in everyday life. Alternatively, commercial devices such as smartphones and smartwatches can be better utilized for robust activity recognition. However, only a few prior studies have focused on activity recognition using multiple commercial devices. In this paper, we present our feature extraction strategy and compare the performance of our feature set against other feature sets using the same classifiers. We conduct various experiments on a subset of a public dataset named PAMAP2, selecting only two of the thirteen sensors used in PAMAP2. Experimental results show that our feature extraction strategy performs better than the others. This paper provides the necessary foundation towards robust activity recognition using only commercial wearable devices.
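    As a concrete illustration of such a feature extraction strategy (the window length and feature set below are common choices in activity recognition, not the paper's exact strategy): a sliding window over a tri-axial accelerometer stream is summarized by simple time-domain statistics.

```python
# Illustrative sliding-window feature extraction for a tri-axial
# accelerometer stream; window length and feature set are typical
# choices, assumed for illustration rather than taken from the paper.
import numpy as np

def extract_features(signal, win=100, step=50):
    """signal: (n_samples, 3) array; returns an (n_windows, 8) feature matrix."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        mag = np.linalg.norm(w, axis=1)      # per-sample acceleration magnitude
        feats.append(np.concatenate([
            w.mean(axis=0),                  # mean per axis (3 features)
            w.std(axis=0),                   # std per axis (3 features)
            [mag.mean(), mag.std()],         # magnitude statistics (2 features)
        ]))
    return np.asarray(feats)

# 10 s of 100 Hz data -> overlapping 1 s windows, 8 features each.
x = np.random.default_rng(1).normal(size=(1000, 3))
print(extract_features(x).shape)  # (19, 8)
```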

    How object segmentation and perceptual grouping emerge in noisy variational autoencoders

    Many animals and humans can recognize and segment objects from their backgrounds. Whether object segmentation is necessary for object recognition has long been a topic of debate. Deep neural networks (DNNs) excel at object recognition but not at segmentation tasks, which has led to the belief that object recognition and segmentation are separate mechanisms in visual processing. Here, however, we show evidence that in variational autoencoders (VAEs), segmentation and faithful representation of data can be interlinked. VAEs are encoder-decoder models that learn to represent independent generative factors of the data as a distribution in a very small bottleneck layer. Specifically, we show that VAEs can be made to segment objects without any additional finetuning or downstream training. This segmentation is achieved with a procedure that we call the latent space noise trick: by perturbing the activity of the bottleneck units with activity-independent noise, and recurrently recording and clustering decoder outputs in response to these small changes, the model is able to segment and bind separate features together. We demonstrate that VAEs can group elements in a human-like fashion, are robust to occlusions, and produce illusory contours in simple stimuli. Furthermore, the model generalizes to the naturalistic setting of faces, producing meaningful subpart and figure-ground segmentation without ever having been trained on segmentation. For the first time, we show that learning to faithfully represent stimuli can be generally extended to segmentation using the same model backbone architecture without any additional training.
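    The latent space noise trick can be illustrated with a toy stand-in for a trained decoder. Everything here is an assumption for illustration: a random positive linear map plays the decoder, its two "objects" read from disjoint latent subspaces, and k-means clusters the pixel response profiles; the paper's exact procedure may differ.

```python
# Toy sketch of the latent space noise trick: perturb the bottleneck with
# small noise, record how each output unit responds across perturbations,
# and cluster units by their response profiles. The decoder is a random
# positive linear map standing in for a trained VAE decoder.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
latent_dim, n_pixels = 8, 64

# Stand-in decoder: two "objects" read from disjoint latent subspaces, so
# their pixels co-vary under latent noise and should be grouped together.
W = np.zeros((n_pixels, latent_dim))
W[:32, :4] = rng.uniform(0.5, 1.5, size=(32, 4))   # object A pixels
W[32:, 4:] = rng.uniform(0.5, 1.5, size=(32, 4))   # object B pixels
decode = lambda z: W @ z

z0 = rng.normal(size=latent_dim)
base = decode(z0)

# Record output changes under many small latent perturbations.
n_probes, sigma = 200, 0.05
responses = np.stack(
    [decode(z0 + sigma * rng.normal(size=latent_dim)) - base
     for _ in range(n_probes)], axis=1)            # (n_pixels, n_probes)
responses /= np.linalg.norm(responses, axis=1, keepdims=True)

# Pixels whose outputs co-vary under the noise fall into the same segment.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(responses)
print(segments[:32], segments[32:])  # two homogeneous groups
```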