Towards Segment-level Video Understanding: Detecting Activities from Untrimmed Videos
We generate massive amounts of video data every day. While most real-world videos are long and untrimmed, with sparsely localized segments of interest, existing AI systems that interpret videos often rely on static image analysis or can only process temporal information in short video snippets. To automatically understand the content of long video streams, this thesis describes efforts to design accurate, efficient, and intelligent deep learning algorithms for temporal activity detection in untrimmed videos. Detecting segments of interest from untrimmed videos is a key step towards segment-level video understanding. Depending on the purpose of the task being performed, we address three different activity detection tasks: detecting activities of interest from videos without a specific purpose (i.e., temporal activity detection); detecting the temporal segment that best corresponds to a language query (i.e., natural language moment retrieval); and detecting activities given less supervision (i.e., weakly-supervised or few-shot activity detection).

In temporal activity detection, we first propose a highly unified single-shot temporal activity detector based on fully 3D convolutional networks, eliminating explicit temporal proposal and classification stages. Evaluations show that it achieves state-of-the-art results on temporal activity detection while being efficient enough to operate at 1271 FPS. We then investigate how to effectively apply a multi-scale architecture to model activities of varying temporal length and frequency. We propose three novel architectural designs: (1) dynamic temporal sampling; (2) a two-branch feature hierarchy; and (3) multi-scale contextual feature fusion. Combining all these components into a unified network, we achieve state-of-the-art results on a much larger temporal activity detection benchmark.

In natural language moment retrieval, we aim to localize the segment that best corresponds to a given language query.
We present a language-guided temporal attention module and an iterative graph adjustment network to handle the semantic and structural misalignment between video and language. The proposed model demonstrates a superior ability to handle temporal relations and thus improves on the state of the art by a large margin.

Finally, we study weakly-supervised and few-shot temporal activity detection to mitigate the huge amounts of supervision needed to train a temporal detection model. Specifically, we ask whether we can learn a temporal activity detector under weak supervision that is able to localize unseen activity classes. We accordingly propose a novel meta-learning based detection method that adopts the few-shot learning technique of the Relation Network. Results show that our method achieves performance superior or competitive to state-of-the-art approaches that use stronger supervision.

In summary, we propose a suite of algorithms and solutions to automatically detect segments of interest in long untrimmed videos. We hope our studies provide insights for researchers exploring new deep learning paradigms in future computer vision research, especially on video-related topics.
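A basic building block behind evaluating temporal detectors like those described above is the temporal intersection-over-union between a predicted segment and a ground-truth segment. The following is a minimal illustrative sketch of that measure, not code from the thesis itself:

```python
def temporal_iou(seg_a, seg_b):
    """Temporal IoU between two segments given as (start, end) pairs.

    This is the standard overlap measure used to decide whether a
    detected segment matches a ground-truth activity instance.
    """
    # Length of the overlapping interval (zero if the segments are disjoint).
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    # Union = sum of lengths minus the double-counted intersection.
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as correct when its tIoU with a ground-truth segment exceeds a threshold such as 0.5.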
Learning and detecting activities from movement trajectories using the hierarchical hidden Markov model
Directly modeling the inherent hierarchy and shared structures of human behaviors, we present an application of the hierarchical hidden Markov model (HHMM) to the problem of activity recognition. We argue that to robustly model and recognize complex human activities, it is crucial to exploit both the natural hierarchical decomposition and the shared semantics embedded in movement trajectories. To this end, we propose the use of the HHMM, a rich stochastic model recently extended to handle shared structures, for representing and recognizing a set of complex indoor activities. Furthermore, to meet the need for real-time recognition, we propose a Rao-Blackwellised particle filter (RBPF) that efficiently computes the filtering distribution in constant time for each new observation. The main contributions of this paper lie in the application of the shared-structure HHMM, the estimation of the model's parameters at all levels simultaneously, and the construction of an RBPF approximate inference scheme. Experimental results in a real-world environment confirm that directly modeling shared structures not only reduces computational cost but also improves recognition accuracy compared with the tree HHMM and the flat HMM.
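The constant-time filtering update referred to above can be illustrated, for the simpler flat-HMM case, with a single forward-filtering step; the paper's RBPF extends this idea to the hierarchical model. A minimal sketch under that simplification, with all names hypothetical:

```python
import numpy as np

def hmm_filter_step(belief, trans, obs_lik):
    """One constant-time filtering update for a flat HMM with K states.

    belief:  current state distribution P(s_{t-1} | o_{1:t-1}), shape (K,)
    trans:   transition matrix, trans[i, j] = P(s_t = j | s_{t-1} = i)
    obs_lik: observation likelihoods P(o_t | s_t = j), shape (K,)
    """
    predicted = belief @ trans        # predict: push belief through dynamics
    updated = predicted * obs_lik     # correct: weight by the new observation
    return updated / updated.sum()    # renormalise to a distribution
```

Each new observation triggers one such update, so the per-observation cost does not grow with the length of the trajectory, which is the property the paper's RBPF preserves for the richer HHMM.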
Hierarchical Hidden Markov Model in Detecting Activities of Daily Living in Wearable Videos for Studies of Dementia
This paper presents a method for indexing activities of daily living in videos obtained from wearable cameras. In the context of dementia diagnosis, the videos are recorded at patients' houses and later reviewed by medical practitioners. The videos may last up to two hours, so a tool for efficient navigation in terms of activities of interest is crucial for the doctors. The specific recording mode yields very difficult video data: a single continuous shot in which strong motion and sharp lighting changes frequently appear. Our work introduces an automatic motion-based segmentation of the video and an approach to structuring the video in terms of activities using a hierarchical two-level Hidden Markov Model. We define our description space over motion and visual characteristics of the video and audio channels. Experiments on real data recorded in the homes of several patients show the difficulty of the task and the promising results of our approach.
Early Recognition of Human Activities from First-Person Videos Using Onset Representations
In this paper, we propose a methodology for early recognition of human activities from videos taken with a first-person viewpoint. Early recognition, also known as activity prediction, is the ability to infer an ongoing activity at its early stage. We present an algorithm to recognize activities targeted at the camera from streaming videos, enabling the system to predict the intended activities of the interacting person and avoid harmful events before they actually happen. We introduce the novel concept of 'onset', which efficiently summarizes pre-activity observations, and design an approach that considers event history in addition to the ongoing video observation for early first-person recognition of activities. We propose to represent onset using cascade histograms of time-series gradients, and we describe a novel algorithmic setup that takes advantage of onset for early recognition of activities. The experimental results clearly illustrate that the proposed concept of onset enables better and earlier recognition of human activities from first-person videos.
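The "cascade histograms of time-series gradients" can be pictured, loosely, as histograms of temporal feature differences computed over a pyramid of progressively finer sub-windows. The following is an illustrative sketch of that idea, not the authors' implementation; the function name, pyramid depth, and bin count are all assumptions:

```python
import numpy as np

def onset_histogram(features, levels=3, bins=8):
    """Sketch of a cascade histogram over time-series gradients.

    features: (T, D) array of per-frame features from the pre-activity
    window. Returns a concatenated descriptor of normalised gradient
    histograms computed at `levels` pyramid levels (1, 2, 4, ... segments).
    """
    grads = np.diff(features, axis=0).ravel()   # temporal gradients
    lo, hi = grads.min(), grads.max() + 1e-8    # shared bin range
    descriptor = []
    for level in range(levels):
        # Split the gradient sequence into 2**level contiguous segments.
        for seg in np.array_split(grads, 2 ** level):
            h, _ = np.histogram(seg, bins=bins, range=(lo, hi))
            descriptor.append(h / max(h.sum(), 1))  # normalised histogram
    return np.concatenate(descriptor)
```

With three levels the descriptor concatenates 1 + 2 + 4 = 7 segment histograms, so coarse summary statistics and finer temporal ordering are both retained.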
Identifying the time profile of everyday activities in the home using smart meter data
Activities are a descriptive term for the common ways households spend their time. Examples include cooking, doing laundry, or socialising. Smart meter data can be used to generate time profiles of activities that are meaningful to households' own lived experience. Activities are therefore a lens through which energy feedback to households can be made salient and understandable. This paper demonstrates a multi-step methodology for inferring hourly time profiles of ten household activities using smart meter data, supplemented by individual appliance plug monitors and environmental sensors. First, household interviews, video ethnography, and technology surveys are used to identify appliances and devices in the home, and their roles in specific activities. Second, 'ontologies' are developed to map out the relationships between activities and technologies in the home. One or more technologies may indicate the occurrence of certain activities. Third, data from smart meters, plug monitors, and sensors are collected. Smart meter data measuring aggregate electricity use are disaggregated and processed together with the plug monitor and sensor data to identify when and for how long different activities are occurring. Sensor data are particularly useful for activities that are not always associated with an energy-using device. Fourth, the ontologies are applied to the disaggregated data to make inferences on hourly time profiles of ten everyday activities. These include washing, doing laundry, and watching TV (reliably inferred), as well as cleaning, socialising, and working (inferred with uncertainties). Fifth, activity time diaries and structured interviews are used to validate both the ontologies and the inferred activity time profiles. Two case study homes are used to illustrate the methodology using data collected as part of a UK trial of smart home technologies.
The methodology is demonstrated to produce reliable time profiles of a range of domestic activities that are meaningful to households. The methodology also emphasises the value of integrating coded interview and video ethnography data into both the development and the validation of the activity inference process.
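The ontology step described above, mapping technologies in the home to the activities they indicate, can be sketched as a simple lookup from the set of active devices in a given hour to the set of inferred activities. The device and activity names below are hypothetical, not the study's actual ontology:

```python
# Hypothetical ontology: which monitored devices indicate which activity.
ONTOLOGY = {
    "cooking": {"oven", "kettle", "microwave"},
    "laundry": {"washing_machine", "tumble_dryer"},
    "watching_tv": {"tv"},
}

def infer_activities(active_devices):
    """Return the activities indicated by currently active devices.

    active_devices: set of device names drawing power in a given hour,
    e.g. from disaggregated smart meter or plug monitor data.
    """
    # An activity is inferred when any of its indicator devices is active.
    return {activity for activity, devices in ONTOLOGY.items()
            if devices & active_devices}
```

In the paper's richer setting, environmental sensor events would feed the same mapping for activities (such as socialising) that lack a dedicated energy-using device.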
- …