Comprehensive Survey and Analysis of Techniques, Advancements, and Challenges in Video-Based Traffic Surveillance Systems
The challenges inherent in video surveillance are compounded by several factors, such as dynamic lighting conditions, the coordination of object matching, diverse environmental scenarios, the tracking of heterogeneous objects, and coping with fluctuations in object poses, occlusions, and motion blur. This research aims to undertake a rigorous, in-depth analysis of deep learning-oriented models used for object identification and tracking. Emphasizing the development of effective model design methodologies, this study intends to furnish an exhaustive analysis of object tracking and identification models within the specific domain of video surveillance.
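A core building block of the tracking models this survey covers is tracking-by-detection association: linking detections across frames by bounding-box overlap. A minimal sketch follows; the box format, greedy matching strategy, and the 0.3 IoU threshold are illustrative assumptions, not values taken from the survey.

```python
# Minimal tracking-by-detection association sketch.
# Boxes are (x1, y1, x2, y2); the 0.3 threshold is an assumption.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily match existing tracks to new detections by highest IoU."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < threshold:
            break  # remaining pairs overlap too little to match
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

Production trackers typically replace the greedy loop with Hungarian assignment and add appearance or motion cues, which is where the surveyed deep models come in.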
Collaborative Multi-Agent Video Fast-Forwarding
Multi-agent applications have recently gained significant popularity. In many
computer vision tasks, a network of agents, such as a team of robots with
cameras, could work collaboratively to perceive the environment for efficient
and accurate situation awareness. However, these agents often have limited
computation, communication, and storage resources. Thus, reducing resource
consumption while still providing an accurate perception of the environment
becomes an important goal when deploying multi-agent systems. To achieve this
goal, we identify and leverage the overlap among different camera views in
multi-agent systems for reducing the processing, transmission, and storage of
redundant/unimportant video frames. Specifically, we have developed two
collaborative multi-agent video fast-forwarding frameworks in distributed and
centralized settings, respectively. In these frameworks, each individual agent
can selectively process or skip video frames at adjustable paces based on
multiple strategies via reinforcement learning. Multiple agents then
collaboratively sense the environment via either 1) a consensus-based
distributed framework called DMVF that periodically updates the fast-forwarding
strategies of agents by establishing communication and consensus among
connected neighbors, or 2) a centralized framework called MFFNet that utilizes
a central controller to decide the fast-forwarding strategies for agents based
on collected data. We demonstrate the efficacy and efficiency of our proposed
frameworks on a real-world surveillance video dataset VideoWeb and a new
simulated driving dataset CarlaSim, through extensive simulations and
deployment on an embedded platform with TCP communication. We show that
compared with other approaches in the literature, our frameworks achieve better
coverage of important frames, while significantly reducing the number of frames
processed at each agent.
Comment: IEEE Transactions on Multimedia, 2023. arXiv admin note: text overlap with arXiv:2008.0443
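The consensus step described above can be illustrated with a toy sketch: each agent repeatedly averages its fast-forward pace with its connected neighbors until the paces agree. In the actual DMVF framework the strategies are chosen by reinforcement learning; the averaging rule, line-graph topology, and initial paces below are illustrative assumptions only.

```python
# Toy consensus sketch: agents coordinate fast-forward paces
# (frames skipped per step) by averaging with graph neighbors.
# All numbers and the topology are illustrative assumptions.

def consensus_round(paces, neighbors):
    """One synchronous consensus update over an undirected agent graph."""
    updated = []
    for i, pace in enumerate(paces):
        group = [pace] + [paces[j] for j in neighbors[i]]
        updated.append(sum(group) / len(group))
    return updated

# Three agents connected in a line: 0 - 1 - 2
paces = [2.0, 8.0, 14.0]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(20):
    paces = consensus_round(paces, neighbors)
# The paces converge toward a common value, so agents with
# overlapping views fast-forward at coordinated rates.
```

Because the update matrix is row-stochastic and the graph is connected, the paces contract toward agreement geometrically; 20 rounds already leave a negligible spread on this example.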
Egocentric action recognition from noisy videos
Type of degree: Master's. University of Tokyo (東京大学)
Vision-Based Semantic Segmentation in Scene Understanding for Autonomous Driving: Recent Achievements, Challenges, and Outlooks
Scene understanding plays a crucial role in autonomous driving by utilizing sensory data for contextual information extraction and decision making. Beyond modeling advances, the enabler for vehicles to become aware of their surroundings is the availability of visual sensory data, which expands vehicular perception and realizes vehicular contextual awareness in real-world environments. Research directions in scene understanding pursued by related studies include person/vehicle detection and segmentation, transition analysis, and lane-change and turn detection, among many others. Unfortunately, these tasks alone seem insufficient for developing fully autonomous vehicles, i.e., achieving level-5 autonomy, traveling just like human-controlled cars. This latter statement is among the conclusions drawn from this review paper: scene understanding for autonomous driving using vision sensors still requires significant improvements. With this motivation, this survey defines, analyzes, and reviews the current achievements of the scene understanding research area, which mostly relies on computationally complex deep learning models. Furthermore, it covers the generic scene understanding pipeline, investigates the performance reported by the state of the art, provides a time-complexity analysis of avant-garde modeling choices, and highlights major triumphs and noted limitations encountered by current research efforts. The survey also includes a comprehensive discussion of the available datasets and the challenges that, even if lately confronted by researchers, still remain open to date. Finally, our work outlines future research directions to welcome researchers and practitioners to this exciting domain.
This work was supported by the European Commission through the European Union (EU) and Japan for Artificial Intelligence (AI) under Grant 957339.
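The performance the surveyed segmentation models report is usually mean intersection-over-union (mIoU). A minimal sketch of that metric follows, operating on flat per-pixel class-id lists; the three-class labeling and the tiny example are assumptions for illustration.

```python
# Minimal mIoU sketch for semantic segmentation evaluation.
# pred/truth are flat per-pixel class-id lists; classes absent
# from both are skipped so they do not distort the average.

def mean_iou(pred, truth, num_classes):
    """Average per-class IoU over classes present in pred or truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, truth))
        union = sum(p == c or t == c for p, t in zip(pred, truth))
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Hypothetical 3-class labeling: 0 = background, 1 = road, 2 = vehicle
truth = [0, 0, 1, 1, 1, 2, 2, 0]
pred  = [0, 1, 1, 1, 1, 2, 0, 0]
print(round(mean_iou(pred, truth, 3), 3))  # → 0.583
```

Real benchmarks such as Cityscapes accumulate the per-class intersections and unions over the whole test set before averaging, rather than per image.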
Deep Learning in Lane Marking Detection: A Survey
Lane marking detection is a fundamental and crucial step in intelligent driving systems. It not only provides relevant road-condition information to prevent lane departure but also assists vehicle positioning and detection of the car ahead. However, lane marking detection faces many challenges, including extreme lighting, missing lane markings, and obstacle occlusions. Recently, deep learning-based algorithms have drawn much attention in the intelligent driving community because of their excellent performance. In this paper, we review deep learning methods for lane marking detection, focusing on their network structures and optimization objectives, the two key determinants of their success. Besides, we summarize existing lane-related datasets, evaluation criteria, and common data processing techniques. We also compare the detection performance and running time of various methods, and conclude with current challenges and future trends for deep learning-based lane marking detection algorithms.
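Among the evaluation criteria such surveys summarize, a common one (used in TuSimple-style benchmarks) counts a predicted lane point as correct when it falls within a pixel threshold of the annotation at the same image row. A simplified sketch follows; the 20-pixel threshold and the sample lanes are illustrative assumptions.

```python
# Simplified point-wise lane accuracy sketch: one predicted
# x-coordinate per sampled image row, None = missing prediction.
# The 20-pixel threshold and data below are assumptions.

def lane_accuracy(pred_xs, truth_xs, threshold=20):
    """Fraction of rows where |pred - truth| <= threshold pixels."""
    correct = sum(
        p is not None and abs(p - t) <= threshold
        for p, t in zip(pred_xs, truth_xs)
    )
    return correct / len(truth_xs)

truth = [100, 110, 121, 133, 146]   # annotated lane x per row
pred  = [102, 108, 140, None, 150]  # detector output, same rows
print(lane_accuracy(pred, truth))   # 4 of 5 rows within threshold → 0.8
```

Full benchmark protocols additionally match predicted lanes to ground-truth lanes and penalize false positives and missed lanes, which this sketch omits.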