LIPIcs, Volume 261, ICALP 2023, Complete Volume
Perception and Prediction in Multi-Agent Urban Traffic Scenarios for Autonomous Driving
In multi-agent urban scenarios, autonomous vehicles navigate an intricate network of interactions with a variety of agents, necessitating advanced perception modeling and trajectory prediction. Research to improve perception modeling and trajectory prediction in autonomous vehicles is fundamental to enhance safety and efficiency in complex driving scenarios. Better data association for 3D multi-object tracking ensures consistent identification and tracking of multiple objects over time, crucial in crowded urban environments to avoid mis-identifications that can lead to unsafe maneuvers or collisions. Effective context modeling for 3D object detection aids in interpreting complex scenes, effectively dealing with challenges like noisy or missing points in sensor data, and occlusions. It enables the system to infer properties of partially observed or obscured objects, enhancing the robustness of the autonomous system in varying conditions. Furthermore, improved trajectory prediction of surrounding vehicles allows an autonomous vehicle to anticipate future actions of other road agents and adapt accordingly, crucial in scenarios like merging lanes, making unprotected turns, or navigating intersections. In essence, these research directions are key to mitigating risks in autonomous driving, and facilitating seamless interaction with other road users.
In Part I, we address the task of improving perception modeling for AV systems. Concretely, our contributions are: (i) FANTrack introduces a novel application of Convolutional Neural Networks (CNNs) for real-time 3D Multi-object Tracking (MOT) in autonomous driving, addressing challenges such as a varying number of targets, track fragmentation, and noisy detections, thereby enhancing the accuracy of perception capabilities for safe and efficient navigation. (ii) FANTrack proposes to leverage both visual and 3D bounding box data, utilizing Siamese networks and hard-mining, to enhance the similarity functions used in data association for 3D Multi-object Tracking (MOT). (iii) SA-Det3D introduces a globally-adaptive Full Self-Attention (FSA) module for enhanced feature extraction in 3D object detection, overcoming the limitations of traditional convolution-based techniques by facilitating adaptive context aggregation from entire point-cloud data, thereby bolstering perception modeling in autonomous driving. (iv) SA-Det3D also introduces the Deformable Self-Attention (DSA) module, a scalable adaptation for global context assimilation in large-scale point-cloud datasets, designed to select and focus on the most informative regions, thereby improving the quality of feature descriptors and perception modeling in autonomous driving.
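The similarity-based data association that contribution (ii) describes can be sketched in a few lines. This is an illustrative stand-in, not the FANTrack architecture: the embeddings are assumed to come from a Siamese network trained on visual and bounding-box features, and the greedy matcher is a simplification of a real association step.

```python
import numpy as np

def cosine_similarity_matrix(track_emb, det_emb):
    """Pairwise cosine similarities between track and detection embeddings.

    In a FANTrack-style pipeline a Siamese network would produce these
    embeddings from appearance and 3D bounding-box features; here they
    are simply given as arrays (illustrative assumption).
    """
    t = track_emb / np.linalg.norm(track_emb, axis=1, keepdims=True)
    d = det_emb / np.linalg.norm(det_emb, axis=1, keepdims=True)
    return t @ d.T

def greedy_associate(sim, threshold=0.5):
    """Greedily match tracks to detections by descending similarity,
    skipping pairs below the threshold (a simplified association step)."""
    matches, used_t, used_d = [], set(), set()
    pairs = [(sim[i, j], i, j)
             for i in range(sim.shape[0]) for j in range(sim.shape[1])]
    for s, i, j in sorted(pairs, reverse=True):
        if s < threshold:
            break
        if i not in used_t and j not in used_d:
            matches.append((i, j))
            used_t.add(i)
            used_d.add(j)
    return matches
```

A production tracker would typically replace the greedy matcher with Hungarian assignment and add track-management logic for births and deaths.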
In Part II, we focus on the task of improving trajectory prediction of surrounding agents. Concretely, our contributions are: (i) SSL-Lanes introduces a self-supervised learning approach for motion forecasting in autonomous driving that enhances accuracy and generalizability without compromising inference speed or model simplicity, utilizing pseudo-labels from pretext tasks for learning transferable motion patterns. (ii) The second contribution in SSL-Lanes is the design of comprehensive experiments to demonstrate that SSL-Lanes can yield more generalizable and robust trajectory predictions than traditional supervised learning approaches. (iii) SSL-Interactions presents a new framework that utilizes pretext tasks to enhance interaction modeling for trajectory prediction in autonomous driving. (iv) SSL-Interactions advances the prediction of agent trajectories in interaction-centric scenarios by creating a curated dataset that explicitly labels meaningful interactions, thus enabling the effective training of a predictor utilizing pretext tasks and enhancing the modeling of agent-agent interactions in autonomous driving environments.
Multi-modal Video Content Understanding
Video is an important medium of information. Humans use videos for a variety of purposes, such as entertainment, education, communication, information sharing, and capturing memories. To date, humankind has accumulated a colossal amount of freely available video material online. Manual processing at this scale is simply impossible. To this end, many research efforts have been dedicated to the automatic processing of video content.
At the same time, human perception of the world is multi-modal. A human uses multiple senses to understand the environment, objects, and their interactions. When watching a video, we perceive the content via both the audio and visual modalities, and removing one of them results in a less immersive experience. Similarly, if the information in the two modalities does not correspond, it may create a sense of dissonance. Therefore, joint modelling of multiple modalities (such as audio, visual, and text) within one model is an active research area.
In the last decade, the fields of automatic video understanding and multi-modal modelling have seen exceptional progress due to the ubiquitous success of deep learning models and, more recently, transformer-based architectures in particular. Our work draws on these advances and pushes the state-of-the-art of multi-modal video understanding forward.
Applications of automatic multi-modal video processing are broad and exciting! For instance, a content-based textual description of a video (video captioning) may allow a visually- or auditory-impaired person to understand the content and, thus, engage in richer social interactions. However, prior work in video content description relies on the visual input alone, missing vital information available only in the audio stream.
To this end, we proposed two novel multi-modal transformer models that encode audio and visual interactions simultaneously. More specifically, first, we introduced a late-fusion multi-modal transformer that is highly modular and allows the processing of an arbitrary set of modalities. Second, we presented an efficient bi-modal transformer that encodes audio-visual cues starting from the lower network layers, yielding richer audio-visual features and, as a result, stronger performance.
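The contrast between the two fusion strategies above can be sketched schematically. This is a minimal illustration, not the proposed transformers: `encode` stands in for a per-modality encoder, and the shapes are invented for the example.

```python
import numpy as np

def encode(tokens):
    """Stand-in for a transformer encoder: pools a sequence of token
    vectors into one feature vector (illustrative simplification)."""
    return tokens.mean(axis=0)

def late_fusion(audio_tokens, visual_tokens):
    """Late fusion: each modality is encoded independently, and the
    resulting features meet only at the very end (modular, so any set
    of modalities can be concatenated here)."""
    return np.concatenate([encode(audio_tokens), encode(visual_tokens)])

def early_fusion(audio_tokens, visual_tokens):
    """Bi-modal idea: tokens of both modalities are mixed before
    encoding, so cross-modal interactions can already form in the
    lower layers of the network."""
    return encode(np.concatenate([audio_tokens, visual_tokens], axis=0))
```

In the actual models, of course, the mixing happens through attention layers rather than simple concatenation and pooling; the sketch only shows *where* the modalities meet.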
Another application is automatic visually-guided sound generation, which might help professional sound (foley) designers who spend hours searching a database for relevant audio for a movie scene. Previous approaches to automatic conditional audio generation support only one class (e.g. “dog barking”), while real-life applications may require generation for hundreds of data classes, and training one model for every class can be infeasible.
To bridge this gap, we introduced a novel two-stage model that first efficiently encodes audio as a set of codebook vectors (i.e. it learns to make “building blocks”) and then learns to sample these audio vectors given visual inputs to produce a relevant audio track for that visual input. Moreover, we studied the automatic evaluation of the conditional audio generation model and proposed metrics that measure both the quality and the relevance of the generated samples.
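The first stage's codebook lookup, the "building blocks" step, can be sketched as a nearest-neighbour quantization. This is a generic vector-quantization illustration under assumed shapes, not the model's actual encoder:

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to its nearest codebook vector.

    Returns the chosen indices and the quantized vectors; in a two-stage
    model these indices become the discrete 'audio tokens' that the
    second stage learns to sample conditioned on the visual input.
    """
    # squared Euclidean distance from every feature to every codebook entry
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]
```

Training such a codebook end-to-end (as in VQ-VAE-style models) additionally needs a straight-through gradient estimator and a commitment loss, which the sketch omits.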
Finally, as video editing is becoming more common among non-professionals due to the increased popularity of services such as YouTube, automatic assistance during video editing is growing in demand, e.g. off-sync detection between the audio and visual tracks. Prior work in audio-visual synchronization was devoted to solving the task on lip-syncing datasets with “dense” signals, such as interviews and presentations. In such videos, synchronization cues occur “densely” across time, and it is enough to process just a few tenths of a second to synchronize the tracks. In contrast, open-domain videos mostly have only “sparse” cues that occur just once in a seconds-long video clip (e.g. “chopping wood”).
To address this, we: a) proposed a novel dataset with “sparse” sounds; b) designed a model that can efficiently encode seconds-long audio-visual tracks into a small set of “learnable selectors” that is then used for synchronization. In addition, we explored the temporal artefacts that common audio and video compression algorithms leave in data streams. To prevent a model from learning to rely on these artefacts, we introduced a list of recommendations on how to mitigate them.
This thesis provides the details of the proposed methodologies as well as a comprehensive overview of advances in relevant fields of multi-modal video understanding. In addition, we provide a discussion of potential research directions that can bring significant contributions to the field.
Real-time Ultrasound Signals Processing: Denoising and Super-resolution
Ultrasound acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improve the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition.
While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of the non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target image (i.e., the high-resolution image). We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
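The low-rank building block described above, thresholding the Singular Value Decomposition, can be sketched as follows. In the proposed framework the threshold is *learned*; here it is supplied by hand, so this is only an illustration of the underlying operation:

```python
import numpy as np

def svd_threshold_denoise(image, tau):
    """Low-rank denoising of a 2D image by hard-thresholding the SVD.

    Singular values below tau are assumed to encode mostly speckle noise
    and are zeroed; the image is rebuilt from the remaining components.
    In the abstract's framework, tau would be predicted by a learned
    model rather than fixed (tau here is an illustrative parameter).
    """
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    s_kept = np.where(s >= tau, s, 0.0)  # hard threshold on the spectrum
    return U @ np.diag(s_kept) @ Vt
```

Soft thresholding (shrinking the kept singular values by tau) is a common alternative; either way, the quality hinges on choosing tau well, which motivates learning it.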
Machine learning approaches to video activity recognition: from computer vision to signal processing
The research presented focuses on classification techniques for two different, though related, tasks, such that the second can be considered part of the first: human action recognition in videos and sign language recognition. In the first part, the starting hypothesis is that transforming the signals of a video with the Common Spatial Patterns (CSP) algorithm, commonly used in electroencephalography systems, can give rise to new features that are useful for the subsequent classification of the videos with supervised classifiers. Experiments were carried out on several databases, including one created during this research from the point of view of a humanoid robot, with the intention of deploying the developed recognition system to improve human-robot interaction. In the second part, the previously developed techniques are applied to sign language recognition; in addition, a method based on decomposing the signs is proposed to recognise them, adding the possibility of better explainability. The final goal is to develop a sign language tutor capable of guiding users through the learning process, making them aware of the errors they make and the reasons for those errors.
Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico
Conference proceedings info:
ICICT 2023: 2023 The 6th International Conference on Information and Computer Technologies
Raleigh, NC, United States, March 24-26, 2023
Pages 529-542

We provide a model for systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created, in cooperation with different institutions, including: the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center
of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time that informs supply chains and resource allocation in anticipation of a surge in COVID-19 cases.
Computational Methods in Multi-Messenger Astrophysics using Gravitational Waves and High Energy Neutrinos
This dissertation seeks to describe advancements made in computational methods for multi-messenger astrophysics (MMA) using gravitational waves (GW) and neutrinos during Advanced LIGO (aLIGO)'s first through third observing runs (O1-O3) and, looking forward, to describe novel computational techniques suited to the challenges of both the burgeoning MMA field and high-performance computing as a whole.
The first two chapters provide an overview of MMA as it pertains to gravitational wave/high energy neutrino (GWHEN) searches, including a summary of expected astrophysical sources as well as GW, neutrino, and gamma-ray detectors used in their detection. These are followed in the third chapter by an in-depth discussion of LIGO’s timing system, particularly the diagnostic subsystem, describing both its role in MMA searches and the author’s contributions to the system itself.
The fourth chapter provides a detailed description of the Low-Latency Algorithm for Multi-messenger Astrophysics (LLAMA), the GWHEN pipeline developed by the author and used in O2 and O3. Relevant past multi-messenger searches are described first, followed by the O2 and O3 analysis methods, the pipeline’s performance, scientific results, and finally, an in-depth account of the library’s structure and functionality. In particular, the author’s high-performance multi-order coordinates (MOC) HEALPix image analysis library, HPMOC, is described. HPMOC increases performance of HEALPix image manipulations by several orders of magnitude vs. naive single-resolution approaches while presenting a simple high-level interface and should prove useful for diverse future MMA searches. The performance improvements it provides for LLAMA are also covered.
The final chapter of this dissertation builds on the approaches taken in developing HPMOC, presenting several novel methods for efficiently storing and analyzing large data sets, with applications to MMA and other data-intensive fields. A family of depth-first multi-resolution orderings of HEALPix images — DEPTH9, DEPTH19, and DEPTH40 — is defined, along with algorithms and use cases where it can improve on current approaches, including high-speed streaming calculations suitable for serverless compute or FPGAs.
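The pixel hierarchy that such depth-first orderings traverse comes from HEALPix's NESTED indexing, whose arithmetic is simple enough to sketch. This shows only the standard NESTED parent/child relation, not the DEPTH9/19/40 encodings themselves:

```python
def children(ipix):
    """Child pixels of a NESTED-ordering HEALPix pixel at the next order.

    In the NESTED scheme, each pixel at order k subdivides into the four
    pixels 4*ipix .. 4*ipix + 3 at order k + 1; a depth-first ordering
    walks this quad-tree, visiting a pixel's children before its siblings.
    """
    return [4 * ipix + i for i in range(4)]

def parent(ipix):
    """Parent pixel one order up (integer division undoes the split)."""
    return ipix // 4
```

Because the four children of any pixel are contiguous integers, multi-resolution images can be stored as sorted runs of (order, ipix) pairs and streamed without random access, which is what makes such orderings attractive for serverless or FPGA pipelines.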
For performance-constrained analyses on HEALPix data (e.g. image analysis in multi-messenger search pipelines) using SIMD processors, breadth-first data structures can provide short-circuiting calculations in a data-parallel way on compressed data; a simple compression method is described with application to further improving LLAMA performance.
A new storage scheme and associated algorithms for efficiently compressing and contracting tensors of varying sparsity is presented; these demuxed tensors (D-Tensors) have asymptotic time and space complexity equivalent to optimal representations of both dense and sparse matrices, and could be used as a universal drop-in replacement to reduce code complexity and developer effort while improving the performance of existing non-optimized numerical code. Finally, the big bucket hash table (B-Table), a novel type of hash table making guarantees on data layout (vs. load factor), is described, along with optimizations it allows for (like hardware acceleration, online rebuilds, and hard real-time applications) that are not possible with existing hash table approaches. These innovations are presented in the hope that some will prove useful for improving future MMA searches and other data-intensive applications.
Applications in Electronics Pervading Industry, Environment and Society
This book features the manuscripts accepted for the Special Issue “Applications in Electronics Pervading Industry, Environment and Society—Sensing Systems and Pervasive Intelligence” of the MDPI journal Sensors. Most of the papers come from a selection of the best papers of the 2019 edition of the “Applications in Electronics Pervading Industry, Environment and Society” (APPLEPIES) Conference, which was held in November 2019. All of these papers have been significantly enhanced with novel experimental results. The papers give an overview of the trends in research and development activities concerning the pervasive application of electronics in industry, the environment, and society. The focus of these papers is on cyber-physical systems (CPS), with research proposals for new sensor acquisition and ADC (analog-to-digital converter) methods, high-speed communication systems, cybersecurity, big data management, and data processing, including emerging machine learning techniques. Physical implementation aspects are discussed, as well as the trade-offs between functional performance and hardware/system costs.
LIPIcs, Volume 244, ESA 2022, Complete Volume