93 research outputs found

    Towards Generalized Frameworks for Object Recognition

    Get PDF
    Over the past few years, deep convolutional neural network (DCNN) based approaches have been immensely successful in tackling a diverse range of object recognition problems. Popular DCNN architectures like deep residual networks (ResNets) are highly generic, not just for classification but also for high-level tasks like detection/tracking, which rely on classification DCNNs as their backbone. The generality of DCNNs, however, does not extend to image-to-image (Im2Im) regression tasks (e.g., super-resolution, denoising, RGB-to-depth, relighting). For such tasks, DCNNs are often highly task-specific and require specific ancillary post-processing methods. The major issue plaguing the design of generic architectures for such tasks is the tradeoff between context and locality given a fixed computation/memory budget. We first present a generic DCNN architecture for Im2Im regression that can be trained end-to-end without any further machinery. Our proposed architecture, the Recursively Branched Deconvolutional Network (RBDN), features a cheap early multi-context image representation, an efficient recursive branching scheme with extensive parameter sharing, and learnable upsampling. We provide qualitative/quantitative results on three diverse tasks (relighting, denoising and colorization) and show that the proposed RBDN architecture obtains results comparable to the state of the art on each of these tasks when used off-the-shelf, without any post-processing or task-specific architectural modifications. Second, we focus on gradient flow and optimization in ResNets. In particular, we theoretically analyze why pre-activation (v2) ResNets outperform the original ResNets (v1) on CIFAR datasets but not on ImageNet. Our analysis reveals that although v1-ResNets lack ensembling properties, they can have a higher effective depth in comparison to v2-ResNets.
Subsequently, we show that downsampling projections (while few in number) have a significantly detrimental effect on performance. By simply replacing downsampling projections with identity-like dense-reshape shortcuts, the classification results of standard residual architectures like ResNets, ResNeXts and SE-Nets improve by up to 1.2% on ImageNet, without any increase in computational complexity (FLOPs). Finally, we present a robust non-parametric probabilistic ensemble method for multi-classification, which outperforms state-of-the-art ensemble methods on several machine learning and computer vision datasets for object recognition, with statistically significant improvements. The approach is particularly geared towards multi-classification problems with very little training data and/or a fairly high proportion of outliers, for which training end-to-end DCNNs is not very beneficial.
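The "identity-like dense-reshape shortcut" mentioned above can be illustrated with a minimal numpy sketch. This is an assumption-laden reconstruction, not the authors' implementation: it contrasts a standard stride-2 1x1 projection shortcut (which discards three quarters of the spatial positions) with a parameter-free space-to-depth rearrangement that keeps every input value. Function names and the 2x downsampling factor are illustrative.

```python
import numpy as np

def downsample_projection(x, w):
    """Standard ResNet shortcut for a stride-2 stage: stride-2 subsampling
    followed by a 1x1 conv. Three quarters of the positions are dropped."""
    xs = x[:, ::2, ::2]                       # keep every other pixel
    # a 1x1 conv is a channel-mixing matmul at each kept position
    return np.einsum('oc,chw->ohw', w, xs)

def dense_reshape_shortcut(x):
    """Identity-like alternative: rearrange each 2x2 spatial block into the
    channel dimension (space-to-depth). No parameters, no information loss:
    every input value survives the downsampling."""
    c, h, w = x.shape
    x = x.reshape(c, h // 2, 2, w // 2, 2)
    return x.transpose(0, 2, 4, 1, 3).reshape(c * 4, h // 2, w // 2)
```

Because the reshape is a pure permutation of the tensor's entries, the shortcut behaves like an identity mapping for gradient flow, which is the property the abstract credits for the ImageNet improvement.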

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Full text link
    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate its latest developments and applications in these disciplines. However, the literature is lacking in exploring the applications of deep learning across all potential sectors. This paper thus extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL exhibits high accuracy in prediction and analysis, making it a powerful computational tool, and has the ability to self-organize and optimize, making it effective in processing data with no prior training. Despite this independence from prior training, deep learning necessitates massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures like LSTMs and GRUs can be utilized. For multimodal learning, shared neurons in the neural network for all activities and specialized neurons for particular tasks are necessary. Comment: 64 pages, 3 figures, 3 tables
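The final sentence describes the hard-parameter-sharing pattern for multimodal/multi-task learning: a shared trunk of neurons serves all tasks, with specialized heads per task. A minimal numpy sketch of that pattern follows; the class, dimensions and task names are all illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiTaskNet:
    """Shared trunk ("shared neurons for all activities") feeding
    per-task heads ("specialized neurons for particular tasks")."""

    def __init__(self, d_in, d_shared, task_dims):
        # one weight matrix shared by every task
        self.w_shared = rng.standard_normal((d_in, d_shared)) * 0.1
        # one small head per task
        self.heads = {name: rng.standard_normal((d_shared, d_out)) * 0.1
                      for name, d_out in task_dims.items()}

    def forward(self, x, task):
        h = np.maximum(x @ self.w_shared, 0.0)   # shared ReLU features
        return h @ self.heads[task]              # task-specific output
```

The shared trunk amortizes representation learning across modalities, while each head stays cheap to train for its own task.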

    Methods for data-related problems in person re-ID

    Get PDF
    In recent years, the ever-increasing need for public security has attracted wide attention to person re-ID. State-of-the-art techniques have achieved impressive results on academic datasets, which are nearly saturated. However, when it comes to deploying a re-ID system in a practical surveillance scenario, several challenges arise. 1) Full person views are often unavailable, and missing body parts make the comparison very challenging due to significant misalignment of the views. 2) Low diversity in training data introduces bias in re-ID systems. 3) The available data might come from different modalities, e.g., text and images. This thesis proposes the Partial Matching Net (PMN), which detects body joints, aligns partial views, and hallucinates the missing parts based on the information present in the frame and a learned model of a person. The aligned and reconstructed views are then combined into a joint representation and used for matching images. The thesis also investigates different types of bias that typically occur in re-ID scenarios when the similarity between two persons is due to the same pose, body part, or camera view, rather than to ID-related cues. It proposes a general approach to mitigate these effects, named the Bias-Control (BC) framework, with two training streams leveraging adversarial and multitask learning to reduce bias-related features. Finally, the thesis investigates a novel mechanism for matching data across visual and text modalities. It proposes the TAVD framework with two complementary modules: text attribute feature aggregation (TA), which aggregates multiple semantic attributes in a bimodal space for globally matching text descriptions with images, and visual feature decomposition (VD), which performs feature embedding for locally matching image regions with text attributes. The results and comparison to the state of the art on different benchmarks show that the proposed solutions are effective strategies for person re-ID.
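All of the pipelines above ultimately reduce to the same retrieval step: embed a query and a gallery, then rank the gallery by similarity. A minimal cosine-similarity ranking sketch (an assumed simplification, not the thesis's matching module) shows the step the joint representations feed into:

```python
import numpy as np

def rank_gallery(query, gallery):
    """Rank gallery embeddings by cosine similarity to a query embedding,
    the basic retrieval step shared by re-ID systems.

    query:   (d,) embedding of the probe image or text description
    gallery: (n, d) embeddings of the candidate identities
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per candidate
    order = np.argsort(-sims)         # best match first
    return order, sims[order]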

    Deep Learning Methods for Remote Sensing

    Get PDF
    Remote sensing is a field where important physical characteristics of an area are exacted using emitted radiation generally captured by satellite cameras, sensors onboard aerial vehicles, etc. Captured data help researchers develop solutions to sense and detect various characteristics such as forest fires, flooding, changes in urban areas, crop diseases, soil moisture, etc. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches and led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing

    Design of Attention Mechanisms for Robust and Efficient Vehicle Re-Identification from Images and Videos

    Get PDF
    In this work we explore the problem of Vehicle Re-identification using images and videos with applications in smart transportation systems. One of the sectors that can greatly benefit from the value of captured data from sensors on the road is transportation. Data-driven algorithms enable transportation systems to realise intelligent applications to improve operations, safety and experience of road users. Vehicle Re-identification refers to the task of retrieving images of a particular vehicle identity in a large gallery set, composed of images taken from different times, locations, in diverse orientations and within a network of traffic cameras. This task is extremely challenging as not only vehicles with different identities can be of the same make, model and color, but also a given vehicle can appear differently depending on the view-point, occlusion and lighting conditions, making it challenging to either distinguish or associate vehicle instances. To tackle these problems, in this dissertation, we develop a series of attention mechanisms to account for local discriminative regions and generate more robust visual representations of vehicles. In our first work, we propose the Adaptive Attention Vehicle Re-identification (AAVER) model that is equipped with an attention mechanism learned in a supervised manner to locate local regions in the form of key-points of vehicles and extract discriminative features along two parallel paths. The model combines the embeddings of two paths and outputs a single visual representation of the input image. While AAVER highlights how attention can benefit the discriminative capability of a re-identification system by identifying identity-dependant cues such as key-points or vehicle parts, we note that this requires access to abundant additional annotations that are expensive to collect and more often than not are accompanied by noise. 
In an effort to re-design the vehicle re-identification pipeline without the need for such expensive annotations, we propose Self-supervised Vehicle Re-identification (SAVER) model to automatically highlight salient regions in a vehicle image and mine discriminative representations. SAVER generates robust embeddings; however, it requires a forward pass through a computationally expensive network to generate points of attention at inference stage which imposes a bottleneck and limits its potential adoption in real-time and large-scale applications. Therefore, in our next work, we formulated a training strategy inspired by the notion of curriculum learning and designed the Excited Vehicle Re-identification (EVER) model that benefits from a semi-supervised attention mechanism and only relies on the attention generated by SAVER in the course of training. Recent advancements in the area of self-supervised representation learning have been able to close the performance gap between self-supervised and fully-supervised methods in a spectacular manner. This motivated us to explore these findings in the context of vehicle re-identification and come up with a design that can preserve the lightweightness of EVER while matching or beating the performance of SAVER. Based on this, in our followup work, we proposed the Self-supervised Boosted Vehicle Re-identification model (SSBVER) that is trained in a hybrid manner and learns an implicit attention mechanism Finally, we propose a real-time and city-scale multi-camera vehicle tracking system that detects, tracks and re-identifies vehicles across traffic cameras on a large scale. The proposed system, has been integrated into the Regional Integrated Transportation Information System (RITIS) platform which is a data-driven platform from the University of Maryland for transportation analysis, monitoring, and data visualization

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    Get PDF
    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning and their widespread applications

    Object Tracking

    Get PDF
    Object tracking consists in estimation of trajectory of moving objects in the sequence of images. Automation of the computer object tracking is a difficult task. Dynamics of multiple parameters changes representing features and motion of the objects, and temporary partial or full occlusion of the tracked objects have to be considered. This monograph presents the development of object tracking algorithms, methods and systems. Both, state of the art of object tracking methods and also the new trends in research are described in this book. Fourteen chapters are split into two sections. Section 1 presents new theoretical ideas whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph it constitutes a consisted knowledge in the field of computer object tracking. The intention of editor was to follow up the very quick progress in the developing of methods as well as extension of the application

    Representation learning for street-view and aerial image retrieval

    Get PDF

    Convergence of Intelligent Data Acquisition and Advanced Computing Systems

    Get PDF
    This book is a collection of published articles from the Sensors Special Issue on "Convergence of Intelligent Data Acquisition and Advanced Computing Systems". It includes extended versions of the conference contributions from the 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS’2019), Metz, France, as well as external contributions

    Robust density modelling using the student's t-distribution for human action recognition

    Full text link
    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model since it is highly sensitive to outliers. The Gaussian distribution is also often used as base component of graphical models for recognising human actions in the videos (hidden Markov model and others) and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show how experiments over two well-known datasets (Weizmann, MuHAVi) reported a remarkable improvement in classification accuracy. © 2011 IEEE
    • …
    corecore