
    Multi-Contrast Computed Tomography Atlas of Healthy Pancreas

    With the substantial diversity in population demographics, such as differences in age and body composition, the volumetric morphology of the pancreas varies greatly, resulting in distinctive variations in shape and appearance. Such variations increase the difficulty of generalizing population-wide pancreas features. A volumetric spatial reference is needed to accommodate this morphological variability for organ-specific analysis. Here, we propose a high-resolution computed tomography (CT) atlas framework specifically optimized for the pancreas across multi-contrast CT. We introduce a deep learning-based pre-processing technique to extract the abdominal regions of interest (ROIs) and leverage a hierarchical registration pipeline to align pancreas anatomy across populations. Briefly, DEEDS affine and non-rigid registration are performed to transfer patient abdominal volumes to a fixed high-resolution atlas template. To generate and evaluate the pancreas atlas template, multi-contrast CT scans of 443 subjects (without reported history of pancreatic disease, age: 15-50 years) are processed. Compared with state-of-the-art registration tools, the combination of DEEDS affine and non-rigid registration achieves the best performance for pancreas label transfer across all contrast phases. We further perform an external evaluation with another research cohort of 100 de-identified portal venous scans with 13 labeled organs, achieving the best label transfer performance of 0.504 Dice score in the unsupervised setting. The qualitative representation (e.g., average mapping) of each phase shows a clear pancreas boundary and its distinctive contrast appearance. The deformation surface renderings across scales (e.g., small to large volume) further illustrate the generalizability of the proposed atlas template.
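    As a rough illustration of the hierarchical alignment described above (affine initialization followed by non-rigid refinement onto a fixed template), the sketch below uses SimpleITK's built-in affine and B-spline registration rather than DEEDS, with hypothetical file names. It is an analogous, minimal example, not the authors' pipeline.

```python
import SimpleITK as sitk

# Hypothetical inputs: the fixed atlas template and one subject's abdominal CT
fixed = sitk.ReadImage("atlas_template.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("subject_ct.nii.gz", sitk.sitkFloat32)

# Stage 1: affine registration onto the template
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)
affine = reg.Execute(fixed, moving)

# Stage 2: non-rigid (B-spline) refinement on the affinely aligned volume
moving_affine = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)
bspline = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])
reg2 = sitk.ImageRegistrationMethod()
reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg2.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=50)
reg2.SetInterpolator(sitk.sitkLinear)
reg2.SetInitialTransform(bspline, inPlace=False)
deformable = reg2.Execute(fixed, moving_affine)
```

    The same pair of transforms can then be applied to organ label maps (nearest-neighbour interpolation) to measure label transfer quality, e.g. with a Dice score.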

    Preference relations based unsupervised rank aggregation for metasearch

    Rank aggregation mechanisms have been used to solve problems from various domains such as bioinformatics, natural language processing, and information retrieval. Metasearch is one such application, where a user gives a query to the metasearch engine, and the metasearch engine forwards the query to multiple individual search engines. The results or rankings returned by these individual search engines are combined using rank aggregation algorithms to produce the final result displayed to the user. We identify a few aspects that should be kept in mind when designing any rank aggregation algorithm for metasearch. For example, equal importance is generally given to the input rankings while performing the aggregation. However, depending on the indexed set of web pages, the features considered for ranking, the ranking functions used, etc., the individual rankings may be of different qualities. The aggregation algorithm should therefore give more weight to the better rankings and less weight to the others. Also, since the aggregation is performed while the user is waiting for a response, the operations performed in the algorithm need to be lightweight. Moreover, obtaining supervised data for the rank aggregation problem is often difficult. In this paper, we present an unsupervised rank aggregation algorithm that is suitable for metasearch and addresses the aspects mentioned above. We also perform a detailed experimental evaluation of the proposed algorithm on four different benchmark datasets with ground truth information. Apart from the unsupervised Kendall tau distance measure, several supervised evaluation measures are used for performance comparison. Experimental results demonstrate the efficacy of the proposed algorithm over baseline methods in terms of supervised evaluation metrics. Through these experiments we also show that the Kendall tau distance metric may not be suitable for evaluating rank aggregation algorithms for metasearch.
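    To make the quality-weighting idea concrete, here is a minimal sketch of an unsupervised, quality-weighted positional aggregation (a weighted Borda count). It only illustrates giving better rankings more influence; it is not the preference-relations algorithm proposed in the paper, and the example rankings and weights are made up.

```python
from collections import defaultdict

def weighted_borda(rankings, weights):
    """Aggregate several rankings (lists of doc ids, best first) into one.

    rankings: ranked lists returned by individual search engines
    weights:  one non-negative weight per ranking (higher = more trusted)
    """
    scores = defaultdict(float)
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for pos, doc in enumerate(ranking):
            scores[doc] += w * (n - pos)  # Borda points, scaled by ranking quality
    return sorted(scores, key=scores.get, reverse=True)

# Example: three engines, the second deemed more reliable
rankings = [["a", "b", "c"], ["b", "a", "d"], ["c", "a", "b"]]
print(weighted_borda(rankings, weights=[1.0, 2.0, 1.0]))
```

    The aggregation is a single linear pass over the input lists, which keeps the per-query cost light enough for an online metasearch setting.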

    Promises and Pitfalls of a New Early Warning System for Gentrification in Buffalo, NY

    Gentrification and its resultant displacement are among the many "wicked problems" of social policy. The study of gentrification and displacement spans half a century, concerns a variety of spatial, temporal, and social contexts, and describes socio-political processes across the globe and throughout history. One current iteration of this field of inquiry is the effort to identify "early indicators" of gentrification and/or displacement, or to create "early warning systems" (EWS). The current work adds to scholarship on the utility of developing an EWS by examining the methodological considerations required for such systems to serve a justice-oriented preventative role.

    Deep Learning Architectures for Heterogeneous Face Recognition

    Face recognition has been one of the most challenging areas of research in biometrics and computer vision. Many face recognition algorithms are designed to address illumination and pose problems for visible face images. In recent years, there has been a significant amount of research in Heterogeneous Face Recognition (HFR). The large modality gap between faces captured in different spectra, as well as the lack of training data, makes HFR quite a challenging problem. In this work, we present different deep learning frameworks to address the problem of matching non-visible face photos against a gallery of visible faces. Algorithms for thermal-to-visible face recognition can be categorized as cross-spectrum feature-based methods or cross-spectrum image synthesis methods. In cross-spectrum feature-based face recognition, a thermal probe is matched against a gallery of visible faces, corresponding to the real-world scenario, in a feature subspace. The second category synthesizes a visible-like image from a thermal image, which can then be used by any commercial visible-spectrum face recognition system. These methods are also beneficial in the sense that the synthesized visible face image can be directly utilized by existing face recognition systems that operate only on visible face imagery. Therefore, this approach allows one to leverage existing commercial-off-the-shelf (COTS) and government-off-the-shelf (GOTS) solutions. In addition, the synthesized images can be used by human examiners for different purposes. Some informative traits, such as age, gender, ethnicity, race, and hair color, are not distinctive enough for recognition on their own but can still act as complementary information to primary information such as face and fingerprint. These traits, known as soft biometrics, can improve recognition algorithms while being much cheaper and faster to acquire, and can even be used directly in a unimodal system for some applications. Usually, soft biometric traits have been utilized jointly with hard biometrics (face photos), in the sense that they are assumed to be available during both the training and testing phases. In our approaches we look at this problem differently: we consider the case in which soft biometric information does not exist during the testing phase, and our method predicts it directly in a multi-tasking paradigm. There are also situations in which training data come equipped with additional information that can be modeled as an auxiliary view of the data but that, unfortunately, is not available during testing. This is the learning using privileged information (LUPI) scenario. We introduce a novel framework based on deep learning techniques that leverages the auxiliary view to improve the performance of the recognition system. We do so by introducing a formulation that is general, in the sense that it can be used with any visual classifier. Every use of auxiliary information has been validated extensively using publicly available benchmark datasets, and several new state-of-the-art accuracy results have been set. Examples of application domains include visual object recognition from RGB images and from depth data, handwritten digit recognition, and gesture recognition from video. We also design a novel aggregation framework which optimizes landmark locations directly using only one image, without requiring any extra prior, leading to robust alignment under arbitrary face deformations.
Three different approaches are employed to generate the manipulated faces, and two of them perform the manipulation via adversarial attacks to fool a face recognizer. This step can be decoupled from our framework and potentially used to enhance other landmark detectors. Aggregation of the manipulated faces in the different branches of the proposed method leads to robust landmark detection. Finally, we focus on generative adversarial networks, which are a very powerful tool for synthesizing visible-like images from non-visible images. The main goal of a generative model is to approximate the true data distribution, which is not known. In general, the choice of how to model the density function is challenging. Explicit models have the advantage of explicitly calculating the probability densities. There are two well-known implicit approaches, namely the Generative Adversarial Network (GAN) and the Variational AutoEncoder (VAE), which try to model the data distribution implicitly. VAEs try to maximize a lower bound on the data likelihood, while a GAN performs a minimax game between two players during its optimization. GANs overlook explicit data density characteristics, which leads to undesirable quantitative evaluations and mode collapse, causing the generator to produce similar-looking images with poor sample diversity. In the last chapter of the thesis, we focus on addressing this issue in the GAN framework.
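    The minimax game mentioned above can be made concrete with a toy example. The sketch below trains a tiny GAN on a synthetic 1-D distribution using the standard non-saturating losses; it is only meant to illustrate the two-player optimization, not the thesis's thermal-to-visible synthesis networks, and all sizes and hyperparameters are arbitrary.

```python
import torch
import torch.nn as nn

# Toy 1-D data distribution the generator should approximate
real_sampler = lambda n: torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: push real samples toward 1, generated samples toward 0
    real = real_sampler(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator (non-saturating loss)
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

    Because the generator is trained only through the discriminator's judgement, nothing in this objective directly rewards covering all modes of the data, which is one intuition behind the mode-collapse problem discussed above.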

    Scalable deep feature learning for person re-identification

    Person re-identification (Person Re-ID) is one of the fundamental and critical tasks of video surveillance systems. Given a probe image of a person obtained from one Closed-Circuit Television (CCTV) camera, the objective of Person Re-ID is to identify the same person in a large gallery set of images captured by other cameras within the surveillance system. By successfully associating all the pedestrians, we can quickly search, track and even plot a movement trajectory of any person of interest within a CCTV system. Currently, most search and re-identification jobs are still processed manually by police or security officers. It is desirable to automate this process in order to reduce the enormous amount of human labour involved and to increase pedestrian tracking and retrieval speed. However, Person Re-ID is a challenging problem because of the many uncontrolled properties of a multi-camera surveillance system: cluttered backgrounds, large illumination variations, different human poses and different camera viewing angles. The main goal of this thesis is to develop deep learning based person re-identification models for real-world deployment in surveillance systems. The thesis focuses on learning and extracting robust feature representations of pedestrians. We first propose two supervised deep neural network architectures. An end-to-end Siamese network is developed for real-time person matching; it focuses on extracting the correspondence features between two images. For an offline person retrieval application, we follow the commonly used two-stage pipeline of feature extraction followed by distance metric comparison and propose a strong feature embedding extraction network. In addition, we survey many valuable training techniques proposed recently in the literature and integrate them with our newly proposed NP-Triplet loss to construct a strong Person Re-ID feature extraction model. However, during the deployment of the online matching and offline retrieval system, we observe a poor scalability issue in most supervised models: a model trained on labelled images from one system cannot perform well on other, unseen systems. Aiming to make Person Re-ID models more scalable across different surveillance systems, the third work of this thesis presents a cross-dataset feature transfer method (MMFA). MMFA can train and transfer the model learned from one system to another simultaneously. Our goal of creating a more scalable and robust person re-identification system did not stop here. In the last work of this thesis, we address the limitations of the MMFA structure and propose a multi-dataset feature generalisation approach (MMFA-AAE), which aims to learn a universal feature representation from multiple labelled datasets. To facilitate research towards Person Re-ID applications in more realistic scenarios, a new dataset, ROSE-IDENTITY-Outdoor (RE-ID-Outdoor), has been collected and annotated with the largest number of cameras and 40 mid-level attributes.
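    The feature-embedding-plus-distance-metric pipeline described above is commonly trained with a triplet objective. The sketch below uses a generic ResNet-50 embedding with PyTorch's standard TripletMarginLoss as a stand-in; the thesis's NP-Triplet loss and accompanying training techniques are not reproduced here, and the backbone and margin are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Embedding backbone: ResNet-50 with the classifier removed (a common Re-ID baseline)
backbone = models.resnet50(weights=None)
backbone.fc = nn.Identity()  # now outputs 2048-d features

triplet = nn.TripletMarginLoss(margin=0.3)
optimizer = torch.optim.Adam(backbone.parameters(), lr=3e-4)

def train_step(anchor, positive, negative):
    """anchor/positive: same identity seen by different cameras; negative: a different identity."""
    fa = nn.functional.normalize(backbone(anchor), dim=1)
    fp = nn.functional.normalize(backbone(positive), dim=1)
    fn = nn.functional.normalize(backbone(negative), dim=1)
    loss = triplet(fa, fp, fn)  # pull same-identity features together, push others apart
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

    At retrieval time the same backbone embeds the probe and every gallery image, and ranking reduces to a nearest-neighbour search in the embedding space.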

    Real-time self-adaptive deep stereo

    Deep convolutional neural networks trained end-to-end are the state-of-the-art methods for regressing dense disparity maps from stereo pairs. These models, however, suffer from a notable decrease in accuracy when exposed to scenarios significantly different from the training set (e.g., real vs. synthetic images). We argue that it is extremely unlikely to gather enough samples to achieve effective training/tuning in any target domain, thus making this setup impractical for many applications. Instead, we propose to perform unsupervised and continuous online adaptation of a deep stereo network, which allows for preserving its accuracy in any environment. However, this strategy is extremely computationally demanding and thus prevents real-time inference. We address this issue by introducing a new lightweight, yet effective, deep stereo architecture, Modularly ADaptive Network (MADNet), and by developing a Modular ADaptation (MAD) algorithm, which independently trains sub-portions of the network. By deploying MADNet together with MAD we introduce the first real-time self-adaptive deep stereo system, enabling competitive performance on heterogeneous datasets. Comment: Accepted at CVPR 2019 as oral presentation. Code available at https://github.com/CVLAB-Unibo/Real-time-self-adaptive-deep-stere
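    A rough sketch of modular online adaptation follows: for each incoming stereo pair, only one sub-portion of the network is unfrozen and updated with an unsupervised photometric loss. This is a simplified approximation (round-robin module selection instead of MAD's reward-based sampling, and `warp_with_disparity` is an assumed helper), not the authors' implementation.

```python
import torch.nn.functional as F

def adapt_one_module(model, modules, left, right, optimizer, step):
    """One online adaptation step that updates a single sub-network per frame.

    model:   stereo network returning a disparity map for (left, right)
    modules: list of sub-networks (e.g., resolution-specific blocks of the model)
    step:    frame counter; picks which module to update (round-robin here)
    """
    # Freeze everything, then unfreeze only the selected module
    for p in model.parameters():
        p.requires_grad_(False)
    for p in modules[step % len(modules)].parameters():
        p.requires_grad_(True)

    disparity = model(left, right)

    # Unsupervised photometric proxy loss: warp the right image with the predicted
    # disparity and compare it to the left image (simplified; no occlusion handling)
    warped = warp_with_disparity(right, disparity)  # assumed helper, not shown
    loss = F.l1_loss(warped, left)

    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return disparity.detach(), loss.item()
```

    Updating only one sub-portion per frame keeps the backward pass short, which is what makes adaptation compatible with real-time inference.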

    Determining Visual Motion in the Deep Learning Era

    Determining visual motion, or optical flow, is a fundamental problem in computer vision and has stimulated continuous research interest over the past few decades. Beyond pure academic pursuit, progress in optical flow research also has applications in many fields, including video processing, graphics, robotics and medical applications. Traditionally, optical flow estimation has been formulated as solving an optimisation problem, often by minimising an energy function. The energy function is designed based on the brightness constancy assumption, which often fails in real-world scenarios due to lighting changes, shadows and occlusions, causing traditional algorithms to fail. Another weakness of traditional optimisation approaches is slow runtime: iterative methods are often employed when solving for the optical flow, which can take from a few seconds to a minute and becomes problematic in real-world applications. The recent surge of deep learning techniques has enabled optical flow estimation to be formulated as a learning problem. Recent papers have shown significant performance improvements over traditional approaches as well as significantly faster runtimes. Despite this progress in learning approaches for optical flow, there remain challenging cases where current approaches fail, such as occlusions, featureless regions (the aperture problem), and large motions of small objects. Current methods are also limited by their large consumption of GPU memory. An intermediate representation named the cost volume is often employed, which scales quadratically with the number of pixels. This 4D representation acts as a memory bottleneck for modern optical flow approaches and prevents scaling up to high-resolution images. In this PhD thesis, we show that long-range modelling and sparse representations are important cornerstones for modern optical flow estimation. We first show that regularising flow prediction with an estimated essential matrix can improve flow prediction performance in mostly rigid scenes, particularly in challenging cases such as featureless regions and motion blur. We then demonstrate that a sparse cost volume can be just as effective as a dense cost volume, with significantly less memory consumption. This brings hope for future optical flow research where image resolutions are further increased. Finally, we show that incorporating a self-attention module to globally aggregate motion features helps improve state-of-the-art flow prediction. Modelling long-range connections is particularly helpful for dealing with occlusions.
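    To see why the cost volume is a memory bottleneck, the sketch below builds a dense all-pairs correlation volume between two feature maps; its size grows with (H*W)^2, which is what motivates the sparse alternative discussed above. This is a generic illustration of the representation, not the thesis's specific construction.

```python
import torch

def all_pairs_cost_volume(f1, f2):
    """Dense 4-D correlation volume between two feature maps.

    f1, f2: (B, C, H, W) feature tensors from the two frames.
    Returns (B, H, W, H, W): one matching score for every pixel pair, which is
    why memory grows quadratically with the number of pixels H*W.
    """
    B, C, H, W = f1.shape
    f1 = f1.view(B, C, H * W)                    # (B, C, N)
    f2 = f2.view(B, C, H * W)                    # (B, C, N)
    corr = torch.einsum('bcn,bcm->bnm', f1, f2)  # (B, N, N) dot products
    return corr.view(B, H, W, H, W) / C ** 0.5

# A 640x480 pair already implies (640*480)^2 ≈ 9.4e10 entries at full resolution,
# which is why flow networks build this volume at reduced scale or sparsify it.
```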