Towards Learning Representations in Visual Computing Tasks
abstract: The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. An insightful feature representation increases the performance of many learning algorithms by exposing the underlying explanatory factors of the output for unobserved input. A good representation should also handle anomalies in the data, such as missing samples and noisy input caused by undesired external factors of variation, and it should reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos.
The feature extraction processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task. Hand-engineering features requires domain expertise and manual labor; however, the resulting feature extraction process is interpretable and explainable. The next group contains latent-feature extraction processes. While the original features lie in a high-dimensional space, the factors relevant to a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose underlying data properties that cannot be directly measured from the input, and it imposes a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features. These are obtained by passing raw input data, with minimal pre-processing, through a deep network whose parameters are computed by iteratively minimizing a task-based loss.
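As a rough illustration of the latent-feature idea described above (not taken from the dissertation itself), a low-rank latent representation can be recovered with a truncated SVD: data that truly lies near a k-dimensional subspace is summarized by k latent features per sample, and the rank-k reconstruction discards most of the noise. All names and sizes here are illustrative.

```python
import numpy as np

# Toy sketch of latent-feature extraction via a low-rank (truncated SVD)
# structure. The data is generated to lie on a k-dimensional manifold
# embedded in a higher-dimensional feature space, plus small noise.
rng = np.random.default_rng(0)
n_samples, n_features, k = 200, 50, 5

latent = rng.normal(size=(n_samples, k))       # true hidden variables
mixing = rng.normal(size=(k, n_features))      # embedding into feature space
X = latent @ mixing + 0.01 * rng.normal(size=(n_samples, n_features))

# Truncated SVD exposes the k latent factors that explain the data.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :k] * S[:k]          # latent features, shape (n_samples, k)
X_hat = Z @ Vt[:k]            # rank-k reconstruction of the raw data

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(Z.shape)                # (200, 5): a much smaller representation
print(rel_err)                # small, since only the noise is discarded
```

Sparsity-seeking methods (e.g. sparse coding) follow the same pattern with a different structural penalty in the optimization.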
In this dissertation, I present four pieces of work in which I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks a pair of images by their aesthetic quality. The goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For these last two tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations, augmented with an appropriate learning approach, can increase performance on most visual computing tasks.
Doctoral Dissertation, Computer Science, 201
Cross Pixel Optical Flow Similarity for Self-Supervised Learning
We propose a novel method for learning convolutional neural image
representations without manual supervision. We use motion cues, in the form of
optical flow, to supervise representations of static images. The obvious
approach of training a network to predict flow from a single image can be
needlessly difficult due to intrinsic ambiguities in this prediction task. We
instead propose a much simpler learning goal: embed pixels such that the
similarity between their embeddings matches that between their optical flow
vectors. At test time, the learned deep network can be used without access to
video or flow information and transferred to tasks such as image
classification, detection, and segmentation. Our method, which significantly
simplifies previous attempts at using motion for self-supervision, achieves
state-of-the-art results in self-supervision using motion cues, competitive
results for self-supervision in general, and is overall state of the art in
self-supervised pretraining for semantic image segmentation, as demonstrated on
standard benchmarks.
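The learning goal stated above can be sketched numerically: make the pairwise similarities between pixel embeddings match the pairwise similarities between the pixels' optical-flow vectors. The numpy sketch below uses cosine similarity and a cross-entropy between row-normalized similarity matrices; the paper's exact kernel, temperature, and loss choices may differ, and all names here are illustrative.

```python
import numpy as np

def pairwise_cosine(v):
    """Cosine similarity between all pairs of row vectors."""
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-8)
    return v @ v.T

def cross_pixel_loss(embeddings, flows, temperature=0.1):
    """Cross-entropy between row-softmaxed similarity matrices.

    Minimized when embedding similarities reproduce flow similarities.
    """
    def row_softmax(s):
        s = s / temperature
        s = s - s.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(s)
        return e / e.sum(axis=1, keepdims=True)

    p_flow = row_softmax(pairwise_cosine(flows))       # target distribution
    p_emb = row_softmax(pairwise_cosine(embeddings))   # model distribution
    return -(p_flow * np.log(p_emb + 1e-8)).sum(axis=1).mean()

rng = np.random.default_rng(0)
flows = rng.normal(size=(16, 2))        # one 2-D flow vector per pixel
matched = np.hstack([flows, flows])     # embeddings preserving flow cosines
rand_emb = rng.normal(size=(16, 4))     # unrelated embeddings
print(cross_pixel_loss(matched, flows) < cross_pixel_loss(rand_emb, flows))
```

In training, `embeddings` would come from the convolutional network being learned, and the loss gradient would shape them to mirror the flow-similarity structure; at test time the flow is no longer needed.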
CNN Injected Transformer for Image Exposure Correction
Capturing images with incorrect exposure settings fails to deliver a
satisfactory visual experience. Only when the exposure is properly set can the
color and details of an image be appropriately preserved. Previous exposure
correction methods based on convolutions often produce exposure deviation in
images as a consequence of the restricted receptive field of convolutional
kernels. This issue arises because convolutions cannot accurately capture
long-range dependencies in images. To overcome this challenge, we apply a
Transformer to the exposure correction problem,
leveraging its capability in modeling long-range dependencies to capture global
representation. However, solely relying on the window-based Transformer leads
to visually disturbing blocking artifacts due to the application of
self-attention in small patches. In this paper, we propose a CNN Injected
Transformer (CIT) to harness the individual strengths of CNN and Transformer
simultaneously. Specifically, we construct the CIT by utilizing a window-based
Transformer to exploit the long-range interactions among different regions in
the entire image. Within each CIT block, we incorporate a channel attention
block (CAB) and a half-instance normalization block (HINB) to assist the
window-based self-attention to acquire the global statistics and refine local
features. In addition to the hybrid architecture design for exposure
correction, we apply a set of carefully formulated loss functions to improve
the spatial coherence and rectify potential color deviations. Extensive
experiments demonstrate that our image exposure correction method outperforms
state-of-the-art approaches in terms of both quantitative and qualitative
metrics.
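The half-instance normalization idea mentioned for the HINB can be sketched in a few lines: instance-normalize half of the channels (per-channel statistics over the spatial dimensions) and pass the other half through untouched, then concatenate, so that normalized global statistics and raw local features coexist. This numpy sketch is an assumption about the block's core step; the CIT's surrounding convolutions, attention, and learned affine parameters are omitted.

```python
import numpy as np

def half_instance_norm(x, eps=1e-5):
    """Normalize the first half of the channels of x: (channels, H, W).

    Illustrative sketch of a half-instance normalization step, not the
    paper's exact HINB (which would include convolutions and learned
    scale/shift parameters).
    """
    c = x.shape[0] // 2
    first, second = x[:c], x[c:]
    # Instance norm: per-channel mean/variance over spatial dimensions.
    mean = first.mean(axis=(1, 2), keepdims=True)
    var = first.var(axis=(1, 2), keepdims=True)
    normed = (first - mean) / np.sqrt(var + eps)
    return np.concatenate([normed, second], axis=0)

rng = np.random.default_rng(0)
x = 3.0 + 2.0 * rng.normal(size=(8, 16, 16))   # a shifted, scaled feature map
y = half_instance_norm(x)
# First half is normalized to ~zero mean; second half passes through intact.
print(np.abs(y[:4].mean(axis=(1, 2))).max())   # near zero
print(np.allclose(y[4:], x[4:]))               # True
```

Keeping half the channels un-normalized is what lets the block refine local features while the normalized half supplies stabilized global statistics.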