
    Unsupervised and supervised approaches to color space transformation for image coding

    The linear transformation of input (typically RGB) data into a color space is important in image compression. Most schemes adopt fixed transforms to decorrelate the color channels. Energy-compaction transforms such as the Karhunen-Loève transform (KLT), however, entail a complexity increase. Here, we propose a new data-dependent transform (aKLT) that achieves compression performance comparable to the KLT at a fraction of the computational complexity. More importantly, we also consider an application-aware setting, in which a classifier analyzes reconstructed images at the receiver's end. In this context, KLT-based approaches may not be optimal, and transforms that maximize post-compression classifier performance are better suited. Relaxing energy-compactness constraints, we propose for the first time a transform that can be found offline by optimizing the Fisher discrimination criterion in a supervised fashion. In lieu of channel decorrelation, we obtain spatial decorrelation by using the same color transform as a rudimentary classifier that detects objects of interest in the input image at no additional computational cost. When combined with region-of-interest-capable encoders such as JPEG 2000, encoding these regions at higher quality yields greater savings.
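
    As a point of reference for the energy-compaction baseline the abstract mentions, the sketch below estimates a data-dependent color transform by applying the KLT (i.e. PCA) to an image's RGB pixels. It is not the paper's aKLT or its Fisher-based supervised transform; the function name and array layout are assumptions for illustration.

    # Minimal sketch (not the paper's aKLT): a data-dependent color transform
    # obtained via the Karhunen-Loeve transform over the RGB pixels of one image.
    import numpy as np

    def klt_color_transform(rgb_image):
        """rgb_image: H x W x 3 array. Returns decorrelated channels, basis, mean."""
        pixels = rgb_image.reshape(-1, 3).astype(np.float64)   # N x 3 pixel matrix
        mean = pixels.mean(axis=0)
        centered = pixels - mean
        cov = centered.T @ centered / len(centered)             # 3 x 3 channel covariance
        eigvals, eigvecs = np.linalg.eigh(cov)                  # eigenvalues in ascending order
        basis = eigvecs[:, ::-1]                                # reorder by decreasing energy
        transformed = centered @ basis                          # decorrelated color channels
        return transformed.reshape(rgb_image.shape), basis, mean

    # Usage: channels, basis, mean = klt_color_transform(img)
    # The first transformed channel compacts most of the signal energy, which
    # fixed transforms (e.g. RGB -> YCbCr) only approximate.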

    Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective

    This paper takes a problem-oriented perspective and presents a comprehensive review of transfer learning methods, both shallow and deep, for cross-dataset visual recognition. Specifically, it categorises cross-dataset recognition into seventeen problems based on a set of carefully chosen data and label attributes. Such a problem-oriented taxonomy has allowed us to examine how different transfer learning approaches tackle each problem and how well each problem has been researched to date. This comprehensive problem-oriented review of advances in transfer learning has revealed not only the challenges in transfer learning for visual recognition, but also the problems (eight of the seventeen) that have scarcely been studied. The survey thus presents an up-to-date technical review for researchers, as well as a systematic approach and a reference for machine learning practitioners to categorise a real problem and look up a possible solution accordingly.
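
    To make the idea of a problem-oriented taxonomy concrete, the toy sketch below maps a few data and label attributes to familiar problem families. The attributes and the mapping are hypothetical illustrations, not the survey's actual seventeen categories.

    # Toy sketch (hypothetical attributes, not the survey's taxonomy):
    # describe a cross-dataset recognition problem by data/label attributes
    # and look up a matching problem family.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ProblemAttributes:
        same_feature_space: bool    # do source and target share a feature space?
        same_label_space: bool      # do source and target share a label set?
        target_labels: str          # "none", "few", or "full"

    # Illustrative mapping from attribute combinations to problem families.
    TAXONOMY = {
        ProblemAttributes(True, True, "none"): "unsupervised domain adaptation",
        ProblemAttributes(True, True, "few"): "semi-supervised domain adaptation",
        ProblemAttributes(True, False, "none"): "zero-shot learning",
        ProblemAttributes(False, True, "few"): "heterogeneous transfer learning",
    }

    def categorise(attrs: ProblemAttributes) -> str:
        return TAXONOMY.get(attrs, "unlisted / scarcely studied setting")

    print(categorise(ProblemAttributes(True, True, "none")))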

    Task-Driven Dictionary Learning

    Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
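
    A minimal sketch of the sparse-coding step that underlies this line of work: given a fixed dictionary D, it solves the l1-regularised least-squares problem with ISTA. The paper's contribution is the task-driven learning of D itself, which is not reproduced here; the dictionary, signal, and parameters below are toy assumptions.

    # Minimal sketch (not the paper's algorithm): sparse coding against a fixed
    # dictionary D via ISTA, i.e. solving  min_a 0.5*||x - D a||^2 + lam*||a||_1
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista_sparse_code(x, D, lam=0.1, n_iter=200):
        """x: (m,) signal, D: (m, k) dictionary with unit-norm columns."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            a = soft_threshold(a - grad / L, lam / L)
        return a

    # Toy usage: random unit-norm dictionary and a signal built from two atoms.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)
    x = 1.5 * D[:, 3] - 0.8 * D[:, 40]
    a = ista_sparse_code(x, D)
    print(np.argsort(np.abs(a))[-2:])          # two largest coefficients, expected at atoms 3 and 40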