Magnitude Sensitive Competitive Neural Networks
This thesis presents a family of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are competitive-learning algorithms that include a magnitude term as a modulation factor of the distance used in the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that the desired regions, defined by the magnitude, are represented in high detail. These networks have been compared with other vector quantization algorithms on several examples of interpolation, color reduction, surface modeling, classification, and various simple demonstration examples. In addition, a new image compression algorithm, MSIC (Magnitude Sensitive Image Compression), is introduced; it builds on the aforementioned algorithms and achieves image compression that varies according to a user-defined magnitude. The results show that the new MSCNN networks are more versatile than other competitive-learning algorithms and clearly outperform them in vector quantization when the data are weighted by a magnitude that indicates the "interest" of each sample
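A minimal sketch of the magnitude-modulated competition described above, in Python; the update rule, the per-unit magnitude estimate, and all names are illustrative assumptions rather than the thesis's exact formulation:

```python
import numpy as np

def msc_train(data, magnitude, n_units=8, epochs=20, lr=0.05, rng=None):
    """Sketch of magnitude-sensitive competitive learning.

    The winner is chosen by a distance modulated with each unit's running
    magnitude estimate, so units concentrate where the magnitude marks the
    data as interesting. All names and the exact modulation rule are
    illustrative assumptions, not the thesis's actual formulation.
    """
    rng = np.random.default_rng(rng)
    # initialize centroids from random samples
    centroids = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    unit_mag = np.full(n_units, 1.0 / n_units)  # running magnitude per unit
    for _ in range(epochs):
        for x, m in zip(data, magnitude):
            d2 = np.sum((centroids - x) ** 2, axis=1)
            winner = np.argmin(unit_mag * d2)          # magnitude-modulated competition
            centroids[winner] += lr * (x - centroids[winner])
            unit_mag[winner] += lr * (m - unit_mag[winner])
    return centroids
```

As in standard competitive learning, only the winning unit moves toward each sample; the modulation factor simply biases which unit wins.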
Steered mixture-of-experts for light field images and video: representation and coding
Research in light field (LF) processing has grown rapidly over the last decade, largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids, which are then further decorrelated through hybrid DPCM/transform techniques. However, these 2-D regular grids are poorly suited to high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about light rays arriving at any angle in a certain region. The global model thus consists of a set of kernels that define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application to 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art at low-to-mid bitrates with respect to subjective visual quality of 4-D LF images. For 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4x in bitrate at the same quality. At least equally important, our method inherently offers functionality for LF rendering that is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution
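As a toy 1-D analogue of the kernel representation described above, the following sketch gates constant experts with softmax-normalized Gaussian kernels, yielding a continuous function that can be sampled at arbitrary positions (the basis of the intrinsic view interpolation). All parameter names are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def smoe_reconstruct(x, centers, bandwidths, experts):
    """Evaluate a 1-D steered-mixture-of-experts style model.

    Each kernel i has a Gaussian gate around centers[i] and a constant
    expert value experts[i]; gates are normalized so the model is a
    continuous, convex blend of the experts at every position x.
    """
    x = np.asarray(x, dtype=float)[:, None]
    resp = np.exp(-0.5 * ((x - centers) / bandwidths) ** 2)  # gate responses
    gates = resp / resp.sum(axis=1, keepdims=True)           # soft partition of unity
    return gates @ experts                                   # gated blend of experts
```

Because the gates form convex weights, the output at any query position lies between the smallest and largest expert values, and the model can be evaluated at positions never observed, i.e. interpolated views.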
Earthquake Arrival Association with Backprojection and Graph Theory
The association of seismic wave arrivals with causative earthquakes becomes
progressively more challenging as arrival detection methods become more
sensitive, and particularly when earthquake rates are high. For instance,
seismic waves arriving across a monitoring network from several sources may
overlap in time, false arrivals may be detected, and some arrivals may be of
unknown phase (e.g., P- or S-waves). We propose an automated method,
applicable to such situations, that associates arrivals with earthquake
sources and obtains source locations. To do so we use a pattern detection metric based
on the principle of backprojection to reveal candidate sources, followed by
graph-theory-based clustering and an integer linear optimization routine to
associate arrivals with the minimum number of sources necessary to explain the
data. This method solves for all sources and phase assignments simultaneously,
rather than in a sequential greedy procedure as is common in other association
routines. We demonstrate our method on both synthetic and real data from the
Integrated Plate Boundary Observatory Chile (IPOC) seismic network of northern
Chile. For the synthetic tests we report results for cases with varying
complexity, including rates of 500 earthquakes/day and 500 false
arrivals/station/day, for which we measure true positive detection accuracy of
> 95%. For the real data we develop a new catalog spanning January 1, 2010 to
December 31, 2017 containing 817,548 earthquakes, with an average detection
rate of 279 earthquakes/day and a magnitude of completeness of ~M1.8. A subset
of detections are identified as sources related to quarry and industrial site
activity, and we also detect thousands of foreshocks and aftershocks of the
April 1, 2014 Mw 8.2 Iquique earthquake. During the highest rates of aftershock
activity, > 600 earthquakes/day are detected in the vicinity of the Iquique
earthquake rupture zone
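The association step's core idea, explaining all arrivals with the minimum number of sources, can be illustrated with a toy set-cover sketch. The paper solves this jointly as an integer linear program; the brute-force stand-in below is only workable at toy sizes, and all names are illustrative:

```python
from itertools import combinations

def min_sources(arrivals, candidates):
    """Pick the smallest set of candidate sources explaining all arrivals.

    candidates maps a source id to the set of arrival ids that source can
    explain. Enumerating source subsets in order of increasing size means
    the first covering subset found has minimum cardinality.
    """
    ids = list(candidates)
    for k in range(1, len(ids) + 1):
        for combo in combinations(ids, k):
            covered = set().union(*(candidates[s] for s in combo))
            if covered >= set(arrivals):
                return set(combo)   # first hit has minimum cardinality
    return None                     # no subset explains all arrivals
```

An ILP formulation replaces this enumeration with binary indicator variables per source and per (arrival, source) assignment, which is what makes the simultaneous solve tractable at hundreds of events per day.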
Scalar Quantization as Sparse Least Square Optimization
Quantization can be used to form new vectors/matrices with shared values
close to the original. In recent years, the popularity of scalar quantization
for value-sharing applications has soared, as it has proven highly useful in
reducing the complexity of neural networks. Existing
clustering-based quantization techniques, while being well-developed, have
several drawbacks, including dependence on the random seed, empty or
out-of-the-range clusters, and high time complexity for a large number of
clusters. To overcome these problems, in this paper, the problem of scalar
quantization is examined from a new perspective, namely sparse least square
optimization. Specifically, inspired by the property of sparse least square
regression, several quantization algorithms based on least square are
proposed. In addition, similar schemes with sparsity-inducing
regularization are proposed. Furthermore, to compute quantization results with
a given number of values/clusters, this paper designs an iterative method and
a clustering-based method, and both of them are built on sparse least square.
The paper shows that the latter method is mathematically equivalent to an
improved version of the k-means clustering-based quantization algorithm, although
the two algorithms originated from different intuitions. The algorithms
proposed were tested with three types of data and their computational
performances, including information loss, time consumption, and the
distribution of the values of the sparse vectors, were compared and analyzed.
The paper offers a new perspective to probe the area of quantization, and the
algorithms proposed can outperform existing methods, particularly in
bit-width-reduction scenarios where the required post-quantization resolution
(number of values) is not significantly lower than the original number
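For reference, the clustering-based baseline the paper compares against can be sketched as 1-D Lloyd/k-means scalar quantization. The quantile initialization here is an illustrative choice that sidesteps the empty-cluster issue mentioned above; it is not the paper's method:

```python
import numpy as np

def scalar_quantize(values, n_levels, iters=50):
    """Baseline clustering-based scalar quantization (1-D Lloyd/k-means).

    Returns the codebook and each value's codebook index. Each value is
    assigned to its nearest level, then each level moves to the mean of
    its assigned values; iterate until (approximate) convergence.
    """
    values = np.asarray(values, dtype=float)
    # quantile initialization spreads levels over the data range
    codebook = np.quantile(values, np.linspace(0, 1, n_levels))
    for _ in range(iters):
        idx = np.argmin(np.abs(values[:, None] - codebook), axis=1)
        for k in range(n_levels):
            if np.any(idx == k):            # update non-empty cells only
                codebook[k] = values[idx == k].mean()
    return codebook, idx
```

Replacing the original values with `codebook[idx]` gives the shared-value vector described at the start of the abstract.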
Sequential Optimization for Efficient High-Quality Object Proposal Generation
We are motivated by the need for a generic object proposal generation
algorithm which achieves good balance between object detection recall, proposal
localization quality and computational efficiency. We propose a novel object
proposal algorithm, BING++, which inherits the virtue of good computational
efficiency of BING but significantly improves its proposal localization
quality. At a high level, we formulate the problem of object proposal generation
from a novel probabilistic perspective, based on which our BING++ manages to
improve the localization quality by employing edges and segments to estimate
object boundaries and update the proposals sequentially. We propose learning
the parameters efficiently by searching for approximate solutions in a
quantized parameter space for complexity reduction. We demonstrate the
generalization of BING++ with the same fixed parameters across different object
classes and datasets. Empirically, BING++ runs at half the speed of BING on
CPU but significantly improves localization quality, by 18.5% and 16.7% on the
VOC2007 and Microsoft COCO datasets, respectively. Compared with other
state-of-the-art approaches, BING++ achieves comparable performance but runs
significantly faster. Comment: Accepted by TPAMI
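The localization quality reported above is measured with overlap-based criteria; for context, the standard intersection-over-union between two boxes can be computed as follows (plain illustrative code, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    # intersection rectangle (empty if boxes are disjoint)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A proposal is typically counted as localizing an object when its IoU with the ground-truth box exceeds a threshold such as 0.5.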