Multispectral texture synthesis
Synthesizing texture involves the ordering of pixels in a 2D arrangement so as to display certain known spatial correlations, generally as described by a sample texture. In an abstract sense, these pixels could be greyscale values, RGB color values, or entire spectral curves. The focus of this work is to develop a practical synthesis framework that maintains this abstract view while synthesizing texture with high spectral dimension, effectively achieving spectral invariance. The principal idea is to use a single monochrome texture synthesis step to capture the spatial information in a multispectral texture. The first step is to use a global color space transform to condense the spatial information in a sample texture into a principal luminance channel. Then, a monochrome texture synthesis step generates the corresponding principal band in the synthetic texture. This spatial information is then used to condition the generation of spectral information. A number of variants of this general approach are introduced. The first uses a multiresolution transform to decompose the spatial information in the principal band into an equivalent scale/space representation. This information is encapsulated into a set of low-order statistical constraints that are used to iteratively coerce white noise into the desired texture. The residual spectral information is then generated using a non-parametric Markov random field (MRF) model. The remaining variants use a non-parametric MRF to generate the spatial and spectral components simultaneously. In this approach, multispectral texture is grown from a seed region by sampling from the set of nearest neighbors in the sample texture as identified by a template matching procedure in the principal band. The effectiveness of both algorithms is demonstrated on a number of texture examples ranging from greyscale to RGB textures, as well as 16-, 22-, 32- and 63-band spectral images.
In addition to the standard visual test that predominates in the literature, effort is made to quantify the accuracy of the synthesis using informative and effective metrics. These include first- and second-order statistical comparisons as well as statistical divergence tests.
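The first stage of the framework, condensing a sample's bands into a single principal luminance channel, amounts to projecting each pixel's spectrum onto the leading principal component of the band covariance. A minimal numpy sketch of that step (the function name and toy data are illustrative, not taken from the thesis):

```python
import numpy as np

def principal_band(cube):
    """Project a (H, W, B) multispectral cube onto its leading
    principal component, yielding a single 'principal luminance'
    band carrying most of the inter-band variance."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    pixels -= pixels.mean(axis=0)           # centre each band
    cov = pixels.T @ pixels / (pixels.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                    # leading eigenvector
    return (pixels @ pc1).reshape(h, w)

# toy 4-band sample: every band is a scaled copy of one spatial pattern
rng = np.random.default_rng(0)
pattern = rng.standard_normal((8, 8))
cube = np.stack([k * pattern for k in (1.0, 0.5, 2.0, 1.5)], axis=-1)
band = principal_band(cube)
corr = np.corrcoef(band.ravel(), pattern.ravel())[0, 1]
print(abs(corr))  # ~1.0: the principal band recovers the shared pattern
```

On real data the leading component would not capture the spatial structure perfectly, which is why the thesis must also synthesize the residual spectral information.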
Statistical methods for topology inference, denoising, and bootstrapping in networks
Quite often, the data we observe can be effectively represented using graphs. The underlying structure of the resulting graph, however, might contain noise and does not always hold constant across scales. With the right tools, we can address both problems. This thesis focuses on developing such tools and provides insights into their use. Specifically, I study several problems that place network data within a multi-scale framework, aiming to identify common patterns and differences in signals over networks across different scales. Additional topics in network denoising and network bootstrapping will also be discussed.
The first problem we consider concerns connectivity changes in dynamic networks constructed from multiple time series. Multivariate time series data is often non-stationary, and it is not uncommon to expect changes in a system across multiple time scales. Motivated by these observations, we incorporate the traditional Granger-causal type of modeling within the multi-scale framework and propose a new method to detect connectivity changes and recover the dynamic network structure.
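The core of a Granger-causal check can be conveyed at a single lag and a single scale: the past of x helps "Granger-cause" y if adding lagged x to an autoregressive model for y reduces the residual variance. The sketch below is illustrative only; the thesis embeds comparisons of this kind within a multi-scale framework:

```python
import numpy as np

def granger_improvement(x, y, lag=1):
    """Ratio of residual variances for predicting y from its own past
    only, versus its own past plus the lagged x.  A ratio well above 1
    suggests x 'Granger-causes' y (one lag, one scale only)."""
    Y = y[lag:]
    own = np.column_stack([np.ones(len(Y)), y[:-lag]])
    full = np.column_stack([own, x[:-lag]])
    r_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return r_own.var() / r_full.var()

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):            # y is driven by the previous x
    y[t] = 0.9 * x[t - 1] + 0.1 * rng.standard_normal()
ratio = granger_improvement(x, y)
print(ratio > 2.0)  # True: lagged x explains most of y's variance
```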
The second problem we consider is how to denoise and approximate signals over a network adjacency matrix. We propose an adaptive unbalanced Haar wavelet-based transformation of the network data, and show that it is efficient for approximating and denoising graph signals over a network adjacency matrix. We focus on exact decompositions of the network, the corresponding approximation theory, and the denoising of signals over graphs, particularly from the perspective of network compression. We also provide a real-data application: denoising EEG signals over a DTI network.
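The unbalanced Haar transform replaces the classical Haar wavelet's equal-length halves with unequally sized segments, so a single coefficient can capture the contrast between clusters of different sizes. A minimal sketch of one detail coefficient (the toy signal and split points are illustrative):

```python
import math

def unbalanced_haar_detail(signal, split):
    """Unbalanced Haar detail coefficient: the contrast between the
    means of two unequally sized segments, scaled so the underlying
    basis vector has unit L2 norm."""
    left, right = signal[:split], signal[split:]
    nl, nr = len(left), len(right)
    scale = math.sqrt(nl * nr / (nl + nr))
    return scale * (sum(left) / nl - sum(right) / nr)

# piecewise-constant signal: the coefficient peaks at the true breakpoint
sig = [5.0, 5.0, 5.0, 1.0]
d_at_break = unbalanced_haar_detail(sig, 3)   # sqrt(3/4) * (5 - 1) ≈ 3.46
d_off_break = unbalanced_haar_detail(sig, 2)  # sqrt(1) * (5 - 3) = 2.0
print(d_at_break, d_off_break)
```

The adaptive scheme in the thesis chooses the split data-dependently; here the breakpoint is simply where the detail coefficient is largest.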
The third problem we consider concerns network denoising and network inference. Network representations are popular for characterizing complex systems. However, errors in the original measurements propagate to network statistics and hence induce uncertainty in the summaries of the networks. We propose a spectral-denoising based resampling method to produce confidence intervals that propagate the inferential errors for a number of Lipschitz-continuous network statistics. We illustrate the effectiveness of the method through a series of simulation studies.
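Spectral denoising of a symmetric noisy adjacency matrix can be sketched as keeping only the leading eigen-components; resampling from the denoised matrix would then drive the confidence intervals. This toy example shows only the denoising step (the rank choice, noise level, and rank-1 signal are illustrative):

```python
import numpy as np

def spectral_denoise(A, rank):
    """Keep only the `rank` leading eigen-components (by magnitude)
    of a symmetric noisy adjacency matrix."""
    eigvals, eigvecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(eigvals))[::-1][:rank]
    return eigvecs[:, idx] @ np.diag(eigvals[idx]) @ eigvecs[:, idx].T

rng = np.random.default_rng(1)
n = 40
u = np.ones(n) / np.sqrt(n)
P = 5.0 * np.outer(u, u)              # true rank-1 'community' signal
E = rng.standard_normal((n, n)) * 0.3
A = P + (E + E.T) / 2                 # symmetric noisy observation
A_hat = spectral_denoise(A, rank=1)
improved = np.linalg.norm(A_hat - P) < np.linalg.norm(A - P)
print(improved)  # True: the rank-1 reconstruction is closer to the signal
```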
Topology control and data handling in wireless sensor networks
Our work in this thesis has provided two distinctive contributions to WSNs in the
areas of data handling and topology control.
In the area of data handling, we have demonstrated a solution to improve the
power efficiency whilst preserving the important data features through data compression
and the use of an adaptive sampling strategy, both applicable to the specific
oceanographic monitoring application required by the SECOAS project. Our work
on oceanographic data analysis is important for the understanding of the data we are
dealing with, such that suitable strategies can be deployed and system performance
can be analysed. The Basic Adaptive Sampling Scheduler (BASS) algorithm uses
the statistics of the data to adjust the sampling behaviour in a sensor node according
to the environment in order to conserve energy and minimise detection delay.
The motivation of topology control (TC) is threefold: to maintain the connectivity
of the network; to reduce node degree, easing congestion in a collision-based medium
access scheme; and to reduce power consumption in the sensor nodes. We have developed
an algorithm, Subgraph Topology Control (STC), that is distributed and does not
require additional equipment to be implemented on the SECOAS nodes. STC uses
a metric called the subgraph number, which measures the 2-hop connectivity in the
neighbourhood of a node. It is found that STC consistently forms topologies that
have lower node degrees and higher probabilities of connectivity than k-Neighbours,
an alternative algorithm that does not rely on special hardware on sensor
nodes. Moreover, STC also gives better results in terms of the minimum degree in the
network, which implies that the network structure is more robust to a single point
of failure. As STC is an iterative algorithm, it is very scalable and adaptive and is
well suited for the SECOAS applications
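As an illustration of the kind of 2-hop neighbourhood information the subgraph number summarises, the following sketch computes the set of nodes a sensor can reach in at most two hops from an adjacency list; the thesis's exact definition of the subgraph number may differ:

```python
def two_hop_neighbourhood(adj, v):
    """Nodes reachable from v in at most two hops, excluding v itself.
    An illustrative proxy for the 2-hop connectivity information the
    subgraph number summarises; the exact metric may differ."""
    one_hop = set(adj[v])
    two_hop = set()
    for u in one_hop:
        two_hop.update(adj[u])
    return (one_hop | two_hop) - {v}

# small sensor field as an adjacency list
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
nbrs = two_hop_neighbourhood(adj, 0)
print(nbrs == {1, 2, 3})  # True: node 4 is three hops away
```

Because each node needs only its neighbours' neighbour lists, a metric of this form can be maintained with purely local message exchange, consistent with STC being distributed.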
Adaptive value function approximation in reinforcement learning using wavelets
A thesis submitted to the Faculty of Science, School of Computational and Applied Mathematics, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. Johannesburg, South Africa, July 2015.

Reinforcement learning agents solve tasks by finding policies that maximise their reward
over time. The policy can be found from the value function, which represents the value
of each state-action pair. In continuous state spaces, the value function must be approximated.
Often, this is done using a fixed linear combination of functions across all
dimensions.
We introduce and demonstrate the wavelet basis for reinforcement learning, a basis
function scheme competitive with state-of-the-art fixed bases. We extend two online
adaptive tiling schemes to wavelet functions and show their performance improvement
across standard domains. Finally, we introduce the Multiscale Adaptive Wavelet Basis
(MAWB), a wavelet-based adaptive basis scheme which is dimensionally scalable and insensitive
to the initial level of detail. This scheme adaptively grows the basis function
set by combining across dimensions, or splitting within a dimension, those candidate
functions which have a high estimated projection onto the Bellman error. A number of novel
measures are used to find this estimate.
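A linear value function over a fixed wavelet basis can be sketched with the classical Haar family: the feature vector for a state stacks the scaling function and the wavelets up to some level, and V(x) is a weighted sum of those features. This is a minimal one-dimensional stand-in for the schemes above, not the MAWB construction itself:

```python
def haar_features(x, max_level):
    """Feature vector for state x in [0, 1): the Haar scaling function
    plus the wavelets psi_{j,k} for levels j = 0..max_level.  A minimal
    fixed one-dimensional basis, not the adaptive MAWB scheme."""
    def psi(t):                      # mother Haar wavelet
        if 0.0 <= t < 0.5:
            return 1.0
        if 0.5 <= t < 1.0:
            return -1.0
        return 0.0
    feats = [1.0]                    # scaling (father) function on [0, 1)
    for j in range(max_level + 1):
        for k in range(2 ** j):
            feats.append((2 ** (j / 2)) * psi((2 ** j) * x - k))
    return feats

def value(x, weights, max_level):
    """Linear value estimate V(x) = w . phi(x)."""
    return sum(w * f for w, f in zip(weights, haar_features(x, max_level)))

phi = haar_features(0.3, max_level=1)
print(len(phi))  # 4: one scaling function, one level-0 and two level-1 wavelets
```

Note how the feature count grows as 2^j per level in one dimension; an adaptive scheme that grows the basis only where needed is what keeps this tractable in higher dimensions.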
Super Resolution of Wavelet-Encoded Images and Videos
In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high resolution images by fusing a sequence of degraded or aliased low resolution images of the same scene. Since the low resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e., wavelet-domain) image registration; then, we investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend toward wavelet-encoded imaging and wavelet encoding for image/video compression. Due to the drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since wavelets are shift-variant, existing methods cannot utilize wavelet subbands efficiently. In order to overcome this drawback, we establish and explore the direct relationship between the subbands under a translational shift, for image registration and super resolution. We then employ our devised in-band methodology in a motion-compensated video compression framework, to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression in order to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, due to the fact that it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing the resolution of pansharpened multispectral images.
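The shift-variance that complicates in-band registration is easy to exhibit with a single Haar decomposition level: shifting a signal by one sample does not simply shift its subband coefficients. A small pure-Python illustration (unnormalised averaging/differencing filters, chosen for readability):

```python
def haar_level(signal):
    """One level of an (unnormalised) Haar DWT: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

x = [4, 0, 0, 0, 0, 0, 0, 0]   # impulse at an even sample
x_shift = [0] + x[:-1]         # the same impulse shifted by one sample
a0, d0 = haar_level(x)
a1, d1 = haar_level(x_shift)
print(d0)  # [2.0, 0.0, 0.0, 0.0]
print(d1)  # [-2.0, 0.0, 0.0, 0.0] -- not a shifted copy of d0
```

A one-sample shift flips the sign of the detail coefficient instead of translating it, which is why relating subbands under translational shifts requires the explicit in-band relationships developed in the dissertation rather than naive per-subband alignment.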