Enhanced Compressive Wideband Frequency Spectrum Sensing for Dynamic Spectrum Access
Wideband spectrum sensing detects unused spectrum holes for dynamic spectrum access (DSA). Its main obstacle is the excessively high sampling rate required. Compressive sensing (CS) can reconstruct a sparse signal, with high probability, from far fewer randomized samples than Nyquist sampling requires. Since surveys show that the monitored signal is sparse in the frequency domain, CS can relieve the sampling burden. Random samples can be obtained by an analog-to-information converter. Signal recovery can be formulated as an L0-norm minimization subject to a linear measurement-fitting constraint. In DSA, the static spectrum allocation of primary radios means that the boundaries between different types of primary radios are known in advance. To incorporate this a priori information, we divide the whole spectrum into subsections according to the spectrum allocation policy. In the new optimization model, the L2 norm of each subsection is minimized to encourage a clustered distribution locally, while the L0 norm of these L2 norms is minimized to promote a sparse distribution globally. Because the L0/L2 optimization is not convex, an iteratively re-weighted L1/L2 optimization is proposed to approximate it. Simulations demonstrate that the proposed method outperforms others in accuracy, denoising ability, and other respects.
Comment: 23 pages, 6 figures, 4 tables. arXiv admin note: substantial text overlap with arXiv:1005.180
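To make the re-weighted L1/L2 relaxation concrete, here is a minimal numpy sketch of iteratively re-weighted block-l1 minimization solved by proximal gradient descent. The sensing matrix A, measurements y, the subsection boundaries in groups, and the parameters lam, n_reweight, and n_inner are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def block_shrink(z, groups, thresholds):
    # prox of sum_g thresholds[g] * ||z_g||_2 (block soft-thresholding)
    out = np.zeros_like(z)
    for g, (lo, hi) in enumerate(groups):
        ng = np.linalg.norm(z[lo:hi])
        if ng > thresholds[g]:
            out[lo:hi] = (1.0 - thresholds[g] / ng) * z[lo:hi]
    return out

def reweighted_l1_l2(A, y, groups, lam=0.1, n_reweight=5, n_inner=200):
    # min_x 0.5*||A x - y||^2 + lam * sum_g w_g ||x_g||_2, with the weights w
    # re-estimated between rounds so that small-energy subsections are
    # penalized more, mimicking the non-convex L0-of-L2 objective
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the data-fit gradient
    w = np.ones(len(groups))
    for _ in range(n_reweight):
        for _ in range(n_inner):       # proximal-gradient (ISTA) inner loop
            x = block_shrink(x - A.T @ (A @ x - y) / L, groups, lam * w / L)
        norms = np.array([np.linalg.norm(x[lo:hi]) for lo, hi in groups])
        w = 1.0 / (norms + 1e-6)       # re-weighting step
    return x
```

Here groups would be a list of (start, end) index pairs derived from the spectrum allocation policy, so that each block corresponds to one primary-radio subsection.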
Model-Based Calibration of Filter Imperfections in the Random Demodulator for Compressive Sensing
The random demodulator is a recent compressive sensing architecture providing
efficient sub-Nyquist sampling of sparse band-limited signals. The compressive
sensing paradigm requires an accurate model of the analog front-end to enable
correct signal reconstruction in the digital domain. In practice, hardware
devices such as filters deviate from their desired design behavior due to
component variations. Existing reconstruction algorithms are sensitive to such
deviations, which fall into the more general category of measurement matrix
perturbations. This paper proposes a model-based technique that aims to
calibrate filter model mismatches to facilitate improved signal reconstruction
quality. The mismatch is considered to be an additive error in the discretized
impulse response. We identify the error by sampling a known calibrating signal,
enabling least-squares estimation of the impulse response error. The error
estimate and the known system model are used to calibrate the measurement
matrix. Numerical analysis demonstrates the effectiveness of the calibration
method even for highly deviating low-pass filter responses. The performance of the proposed method is also compared with a state-of-the-art method based on discrete Fourier transform trigonometric interpolation.
Comment: 10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing
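As an illustration of the least-squares identification step, the following numpy sketch simulates a simplified random demodulator front end (chipping sequence, filter, downsampling) and estimates the additive impulse-response error from one known calibration signal. All names (rd_measure, build_h_matrix, calibrate) and the simplified front-end model are assumptions for illustration; in practice several calibration signals may be stacked to make the least-squares system well conditioned.

```python
import numpy as np

def rd_measure(x, chips, h, R):
    # simplified random demodulator: chip-modulate the input, filter it with
    # impulse response h, then downsample by R (assumes R divides len(x))
    v = chips * x
    u = np.convolve(v, h)[:len(x)]
    return u[R - 1::R]

def build_h_matrix(x, chips, L, R):
    # B maps a length-L impulse response to the measurements, y = B @ h,
    # which holds because the front end is linear in h for a fixed input x
    B = np.empty((len(x) // R, L))
    for k in range(L):
        e = np.zeros(L)
        e[k] = 1.0
        B[:, k] = rd_measure(x, chips, e, R)
    return B

def calibrate(x_cal, chips, h_nominal, R, y_measured):
    # least-squares estimate of the additive impulse-response error, then
    # correction of the nominal model (needs len(x_cal) // R >= len(h_nominal))
    B = build_h_matrix(x_cal, chips, len(h_nominal), R)
    e_hat, *_ = np.linalg.lstsq(B, y_measured - B @ h_nominal, rcond=None)
    return h_nominal + e_hat
```

The calibrated impulse response can then be used to rebuild the measurement matrix before reconstruction, which is the role the paper assigns to the error estimate.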
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1
Wavelet and Multiscale Methods
Various scientific models demand finer and finer resolutions of relevant features. Paradoxically, increasing computational power serves only to heighten this demand. Namely, the wealth of available data itself becomes a major obstruction. Extracting essential information from complex structures, and developing rigorous models to quantify the quality of information, leads to tasks that are not tractable by standard numerical techniques. The last decade has seen the emergence of several new computational methodologies to address this situation. Their common features are the nonlinearity of the solution methods and the ability to separate solution characteristics living on different length scales. Perhaps the most prominent examples lie in multigrid methods and adaptive grid solvers for partial differential equations. These have substantially advanced the frontiers of computability for certain problem classes in numerical analysis. Other highly visible examples are: regression techniques in nonparametric statistical estimation; the design of universal estimators in the context of mathematical learning theory and machine learning; the investigation of greedy algorithms in complexity theory; compression techniques and encoding in signal and image processing; the solution of global operator equations through the compression of fully populated matrices arising from boundary integral equations with the aid of multipole expansions and hierarchical matrices; and attacking problems in high spatial dimensions by sparse grid or hyperbolic wavelet concepts. This workshop set out to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computation and to promote the exchange of ideas emerging in various disciplines.
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint of how to handle the ill-posedness, which is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, the success of image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and often spatially variant. We provide a holistic understanding of, and deep insight into, image deblurring in this review. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, are also presented.
Comment: 53 pages, 17 figures
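As a minimal example of the non-blind, spatially invariant case the review describes, the classical Wiener deconvolution below recovers a latent image when the blur kernel is known. It is a textbook baseline in the Bayesian/variational spirit, not a method proposed by the review; the snr constant and periodic-boundary assumption are illustrative.

```python
import numpy as np

def wiener_deblur(blurred, kernel, snr=100.0):
    # frequency-domain Wiener deconvolution with a known, shift-invariant kernel:
    # X = conj(K) * Y / (|K|^2 + 1/snr), a regularized inverse of the blur;
    # assumes periodic boundary conditions via the 2D FFT
    K = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(X))
```

The 1/snr term is what keeps the inversion stable where |K| is small, which is exactly the ill-posedness the five method categories address in more sophisticated ways.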
Space adaptive and hierarchical Bayesian variational models for image restoration
The main contribution of this thesis is the proposal of novel space-variant regularization or penalty terms motivated by a strong statistical rationale. In light of the connection between the classical variational framework and the Bayesian formulation, we will focus on the design of highly flexible priors characterized by a large number of unknown parameters. The latter will be automatically estimated by setting up a hierarchical modeling framework, i.e. introducing informative or non-informative hyperpriors depending on the information at hand on the parameters.
More specifically, in the first part of the thesis we will focus on the restoration of natural images, introducing highly parametrized distributions to model the local behavior of the gradients in the image. The resulting regularizers hold the potential to adapt to the local smoothness, directionality and sparsity in the data. The estimation of the unknown parameters will be addressed by means of non-informative hyperpriors, namely uniform distributions over the parameter domain, thus leading to the classical maximum likelihood approach.
In the second part of the thesis, we will address the problem of designing suitable penalty terms for the recovery of sparse signals. The space-variance in the proposed penalties, corresponding to a family of informative hyperpriors, namely generalized gamma hyperpriors, will follow directly from the assumption of the independence of the components in the signal. The study of the properties of the resulting energy functionals will thus lead to the introduction of two hybrid algorithms, aimed at combining the strong sparsity promotion characterizing non-convex penalty terms with the desirable guarantees of convex optimization.
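The abstract does not spell out the algorithms, but as one concrete, hedged instance of hierarchical sparsity modeling with a generalized gamma hyperprior, here is a numpy sketch of an iterative alternating MAP scheme (in the spirit of Calvetti–Somersalo-type IAS methods) for the gamma special case of the family. The linear model A, data y, and all parameter values are assumptions for illustration, not the thesis's hybrid algorithms.

```python
import numpy as np

def ias_sparse_map(A, y, beta=1.6, vartheta=1e-3, sigma=0.05, n_iter=30):
    # hierarchical model: x_j ~ N(0, theta_j), theta_j ~ Gamma(beta, vartheta);
    # alternate a weighted-ridge x-update with a closed-form theta-update
    n = A.shape[1]
    eta = beta - 1.5                 # the update below is well posed for beta > 3/2
    theta = np.full(n, vartheta)
    for _ in range(n_iter):
        # x-update: argmin ||A x - y||^2 / (2 sigma^2) + sum_j x_j^2 / (2 theta_j)
        M = A.T @ A / sigma**2 + np.diag(1.0 / theta)
        x = np.linalg.solve(M, A.T @ y / sigma**2)
        # theta-update: exact minimizer of the negative log-posterior in theta_j
        theta = vartheta * (eta / 2 + np.sqrt(eta**2 / 4 + x**2 / (2 * vartheta)))
    return x
```

Components with small x_j receive small variances theta_j and are pushed further toward zero on the next sweep, which is how the hyperprior induces space-variant sparsity promotion.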
Using state-of-the-art inverse problem techniques to develop reconstruction methods for fluorescence diffuse optical tomography
An inverse problem is a mathematical framework used to obtain information about a physical object or system from observed measurements. It typically arises when we wish to infer internal data from external measurements, and it has many applications in science and technology, such as medical imaging, geophysical imaging, image deblurring, image inpainting, electromagnetic scattering, acoustics, machine learning, mathematical finance, and physics.
The main goal of this PhD thesis was to use state-of-the-art inverse problem
techniques to develop modern reconstruction methods for solving the fluorescence diffuse
optical tomography (fDOT) problem. fDOT is a molecular imaging technique that enables
the quantification of tomographic (3D) bio-distributions of fluorescent tracers in small
animals.
One of the main difficulties in fDOT is that the high absorption and scattering of biological tissues lead to an ill-posed inverse problem with multiple non-unique and unstable solutions. Thus, the problem requires regularization to achieve a stable solution.
So-called “non-contact fDOT scanners” use CCDs as virtual detectors instead of optical fibers in contact with the sample. These non-contact systems generate huge datasets that lead to a computationally demanding inverse problem. Therefore, techniques that minimize the size of the acquired datasets without losing image performance are highly desirable.
The first part of this thesis addresses the optimization of experimental setups to reduce the dataset size, using l₂-based regularization techniques. The second part, motivated by the success of l₁ regularization techniques in denoising and image reconstruction, is devoted to advanced regularization of the problem using l₁-based techniques, and the last part introduces compressed sensing (CS) theory, which enables further reduction of the acquired dataset size.
The main contributions of this thesis are:
1) A feasibility study (to our knowledge, the first for fDOT) of the automatic U-curve method to select the regularization parameter (l₂ norm); a minimal sketch of the method follows below. The U-curve method has proven to be an excellent automatic method for dealing with large datasets because it reduces the regularization-parameter search to a suitable interval.
2) Once an automatic method to choose the l₂ regularization parameter for fDOT was found, singular value analysis (SVA) of the fDOT forward matrix was used to maximize the information content in the acquired measurements and minimize the computational cost. It was shown for the first time that large meshes can be reduced in the z direction without any loss in imaging performance, while reducing computation times and memory requirements.
3) Dealing with l₁-based regularization techniques, we presented a novel iterative algorithm, ART-SB, that combines the advantage of the algebraic reconstruction technique (ART) in handling large datasets with Split Bregman (SB) denoising, an approach that has been shown to be optimal for total variation (TV) denoising; a second sketch below illustrates the SB building block. SB has been implemented in a cost-efficient way to handle large datasets. This makes ART-SB more computationally efficient than previous TV-based reconstruction algorithms and most splitting approaches.
4) Finally, we proposed a novel approach to CS for fDOT, named the SB-SVA iterative method. This approach is based on the analysis-based co-sparse representation model, in which an analysis operator multiplies the image, transforming it into a sparse one. Taking advantage of the CS-SB algorithm, we restrict the solution reached at each CS-SB iteration to a space where the singular values of the forward matrix and the sparsity structure combine in a beneficial manner. In this way, SB-SVA indirectly enforces the well-conditioning of the forward matrix while designing (learning) the analysis operator and finding the solution. Furthermore, SB-SVA outperforms the CS-SB algorithm in terms of image quality and needs fewer acquisition parameters.
The approaches presented here have been validated with experimental data.
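As referenced in contribution 1, here is a minimal numpy sketch of the U-curve criterion for Tikhonov (l₂) regularization: the selected λ minimizes U(λ) = 1/‖Ax_λ − b‖² + 1/‖x_λ‖² over a grid. The forward matrix A, data b, and the grid bounds are illustrative assumptions, not the thesis's fDOT setup.

```python
import numpy as np

def tikhonov(A, b, lam):
    # x_lam = argmin ||A x - b||^2 + lam * ||x||^2, via the normal equations
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def u_curve_lambda(A, b, lambdas):
    # U(lam) = 1/||A x_lam - b||^2 + 1/||x_lam||^2; its minimizer balances
    # the residual norm against the solution norm
    U = []
    for lam in lambdas:
        x = tikhonov(A, b, lam)
        U.append(1.0 / np.linalg.norm(A @ x - b) ** 2
                 + 1.0 / np.linalg.norm(x) ** 2)
    return lambdas[int(np.argmin(U))]

# e.g. lam = u_curve_lambda(A, b, np.logspace(-8, 2, 60)): the search is
# confined to a log-spaced interval, which keeps the parameter sweep cheap
```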
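Contribution 3 builds on Split Bregman TV denoising. ART-SB itself is not reproduced here; the following numpy sketch only shows the SB building block for anisotropic TV denoising in the Goldstein–Osher style, assuming periodic boundaries, with illustrative values for mu, lam, and n_iter.

```python
import numpy as np

def grad(u):
    # forward differences with periodic boundaries
    return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

def gradT(px, py):
    # adjoint of grad, so gradT(grad(u)) acts as a (negative) periodic Laplacian
    return (np.roll(px, 1, 0) - px) + (np.roll(py, 1, 1) - py)

def shrink(v, t):
    # soft-thresholding: prox of t * |.|_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sb_tv_denoise(f, mu=20.0, lam=10.0, n_iter=60):
    # anisotropic TV denoising, min_u mu/2 ||u - f||^2 + |grad u|_1, via Split Bregman
    n1, n2 = f.shape
    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    # DFT symbol of mu*I + lam*gradT(grad(.)), diagonal under periodic boundaries
    lap1 = 2 - 2 * np.cos(2 * np.pi * np.arange(n1) / n1)
    lap2 = 2 - 2 * np.cos(2 * np.pi * np.arange(n2) / n2)
    denom = mu + lam * (lap1[:, None] + lap2[None, :])
    for _ in range(n_iter):
        # u-subproblem: quadratic, solved exactly in the Fourier domain
        rhs = mu * f + lam * gradT(dx - bx, dy - by)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        ux, uy = grad(u)
        # d-subproblem: decoupled componentwise shrinkage
        dx, dy = shrink(ux + bx, 1.0 / lam), shrink(uy + by, 1.0 / lam)
        # Bregman variable updates
        bx, by = bx + ux - dx, by + uy - dy
    return u
```

The cost-efficient SB implementation in the thesis targets large fDOT datasets; this sketch keeps only the algorithmic skeleton that makes SB attractive for TV denoising.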
Hypothesis testing and causal inference with heterogeneous medical data
Learning from data which associations hold, and are likely to hold in the future, is a fundamental part of scientific discovery. With increasingly heterogeneous data-collection practices, exemplified by passively collected electronic health records or high-dimensional genetic data with only a few observed samples, biases and spurious correlations are prevalent; these are called spurious because they do not contribute to the effect being studied. In this context, the modelling assumptions of existing statistical tests and causal inference methods are often inadequate and their practical utility diminished, even though these models are increasingly used as decision-support tools in practice. This thesis investigates how modern computational techniques may broaden the fields of hypothesis testing and causal inference to handle the subtleties of large heterogeneous data sets, and simultaneously improve the robustness and theoretical understanding of machine learning algorithms using insights from causality and statistics.
The first part of this thesis is concerned with hypothesis testing. We develop a framework for hypothesis testing on set-valued data, a representation that faithfully describes many real-world phenomena, including patient biomarker trajectories in the hospital. Using similar techniques, we next develop a two-sample test for making inference on selection-biased data, in the sense that not all individuals are equally likely to be included in the study, a fact that, if not accounted for, biases tests whose aim is conclusions that are generally applicable. We conclude this part with an investigation of conditional independence in high-dimensional data, such as gene expression data, and propose a test using generative adversarial networks. The second part of this thesis is concerned with causal inference and discovery, with a special focus on the influence of unobserved confounders, which distort the observed associations between variables and yet may not be ruled out or adjusted for using data alone. We start by demonstrating that unobserved confounders may substantially bias the generalization performance of machine learning algorithms trained with conventional learning paradigms such as empirical risk minimization. Acknowledging this spurious effect, we develop a new learning principle, inspired by causal insights, that provably generalizes to test data sampled from a larger set of distributions than the training distribution. In the last chapter we consider the influence of unobserved confounders on causal discovery. We show that, with some assumptions on the type and influence of unobserved confounding, one may develop provably consistent causal discovery algorithms, formulated as the solution to a continuous optimization program.
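The thesis's tests are not reproduced here. Purely as a generic illustration of how selection weights can enter a two-sample statistic, the sketch below runs a permutation test on inverse-probability-weighted means; it assumes the inclusion probabilities are known, and the statistic and all names are illustrative rather than the thesis's construction.

```python
import numpy as np

def weighted_mean_diff(x, y, wx, wy):
    # difference of inverse-probability-weighted sample means
    return np.sum(wx * x) / np.sum(wx) - np.sum(wy * y) / np.sum(wy)

def permutation_test(x, y, wx, wy, n_perm=5000, rng=None):
    # two-sample permutation test where each unit carries a selection
    # weight w = 1 / P(included in study); weights travel with their units
    rng = np.random.default_rng(rng)
    z, w = np.concatenate([x, y]), np.concatenate([wx, wy])
    n = len(x)
    t_obs = abs(weighted_mean_diff(x, y, wx, wy))
    t_null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(z))
        t_null[i] = abs(weighted_mean_diff(z[idx[:n]], z[idx[n:]],
                                           w[idx[:n]], w[idx[n:]]))
    return np.mean(t_null >= t_obs)  # permutation p-value
```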