
    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle ill-posedness, the crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel spatially variant and hard to obtain. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures
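
    The abstract does not name a specific algorithm, but the non-blind, spatially invariant case is easy to illustrate. Below is a minimal Wiener-deconvolution sketch under the standard observation model (blurry image = sharp image convolved with a known kernel, plus noise); the image, kernel, SNR value, and helper names are all hypothetical, not taken from the paper.

```python
# Minimal non-blind deblurring sketch: Wiener deconvolution under the
# model  blurred = kernel (*) sharp + noise, with (*) circular convolution.
# Everything here is synthetic and for illustration only.
import numpy as np

def kernel_to_otf(kernel, shape):
    """Zero-pad a small kernel to the image shape and center it at the
    origin, yielding its transfer function for circular convolution."""
    k = np.zeros(shape)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(k)

def wiener_deblur(blurred, kernel, snr=100.0):
    """Estimate the latent sharp image given a known blur kernel.
    The 1/snr term regularizes frequencies where |K| is small, which is
    exactly where the inverse problem is ill-posed."""
    K = kernel_to_otf(kernel, blurred.shape)
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

# Hypothetical example: blur a random image with a 5x5 box kernel and invert.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
K = kernel_to_otf(kernel, sharp.shape)
blurred = np.real(np.fft.ifft2(K * np.fft.fft2(sharp)))
blurred += 0.01 * rng.standard_normal(sharp.shape)
restored = wiener_deblur(blurred, kernel)
print(np.abs(restored - sharp).mean())  # small residual error
```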

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
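
    As a rough illustration of the tensor train idea discussed above (not code from the monograph), the sketch below implements a basic TT-SVD: sequential reshapes and truncated SVDs factor a dense tensor into a chain of small 3-way cores, whose sizes (the TT ranks) reflect the low-rank structure that makes the format compressive. Shapes, tolerances, and function names are assumptions for the example.

```python
# TT-SVD sketch: factor a dense tensor into a tensor train of 3-way cores
# via sequential truncated SVDs. A generic illustration of the format only.
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose `tensor` into cores G_k of shape (r_{k-1}, n_k, r_k).
    Relative singular values below `eps` are truncated; the resulting
    small TT ranks r_k are the source of the compression."""
    dims = tensor.shape
    cores, rank, mat = [], 1, tensor
    for n in dims[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))
        cores.append(u[:, :keep].reshape(rank, n, keep))
        mat = s[:keep, None] * vt[:keep]   # carry the remainder forward
        rank = keep
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the cores back into a dense tensor (for checking)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A rank-1 tensor compresses to TT ranks of 1.
a, b, c = np.random.rand(8), np.random.rand(9), np.random.rand(10)
t = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(t)
print([g.shape for g in cores])        # [(1, 8, 1), (1, 9, 1), (1, 10, 1)]
print(np.allclose(tt_full(cores), t))  # True
```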

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered within their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene; accurate material identification therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of the following: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described, along with the underlying mathematical problems and potential solutions. Algorithm characteristics are illustrated experimentally.
    Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
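
    Most of the surveyed algorithms build on the linear mixing model, where each pixel spectrum is approximately a nonnegative, sum-to-one combination of endmember spectra. As a small illustration (not an algorithm from the paper), the sketch below estimates abundances for one pixel by fully constrained least squares, using the common trick of appending a heavily weighted sum-to-one row to a nonnegative least-squares solve; the endmember matrix, noise level, and weight delta are hypothetical.

```python
# Linear mixing model sketch: per-pixel fully constrained least squares
# (abundances nonnegative and summing to one). Endmembers and the pixel
# spectrum are synthetic placeholders; real pipelines must also estimate
# the number of endmembers and their signatures.
import numpy as np
from scipy.optimize import nnls

def fcls(E, x, delta=1e3):
    """Abundances for pixel spectrum x given endmember matrix E
    (bands x endmembers). The sum-to-one constraint is imposed softly by
    appending a heavily weighted row of ones; nonnegativity comes from NNLS."""
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)
    return a

# Synthetic scene: 50 bands, 3 endmembers, true abundances (0.6, 0.3, 0.1).
rng = np.random.default_rng(1)
E = rng.random((50, 3))
a_true = np.array([0.6, 0.3, 0.1])
x = E @ a_true + 0.001 * rng.standard_normal(50)
print(fcls(E, x).round(3))  # approximately [0.6, 0.3, 0.1]
```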

    Lithic technological responses to Late Pleistocene glacial cycling at Pinnacle Point Site 5-6, South Africa

    There are multiple hypotheses for human responses to glacial cycling in the Late Pleistocene, including changes in population size, interconnectedness, and mobility. Lithic technological analysis informs us of human responses to environmental change because lithic assemblage characteristics reflect the raw material transport, reduction, and discard behaviors that depend on hunter-gatherer social and economic decisions. Pinnacle Point Site 5-6 (PP5-6), Western Cape, South Africa, is an ideal locality for examining the influence of glacial cycling on early modern human behaviors because it preserves a long sequence spanning marine isotope stages (MIS) 5, 4, and 3 and is associated with robust records of paleoenvironmental change. The analysis presented here addresses the question of which, if any, lithic assemblage traits at PP5-6 represent changing behavioral responses to the MIS 5-4-3 interglacial-glacial cycle. It statistically evaluates changes in 93 traits with no a priori assumptions about which traits may significantly associate with MIS. In contrast to other studies that claim little relationship between broad-scale patterns of climate change and lithic technology, we identified the following characteristics associated with MIS 4: increased use of quartz, increased evidence for outcrop sources of quartzite and silcrete, increased evidence for earlier stages of reduction in silcrete, evidence for increased flaking efficiency in all raw material types, and changes in tool types and function for silcrete. Based on these results, we suggest that foragers responded to MIS 4 glacial environmental conditions at PP5-6 with increased population or group sizes, 'place provisioning', longer and/or more intense site occupations, and decreased residential mobility. Several other traits, including silcrete frequency, do not exhibit an association with MIS. Backed pieces, once they appear in the PP5-6 record during MIS 4, persist through MIS 3. Changing paleoenvironments explain some, but not all, of the temporal technological variability at PP5-6.
    Funding: Social Sciences and Humanities Research Council of Canada; NORAM; American-Scandinavian Foundation; Fundação para a Ciência e a Tecnologia [SFRH/BPD/73598/2010]; IGERT [DGE 0801634]; Hyde Family Foundations; Institute of Human Origins; National Science Foundation [BCS-9912465, BCS-0130713, BCS-0524087, BCS-1138073]; John Templeton Foundation to the Institute of Human Origins at Arizona State University
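
    The abstract does not specify the statistical procedure applied to the 93 traits, but a generic version of such a screening is easy to sketch: a chi-square test of independence between each trait and MIS, followed by a Benjamini-Hochberg correction for the 93 comparisons. Everything below (counts, trait names, thresholds) is hypothetical and only illustrates the general approach.

```python
# Generic sketch of screening many assemblage traits for association with
# marine isotope stage (MIS) under multiple-testing control. All counts
# are hypothetical; the paper's actual procedure may differ.
import numpy as np
from scipy.stats import chi2_contingency

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (controls false discovery rate)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_min = 1.0
    for rank, idx in enumerate(order[::-1]):  # from largest p downward
        running_min = min(running_min, p[idx] * m / (m - rank))
        adj[idx] = running_min
    return adj

# Hypothetical data: for each of 93 traits, a 2x3 contingency table of
# counts (trait state x MIS 5/4/3).
rng = np.random.default_rng(2)
tables = {f"trait_{i:02d}": rng.integers(5, 50, size=(2, 3)) for i in range(93)}

pvals = [chi2_contingency(tbl)[1] for tbl in tables.values()]
adj = bh_adjust(pvals)
hits = [name for name, q in zip(tables, adj) if q < 0.05]
print(f"{len(hits)} of {len(tables)} traits associated with MIS at FDR 0.05")
```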

    Seismic wave propagation in Iran and eastern Indian shield

    This dissertation addresses several important aspects of observational earthquake seismology: 1) methods for managing and processing large datasets, 2) analysis of seismic wave propagation at local to regional (up to about 700 km) source-receiver distances, 3) analysis of seismic coda, and 4) critical re-evaluation of the fundamental problem of seismic wave attenuation and measurement of the seismic “quality” factor (Q). These studies are carried out using new and previously analyzed earthquake data from Iran. In each of the four application areas above, innovative methods are used and significant new results are obtained. First, for efficient management and processing of large earthquake datasets, I use a flexible, exploration-style, open-source seismic processing system. Custom, problem-oriented scripts in Matlab or Octave are included as tools in this processing system, allowing interactive and non-interactive analysis of earthquake records. In the second application, I note that existing models for body-wave amplitudes are hampered by several difficulties, such as inaccurate accounting for the contributions of source and receiver effects and insufficient accuracy at the transition between local and regional distances. A reliable model for body-wave amplitudes is critical for many studies. To achieve such a model, I use a joint inversion method based on a new parameterization of seismic attenuation and additional constraints on model quality. The joint inversion provides a correct model for geometrical spreading and attenuation. The geometrical-spreading model reveals an increase in body S-wave amplitudes from 90 to about 115 km from the source, which might be caused by waves reflected from the crust-mantle boundary. Outside of this distance range, amplitudes decay significantly faster than usually assumed in similar models. Third, in two chapters of this dissertation devoted to coda studies, I consider the concept of the frequency-dependent coda Q (Qc). Although this quantity is usually attributed to the subsurface, I argue that, because of subjective selections of model assumptions and algorithms, Qc cannot be rigorously viewed as a function of surface or subsurface points. Moreover, the frequency dependence of the measured Qc strongly trades off with the subjectively selected parameters of the measurement procedure. To mitigate these problems, instead of mapping a hypothetical in-situ Qc, I obtain maps of physically justified parameters of the subsurface: the exponent of geometrical spreading and the effective attenuation (denoted qe). For the areas of this study, qe ranges from 0.005 s⁻¹ to 0.05 s⁻¹ (within the Zagros area of Iran) and from 0.010 s⁻¹ to 0.013 s⁻¹ (within the eastern Indian Shield). Finally, from both body- and coda-wave studies, I derive estimates of seismic attenuation within the study areas. In two areas of Iran and within the Indian Shield, weak attenuation with Q-factors of 2000–6000 or higher is found. In particular, coda envelopes can be explained by wave reverberations within elastic crustal structures, and Q-type attenuation appears undetectable.
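
    As a rough, single-frequency illustration of separating geometrical spreading from attenuation (not the dissertation's actual joint inversion, which involves many events, stations, and frequencies), the sketch below fits log amplitudes to ln A = ln A0 - nu * ln t - chi * t by linear least squares, where nu is a geometrical-spreading exponent and chi an attenuation coefficient with units of s⁻¹, comparable in units to the qe values quoted above. All data and parameter values are synthetic.

```python
# Single-frequency sketch: separate geometrical spreading from attenuation
# by fitting  ln A = ln A0 - nu * ln t - chi * t  with linear least squares.
# Synthetic data; nu, chi, and the time range are assumed values.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(20.0, 200.0, 60)                 # lapse times, s
nu_true, chi_true, lnA0_true = 1.0, 0.02, 5.0    # chi in 1/s
lnA = (lnA0_true - nu_true * np.log(t) - chi_true * t
       + 0.05 * rng.standard_normal(t.size))     # noisy log amplitudes

# Design matrix [1, -ln t, -t]; unknowns are (ln A0, nu, chi).
G = np.column_stack([np.ones_like(t), -np.log(t), -t])
coef, *_ = np.linalg.lstsq(G, lnA, rcond=None)
lnA0_est, nu_est, chi_est = coef
print(f"nu ~ {nu_est:.2f}, chi ~ {chi_est:.4f} 1/s")
```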