Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness, a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially in the blind case, remains limited by complex
application conditions that make the blur kernel hard to obtain and often
spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
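As a concrete illustration of the non-blind setting described above, the classical Wiener filter can be sketched in a few lines (this is a generic textbook method, not one of the survey's specific contributions; kernel, image size, and noise level are illustrative assumptions):

```python
import numpy as np

# Minimal non-blind deblurring sketch via Wiener deconvolution.
# The blur kernel is assumed known (the non-blind setting); blind methods
# must estimate it as well.

def wiener_deblur(blurry, kernel, noise_power=1e-3):
    """Estimate the latent sharp image given the blur kernel."""
    H = np.fft.fft2(kernel, s=blurry.shape)        # kernel transfer function
    G = np.fft.fft2(blurry)                        # observed (blurry) spectrum
    # Wiener filter: conj(H) / (|H|^2 + noise-to-signal ratio) regularizes
    # the ill-posed inversion where |H| is small.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(W * G))

# Synthetic demo: blur a random "sharp" image with a 3x3 box kernel
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0
blurry = np.real(np.fft.ifft2(np.fft.fft2(kernel, s=sharp.shape) * np.fft.fft2(sharp)))
restored = wiener_deblur(blurry, kernel)
```

The small `noise_power` term is what tames the ill-posedness the review highlights: without it, frequencies where the kernel response vanishes would be amplified without bound.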
Sample Complexity Analysis for Learning Overcomplete Latent Variable Models through Tensor Methods
We provide guarantees for learning latent variable models, with emphasis on the
overcomplete regime, where the dimensionality of the latent space can exceed
the observed dimensionality. In particular, we consider multiview mixtures,
spherical Gaussian mixtures, ICA, and sparse coding models. We provide tight
concentration bounds for empirical moments through novel covering arguments. We
analyze parameter recovery through a simple tensor power update algorithm. In
the semi-supervised setting, we exploit the label or prior information to get a
rough estimate of the model parameters, and then refine it using the tensor
method on unlabeled samples. We establish that learning is possible when the
number of components scales as k = o(d^{p/2}), where d is the observed
dimension, and p is the order of the observed moment employed in the tensor
method. Our concentration bound analysis also leads to minimax sample
complexity for semi-supervised learning of spherical Gaussian mixtures. In the
unsupervised setting, we use a simple initialization algorithm based on SVD of
the tensor slices, and provide guarantees under the stricter condition that
k <= beta * d (where the constant beta can be larger than 1), where the
tensor method recovers the components under a polynomial running time (and
exponential in beta). Our analysis establishes that a wide range of
overcomplete latent variable models can be learned efficiently with low
computational and sample complexity through tensor decomposition methods.
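The tensor power update mentioned above can be sketched in the simplest orthogonal, undercomplete case (the paper analyzes the much harder overcomplete regime; all dimensions and weights below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 3
# Orthonormal ground-truth components a_1..a_k and positive weights
A, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = A[:, :k]
w = np.array([3.0, 2.0, 1.0])

# Third-order moment tensor T = sum_i w_i * a_i (x) a_i (x) a_i
T = np.einsum('i,ai,bi,ci->abc', w, A, A, A)

def power_update(T, v, iters=50):
    """Repeated tensor power update v <- T(I, v, v) / ||T(I, v, v)||."""
    for _ in range(iters):
        v = np.einsum('abc,b,c->a', T, v, v)
        v /= np.linalg.norm(v)
    return v

v = power_update(T, rng.standard_normal(d))
# v should converge (up to sign) to one of the columns of A
overlaps = np.abs(A.T @ v)
```

In the orthogonal case the true components are attracting fixed points of this map; the paper's contribution is showing that (suitably initialized) variants of this update still succeed when k exceeds d.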
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems from computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.
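A minimal sketch of the kind of non-Euclidean operation surveyed above: one graph-convolution layer that propagates features along graph edges rather than over a pixel grid (sizes, weights, and the specific normalization below are illustrative assumptions, not a particular method from the paper):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One propagation step: normalized adjacency x features x weights, ReLU."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))     # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Tiny 4-node path graph, 2-dim input features mapped to 3-dim output
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)
W = np.ones((2, 3)) * 0.1
H = gcn_layer(A, X, W)
```

The graph's adjacency structure here plays the role that translation invariance plays for grid-structured data: each node's output mixes only its own and its neighbors' features.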
Nonlinear analysis of composite shells with application to glass structures
Laminated glass is a special composite material, which is characterised by an alternating stiff/soft lay-up owing to the significant stiffness mismatch between glass and PVB. This work is motivated by the need for an efficient and accurate nonlinear model for the analysis of laminated glass structures, which describes well the through-thickness variation of displacement fields and the transverse shear strains and enables large displacement analysis.
An efficient lamination model is proposed for the analysis of laminated composites with an alternating stiff/soft lay-up, where the zigzag variation of planar displacements is taken into account by adding to the Reissner-Mindlin formulation a specific set of zigzag functions. Furthermore, a piecewise linear through-thickness distribution of the material transverse shear strain is assumed, which agrees well with the real distribution, yet it avoids layer coupling by not imposing continuity constraints on transverse shear stresses.
Local formulations of curved multi-layer shell elements are established employing the proposed lamination model, which are framed within local co-rotational systems to allow large displacement analysis for small-strain problems. In order to eliminate the locking phenomenon for the shell elements, an assumed strain method is employed and improved, which readily addresses shear locking, membrane locking, and distortion locking for each constitutive layer. Furthermore, a local shell system is proposed for the direct definition of the additional zigzag displacement fields and associated parameters, which allows the additional displacement variables to be coupled directly between adjacent elements without being subject to the large displacement co-rotational transformations.
The developed multi-layer shell elements are employed in this work for typical laminated glass problems, including double glazing systems for which a novel volume-pressure control algorithm is proposed. Several case studies are finally presented to illustrate the effectiveness and efficiency of the proposed modelling approach for the nonlinear analysis of glass structures.
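The stiff/soft mismatch that motivates the zigzag kinematics can be illustrated with a back-of-the-envelope estimate (the moduli and thicknesses below are typical assumed values, not taken from the thesis):

```python
# Glass plies are orders of magnitude stiffer in shear than the PVB
# interlayer, so transverse shear strain concentrates in the PVB.

G_glass = 28.0e9     # shear modulus of glass [Pa] (assumed value)
G_pvb = 1.0e6        # shear modulus of PVB [Pa] (assumed; rate-dependent)
t_glass = 6.0e-3     # glass ply thickness [m]
t_pvb = 0.76e-3      # PVB interlayer thickness [m]

layers = [(t_glass, G_glass), (t_pvb, G_pvb), (t_glass, G_glass)]

# Series ("constant shear stress") estimate of the effective transverse
# shear stiffness: G_eff = t_total / sum(t_i / G_i)
t_total = sum(t for t, _ in layers)
G_eff = t_total / sum(t / g for t, g in layers)

# Under a uniform shear stress, strain in each layer is proportional to
# 1/G_i: piecewise constant strain, hence a piecewise-linear (zigzag)
# in-plane displacement through the thickness.
strain_ratio = G_glass / G_pvb   # glass strain is this many times smaller
```

With these numbers the laminate's effective shear stiffness collapses to the tens-of-MPa range despite the GPa-stiff glass plies, which is why a single equivalent-layer model without zigzag enrichment misrepresents the through-thickness response.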
Paths forward for sustainable maritime transport: A techno-economic optimization framework for next generation vessels
Climate change is omnipresent in our society. It is known that climate change is occurring and that additional warming is unavoidable. The decarbonization of industrial sectors has therefore gained increased importance in recent years. The maritime transport sector is one of the most targeted industries, as it contributes approximately 3% of global GHG emissions. Nevertheless, maritime transport accounts for up to 80% of global trade volume, underlining its importance for the world economy. A technically feasible and reliable solution is thus essential for the shipping industry to reach the ambitious climate goals established by the Paris Agreement. In the past, the maritime sector has been highly reliant on fossil fuels, using heavy fuel oil as the major energy input. Heavy fuel oil has been the dominant fuel in the industry due to its cost advantage and high energy density. Recent developments in the maritime industry promote the emergence of dual-fuel engines (e.g. LNG and HFO). Even though increased efficiencies and low-carbon fuels can reduce maritime pollution, they cannot achieve carbon neutrality. In the long term, it will be necessary to implement zero-emission fuels, including green hydrogen, ammonia, methanol, and LNG. The implementation of new sustainable technologies and fuels in the maritime sector will, however, depend on their economic competitiveness compared to alternative solutions. The following research question therefore arises: When can sustainable maritime transport achieve cost parity with conventional technologies? This master's thesis investigates the break-even point of sustainable shipping technologies needed to achieve climate targets. The focus is set on the life cycle costs of different maritime technologies. A techno-economic framework is necessary to decide on the most suitable options for the industry in prospective years.
The framework should be able to analyze current as well as prospective technologies and guide the technological decision-making process. The definition of key performance indicators (KPIs) is therefore essential to set a standard for further assessments. The KPIs will be the main values used to compare technologies from an economic perspective. In order to answer the research question, a case study is developed, informed by an extensive literature review on current and next-generation sustainable energy systems for vessels. Priority lies on potentially carbon-neutral technologies and engines, such as fuel cells and battery systems, based on a predetermined shipping route and shipping class. In a first step, a simulation model for the developed case is established. The output of the simulation model is then used in the techno-economic framework, connecting components of the system through thermodynamic and physical properties. In a last step, cost functions translate the system's behavior into economic behavior. Once the case study is analyzed, a statistical model is applied to the results in order to evaluate the system under varying boundary conditions. This sensitivity analysis is further necessary to underline the impact of the aforementioned KPIs. In this way, the robustness of the framework is tested and secured. Finally, the results of the analysis are explained and interpreted with regard to the research question, and a conclusion is drawn regarding the potential economic benefits of sustainable maritime transport technologies in light of potential market access. The results of the thesis are documented in a scientifically appropriate manner and discussed within the context of existing literature and regulatory targets for the industry.
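A toy version of the break-even question posed above can be sketched as a discounted life-cycle-cost comparison (all capital costs, fuel costs, discount rates, and learning rates below are invented for illustration; they are not inputs or results of the thesis):

```python
# Hypothetical life-cycle-cost comparison between a conventional HFO vessel
# and a green-fuel vessel whose fuel cost declines with a learning curve.

def lifecycle_cost(capex, annual_fuel_cost, years=20, discount_rate=0.07):
    """Net present value of capital plus discounted annual fuel costs."""
    npv_fuel = sum(annual_fuel_cost / (1 + discount_rate) ** t
                   for t in range(1, years + 1))
    return capex + npv_fuel

# Assumed reference case: conventional HFO vessel
hfo_lcc = lifecycle_cost(capex=50e6, annual_fuel_cost=8e6)

def green_lcc(build_year):
    # Assumed 5 %/year decline in green fuel cost from today's level
    fuel_cost = 14e6 * (0.95 ** build_year)
    return lifecycle_cost(capex=65e6, annual_fuel_cost=fuel_cost)

# Break-even: first build year in which the green vessel's LCC undercuts HFO
break_even_year = next(y for y in range(0, 60) if green_lcc(y) <= hfo_lcc)
```

A KPI-based framework of the kind the thesis proposes would replace these placeholder cost functions with component-level models derived from the simulation, but the break-even logic is the same.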
Structured Sparse Methods for Imaging Genetics
Imaging genetics is an emerging and promising technique that investigates how genetic variations affect brain development, structure, and function. By exploiting disorder-related neuroimaging phenotypes, this class of studies provides a novel direction to reveal and understand the complex genetic mechanisms. Imaging genetics studies are often challenging due to the relatively small number of subjects but the extremely high dimensionality of both imaging and genomic data. In this dissertation, I carry out my research on imaging genetics with a particular focus on two tasks---building predictive models between neuroimaging data and genomic data, and identifying disorder-related genetic risk factors through image-based biomarkers. To this end, I consider a suite of structured sparse methods---which can produce interpretable models and are robust to overfitting---for imaging genetics. With carefully designed sparsity-inducing regularizers, different biological priors are incorporated into the learning models. More specifically, in the Allen brain image--gene expression study, I adopt an advanced sparse coding approach for image feature extraction and employ a multi-task learning approach for multi-class annotation. Moreover, I propose a label-structure-based two-stage learning framework, which utilizes the hierarchical structure among labels, for multi-label annotation. In the Alzheimer's disease neuroimaging initiative (ADNI) imaging genetics study, I employ Lasso together with EDPP (enhanced dual polytope projections) screening rules to quickly identify Alzheimer's disease risk SNPs. I also adopt the tree-structured group Lasso with MLFre (multi-layer feature reduction) screening rules to incorporate linkage disequilibrium information into the modeling. Moreover, I propose a novel absolute fused Lasso model for ADNI imaging genetics. This method utilizes SNP spatial structure and is robust to the choice of reference alleles in genotype coding.
In addition, I propose a two-level structured sparse model that incorporates gene-level networks, through a graph penalty, into SNP-level model construction. Lastly, I explore a convolutional neural network approach for accurately predicting Alzheimer's disease related imaging phenotypes. Experimental results on real-world imaging genetics applications demonstrate the efficiency and effectiveness of the proposed structured sparse methods.
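The SNP-selection task above can be sketched schematically with a plain Lasso solved by proximal gradient descent (ISTA) — this stands in for, but is not, the dissertation's EDPP-screened or structured variants, and the data are synthetic:

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def lasso_ista(X, y, lam, iters=500):
    """Minimize 0.5*||y - X b||^2 + lam*||b||_1 via ISTA."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - step * grad, step * lam)
    return b

# Synthetic "n << p" genetics-style problem: 50 subjects, 200 SNPs, 3 causal
rng = np.random.default_rng(2)
n, p = 50, 200
X = rng.standard_normal((n, p))
true_b = np.zeros(p)
true_b[[5, 50, 120]] = [2.0, -1.5, 1.0]
y = X @ true_b + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=5.0)
selected = np.flatnonzero(b_hat)                # indices of selected SNPs
```

Screening rules such as EDPP accelerate exactly this kind of solve by provably discarding SNPs whose coefficients must be zero before the iterations begin, which matters when p runs into the hundreds of thousands.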
An Impulse Detection Methodology and System with Emphasis on Weapon Fire Detection
This dissertation proposes a methodology for detecting impulse signatures, with an algorithm developed with specific emphasis on weapon fire detection. Multiple systems in which the detection algorithm can operate are proposed. In order for detection systems to be used in practical applications, they must have high detection performance with minimal false alarms, be cost effective, and utilize available hardware. Most applications require real-time processing and increased range performance, and some require detection from mobile platforms. This dissertation provides a methodology for impulse detection, demonstrated for the specific application of weapon fire detection, that is intended for real-world use, taking into account acceptable algorithm performance, feasible system design, and practical implementation. The proposed detection algorithm is implemented with multiple sensors, allowing spectral waveband versatility in system design. The algorithm is also shown to operate at a variety of video frame rates, allowing for practical design using common, commercial off-the-shelf hardware. Detection, false alarm, and classification performance are provided for the different sensors and associated wavebands. False alarms are further mitigated through an adaptive, multi-layer classification scheme, enabling potential on-the-move application. The algorithm is shown to work in real time. The proposed system, including algorithm and hardware, is provided. Additional systems are proposed which complement the strengths and alleviate the weaknesses of the hardware and algorithm. Systems are proposed to mitigate saturation clutter signals and increase detection of saturated targets through the use of position, navigation, and timing sensors, acoustic sensors, and imaging sensors.
Furthermore, systems are provided which increase target detection and provide additional functionality, improving the cost effectiveness of the system. The resulting algorithm is shown to enable detection of weapon fire targets, while minimizing false alarms, for real-world, fieldable applications. The work presented demonstrates the complexity of detection algorithm and system design for practical applications in complex environments, and emphasizes the interactions and considerations involved in designing a practical system, where system design is the intersection of algorithm performance and design; hardware performance and design; and size, weight, power, cost, and processing.
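The core temporal-impulse idea can be sketched as adaptive background subtraction with a sigma threshold on each pixel's time series (the thresholds, update rates, and synthetic data below are assumptions for illustration, not the dissertation's algorithm):

```python
import numpy as np

def detect_impulses(frames, k_sigma=5.0, alpha=0.9):
    """Flag frame indices where any pixel spikes above an adaptive threshold."""
    bg = frames[0].astype(float)                 # running background estimate
    noise = np.full(frames[0].shape, 1e-3)       # running noise-power estimate
    hits = []
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame - bg
        # A pixel exceeding k_sigma times its running noise level is an impulse
        if (diff > k_sigma * np.sqrt(noise)).any():
            hits.append(t)
        bg = alpha * bg + (1 - alpha) * frame    # adapt the background
        # Update noise power, with a floor to avoid degenerate estimates
        noise = np.maximum(alpha * noise + (1 - alpha) * diff ** 2, 2e-4)
    return hits

# Synthetic video: low-level noise with a bright 2-frame flash at t = 40
rng = np.random.default_rng(3)
frames = 0.05 * rng.random((80, 16, 16))
frames[40:42, 8, 8] += 10.0                      # the impulse event
hits = detect_impulses(frames)
```

Because the noise estimate adapts per pixel, steady clutter raises its own threshold over time, which is the same mechanism the multi-layer classification scheme described above builds on to suppress false alarms.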