Performance Evaluation of Channel Decoding With Deep Neural Networks
With the demand for high data rates and low latency in fifth-generation (5G)
systems, the deep neural network decoder (NND) has become a promising candidate
due to its capability of one-shot decoding and parallel computing. In this
paper, three types of NND, i.e., the multi-layer perceptron (MLP),
convolutional neural network (CNN), and recurrent neural network (RNN), are
proposed with the same parameter magnitude. The performance of these deep
neural networks is evaluated through extensive simulation. Numerical results
show that the RNN has the best decoding performance, yet at the price of the
highest computational overhead. Moreover, we find that there exists a
saturation length for each type of neural network, which is caused by their
restricted learning abilities. Comment: 6 pages, 11 figures, LaTeX; typos
corrected; to appear at IEEE ICC 2018
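The one-shot, parallel nature of neural decoding described in the abstract can be illustrated with a toy example. This is a sketch under heavy assumptions: a hand-weighted single-layer decoder for a rate-1/3 repetition code, not the MLP/CNN/RNN architectures the paper evaluates.

```python
import numpy as np

# Illustrative only: one matrix multiply decodes a whole batch of received
# words at once, which is what makes neural decoders one-shot and parallel.
# The code, weights, and noise level are assumptions for illustration.

def one_shot_decode(rx_batch, W):
    # Single forward pass over the entire batch, no iterative message passing.
    return (rx_batch @ W > 0).astype(int)

W = np.ones((3, 1))                                   # sums the three noisy copies
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=8)
tx = np.repeat(2.0 * bits[:, None] - 1.0, 3, axis=1)  # BPSK, repeated 3x
rx = tx + 0.2 * rng.standard_normal(tx.shape)         # mild AWGN
decoded = one_shot_decode(rx, W).ravel()
print((decoded == bits).all())                        # recovers all bits at this SNR
```

At this noise level the summed copies sit many standard deviations from the decision boundary, so the single forward pass recovers every bit.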
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. The problem of optical blurring is a common
disadvantage to many imaging applications that suffer from optical
imperfections. Although numerous deconvolution methods blindly estimate
blurring in either inclusive or exclusive forms, they remain practically
challenging due to high computational cost and low image reconstruction
quality. Both high accuracy and high speed are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition, before images are stored,
previewed, or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem from image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for
the Gaussian and Laplacian models that are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods. Comment: 15
pages; for publication in IEEE Transactions on Image Processing
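As a rough 1D illustration of the kernel-synthesis idea in the abstract, the following sketch forms a filter as a linear combination of an identity tap and a second-order (even) derivative FIR filter, then convolves it directly with a blurred edge. The taps and the weight alpha are illustrative assumptions, not the paper's fitted coefficients.

```python
import numpy as np

# Hedged sketch: a deconvolution kernel built as identity minus a scaled
# even-derivative filter, applied by direct convolution to the blurry input.

def deconv_kernel(alpha=0.8):
    ident = np.array([0.0, 1.0, 0.0])
    d2 = np.array([1.0, -2.0, 1.0])   # 2nd-order (even) derivative FIR filter
    return ident - alpha * d2         # boosts the PSF's frequency fall-off

edge = np.repeat([0.0, 1.0], 16)                     # ideal step edge
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
g /= g.sum()                                         # small Gaussian PSF
blurred = np.convolve(edge, g, mode="same")
restored = np.convolve(blurred, deconv_kernel(), mode="same")

# One-shot filtering steepens the edge that the PSF smeared out.
print(np.max(np.diff(blurred)), np.max(np.diff(restored)))
```

Because the combined kernel amplifies all nonzero frequencies, the restored edge transitions more steeply than the blurred one, which is the intended high-frequency boost.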
Medical Image Segmentation Review: The success of U-Net
Automatic medical image segmentation is a crucial topic in the medical domain
and, consequently, a critical component of the computer-aided diagnosis
paradigm. U-Net is the most widespread image segmentation architecture due to
its flexibility, optimized modular design, and success in all medical image
modalities. Over the years, the U-Net model has attracted tremendous attention from
academic and industrial researchers. Several extensions of this network have
been proposed to address the scale and complexity created by medical tasks.
Addressing the deficiency of the naive U-Net model is the foremost step for
vendors to utilize the proper U-Net variant model for their business. Having a
compendium of different variants in one place makes it easier for builders to
identify the relevant research. It will also help ML researchers understand
the challenges that biological tasks pose to the model. To
address this, we discuss the practical aspects of the U-Net model and suggest a
taxonomy to categorize each network variant. Moreover, to measure the
performance of these strategies in a clinical application, we propose fair
evaluations of some unique and famous designs on well-known datasets. We
provide a comprehensive implementation library with trained models for future
research. In addition, for ease of future studies, we created an online list of
U-Net papers with their possible official implementation. All information is
gathered in the https://github.com/NITR098/Awesome-U-Net repository. Comment:
Submitted to the IEEE Transactions on Pattern Analysis and Machine Intelligence Journal
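For readers unfamiliar with the architecture this review centers on, the following shape-only sketch illustrates the U-Net skip-connection data flow: encoder features are pooled down, upsampled back, and concatenated with same-resolution encoder features. There are no learned weights here; it shows the data flow only, not any specific U-Net variant.

```python
import numpy as np

# Shape-only illustration of a U-Net skip connection (no learning involved).

def down(x):
    # 2x2 average pooling on an H x W x C feature map
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up(x):
    # 2x nearest-neighbor upsampling
    return x.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
enc0 = rng.random((8, 8, 4))                   # encoder feature map
dec1 = up(down(enc0))                          # down and back up: 8 x 8 x 4
skip = np.concatenate([enc0, dec1], axis=-1)   # the U-Net skip connection
print(skip.shape)                              # (8, 8, 8)
```

The concatenation is what lets the decoder recover fine spatial detail lost during downsampling, which is the property the review credits for U-Net's success across modalities.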
NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review
Neural Radiance Field (NeRF), a novel view synthesis method with implicit scene
representation, has taken the field of Computer Vision by storm. As a novel view
synthesis and 3D reconstruction method, NeRF models find applications in
robotics, urban mapping, autonomous navigation, virtual reality/augmented
reality, and more. Since the original paper by Mildenhall et al., more than 250
preprints were published, with more than 100 eventually being accepted in tier
one Computer Vision conferences. Given NeRF's popularity and the current interest
in this research area, we believe it necessary to compile a comprehensive
survey of NeRF papers from the past two years, which we organized into both
architecture- and application-based taxonomies. We also provide an introduction
to the theory of NeRF-based novel view synthesis, and a benchmark comparison of
the performance and speed of key NeRF models. By creating this survey, we hope
to introduce new researchers to NeRF, provide a helpful reference for
influential works in this field, as well as motivate future research directions
with our discussion section.
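For orientation, the volume rendering model at the heart of NeRF, introduced in the original Mildenhall et al. paper and covered by the survey's theory section, is:

```latex
% Expected color C of a camera ray r(t) = o + t d between near/far bounds t_n, t_f,
% where sigma is volume density and c is view-dependent emitted color.
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```

Here T(t) is the accumulated transmittance along the ray, i.e., the probability that light travels from the near bound to t without being absorbed.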
A survey on deep geometry learning: from a representation perspective
Researchers have achieved great success in dealing with 2D images using deep learning. In recent years, 3D computer vision and deep geometry learning have gained ever more attention. Many advanced techniques for 3D shapes have been proposed for different applications. Unlike 2D images, which can be uniformly represented by a regular grid of pixels, 3D shapes have various representations, such as depth images, multi-view images, voxels, point clouds, meshes, implicit surfaces, etc. The performance achieved in different applications largely depends on the representation used, and there is no single representation that works well for all applications. Therefore, in this survey, we review recent developments in deep learning for 3D geometry from a representation perspective, summarizing the advantages and disadvantages of different representations for different applications. We also present existing datasets in these representations and further discuss future research directions.
Machine Learning for Metasurfaces Design and Their Applications
Metasurfaces (MTSs) are increasingly emerging as enabling technologies to
meet the demands for multi-functional, small form-factor, efficient,
reconfigurable, tunable, and low-cost radio-frequency (RF) components because
of their ability to manipulate waves in a sub-wavelength thickness through
modified boundary conditions. They enable the design of reconfigurable
intelligent surfaces (RISs) for adaptable wireless channels and smart radio
environments, wherein the inherently stochastic nature of the wireless
environment is transformed into a programmable propagation channel. In
particular, space-limited RF applications, such as communications and radar,
that have strict radiation requirements are currently being investigated for
potential RIS deployment. The RIS comprises sub-wavelength units or meta-atoms,
which are independently controlled and whose geometry and material determine
the spectral response of the RIS. Conventionally, designing RIS to yield the
desired EM response requires trial and error, iteratively investigating a
large space of candidate geometries and materials through thousands of
full-wave EM simulations. In this context, machine/deep learning (ML/DL)
techniques are proving critical in reducing the computational cost and time of
RIS inverse design. Instead of explicitly solving Maxwell's equations, DL
models learn physics-based relationships through supervised training data. The
ML/DL techniques also aid in RIS deployment for numerous wireless applications,
which requires dealing with multiple channel links between the base station
(BS) and the users. As a result, the BS and RIS beamformers require a joint
design, wherein the RIS elements must be rapidly reconfigured. This chapter
provides a synopsis of DL techniques for both inverse RIS design and
RIS-assisted wireless systems. Comment: Book chapter, 70 pages, 12 figures, 2
tables. arXiv admin note: substantial text overlap with arXiv:2101.09131,
arXiv:2009.0254
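The surrogate-modeling idea described above, i.e., learning a geometry-to-response mapping from simulation samples instead of re-running full-wave EM solvers, can be sketched as follows. The quadratic feature map and the synthetic response function are illustrative assumptions, not a real meta-atom model.

```python
import numpy as np

# Hedged sketch: fit a cheap surrogate on "simulation" data so that candidate
# geometries can be scored without a full-wave EM solve each time.

rng = np.random.default_rng(1)
geom = rng.uniform(0.1, 1.0, size=(200, 2))        # e.g. meta-atom width, gap
resp = 1.5 * geom[:, 0] - 0.7 * geom[:, 1] ** 2    # stand-in for an EM solver

# Supervised "training": least-squares fit over hand-picked basis functions.
X = np.column_stack([np.ones(len(geom)), geom[:, 0], geom[:, 1] ** 2])
theta, *_ = np.linalg.lstsq(X, resp, rcond=None)

pred = X @ theta                                   # fast surrogate predictions
print(float(np.max(np.abs(pred - resp))))
```

Once fitted, the surrogate can be evaluated millions of times for inverse design at negligible cost, which is the computational saving the chapter attributes to ML/DL methods.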
On Computable Protein Functions
Proteins are biological machines that perform the majority of functions necessary for life. Nature has evolved many different proteins, each of which performs a subset of an organism's functional repertoire. One aim of biology is to solve the sparse, high-dimensional problem of annotating all proteins with their true functions. Experimental characterisation remains the gold standard for assigning function, but is a major bottleneck due to resource scarcity. In this thesis, we develop a variety of computational methods to predict protein function, reduce the functional search space for proteins, and guide the design of experimental studies. Our methods take two distinct approaches: protein-centric methods that predict the functions of a given protein, and function-centric methods that predict which proteins perform a given function. We applied our methods to help solve a number of open problems in biology. First, we identified new proteins involved in the progression of Alzheimer's disease using proteomics data of brains from a fly model of the disease. Second, we predicted novel plastic hydrolase enzymes in a large data set of 1.1 billion protein sequences from metagenomes. Finally, we optimised a neural network method that extracts a small number of informative features from protein networks, which we used to predict functions of fission yeast proteins.
Coarse-grained modeling for molecular discovery: Applications to cardiolipin selectivity
The development of novel materials is pivotal for addressing global challenges such as achieving sustainability, technological progress, and advancements in medical technology. Traditionally, developing or designing new molecules was a resource-intensive endeavor, often reliant on serendipity. Given the vast space of chemically feasible drug-like molecules, estimated at between 10^6 and 10^100 compounds, traditional in vitro techniques fall short. Consequently, in silico tools such as virtual screening and molecular modeling have gained increasing recognition. However, the computational cost and the limited precision of the utilized molecular models still limit computational molecular design. This thesis aimed to enhance the molecular design process by integrating multiscale modeling and free-energy calculations. Employing a coarse-grained model allowed us to efficiently traverse a significant portion of chemical space and reduce the sampling time required by molecular dynamics simulations. The physics-informed nature of the applied Martini force field and its level of retained structural detail make the model a suitable starting point for the focused learning of molecular properties. We applied our proposed approach to a cardiolipin bilayer, posing a relevant and challenging problem and facilitating reasonable comparison to experimental measurements. We identified promising molecules with defined properties within the resolution limit of a coarse-grained representation. Furthermore, we were able to bridge the gap from in silico predictions to in vitro and in vivo experiments, supporting the validity of the theoretical concept. The findings underscore the potential of multiscale modeling and free-energy calculations in enhancing molecular discovery and design and offer a promising direction for future research.