Memristor-Based Digital Systems Design and Architectures
The memristor is considered a promising alternative for overcoming the scaling limitations of CMOS technology. In recent years, the use of memristors in circuit design has grown rapidly and attracted researchers' interest, with advances in both the size and complexity of memristor designs. The continued scaling of CMOS transistors raises major concerns, such as increased leakage power, reduced reliability, and high fabrication cost, which severely affect the chip manufacturing process and functionality; hence the demand for new devices is increasing. The memristor is regarded as a key element in memory and information-processing design owing to its small size, long-term data retention, low power, and CMOS compatibility. The main objective of this research is to design memristor-based arithmetic circuits and to overcome some of the issues in memristor-based logic design. In this thesis, fast, low-area, low-power hybrid CMOS-memristor digital circuits were implemented. Small- and large-scale memristor-based digital circuits are implemented, and solutions are provided for the memristor degradation and fan-out challenges. As an example, a 4-bit LFSR has been implemented using the MRL scheme with 64 CMOS devices and 64 memristors. The proposed design is more area-efficient than CMOS-based LFSR circuits, and the simulation results verify its functionality. The approach achieves acceptable speed compared with a CMOS-based design and is faster than an IMPLY-based memristive LFSR; the proposed LFSR has an 841 ps delay. Furthermore, the proposed design reduces power by more than 66% relative to the CMOS-based approach. This thesis also proposes the implementation of a memristive 2-D median filter, extending previously published work on memristive filter design to include the characteristics of this emerging technology in image processing.
The proposed circuit was designed based on a Pt/TaOx/Ta redox-based device and Memristor Ratioed Logic (MRL). The filter is designed in Cadence, and the verified memristive median circuit is translated to Verilog-XL as a behavioural model. 512 × 512-pixel input images containing salt-and-pepper noise with various noise-density ratios are applied to the proposed median filter, which substantially removes the noise. Compared with conventional filters, the implementation gives better Peak Signal-to-Noise Ratio (PSNR) and Mean Absolute Error (MAE) for different images and noise-density ratios, while saving area relative to a CMOS-based design. This dissertation also proposes a comprehensive framework for the design, mapping, and synthesis of large-scale memristor-CMOS circuits. The framework provides a synthesis approach applicable to all memristor-based digital logic designs; in particular, it proposes a characterization methodology for memristor-based logic cells to generate a standard cell library for large-scale simulation. The framework is implemented in the Cadence Virtuoso schematic-level environment and was verified with Verilog-XL, MATLAB, and the Synopsys Electronic Design Automation (EDA) compiler after translation to the behavioural level. The proposed method can be applied to any digital logic design; the framework is deployed for the design of a memristor-based parallel 8-bit adder/subtractor and a 2-D memristive median filter.
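The behavioural side of the filter (the part translated to Verilog-XL) can be illustrated in software. Below is a minimal sketch of a 2-D median filter together with the PSNR and MAE metrics used in the evaluation; it models only the filtering behaviour, not the Pt/TaOx/Ta device or the MRL circuit, and all names are illustrative.

```python
import numpy as np

def median_filter_2d(img, k=3):
    """k x k median filter with edge padding (a software stand-in for
    the memristive hardware filter; the MRL circuit is not modelled)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def mae(ref, test):
    """Mean Absolute Error."""
    return np.mean(np.abs(ref.astype(float) - test.astype(float)))

# Corrupt a flat test image with ~10% salt-and-pepper noise.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = clean.copy()
u = rng.random(clean.shape)
noisy[u < 0.05] = 0        # pepper
noisy[u > 0.95] = 255      # salt
restored = median_filter_2d(noisy)
```

The median is well suited to salt-and-pepper noise because the extreme impulse values rarely reach the middle of the ranked window, so PSNR rises and MAE falls after filtering.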
A Learning Framework for Morphological Operators using Counter-Harmonic Mean
We present a novel framework for learning morphological operators using
counter-harmonic mean. It combines concepts from morphology and convolutional
neural networks. A thorough experimental validation analyzes basic
morphological operators dilation and erosion, opening and closing, as well as
the much more complex top-hat transform, for which we report a real-world
application from the steel industry. Using online learning and stochastic
gradient descent, our system learns both the structuring element and the
composition of operators. It scales well to large datasets and online settings.
Comment: Submitted to ISMM'1
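The counter-harmonic mean has a closed form: over a flat window W, CHM_p(f)(x) = Σ_{y∈W} f(y)^(p+1) / Σ_{y∈W} f(y)^p, which tends to the local maximum (dilation) as p → +∞, to the local minimum (erosion) as p → −∞, and reduces to the ordinary mean at p = 0. A minimal NumPy sketch of this limiting behaviour (illustrative only; the paper learns the structuring element, which a flat window does not capture):

```python
import numpy as np

def chm_filter(f, p, k=3):
    """Counter-harmonic mean over a flat k x k window:
    CHM_p(f) = sum(f**(p+1)) / sum(f**p).
    Large positive p approximates dilation (local max), large
    negative p approximates erosion (local min), p = 0 is the mean."""
    pad = k // 2
    g = np.pad(f.astype(float), pad, mode="edge")
    out = np.empty(f.shape)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            win = g[i:i + k, j:j + k]
            out[i, j] = (win ** (p + 1)).sum() / (win ** p).sum()
    return out
```

Because this quotient of two windowed sums is differentiable in both the window weights and p, it fits naturally into the stochastic-gradient-descent training the abstract describes.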
Always Clear Days: Degradation Type and Severity Aware All-In-One Adverse Weather Removal
All-in-one adverse weather removal is an emerging topic in image restoration,
which aims to restore multiple weather degradations in a unified model. The
challenges are twofold: first, discovering and handling the multi-domain
property of the target distribution formed by multiple weather conditions;
second, designing efficient and effective operations for different degradation
types. To address this problem, most prior works focus on the multi-domain
structure caused by weather type. Inspired by the inter- and intra-domain
adaptation literature, we observe that not only weather type but also weather
severity introduces multi-domain structure within each weather-type domain,
which is ignored by previous methods and further limits their performance. To
this end, we propose a degradation type and severity aware model, called
UtilityIR, for blind all-in-one bad weather image restoration. To extract
weather information from a single image, we propose a novel Marginal Quality
Ranking Loss (MQRL) and utilize a Contrastive Loss (CL) to guide weather
severity and type extraction, and leverage a bag of novel techniques such as
Multi-Head Cross Attention (MHCA) and Local-Global Adaptive Instance
Normalization (LG-AdaIN) to efficiently restore spatially varying weather
degradation. The proposed method significantly outperforms SOTA methods
subjectively and objectively on different weather restoration tasks by a large
margin, with fewer model parameters. It can even restore unseen combined
multi-degradation images and modulate the restoration level. Implementation
code will be available at https://github.com/fordevoted/UtilityIR.
Comment: 12 pages, 12 figures
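The abstract does not detail LG-AdaIN, but it presumably builds on standard Adaptive Instance Normalization, which re-normalizes content features with statistics taken from a conditioning branch (here, the extracted degradation representation). A minimal sketch of plain AdaIN, with all shapes and names illustrative:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization over (C, H, W) feature maps:
    whiten each content channel, then re-colour it with the
    per-channel mean and standard deviation of the style features."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```

In LG-AdaIN the modulation statistics would presumably be predicted both locally and globally from the degradation representation; here they are simply measured from a second feature map.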
Diffused Redundancy in Pre-trained Representations
Representations learned by pre-training a neural network on a large dataset
are increasingly used successfully to perform a variety of downstream tasks. In
this work, we take a closer look at how features are encoded in such
pre-trained representations. We find that learned representations in a given
layer exhibit a degree of diffuse redundancy, i.e., any randomly chosen subset
of neurons in the layer that is larger than a threshold size shares a large
degree of similarity with the full layer and is able to perform similarly to
the whole layer on a variety of downstream tasks. For example, a linear probe
trained on of randomly picked neurons from a ResNet50 pre-trained on
ImageNet1k achieves an accuracy within of a linear probe trained on the
full layer of neurons for downstream CIFAR10 classification. We conduct
experiments on different neural architectures (including CNNs and Transformers)
pre-trained on both ImageNet1k and ImageNet21k and evaluate a variety of
downstream tasks taken from the VTAB benchmark. We find that the loss & dataset
used during pre-training largely govern the degree of diffuse redundancy and
the "critical mass" of neurons needed often depends on the downstream task,
suggesting that there is a task-inherent redundancy-performance Pareto
frontier. Our findings shed light on the nature of representations learned by
pre-trained deep neural networks and suggest that entire layers might not be
necessary to perform many downstream tasks. We investigate the potential for
exploiting this redundancy to achieve efficient generalization for downstream
tasks and also draw caution to certain possible unintended consequences.
Comment: Under review
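The probing protocol can be illustrated on synthetic features: train a linear probe on all units, then on a random subset, and compare accuracies. The sketch below uses a closed-form ridge-regression probe on toy Gaussian "features" whose class signal is spread across many units; it is illustrative only and not the paper's ResNet/VTAB setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_probe_acc(X_tr, y_tr, X_te, y_te, lam=1e-2):
    """Closed-form one-vs-all ridge-regression probe; returns test accuracy."""
    Y = np.eye(y_tr.max() + 1)[y_tr]                      # one-hot targets
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ Y)
    return np.mean((X_te @ W).argmax(axis=1) == y_te)

# Synthetic "pre-trained" features: class signal diffused over all units.
n, d, n_cls = 600, 256, 4
y = rng.integers(0, n_cls, size=2 * n)
centers = rng.normal(size=(n_cls, d))
X = centers[y] + 0.8 * rng.normal(size=(2 * n, d))
tr, te = slice(0, n), slice(n, 2 * n)

full = linear_probe_acc(X[tr], y[tr], X[te], y[te])
subset = rng.choice(d, size=d // 4, replace=False)        # 25% of the units
part = linear_probe_acc(X[tr][:, subset], y[tr], X[te][:, subset], y[te])
```

Because the class signal is diffused over all units rather than concentrated in a few, the random 25% subset retains most of the discriminative information and the probe accuracy barely drops.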
A robust framework for medical image segmentation through adaptable class-specific representation
Medical image segmentation is an increasingly important component in virtual pathology, diagnostic imaging and computer-assisted surgery. Better hardware for image acquisition and a variety of advanced visualisation methods have paved the way for the development of computer based tools for medical image analysis and interpretation. The routine use of medical imaging scans of multiple modalities has been growing over the last decades and data sets such as the Visible Human Project have introduced a new modality in the form of colour cryo section data. These developments have given rise to an increasing need for better automatic and semiautomatic segmentation methods. The work presented in this thesis concerns the development of a new framework for robust semi-automatic segmentation of medical imaging data of multiple modalities. Following the specification of a set of conceptual and technical requirements, the framework known as ACSR (Adaptable Class-Specific Representation) is developed in the first case for 2D colour cryo section
segmentation. This is achieved through the development of a novel algorithm for adaptable class-specific sampling of point neighbourhoods, known as the PGA (Path Growing Algorithm), combined with Learning Vector Quantization. The framework is extended to accommodate 3D volume segmentation of cryo section data and subsequently segmentation of single- and multi-channel greyscale MRI data. For the latter, the issues of inhomogeneity and noise are specifically addressed. Evaluation is based on comparison with previously published results on standard simulated and real data sets, using visual presentation, ground-truth comparison and human observer experiments. ACSR provides the user with a simple and intuitive visual initialisation process followed by a fully automatic segmentation. Results on both cryo section and MRI data compare favourably to existing methods, demonstrating robustness both to common artefacts and to multiple user initialisations. Further developments into specific clinical applications are discussed in the future work section.
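Learning Vector Quantization, used above for the class-specific representation, maintains labelled prototypes that are attracted to same-class samples and repelled from different-class samples. A minimal LVQ1 sketch on toy 2-D data (illustrative only; the thesis combines LVQ with PGA-sampled neighbourhood features rather than raw coordinates):

```python
import numpy as np

def lvq1(X, y, P0, P_labels, lr=0.1, epochs=20, seed=0):
    """LVQ1 training: the nearest prototype moves toward a same-class
    sample and away from a different-class sample."""
    rng = np.random.default_rng(seed)
    P = P0.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.linalg.norm(P - X[i], axis=1).argmin()   # winner
            step = lr * (X[i] - P[w])
            P[w] += step if P_labels[w] == y[i] else -step
    return P

def lvq_predict(X, P, P_labels):
    """Classify each sample by the label of its nearest prototype."""
    return P_labels[np.linalg.norm(X[:, None] - P[None], axis=2).argmin(1)]

# Two well-separated Gaussian classes, one prototype each.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.6, (100, 2)), rng.normal(4, 0.6, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
P = lvq1(X, y, np.array([[1.0, 1.0], [3.0, 3.0]]), np.array([0, 1]))
```

Prototype updates converge toward class-conditional modes, which is what makes the representation "class-specific" and cheap to evaluate at segmentation time.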
The Bitonic Filter: Linear Filtering in an Edge-preserving Morphological Framework.
A new filter is presented which has better edge- and detail-preserving properties than a median, noise-reduction capability similar to a Gaussian, and is applicable to many signal and noise types. It is built on a definition of signal as bitonic, i.e. containing only one local maximum or minimum within the filter range. This definition is based on data ranking rather than value; hence the bitonic filter comprises a combination of non-linear morphological and linear operators. It has no data-level-sensitive parameters and can locally adapt to the signal and noise levels in an image, precisely preserving both smooth and discontinuous signals of any level when there is no noise, but also reducing noise in other areas without creating additional artefactual noise. Both the basis and the performance of the filter are examined in detail, and it is shown to be a significant improvement on the Gaussian and median. It is also compared over various noisy images to the image-guided filter, anisotropic diffusion, non-local means, the grain filter, and self-dual forms of levelling and rank filters. In terms of signal-to-noise ratio, the bitonic filter outperforms all of these except non-local means, and sometimes anisotropic diffusion. However, it gives good visual results in all circumstances, with characteristics which make it particularly appropriate for signals or images with varying noise, or features at varying levels. The bitonic filter has very few parameters, does not require optimisation or prior knowledge of noise levels, has no problems with stability, and is reasonably fast to implement. Despite its non-linearity, it hence represents a very practical operation with general applicability.
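The defining bitonic property, at most one local extremum within the filter range, can be checked directly. The toy check below operates on values for clarity, whereas the filter itself realises the property through rank-based morphological operators:

```python
def is_bitonic(window):
    """True if the sequence has at most one local extremum, i.e. the
    signs of the (non-zero) successive differences change at most once."""
    diffs = [b - a for a, b in zip(window, window[1:]) if b != a]
    signs = [1 if d > 0 else -1 for d in diffs]
    return sum(s1 != s2 for s1, s2 in zip(signs, signs[1:])) <= 1
```

Monotonic and constant runs are bitonic by this definition, which is why the filter preserves both smooth ramps and step discontinuities when no noise is present.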
Aerospace Medicine and Biology. A continuing bibliography (Supplement 226)
This bibliography lists 129 reports, articles, and other documents introduced into the NASA scientific and technical information system in November 1981.