69 research outputs found

    Determination of tip transfer function for quantitative MFM using frequency domain filtering and least squares method

    Magnetic force microscopy (MFM) has unsurpassed capabilities in the analysis of nanoscale and microscale magnetic samples and devices. As with other Scanning Probe Microscopy techniques, quantitative analysis remains a challenge. Despite considerable theoretical and practical progress in this area, present methods are seldom used because of their complexity and a lack of systematic understanding of the related uncertainties and recommended best practice. The Tip Transfer Function (TTF) is a key concept in making Magnetic Force Microscopy measurements quantitative. We present a numerical study of several aspects of TTF reconstruction using multilayer samples with perpendicular magnetisation. We address the choice of numerical approach, the impact of non-periodicity and windowing, suitable conventions for data normalisation and units, criteria for choosing the regularisation parameter, and experimental effects observed in real measurements. We present a simple regularisation parameter selection method based on TTF width and verify this approach via numerical experiments. Examples of TTF estimation are shown on both 2D and 3D experimental datasets. We give recommendations on best practice for robust TTF estimation, including the choice of windowing function, measurement strategy and the handling of experimental error sources. A method for synthetic MFM data generation, suitable for large-scale numerical experiments, is also presented.
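
    The core of such a TTF reconstruction is a regularised least-squares deconvolution in the frequency domain. Below is an illustrative sketch of that idea, not the authors' code: the function names, the Hann window and the width-based criterion are assumptions made for the example.

```python
# Illustrative sketch of Tikhonov-regularised, frequency-domain TTF estimation.
# Not the authors' implementation: array names, the Hann window choice and the
# width criterion are assumptions for this example.
import numpy as np

def estimate_ttf(measured, model_field, lam):
    """Estimate the tip transfer function from a measured MFM image and a
    modelled stray-field quantity of the reference sample, via regularised
    least squares in Fourier space.

    measured    : 2D array, measured MFM signal
    model_field : 2D array, computed field pattern of the reference sample
    lam         : Tikhonov regularisation parameter
    """
    ny, nx = measured.shape
    # Windowing suppresses artefacts from the non-periodic image borders.
    win = np.outer(np.hanning(ny), np.hanning(nx))
    M = np.fft.fft2(measured * win)
    F = np.fft.fft2(model_field * win)
    # Regularised least-squares solution: TTF = conj(F)*M / (|F|^2 + lam)
    ttf_hat = np.conj(F) * M / (np.abs(F) ** 2 + lam)
    return np.real(np.fft.ifft2(ttf_hat))

def ttf_width(ttf):
    """Effective width of the TTF, usable as a criterion when sweeping lam."""
    t = np.abs(np.fft.fftshift(ttf))
    t /= t.sum()
    ny, nx = t.shape
    y, x = np.mgrid[0:ny, 0:nx]
    cy, cx = (t * y).sum(), (t * x).sum()
    return np.sqrt((t * ((y - cy) ** 2 + (x - cx) ** 2)).sum())
```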

    ASKI: full-sky lensing map making algorithms

    Within the context of upcoming full-sky lensing surveys, the edge-preserving non-linear algorithm Aski is presented. Using the framework of Maximum A Posteriori inversion, it aims at recovering the full-sky convergence map from surveys with masks. It proceeds in two steps: CCD images of crowded galactic fields are deblurred using automated edge-preserving deconvolution; once the reduced shear is estimated, the convergence map is also inverted via an edge-preserving method. For the deblurring, it is found that when the observed field is crowded, the gain from imposing both positivity and edge-preserving penalties during the iterative deconvolution can be quite significant for realistic ground-based surveys. For the convergence inversion, the quality of the reconstruction is investigated on noisy maps derived from the Horizon N-body simulation, with and without Galactic cuts, and quantified using one-point statistics, power spectra, cluster counts, peak patches and the skeleton. It is found that the reconstruction is able to interpolate and extrapolate within the Galactic cuts and regions of non-uniform noise; its sharpness-preserving penalization avoids strong biasing near the clusters of the map; it reconstructs well the shape of the PDF as traced by its skewness and kurtosis; the geometry and topology of the reconstructed map are close to those of the initial map, as traced by the peak patch distribution and the skeleton's differential length; the two-point statistics of the recovered map are consistent with the corresponding smoothed version of the initial map; the distribution of point sources is also consistent with the corresponding smoothing, with a significant improvement when the edge-preserving prior is applied. The contamination of B-modes when realistic Galactic cuts are present is also investigated; leakage mainly occurs on large scales. Comment: 24 pages, 21 figures; accepted for publication in MNRAS.
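
    As a rough illustration of the edge-preserving MAP idea (not the Aski algorithm itself, which works on the full sky and starts from the reduced shear), the toy sketch below denoises a masked flat-sky map by gradient descent on a Gaussian data-fidelity term plus a Huber penalty on the map gradients; the step size, Huber threshold and iteration count are arbitrary choices.

```python
# Toy flat-sky illustration of MAP recovery with an edge-preserving (Huber)
# penalty on the gradient of the reconstruction. Not the Aski pipeline.
import numpy as np

def huber_grad(t, delta):
    """Derivative of the Huber penalty: quadratic near zero, linear in the
    tails, so sharp features (e.g. cluster edges) are penalised less than
    they would be by a purely quadratic prior."""
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def map_denoise(data, mask, sigma, lam=0.5, delta=0.05, step=0.2, n_iter=500):
    """Minimise ||mask*(x - data)||^2 / (2 sigma^2) + lam * Huber(grad x)."""
    x = data.copy()
    for _ in range(n_iter):
        # Data-fidelity gradient, active only where the survey mask is set.
        g = mask * (x - data) / sigma ** 2
        # Edge-preserving prior: forward differences along both axes.
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        px, py = huber_grad(dx, delta), huber_grad(dy, delta)
        # Approximate divergence (adjoint of the finite-difference operator).
        g += -lam * (np.diff(px, axis=1, prepend=px[:, :1])
                     + np.diff(py, axis=0, prepend=py[:1, :]))
        x -= step * g
    return x
```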

    High-Throughput Image Analysis of Zebrafish Models of Parkinson’s Disease


    Deep learning-based diagnostic system for malignant liver detection

    Cancer is the second most common cause of death in human beings, while liver cancer is the fifth most common cause of mortality. The prevention of deadly diseases requires timely, independent, accurate, and robust detection of the ailment by a computer-aided diagnostic (CAD) system. Executing such an intelligent CAD system requires several preliminary steps, including preprocessing, attribute analysis, and identification. In recent studies, conventional techniques have been used to develop computer-aided diagnosis algorithms. However, such traditional methods can severely alter the structural properties of the processed images and perform inconsistently owing to the variable shape and size of the region of interest. Moreover, the unavailability of sufficient datasets makes the performance of the proposed methods doubtful for commercial use. To address these limitations, I propose novel methodologies in this dissertation. First, I modified a generative adversarial network to perform deblurring and contrast adjustment on computed tomography (CT) scans. Second, I designed a deep neural network with a novel loss function for fully automatic, precise segmentation of the liver and lesions from CT scans. Third, I developed a multi-modal deep neural network that integrates pathological data with imaging data to perform computer-aided diagnosis for malignant liver detection. The dissertation starts with background information that discusses the study objectives and the workflow. Afterward, Chapter 2 reviews a general schematic for developing a computer-aided algorithm, including image acquisition techniques, preprocessing steps, feature extraction approaches, and machine learning-based prediction methods. The first study, proposed in Chapter 3, discusses blurred images and their possible effects on classification; a novel multi-scale GAN with residual image learning is proposed to deblur images. The second method, in Chapter 4, addresses the issue of low-contrast CT scan images; a multi-level GAN is utilized to enhance images with well-contrasted regions, and the enhanced images improve cancer diagnosis performance. Chapter 5 proposes a deep neural network for the segmentation of the liver and lesions from abdominal CT scan images; a modified U-Net with a novel loss function can precisely segment minute lesions. Similarly, Chapter 6 introduces a multi-modal approach for diagnosing liver cancer variants, in which pathological data are integrated with CT scan images. In summary, this dissertation presents novel algorithms for preprocessing and disease detection, and the comparative analysis validates the effectiveness of the proposed methods in computer-aided diagnosis.
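
    To make the segmentation idea concrete, here is a minimal sketch of a combined Dice and cross-entropy loss of the kind commonly used to keep small lesions from being swamped by the background; this is an illustrative assumption, not the dissertation's actual loss function.

```python
# Minimal sketch of a combined Dice + binary cross-entropy segmentation loss.
# Illustrative only; the weighting and smoothing constants are assumptions.
import torch
import torch.nn as nn

class DiceBCELoss(nn.Module):
    def __init__(self, smooth=1.0, bce_weight=0.5):
        super().__init__()
        self.smooth = smooth
        self.bce_weight = bce_weight
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits, target):
        # Per-pixel classification term.
        bce = self.bce(logits, target)
        # Dice term measures region overlap, which is less dominated by the
        # large background than pixel-wise losses, helping minute lesions.
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum(dim=(1, 2, 3))
        union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = (2.0 * intersection + self.smooth) / (union + self.smooth)
        return self.bce_weight * bce + (1.0 - self.bce_weight) * (1.0 - dice.mean())

# Usage with predictions and masks shaped (batch, 1, H, W):
# loss = DiceBCELoss()(model_output, ground_truth_masks)
```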

    Long Range Automated Persistent Surveillance

    This dissertation addresses long range automated persistent surveillance with a focus on three topics: sensor planning, size preserving tracking, and high magnification imaging. For sensor planning, sufficient overlap between cameras' fields of view should be reserved so that camera handoff can be executed successfully before the object of interest becomes unidentifiable or untraceable. We design a sensor planning algorithm that not only maximizes coverage but also ensures uniform and sufficient overlap between cameras' fields of view for an optimal handoff success rate. This algorithm works for environments with multiple dynamic targets using different types of cameras. Significantly improved handoff success rates are demonstrated via experiments using floor plans of various scales. Size preserving tracking automatically adjusts the camera's zoom for a consistent view of the object of interest. Target scale estimation is carried out based on the paraperspective projection model, which compensates for the center offset and accounts for system latency and tracking errors. A computationally efficient foreground segmentation strategy, 3D affine shapes, is proposed. The 3D affine shapes allow direct, real-time implementation and improved flexibility in accommodating the target's 3D motion, including off-plane rotations. The effectiveness of the scale estimation and foreground segmentation algorithms is validated via both offline and real-time tracking of pedestrians at various resolution levels. Face image quality assessment and enhancement compensate for the degradation in face recognition rates caused by high system magnifications and long observation distances. A class of adaptive sharpness measures is proposed to evaluate and predict this degradation. A wavelet-based enhancement algorithm with automated frame selection is developed and proves effective, yielding a considerably higher face recognition rate for severely blurred long-range face images.
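
    As an illustration of how a wavelet-based sharpness measure can drive frame selection, the sketch below scores frames by the fraction of energy in the wavelet detail subbands; the choice of wavelet, decomposition depth and normalization are assumptions for this example, not the measures defined in the dissertation.

```python
# Illustrative sharpness score from wavelet detail energy. The Daubechies
# wavelet, decomposition depth and normalization are assumed for this sketch.
import numpy as np
import pywt

def wavelet_sharpness(image, wavelet="db2", level=3):
    """Return a scalar sharpness score for a grayscale face image.

    Blur removes high-frequency content, so the energy in the fine-scale
    detail subbands drops; the ratio of detail energy to total energy can be
    used to rank frames and pick the sharpest ones for recognition.
    """
    image = np.asarray(image, dtype=np.float64)
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    detail_energy = sum(np.sum(band ** 2) for sub in details for band in sub)
    total_energy = np.sum(approx ** 2) + detail_energy
    return detail_energy / total_energy

# Frame selection: keep the highest-scoring frames before enhancement, e.g.
# best = sorted(frames, key=wavelet_sharpness, reverse=True)[:k]
```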

    Image Enhancement via Deep Spatial and Temporal Networks

    Image enhancement is a classic problem in computer vision and has been studied for decades. It includes various subtasks such as super-resolution, image deblurring, rain removal and denoising. Among these tasks, image deblurring and rain removal have become increasingly active, as they play an important role in many areas such as autonomous driving, video surveillance and mobile applications. In addition, the two are connected: for example, blur and rain often degrade images simultaneously, and the performance of their removal relies on spatial and temporal learning. To help generate sharp images and videos, in this thesis we propose efficient algorithms based on deep neural networks for solving the problems of image deblurring and rain removal. In the first part of this thesis, we study the problem of image deblurring. Four deep learning-based image deblurring methods are proposed. First, for single image deblurring, a new framework is presented that first learns to transfer sharp images to realistic blurry images via a learning-to-blur Generative Adversarial Network (GAN) module, and then trains a learning-to-deblur GAN module to generate sharp images from blurry versions. In contrast to prior work, which focuses solely on learning to deblur, the proposed method learns to realistically synthesize blurring effects using unpaired sharp and blurry images. Second, for video deblurring, spatio-temporal learning and adversarial training are used to recover sharp and realistic video frames from blurry inputs. 3D convolutional kernels built on deep residual neural networks are employed to capture better spatio-temporal features, and the network is trained with both a content loss and an adversarial loss to drive the model to generate realistic frames. Third, the problem of extracting a sharp image sequence from a single motion-blurred image is tackled: a detail-aware network is presented, a cascaded generator that handles the problems of ambiguity, subtle motion and loss of detail. Finally, this thesis proposes a level-attention deblurring network and constructs a new large-scale dataset of images with blur caused by various factors; we use this dataset to evaluate current deep deblurring methods and our proposed method. In the second part of this thesis, we study the problem of image deraining. Three deep learning-based image deraining methods are proposed. First, for single image deraining, the problem of jointly removing raindrops and rain streaks is tackled. In contrast to most prior works, which focus solely on removing either raindrops or rain streaks, a dual attention-in-attention model is presented that removes both simultaneously. Second, for video deraining, a novel end-to-end framework is proposed to obtain the spatial representation and the temporal correlations using ResNet-based and LSTM-based architectures, respectively. The proposed method can generate multiple derained frames at a time and outperforms the state-of-the-art methods in terms of quality and speed. Finally, for stereo image deraining, a deep stereo semantic-aware deraining network is proposed for the first time in computer vision. Different from previous methods, which learn only from a pixel-level loss function or monocular information, the proposed network advances image deraining by leveraging semantic information and the visual deviation between the two views.
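
    As a concrete illustration of the spatio-temporal building block described for video deblurring, the sketch below shows a 3D-convolutional residual block in PyTorch; the channel count, kernel size and input layout are illustrative assumptions, not the thesis architecture.

```python
# Minimal sketch of a 3D-convolutional residual block for spatio-temporal
# feature learning in video deblurring. Channel count and kernel size are
# illustrative choices, not the thesis architecture.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Residual block with 3D convolutions over (time, height, width)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # The skip connection keeps gradients well-behaved in deep stacks,
        # while the 3D kernels aggregate information across neighbouring frames.
        return x + self.body(x)

# Input layout: (batch, channels, frames, height, width), e.g. features from a
# stack of 5 consecutive blurry frames.
# x = torch.randn(1, 64, 5, 64, 64); y = ResBlock3D()(x)
```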