Informed baseline subtraction of proteomic mass spectrometry data aided by a novel sliding window algorithm
Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear
time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein
profiles from biological samples with the aim of discovering biomarkers for
disease. However, the raw protein profiles suffer from several sources of bias
or systematic variation which need to be removed via pre-processing before
meaningful downstream analysis of the data can be undertaken. Baseline
subtraction, an early pre-processing step that removes the non-peptide signal
from the spectra, is complicated by the following: (i) each spectrum has, on
average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and
(ii) the time-consuming and error-prone trial-and-error process for optimising
the baseline subtraction input arguments. With reference to the aforementioned
complications, we present an automated pipeline that includes (i) a novel
`continuous' line segment algorithm that efficiently operates over data with a
transformed m/z-axis to remove the relationship between peptide mass and peak
width, and (ii) an input-free algorithm to estimate peak widths on the
transformed m/z scale. The automated baseline subtraction method was deployed
on six publicly available proteomic MS datasets using six different m/z-axis
transformations. Optimality of the automated baseline subtraction pipeline was
assessed quantitatively using the mean absolute scaled error (MASE) when
compared to a gold-standard baseline subtracted signal. Near-optimal baseline
subtraction was achieved using the automated pipeline. The advantages of the
proposed pipeline include informed, data-specific input arguments for
baseline subtraction methods, the avoidance of time-intensive and subjective
piecewise baseline subtraction, and the ability to automate baseline
subtraction completely. Moreover, individual steps can be adopted as
stand-alone routines.
Comment: 50 pages, 19 figures
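The pipeline's two core ideas — estimating a baseline with a local sliding window and scoring it against a gold standard with the mean absolute scaled error (MASE) — can be illustrated with a minimal sketch. This is a hypothetical simplification, assuming a simple moving-minimum window in place of the authors' `continuous' line segment algorithm, and a synthetic spectrum in place of real MALDI TOF-MS data:

```python
import numpy as np

def sliding_window_baseline(intensity, half_width):
    """Estimate a baseline as the local minimum within a sliding window.

    Hypothetical stand-in for the paper's `continuous' line segment
    algorithm; `half_width` is an assumed window parameter.
    """
    n = len(intensity)
    baseline = np.empty(n)
    for i in range(n):
        lo = max(0, i - half_width)
        hi = min(n, i + half_width + 1)
        baseline[i] = intensity[lo:hi].min()
    return baseline

def mase(estimate, gold_standard):
    """Mean absolute scaled error versus a gold-standard baseline,
    scaled by the mean absolute first difference of the gold standard."""
    mae = np.mean(np.abs(estimate - gold_standard))
    scale = np.mean(np.abs(np.diff(gold_standard)))
    return mae / scale

# Synthetic spectrum: a smooth decaying baseline plus two narrow peaks.
x = np.linspace(0, 1, 500)
baseline_true = 10 * np.exp(-2 * x)
signal = baseline_true.copy()
signal[100] += 50.0
signal[300] += 30.0

est = sliding_window_baseline(signal, half_width=25)
print(mase(est, baseline_true))
```

The moving minimum sits at or below the signal everywhere and ignores the narrow peaks, which is the property any baseline estimator must have before subtraction.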
Speckle Reduction with Attenuation Compensation for Skin OCT Images Enhancement
The enhancement of skin images in optical coherence tomography (OCT) imaging can help
dermatologists investigate tissue layers more accurately, and hence diagnose more efficiently. In this paper, we
propose an image enhancement technique including speckle reduction, attenuation compensation and cleaning to
improve the quality of OCT skin images. A weighted median filter is designed to reduce the level of speckle
noise while preserving the contrast. A novel border detection technique is designed to outline the main skin layers:
stratum corneum, epidermis and dermis. A model of the light attenuation is then used to estimate the absorption
coefficients of the epidermis and dermis layers and compensate for the brightness of structures at deeper levels. The
undesired part of the image is removed using a simple cleaning algorithm. The performance of the algorithm has
been evaluated visually and numerically using commonly used no-reference quality metrics. The results show
an improvement in the quality of the images.
Keywords: Optical coherence tomography (OCT), Skin, Image enhancement, Speckle reduction, Attenuation
compensation
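The speckle-reduction step — a weighted median filter — can be sketched as follows. This is a minimal illustration assuming a centre-weighted 3x3 kernel; the abstract does not specify the authors' actual weights or window size:

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the value at which the cumulative sorted
    weight first reaches half of the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def weighted_median_filter(img, kernel):
    """Apply a weighted median filter with a 3x3 kernel of weights.

    Simplified stand-in for the paper's speckle-reduction filter;
    the kernel below is an assumption, not the authors' design.
    """
    pad = np.pad(img, 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    w = kernel.ravel().astype(float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3].ravel()
            out[i, j] = weighted_median(patch, w)
    return out

# Centre-weighted kernel: the centre pixel counts three times.
kernel = np.array([[1, 1, 1], [1, 3, 1], [1, 1, 1]])

# Constant image with a single speckle spike: the filter removes the
# spike while leaving the uniform background untouched.
img = np.full((5, 5), 10.0)
img[2, 2] = 100.0
smoothed = weighted_median_filter(img, kernel)
print(smoothed[2, 2])
```

Unlike a mean filter, the median discards the outlier entirely rather than smearing it into its neighbours, which is why median-type filters preserve contrast across layer boundaries.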
An Improved Algorithm for Eye Corner Detection
In this paper, a modified algorithm for the detection of nasal and temporal
eye corners is presented. The algorithm is a modification of the Santos and
Proença method. In the first step, we detect the face and the eyes using
classifiers based on Haar-like features. We then segment out the sclera, from
the detected eye region. From the segmented sclera, we segment out an
approximate eyelid contour. Eye corner candidates are obtained using the Harris and
Stephens corner detector. Finally, we introduce a post-pruning step on the eye corner
candidates to locate the eye corners. The algorithm has been tested on the
Yale and JAFFE databases, as well as on a database we created.
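The corner-candidate step uses the Harris and Stephens response, R = det(M) - k·trace(M)^2, computed from the local structure tensor M of image gradients. A minimal NumPy sketch on a synthetic image follows; in practice one would use OpenCV's cv2.cornerHarris, and the 3x3 box window and k = 0.04 here are common defaults, not the paper's stated settings:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris & Stephens corner response R = det(M) - k * trace(M)^2,
    from central-difference gradients averaged over a 3x3 window."""
    Ix = np.zeros_like(img, dtype=float)
    Iy = np.zeros_like(img, dtype=float)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):
        # 3x3 box average of the structure-tensor entries.
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic image: a bright square; its corners score highest, its
# edges score negative, and flat regions score zero.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
corner = np.unravel_index(np.argmax(R), R.shape)
```

The response is large only where the gradient varies in two directions at once, which is what distinguishes a true corner candidate from a point on the eyelid contour.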
A Cosmic Watershed: the WVF Void Detection Technique
On megaparsec scales the Universe is permeated by an intricate filigree of
clusters, filaments, sheets and voids, the Cosmic Web. For the understanding of
its dynamical and hierarchical history it is crucial to identify objectively
its complex morphological components. One of the most characteristic aspects is
that of the dominant underdense Voids, the product of a hierarchical process
driven by the collapse of minor voids in addition to the merging of large ones.
In this study we present an objective void finder technique which involves a
minimum of assumptions about the scale, structure and shape of voids. Our void
finding method, the Watershed Void Finder (WVF), is based upon the Watershed
Transform, a well-known technique for the segmentation of images. Importantly,
the technique has the potential to trace the existing manifestations of a void
hierarchy. The basic watershed transform is augmented by a variety of
correction procedures to remove spurious structure resulting from sampling
noise. This study contains a detailed description of the WVF. We demonstrate
how it is able to trace and identify, in a relatively parameter-free fashion, voids and
their surrounding (filamentary and planar) boundaries. We test the technique on
a set of Kinematic Voronoi models, heuristic spatial models for a cellular
distribution of matter. Comparison of the WVF segmentations of low noise and
high noise Voronoi models with the quantitatively known spatial characteristics
of the intrinsic Voronoi tessellation shows that the size and shape of the
voids are successfully retrieved. WVF even manages to reproduce the full void
size distribution function.
Comment: 24 pages, 15 figures, MNRAS accepted; for full resolution, see
http://www.astro.rug.nl/~weygaert/tim1publication/watershed.pd
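The core of the watershed transform — partitioning a field into basins around its local minima — can be sketched with a simple steepest-descent variant. This is a toy illustration, not the WVF itself, which builds on flooding-based watershed segmentation plus noise-correction procedures the sketch omits:

```python
import numpy as np

def watershed_basins(field):
    """Label each cell of a 2D field with its watershed basin by
    following the path of steepest descent to a local minimum.

    Minimal steepest-descent variant of the watershed transform;
    in the WVF the "field" would be the reconstructed density field
    and each basin a candidate void.
    """
    rows, cols = field.shape
    labels = -np.ones(field.shape, dtype=int)

    def downhill(r, c):
        # Steepest-descent 8-neighbour, or None at a local minimum.
        best, best_val = None, field[r, c]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols \
                        and field[rr, cc] < best_val:
                    best, best_val = (rr, cc), field[rr, cc]
        return best

    next_label = 0
    for r in range(rows):
        for c in range(cols):
            # Descend until we hit a minimum or an already-labelled cell.
            path = [(r, c)]
            while labels[path[-1]] < 0 and downhill(*path[-1]) is not None:
                path.append(downhill(*path[-1]))
            if labels[path[-1]] >= 0:
                lab = labels[path[-1]]
            else:
                lab = next_label
                next_label += 1
            for p in path:
                labels[p] = lab
    return labels

# Toy "density field": two depressions (voids) separated by a ridge.
y, x = np.mgrid[0:20, 0:20]
field = np.minimum((x - 5) ** 2 + (y - 10) ** 2,
                   (x - 14) ** 2 + (y - 10) ** 2).astype(float)
labels = watershed_basins(field)
print(len(np.unique(labels)))   # number of basins found
```

Each basin collects every cell that drains to the same minimum, and the ridge between basins is exactly the watershed boundary — the analogue of the filamentary and planar walls bounding the voids.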