
    Continuum removal in Hα extragalactic measurements

    We point out an important source of error in measurements of extragalactic H-alpha emission and suggest ways to reduce it. The H-alpha line, used for estimating star formation rates, is commonly measured by imaging in a narrow band and a wide band, both of which include the line. The image analysis relies on the accurate removal of the underlying continuum. We discuss in detail the derivation of the emission line's equivalent width and flux for extragalactic extended sources, and the required photometric calibrations. We describe commonly used continuum-subtraction procedures, and discuss the uncertainties that they introduce. Specifically, we analyse errors introduced by colour effects. We show that the errors in the measured H-alpha equivalent width induced by colour effects can lead to underestimates as large as 40% and overestimates as large as 10%, depending on the underlying galaxy's stellar population and the continuum-subtraction procedure used. We also show that these errors may lead to biases in results of surveys, and to the underestimation of the cosmic star formation rate at low redshifts (the low-z points in the Madau plot). We suggest a method to significantly reduce these errors using a single colour measurement. Comment: 8 pages, 3 figures, MNRAS in press
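    To make the continuum-subtraction step concrete, the sketch below shows how the line flux and equivalent width follow from a narrow-band and a wide-band measurement when both filters contain the line, under the idealised assumption of a flat continuum across both bands; the function name and symbols are illustrative, not the paper's notation.

```python
def halpha_line_flux_and_ew(f_n, f_w, delta_n, delta_w):
    """Illustrative two-filter continuum subtraction (flat-continuum assumption).

    f_n, f_w         : mean flux densities in the narrow and wide bands
    delta_n, delta_w : effective filter widths, same wavelength units
    Returns the line flux and equivalent width.
    """
    # Continuum level implied jointly by the two bands.
    f_cont = (delta_w * f_w - delta_n * f_n) / (delta_w - delta_n)
    # Line flux after removing the continuum contribution to the narrow band.
    f_line = delta_n * delta_w * (f_n - f_w) / (delta_w - delta_n)
    # Equivalent width: line flux expressed as an equivalent width of continuum.
    ew = f_line / f_cont
    return f_line, ew
```

    The colour-dependent errors discussed in the abstract arise precisely where the real continuum slope breaks the flat-continuum assumption built into such formulae.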

    A deep level set method for image segmentation

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with an FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types of medical imaging data (liver CT and left ventricle MRI data), we show that the integrated method achieves good performance even when little training data is available, outperforming the FCN or the level set model alone.
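    One way to picture the integration is a level-set-inspired region energy computed from the FCN's probability map, which can be minimised on unlabeled images during fine-tuning. The PyTorch sketch below uses a Chan-Vese-style term as that energy; this is a hedged illustration of the general idea, not the authors' exact formulation.

```python
import torch

def chan_vese_style_loss(prob, image, mu=0.1):
    """Level-set-inspired region loss for unlabeled images (illustrative).

    prob  : FCN foreground probability map, shape (N, 1, H, W)
    image : corresponding intensity image,   shape (N, 1, H, W)
    mu    : weight of the smoothness (curve-length) term
    """
    eps = 1e-6
    # Region means inside/outside the soft segmentation (Chan-Vese c1, c2).
    c1 = (prob * image).sum(dim=(2, 3)) / (prob.sum(dim=(2, 3)) + eps)
    c2 = ((1 - prob) * image).sum(dim=(2, 3)) / ((1 - prob).sum(dim=(2, 3)) + eps)
    c1 = c1[..., None, None]
    c2 = c2[..., None, None]
    # Region-fitting terms weighted by the soft membership.
    region = (prob * (image - c1) ** 2 + (1 - prob) * (image - c2) ** 2).mean()
    # Smoothness term: total variation of the probability map.
    tv = (prob[..., 1:, :] - prob[..., :-1, :]).abs().mean() + \
         (prob[..., :, 1:] - prob[..., :, :-1]).abs().mean()
    return region + mu * tv
```

    Because the loss depends only on the image and the network's own output, it can be added to the supervised loss on labeled batches and used alone on unlabeled ones, which is what makes the semi-supervised setting possible.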

    Stromgren Photometry from z=0 to z~1. The Method

    We use rest-frame Stromgren photometry to observe clusters of galaxies in a self-consistent manner from z=0 to z=0.8. Stromgren photometry of galaxies is an efficient compromise between standard broad-band photometry and spectroscopy, in the sense that it is more sensitive to subtle variations in spectral energy distributions than the former, yet much less time-consuming than the latter. Principal Component Analysis (PCA) is used to extract maximum information from the Stromgren data. By calibrating the Principal Components using well-studied galaxies (and stellar population models), we develop a purely empirical method to detect, and subsequently classify, cluster galaxies at all redshifts smaller than 0.8. Interlopers are discarded with unprecedented efficiency (up to 100%). The first Principal Component essentially reproduces the Hubble Sequence, and can thus be used to determine the global star formation history of cluster members. The (PC2, PC3) plane allows us to identify Seyfert galaxies (and distinguish them from starbursts) based on photometric colors alone. In the case of E/S0 galaxies with known redshift, we are able to resolve the age-dust-metallicity degeneracy, albeit at the accuracy limit of our present observations. This technique will allow us to probe galaxy clusters well beyond their cores and to fainter magnitudes than spectroscopy can achieve. We are able to directly compare these data over the entire redshift range without a priori assumptions because our observations do not require k-corrections. The compilation of such data for different cluster types over a wide redshift range is likely to set important constraints on the evolution of galaxies and on the clustering process. Comment: 35 pages, 18 figures, accepted by ApJ
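    The classification step amounts to projecting Stromgren colours onto principal components and reading galaxy types off the component planes. Below is a minimal scikit-learn sketch using simulated placeholder colours; the colour choices and the interpretation of the components follow the abstract's description, not the paper's actual calibration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder rest-frame Stromgren colours for 500 galaxies, e.g. columns
# (uz - vz, bz - yz, vz - yz); real catalogues would be substituted here.
rng = np.random.default_rng(0)
colours = rng.normal(size=(500, 3))

pca = PCA(n_components=3)
scores = pca.fit_transform(colours)

pc1 = scores[:, 0]                      # tracks star formation history / Hubble type
pc2, pc3 = scores[:, 1], scores[:, 2]   # plane used to separate Seyferts from starbursts
print(pca.explained_variance_ratio_)    # how much colour variance each PC carries
```

    Calibrating the components against well-studied galaxies and stellar population models, as the abstract describes, is what turns these purely statistical axes into physically interpretable ones.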

    Tversky loss function for image segmentation using 3D fully convolutional deep networks

    Fully convolutional deep neural networks offer excellent potential for fast and accurate image segmentation. One of the main challenges in training these networks is data imbalance, which is particularly problematic in medical imaging applications such as lesion segmentation, where the number of lesion voxels is often much lower than the number of non-lesion voxels. Training with unbalanced data can lead to predictions that are severely biased towards high precision but low recall (sensitivity), which is undesirable especially in medical applications, where false negatives are much less tolerable than false positives. Several methods have been proposed to deal with this problem, including balanced sampling, two-step training, sample re-weighting, and similarity loss functions. In this paper, we propose a generalized loss function based on the Tversky index to address the issue of data imbalance and achieve a much better trade-off between precision and recall in training 3D fully convolutional deep neural networks. Experimental results in multiple sclerosis lesion segmentation on magnetic resonance images show improved F2 score, Dice coefficient, and area under the precision-recall curve on test data. Based on these results, we suggest the Tversky loss function as a generalized framework for effectively training deep neural networks.
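    The loss itself is compact: it replaces the symmetric Dice overlap with the asymmetric Tversky index TI = TP / (TP + alpha*FP + beta*FN), so choosing beta > alpha penalises false negatives more heavily and shifts the trained network towards higher recall. A minimal PyTorch sketch follows; the alpha/beta defaults and tensor layout are illustrative rather than the paper's exact settings.

```python
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss for binary segmentation (illustrative defaults).

    pred   : predicted foreground probabilities, shape (N, ...)
    target : binary ground-truth mask, same shape
    alpha  : weight on false positives
    beta   : weight on false negatives (beta > alpha favours recall)
    """
    pred = pred.flatten(1)
    target = target.flatten(1).float()
    tp = (pred * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky).mean()
```

    With alpha = beta = 0.5 the index reduces to the Dice coefficient, which is why the Tversky loss can be read as a generalization of the familiar Dice loss.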

    Database Search Strategies for Proteomic Data Sets Generated by Electron Capture Dissociation Mass Spectrometry

    Large data sets of electron capture dissociation (ECD) mass spectra from proteomic experiments are rich in information; however, extracting that information in an optimal manner is not straightforward. Protein database search engines currently available are designed for low-resolution CID data, from which Fourier transform ion cyclotron resonance (FT-ICR) ECD data differ significantly. ECD mass spectra contain both z-prime and z-dot fragment ions (and c-prime and c-dot); ECD mass spectra contain abundant peaks derived from neutral losses from charge-reduced precursor ions; and FT-ICR ECD spectra are acquired with a larger precursor m/z isolation window than their low-resolution CID counterparts. Here, we consider three distinct stages of post-acquisition analysis: (1) processing of ECD mass spectra prior to the database search; (2) the database search step itself; and (3) post-search processing of results. We demonstrate that each of these steps has an effect on the number of peptides identified, with the post-search processing of results having the largest effect. We compare two commonly used search engines: Mascot and OMSSA. Using an ECD data set of modest size (3341 mass spectra) from a complex sample (mouse whole-cell lysate), we demonstrate that search results can be improved from 630 identifications (19% identification success rate) to 1643 identifications (49% identification success rate). We focus in particular on improving identification rates for doubly charged precursors, which are typically low for ECD fragmentation. We compare our pre-search processing algorithm with a similar algorithm recently developed for electron transfer dissociation (ETD) data.
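    As an illustration of the pre-search processing stage, the sketch below drops peaks near the charge-reduced precursor species and a few common small neutral losses before a spectrum is submitted to a search engine such as Mascot or OMSSA. The loss list, mass tolerance, and charge handling are simplified assumptions for illustration, not the authors' algorithm.

```python
def strip_charge_reduced_peaks(peaks, precursor_mz, precursor_z, tol=0.02):
    """Remove peaks near charge-reduced precursor species (illustrative).

    peaks        : list of (m/z, intensity) tuples
    precursor_mz : precursor m/z
    precursor_z  : precursor charge state
    tol          : exclusion half-width in m/z units
    """
    proton = 1.007276
    # Example small-molecule losses; real ECD side-chain loss sets are larger.
    neutral_losses = [0.0, 17.027, 18.011]  # none, NH3, H2O
    neutral_mass = (precursor_mz - proton) * precursor_z
    exclude = []
    # After electron capture the precursor keeps its protons but loses charge,
    # so charge-reduced species appear at lower charge states.
    for z in range(1, precursor_z):
        for loss in neutral_losses:
            exclude.append((neutral_mass + precursor_z * proton - loss) / z)
    return [(mz, inten) for mz, inten in peaks
            if all(abs(mz - ex) > tol for ex in exclude)]
```

    Removing these uninformative, often dominant peaks before searching is one way a pre-search step can raise identification rates, particularly for the low-charge precursors the abstract highlights.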