
    Implicitly Constrained Semi-Supervised Least Squares Classification

    We introduce a novel semi-supervised version of the least squares classifier. This implicitly constrained least squares (ICLS) classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, our approach does not introduce explicit additional assumptions into the objective function, but leverages implicit assumptions already present in the choice of the supervised least squares classifier. We show that this approach can be formulated as a quadratic programming problem and that its solution can be found using a simple gradient descent procedure. We prove that, in a certain sense, our method never leads to performance worse than the supervised classifier. Experimental results on benchmark datasets corroborate this theoretical result in the multidimensional case, also in terms of the error rate. Comment: 12 pages, 2 figures, 1 table. The Fourteenth International Symposium on Intelligent Data Analysis (2015), Saint-Etienne, France.
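
    The sketch below is a minimal illustration of the constrained-labeling idea described in this abstract: soft labels for the unlabeled points are kept in [0, 1], each labeling implies a supervised least-squares solution, and the squared loss on the labeled data alone is minimized over those soft labels. Function and variable names are illustrative, the small ridge term is an added numerical safeguard, and a bounded quasi-Newton optimizer stands in for the paper's gradient descent procedure.

```python
# Minimal ICLS-style sketch (illustrative, not the authors' implementation).
import numpy as np
from scipy.optimize import minimize

def icls_fit(X_lab, y_lab, X_unl, ridge=1e-6):
    """Least-squares weights chosen by minimizing labeled loss over soft labelings."""
    X_all = np.vstack([X_lab, X_unl])
    G = X_all.T @ X_all + ridge * np.eye(X_all.shape[1])   # Gram matrix of all data

    def weights(q):
        # Supervised least-squares solution implied by soft labeling q of X_unl.
        y_all = np.concatenate([y_lab, q])
        return np.linalg.solve(G, X_all.T @ y_all)

    def labeled_loss(q):
        # Squared loss on the labeled data only, as in the ICLS objective.
        r = X_lab @ weights(q) - y_lab
        return r @ r

    q0 = np.full(X_unl.shape[0], 0.5)                      # start from uncertain labels
    bounds = [(0.0, 1.0)] * X_unl.shape[0]                 # soft labels stay in [0, 1]
    res = minimize(labeled_loss, q0, bounds=bounds, method="L-BFGS-B")
    return weights(res.x)
```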

    Resolution learning in deep convolutional networks using scale-space theory

    Resolution in deep convolutional neural networks (CNNs) is typically bounded by the receptive field size through filter sizes, and by subsampling layers or strided convolutions on feature maps. The optimal resolution may vary significantly depending on the dataset. Modern CNNs hard-code their resolution hyper-parameters in the network architecture, which makes tuning such hyper-parameters cumbersome. We propose to do away with hard-coded resolution hyper-parameters and aim to learn the appropriate resolution from data. We use scale-space theory to obtain a self-similar parametrization of filters and make use of the N-Jet, a truncated Taylor series, to approximate a filter by a learned combination of Gaussian derivative filters. The parameter sigma of the Gaussian basis controls both the amount of detail the filter encodes and the spatial extent of the filter. Since sigma is a continuous parameter, we can optimize it with respect to the loss. The proposed N-Jet layer achieves comparable performance when used in state-of-the-art architectures, while learning the correct resolution in each layer automatically. We evaluate our N-Jet layer on both classification and segmentation, and we show that learning sigma is especially beneficial for inputs at multiple sizes.
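
    As a rough sketch of how such a layer can be parametrized, the module below builds a 2-D Gaussian-derivative basis up to second order on a grid whose extent follows a learnable sigma, and forms each filter as a learned linear combination of that basis. It assumes PyTorch; the class name NJetConv2d, the initialization, and the simple log-parametrization of sigma are illustrative choices, not the authors' published implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NJetConv2d(nn.Module):
    """Filters are learned combinations of Gaussian derivatives (order <= 2)
    with a learnable scale sigma (illustrative sketch)."""

    def __init__(self, in_ch, out_ch, init_sigma=1.0):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.tensor(math.log(init_sigma)))
        self.alpha = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, 6))  # 6 basis filters

    def _basis(self, sigma):
        # Sample the Gaussian and its derivatives on a grid whose extent follows sigma.
        half = max(2, int(torch.ceil(3 * sigma).item()))
        x = torch.arange(-half, half + 1, dtype=torch.float32)
        xx, yy = torch.meshgrid(x, x, indexing="ij")
        g = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        g = g / g.sum()
        s2 = sigma ** 2
        return torch.stack([
            g,                                  # G
            -xx / s2 * g,                       # Gx
            -yy / s2 * g,                       # Gy
            (xx ** 2 / s2 - 1) / s2 * g,        # Gxx
            xx * yy / (s2 ** 2) * g,            # Gxy
            (yy ** 2 / s2 - 1) / s2 * g,        # Gyy
        ])                                      # shape (6, k, k)

    def forward(self, x):
        sigma = self.log_sigma.exp()            # keeps sigma strictly positive
        basis = self._basis(sigma)
        # Each output filter is a learned linear combination of the basis filters.
        w = torch.einsum("oib,bkl->oikl", self.alpha, basis)
        return F.conv2d(x, w, padding=w.shape[-1] // 2)
```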

    Image Reconstruction from Multiscale Critical Points

    A minimal variance reconstruction scheme is derived using derivatives of the Gaussian as filters. A closed-form mixed correlation matrix for reconstructions from multiscale points and their local derivatives up to the second order is presented. With the inverse of this mixed correlation matrix, a reconstruction of the image can be easily calculated. Some interesting results of reconstructions from multiscale critical points are presented. The influence of limited calculation precision is considered, using the condition number of the mixed correlation matrix.
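
    A minimal sketch of this kind of linear reconstruction, under the assumption that each measurement is the inner product of the image with a Gaussian derivative filter (up to second order) at a given point and scale: the mixed correlation matrix is then the Gram matrix of those filters, its condition number indicates numerical stability, and the minimum-norm reconstruction follows from a single linear solve. Names and the dense filter representation are illustrative simplifications.

```python
import numpy as np

def gaussian_derivative_filter(shape, x0, y0, sigma, dx, dy):
    """Sampled Gaussian derivative of order (dx, dy) centred at (x0, y0)."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    u, v = (x - x0) / sigma, (y - y0) / sigma
    g = np.exp(-(u ** 2 + v ** 2) / 2) / (2 * np.pi * sigma ** 2)
    # Polynomial factors turn the Gaussian into its derivatives up to order 2.
    hx = {0: 1.0, 1: -u / sigma, 2: (u ** 2 - 1) / sigma ** 2}[dx]
    hy = {0: 1.0, 1: -v / sigma, 2: (v ** 2 - 1) / sigma ** 2}[dy]
    return hx * hy * g

def reconstruct(image, points):
    """points: list of (x0, y0, sigma); measures derivatives up to second order."""
    orders = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
    filters = [gaussian_derivative_filter(image.shape, x0, y0, s, dx, dy)
               for (x0, y0, s) in points for (dx, dy) in orders]
    Phi = np.stack([f.ravel() for f in filters])          # one row per measurement
    c = Phi @ image.ravel()                               # measured features
    corr = Phi @ Phi.T                                    # mixed correlation matrix
    print("condition number:", np.linalg.cond(corr))      # numerical-precision check
    lam = np.linalg.solve(corr, c)
    return (Phi.T @ lam).reshape(image.shape)             # minimum-norm reconstruction
```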

    Human origins in Southern African palaeo-wetlands? Strong claims from weak evidence

    Attempts to identify a ‘homeland’ for our species from genetic data are widespread in the academic literature. However, even when putting aside the question of whether a ‘homeland’ is a useful concept, there are a number of inferential pitfalls in attempting to identify the geographic origin of a species from contemporary patterns of genetic variation. These include making strong claims from weakly informative data, treating genetic lineages as representative of populations, assuming a high degree of regional population continuity over hundreds of thousands of years, and using circumstantial observations as corroborating evidence without considering alternative hypotheses on an equal footing, or formally evaluating any hypothesis. In this commentary we review the recent publication that claims to pinpoint the origins of ‘modern humans’ to a very specific region in Africa (Chan et al., 2019), demonstrate how it fell into these inferential pitfalls, and discuss how this can be avoided.

    On the Link between Gaussian Homotopy Continuation and Convex Envelopes

    The continuation method is a popular heuristic in computer vision for nonconvex optimization. The idea is to start from a simplified problem and gradually deform it to the actual task while tracking the solution. It was first used in computer vision under the name of graduated nonconvexity. Since then, it has been utilized explicitly or implicitly in various applications. In fact, state-of-the-art optical flow and shape estimation rely on a form of continuation. Despite its empirical success, there is little theoretical understanding of this method. This work provides some novel insights into this technique. Specifically, there are many ways to choose the initial problem and many ways to progressively deform it to the original task. However, here we show that when this process is constructed by Gaussian smoothing, it is optimal in a specific sense. In fact, we prove that Gaussian smoothing emerges from the best affine approximation to Vese’s nonlinear PDE. The latter PDE evolves any function to its convex envelope, hence providing the optimal convexification.
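
    The snippet below sketches the continuation heuristic described in this abstract: the objective is Gaussian-smoothed (here via a fixed-sample Monte Carlo estimate), minimization starts at a large smoothing scale, and each stage is warm-started from the previous solution as the smoothing is annealed to zero. The test function, the sigma schedule, and the use of Nelder-Mead are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_smoothed(f, sigma, n_samples=512, seed=0):
    """Monte Carlo estimate of the Gaussian-smoothed objective (f * G_sigma)(x)."""
    # Fixed samples keep the smoothed objective deterministic for the optimizer;
    # perturbations here assume a 1-D argument.
    eps = np.random.default_rng(seed).standard_normal(n_samples)
    return lambda x: float(np.mean([f(x + sigma * e) for e in eps]))

def graduated_nonconvexity(f, x0, sigmas=(4.0, 2.0, 1.0, 0.5, 0.0)):
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for sigma in sigmas:
        g = f if sigma == 0.0 else gaussian_smoothed(f, sigma)
        # Warm-start each stage from the previous (smoother) solution.
        x = minimize(g, x, method="Nelder-Mead").x
    return x

# Example: a 1-D objective with many local minima and a broad global basin near 0.
f = lambda x: float(np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2)
print(graduated_nonconvexity(f, x0=[8.0]))
```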

    Modified Chrispin-Norman chest radiography score for cystic fibrosis: observer agreement and correlation with lung function

    OBJECTIVE: To test observer agreement and two strategies for possible improvement (consensus meeting and reference images) for the modified Chrispin-Norman score for children with cystic fibrosis (CF). METHODS: Before and after a consensus meeting, and after developing reference images, three observers scored sets of 25 chest radiographs from children with CF. Observer agreement was tested for line, ring, mottled and large soft shadows, for overinflation, and for the composite modified Chrispin-Norman score. Correlation with lung function was assessed. RESULTS: Before the consensus meeting, agreement between observers 1 and 2 was moderate-good, but agreement with observer 3 was poor-fair. Scores correlated significantly with spirometry for observers 1 and 2 (-0.72 < R < -0.42, P < 0.05), but not for observer 3. Agreement with observer 3 improved after the consensus meeting. Reference images improved agreement for overinflation and for mottled and large shadows, as well as correlation with lung function, but agreement for the composite modified Chrispin-Norman score did not improve further. CONCLUSION: Consensus meetings and reference images improve inter-observer agreement, but good agreement among all observers was not achieved for the modified Chrispin-Norman score or for bronchial line and ring shadows.

    Natural Variation in Arabidopsis Cvi-0 Accession Reveals an Important Role of MPK12 in Guard Cell CO2 Signaling

    Plant gas exchange is regulated by guard cells that form stomatal pores. Stomatal adjustments are crucial for plant survival; they regulate uptake of CO2 for photosynthesis, loss of water, and entrance of air pollutants such as ozone. We mapped ozone hypersensitivity, more open stomata, and stomatal CO2-insensitivity phenotypes of the Arabidopsis thaliana accession Cvi-0 to a single amino acid substitution in MITOGEN-ACTIVATED PROTEIN (MAP) KINASE 12 (MPK12). In parallel, we showed that stomatal CO2-insensitivity phenotypes of a mutant, cis (CO2-insensitive), were caused by a deletion of MPK12. Lack of MPK12 impaired bicarbonate-induced activation of S-type anion channels. We demonstrated that MPK12 interacted with the protein kinase HIGH LEAF TEMPERATURE 1 (HT1), a central node in guard cell CO2 signaling, and that MPK12 functions as an inhibitor of HT1. These data provide a new function for plant MPKs as protein kinase inhibitors and suggest a mechanism through which guard cell CO2 signaling controls plant water management.

    LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when white and gray matter are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which treats the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach to further improve segmentation accuracy.
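
    The sketch below illustrates the iterative integration scheme described in this abstract in a heavily simplified form: per-voxel intensities from co-registered T1, T2 and FA volumes are fed to a random forest, and the estimated tissue probability maps are appended as additional sources in the next round. It assumes scikit-learn; the voxel-wise features, the fixed number of rounds, and training and re-estimating on the same volumes are simplifications relative to the paper's patch-based setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volumes):
    """Stack per-voxel intensities from each source volume into a feature matrix."""
    return np.stack([v.ravel() for v in volumes], axis=1)

def iterative_segmentation(t1, t2, fa, labels, n_rounds=3):
    """labels: voxel-wise tissue labels (e.g. 0=CSF, 1=GM, 2=WM) for training."""
    sources = [t1, t2, fa]
    y = labels.ravel()
    clf = None
    for _ in range(n_rounds):
        X = voxel_features(sources)
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)
        # Estimated tissue probability maps become extra sources in the next round.
        probs = clf.predict_proba(X)                          # (n_voxels, n_classes)
        prob_maps = [probs[:, k].reshape(t1.shape) for k in range(probs.shape[1])]
        sources = [t1, t2, fa] + prob_maps
    return clf
```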

    Synthetic lethality: a framework for the development of wiser cancer therapeutics

    The challenge in medical oncology has always been to identify compounds that will kill, or at least tame, cancer cells while leaving normal cells unscathed. Most chemotherapeutic agents in use today were selected primarily for their ability to kill rapidly dividing cancer cells grown in cell culture and in mice, with their selectivity determined empirically during subsequent animal and human testing. Unfortunately, most of the drugs developed in this way have relatively low therapeutic indices (low toxic dose relative to the therapeutic dose). Recent advances in genomics are leading to a more complete picture of the range of mutations, both driver and passenger, present in human cancers. Synthetic lethality provides a conceptual framework for using this information to arrive at drugs that will preferentially kill cancer cells relative to normal cells. It also provides a possible way to tackle 'undruggable' targets. Two genes are synthetically lethal if mutation of either gene alone is compatible with viability but simultaneous mutation of both genes leads to death. If one is a cancer-relevant gene, the task is to discover its synthetic lethal interactors, because targeting these would theoretically kill cancer cells mutant in the cancer-relevant gene while sparing cells with a normal copy of that gene. All cancer drugs in use today, including conventional cytotoxic agents and newer 'targeted' agents, target molecules that are present in both normal cells and cancer cells. Their therapeutic indices almost certainly relate to synthetic lethal interactions, even if those interactions are often poorly understood. Recent technical advances enable unbiased screens for synthetic lethal interactors to be undertaken in human cancer cells. These approaches will hopefully facilitate the discovery of safer, more efficacious anticancer drugs that exploit vulnerabilities that are unique to cancer cells by virtue of the mutations they have accrued during tumor progression.