
    End-to-end representation learning for Correlation Filter based tracking

    The Correlation Filter is an algorithm that trains a linear template to discriminate between images and their translations. It is well suited to object tracking because its formulation in the Fourier domain provides a fast solution, enabling the detector to be re-trained once per frame. Previous works that use the Correlation Filter, however, have adopted features that were either manually designed or trained for a different task. This work is the first to overcome this limitation by interpreting the Correlation Filter learner, which has a closed-form solution, as a differentiable layer in a deep neural network. This enables learning deep features that are tightly coupled to the Correlation Filter. Experiments illustrate that our method has the important practical benefit of allowing lightweight architectures to achieve state-of-the-art performance at high framerates. Comment: To appear at CVPR 2017.
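    To make the closed-form solution concrete, the sketch below trains a single-channel correlation filter in the Fourier domain, the standard ridge-regression solve that the paper reinterprets as a differentiable layer. It is a minimal illustration under assumed conditions, not the paper's implementation: the function names are invented, and there is no feature extraction, cosine windowing, or backpropagation through the solver.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak centered on the target."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2.0 * sigma ** 2))

def train_correlation_filter(x, y, lam=1e-2):
    """Closed-form single-channel correlation filter (ridge regression solved
    in the Fourier domain): H* = conj(X) * Y / (conj(X) * X + lambda)."""
    x_hat, y_hat = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(x_hat) * y_hat / (np.conj(x_hat) * x_hat + lam)

def detect(h_star, z):
    """Apply the filter to a search patch z; the response peak locates the target."""
    response = np.real(np.fft.ifft2(h_star * np.fft.fft2(z)))
    return np.unravel_index(np.argmax(response), response.shape)

# Toy usage: re-detect a circularly shifted copy of the training patch.
x = np.random.rand(64, 64)
h_star = train_correlation_filter(x, gaussian_response(x.shape))
print(detect(h_star, np.roll(x, (3, 5), axis=(0, 1))))
```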

    Learning feed-forward one-shot learners

    One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark. Comment: The first three authors contributed equally and are listed in alphabetical order.
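    The core idea, a network whose output is the weights of another network, can be illustrated with a toy PyTorch sketch. Everything below (the class name, layer sizes, and the 1x1-filter prediction) is an invented stand-in for the paper's architecture and factorizations; it only shows the mechanics of predicting and applying convolution weights in a single forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLearnet(nn.Module):
    """Maps a single exemplar to the weights of one 1x1 conv layer of a pupil
    network, then applies that layer to a new image (batch size 1)."""

    def __init__(self, in_ch=3, feat_ch=32, pupil_out=16):
        super().__init__()
        self.in_ch, self.pupil_out = in_ch, pupil_out
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # exemplar -> embedding
        )
        # Predict the full bank of pupil filters from the exemplar embedding.
        self.to_weights = nn.Linear(feat_ch, pupil_out * in_ch)

    def forward(self, exemplar, image):
        z = self.encoder(exemplar).flatten(1)            # (1, feat_ch)
        w = self.to_weights(z).view(self.pupil_out, self.in_ch, 1, 1)
        return F.conv2d(image, w)                        # pupil layer with predicted weights

net = ToyLearnet()
exemplar, image = torch.randn(1, 3, 32, 32), torch.randn(1, 3, 64, 64)
print(net(exemplar, image).shape)                        # torch.Size([1, 16, 64, 64])
```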

    Devon: Deformable Volume Network for Learning Optical Flow

    State-of-the-art neural network models estimate large-displacement optical flow in a multi-resolution scheme and use warping to propagate the estimate between resolutions. Despite their impressive results, this approach has two known problems. First, multi-resolution estimation of optical flow fails when small objects move fast. Second, warping creates artifacts where occlusions or dis-occlusions occur. In this paper, we propose a new neural network module, the Deformable Cost Volume, which alleviates both problems. Based on this module, we design the Deformable Volume Network (Devon), which estimates multi-scale optical flow at a single high resolution. Experiments show that Devon handles fast-moving small objects better and achieves results comparable to state-of-the-art methods on public benchmarks.
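    For context, the sketch below computes a plain cost volume, the construct that the Deformable Cost Volume generalizes: each position in one feature map is correlated with a window of displaced positions in the other. It is a generic illustration under assumed shapes, not the paper's module, which additionally samples the second feature map at dilated or deformed offsets so that large displacements can be handled at one high resolution.

```python
import torch
import torch.nn.functional as F

def cost_volume(feat1, feat2, max_disp=4):
    """Correlate feat1 with feat2 over a (2*max_disp+1)^2 window of integer
    displacements; returns one matching cost per displacement and position."""
    b, c, h, w = feat1.shape
    padded = F.pad(feat2, [max_disp] * 4)            # pad left/right/top/bottom
    costs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]
            costs.append((feat1 * shifted).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)                   # (B, (2*max_disp+1)^2, H, W)

f1, f2 = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(cost_volume(f1, f2).shape)                     # torch.Size([1, 81, 32, 32])
```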

    The miR-15/107 Family of microRNA Genes Regulates CDK5R1/p35 with Implications for Alzheimer’s Disease Pathogenesis

    Cyclin-dependent kinase 5 regulatory subunit 1 (CDK5R1) encodes p35, the main activatory subunit of cyclin-dependent kinase 5 (CDK5). The p35/CDK5 active complex plays a fundamental role in brain development and functioning, but its deregulated activity has also been implicated in various neurodegenerative disorders, including Alzheimer's disease (AD). CDK5R1 displays a large and highly evolutionarily conserved 3′-untranslated region (3′-UTR), a fact that has suggested a role for this region in the post-transcriptional control of CDK5R1 expression. Our group has recently demonstrated that two miRNAs, miR-103 and miR-107, regulate CDK5R1 expression and affect the levels of p35. MiR-103 and miR-107 belong to the miR-15/107 family, a group of evolutionarily conserved miRNAs highly expressed in human cerebral cortex. In this work, we tested the hypothesis that other members of this group of miRNAs, in addition to miR-103 and miR-107, were able to modulate CDK5R1 expression. We provide evidence that several miRNAs belonging to the miR-15/107 family regulate p35 levels. BACE1 expression levels were also found to be modulated by different members of this family. Furthermore, overexpression of these miRNAs led to reduced APP phosphorylation levels at the CDK5-specific Thr668 residue. We also show that miR-15/107 miRNAs display reduced expression levels in hippocampus and temporal cortex, but not in cerebellum, of AD brains. Moreover, increased CDK5R1 mRNA levels were observed in AD hippocampus tissues. Our results suggest that the downregulation of the miR-15/107 family might have a role in the pathogenesis of AD by increasing the levels of CDK5R1/p35 and consequently enhancing CDK5 activity.

    NOVA: rendering virtual worlds with humans for computer vision tasks

    Today, the cutting edge of computer vision research greatly depends on the availability of large datasets, which are critical for effectively training and testing new methods. Manually annotating visual data, however, is not only a labor-intensive process but also prone to errors. In this study, we present NOVA, a versatile framework to create realistic-looking 3D rendered worlds containing procedurally generated humans with rich pixel-level ground truth annotations. NOVA can simulate various environmental factors such as weather conditions or different times of day, and bring an exceptionally diverse set of humans to life, each having a distinct body shape, gender and age. To demonstrate NOVA's capabilities, we generate two synthetic datasets for person tracking. The first includes 108 sequences at different levels of difficulty, such as tracking in crowded scenes or at night, and aims to test the limits of current state-of-the-art trackers. A second dataset of 97 sequences with normal weather conditions is used to show how our synthetic sequences can be utilized to train and boost the performance of deep-learning-based trackers. Our results indicate that the synthetic data generated by NOVA is a good proxy for the real world and can be exploited for computer vision tasks.

    RAD51B in Familial Breast Cancer

    Common variation on 14q24.1, close to RAD51B, has been associated with breast cancer: rs999737 and rs2588809 with the risk of female breast cancer and rs1314913 with the risk of male breast cancer. The aim of this study was to investigate the role of RAD51B variants in breast cancer predisposition, particularly in the context of familial breast cancer in Finland. We sequenced the coding region of RAD51B in 168 Finnish breast cancer patients from the Helsinki region for identification of possible recurrent founder mutations. In addition, we studied the known rs999737, rs2588809, and rs1314913 SNPs and RAD51B haplotypes in 44,791 breast cancer cases and 43,583 controls from 40 studies participating in the Breast Cancer Association Consortium (BCAC) that were genotyped on a custom chip (iCOGS). We identified one putatively pathogenic missense mutation c.541C>T among the Finnish cancer patients and subsequently genotyped the mutation in additional breast cancer cases (n = 5259) and population controls (n = 3586) from Finland and Belarus. No significant association with breast cancer risk was seen in the meta-analysis of the Finnish datasets or in the large BCAC dataset. The association with previously identified risk variants rs999737, rs2588809, and rs1314913 was replicated among all breast cancer cases and also among familial cases in the BCAC dataset. The most significant association was observed for the haplotype carrying the risk alleles of all three SNPs, both among all cases (odds ratio (OR): 1.15, 95% confidence interval (CI): 1.11–1.19, P = 8.88 × 10⁻¹⁶) and among familial cases (OR: 1.24, 95% CI: 1.16–1.32, P = 6.19 × 10⁻¹¹), compared to the haplotype with the respective protective alleles. Our results suggest that loss-of-function mutations in RAD51B are rare, but common variation at the RAD51B region is significantly associated with familial breast cancer risk.
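    As a reminder of how an odds ratio of this form is read, the sketch below computes an OR and its 95% Wald confidence interval from a hypothetical 2x2 table of risk-haplotype carriers versus non-carriers. The counts are invented for illustration only; the study's reported estimates come from its own genotype data and association models, not from a raw 2x2 table.

```python
import math

def odds_ratio_ci(case_exp, case_unexp, ctrl_exp, ctrl_unexp, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table of exposed/unexposed counts."""
    or_ = (case_exp * ctrl_unexp) / (case_unexp * ctrl_exp)
    se_log = math.sqrt(1 / case_exp + 1 / case_unexp + 1 / ctrl_exp + 1 / ctrl_unexp)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts of risk-haplotype carriers among cases and controls.
print(odds_ratio_ci(12000, 32791, 10800, 32783))
```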

    Copy Number Variants Are Ovarian Cancer Risk Alleles at Known and Novel Risk Loci


    Comprehensive analyses of somatic TP53 mutation in tumors with variable mutant allele frequency

    Somatic mutation of the tumor suppressor gene TP53 is reported in at least 50% of human malignancies. Most high-grade serous ovarian cancers (HGSC) have a mutant TP53 allele. Accurate detection of these mutants in heterogeneous tumor tissue is paramount as therapies emerge to target mutant p53. We used a Fluidigm Access Array™ System with Massively Parallel Sequencing (MPS) to analyze DNA extracted from 76 serous ovarian tumors. This dataset has been made available to researchers through the European Genome-phenome Archive (EGA; EGAS00001002200). Herein, we present analyses of this dataset using HaplotypeCaller and MuTect2 through the Broad Institute's Genome Analysis Toolkit (GATK). We anticipate that this TP53 mutation dataset will be useful to researchers developing and testing new software to accurately determine high- and low-frequency variant alleles in heterogeneous aneuploid tumor tissue. Furthermore, the analysis pipeline we present provides a valuable framework for determining somatic variants more broadly in tumor tissue.
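    Since the dataset targets tools that must separate high- from low-frequency variant alleles, here is a small hedged example of the kind of post-processing involved: computing a variant allele frequency from the allelic-depth (AD) field that HaplotypeCaller and MuTect2 write into their VCF output. The helper and the sample value are illustrative, not part of the published pipeline.

```python
def variant_allele_frequency(ad_field):
    """VAF from a VCF 'AD' sample field, e.g. '152,48' = 152 reference reads
    and 48 alternate reads, giving an alternate allele frequency of 0.24."""
    depths = [int(d) for d in ad_field.split(",")]
    total = sum(depths)
    return [alt / total for alt in depths[1:]] if total else []

# A subclonal TP53 mutant in a heterogeneous, aneuploid tumor sample can
# show up at a VAF well below 0.5 even when the mutation is pathogenic.
print(variant_allele_frequency("152,48"))   # [0.24]
```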