754 research outputs found

    Recurrent Convolutional Neural Networks for Scene Parsing

    Scene parsing is the task of assigning a label to every pixel in an image according to the class it belongs to. To ensure good visual coherence and high class accuracy, it is essential for a scene parser to capture long-range dependencies in the image. In a feed-forward architecture, this can be achieved simply by considering a sufficiently large input context patch around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network, which allows us to consider a large input context while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation method nor on any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.

    Recurrent Convolutional Neural Networks for Scene Labeling

    The goal of the scene labeling task is to assign a class label to each pixel in an image. To ensure good visual coherence and high class accuracy, it is essential for a model to capture long-range (pixel) label dependencies in images. In a feed-forward architecture, this can be achieved simply by considering a sufficiently large input context patch around each pixel to be labeled. We propose an approach that consists of a recurrent convolutional neural network which allows us to consider a large input context while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation technique nor on any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.
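    The built-in recurrence described above can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' code: the learned convolutional layers are replaced by a hand-written local-averaging step, but it shows the key idea that the same step is applied repeatedly to the image together with the previous label scores, so the effective context grows with each iteration.

    ```python
    import numpy as np

    def conv_step(image, prev_scores, kernel_size=3):
        """One toy 'network' pass: average the class scores over a local
        neighbourhood and mix in the per-pixel image evidence. This stands
        in for a learned convolutional layer (purely illustrative)."""
        h, w, c = prev_scores.shape
        pad = kernel_size // 2
        padded = np.pad(prev_scores, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        out = np.zeros_like(prev_scores)
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean(axis=(0, 1))
        return 0.5 * out + 0.5 * image

    def recurrent_parse(image, n_iters=3):
        """Apply the same step recurrently: each iteration sees the previous
        label scores, so the receptive field (context) grows with n_iters."""
        scores = image.copy()          # initial per-pixel class scores
        for _ in range(n_iters):
            scores = conv_step(image, scores)
        return scores.argmax(axis=-1)  # final label map
    ```

    Here `image` is assumed to already be a per-pixel class-evidence map of shape (height, width, n_classes); in the paper, that evidence comes from the convolutional network itself.
    
    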

    Weakly Supervised Object Segmentation with Convolutional Neural Networks

    Can a machine learn how to segment different objects in real-world images without any prior knowledge about the delineation of the classes? In this paper, we demonstrate that this task is indeed possible. We address the problem by training a Convolutional Neural Network (CNN) model with weakly labeled images, i.e., images in which the only knowledge assumed about each sample is the presence or absence of an object. The model, trained in a one-vs-all scheme, learns representations that distinguish image patches belonging to the class of interest from those belonging to the background. The per-pixel segmentation is obtained by applying the model to the patch surrounding each pixel and assigning the inferred class to that pixel. Our system is trained using a subset of the ImageNet dataset. The experiments are validated on two classes that are challenging to segment: cats and dogs. We show both quantitatively and qualitatively that the model achieves good accuracy for these classes on the Pascal VOC 2012 competition, without using any prior segmentation knowledge. The model is powerful in the sense that it learns how to segment objects without the use of costly, fully labeled segmentation datasets.
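    The inference procedure described here can be sketched as follows. This is an illustrative toy, not the paper's system: the trained CNN is replaced by a stand-in patch classifier, but the structure is the same, namely sliding a patch over the image, classifying each patch, and assigning the inferred class to the center pixel.

    ```python
    import numpy as np

    def segment_by_patches(image, classify_patch, patch=5):
        """Per-pixel segmentation: classify the patch around each pixel and
        assign the inferred class to that pixel (illustrative only; the
        paper uses a CNN trained on weakly labeled images)."""
        pad = patch // 2
        padded = np.pad(image, pad, mode="edge")
        h, w = image.shape
        labels = np.zeros((h, w), dtype=int)
        for i in range(h):
            for j in range(w):
                labels[i, j] = classify_patch(padded[i:i + patch, j:j + patch])
        return labels

    # Stand-in "one-vs-all" classifier: object (1) if mean patch intensity is high.
    toy_classifier = lambda p: int(p.mean() > 0.5)
    ```

    In the real system, `classify_patch` would be the CNN trained only on image-level presence/absence labels.
    
    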

    Twitter Sentiment Analysis (Almost) from Scratch

    A popular application in Natural Language Processing (NLP) is Sentiment Analysis (SA), i.e., the task of extracting contextual polarity from a given text. The social network Twitter provides an immense amount of text (tweets) generated by users, each with a maximum of 140 characters. In this project, we learn a tweet representation from publicly available data in order to infer sentiment from tweets. One challenge of this task is that tweets are generated by very different users, making the data very heterogeneous (unlike regular data written in proper English). Another challenge is, clearly, the large scale of the problem. We propose a deep learning sentence representation (called a tweet representation), learned from user-generated data, to infer sentiment from tweets. This representation is learned from scratch (directly from the words in the tweets) over a large unlabeled corpus of tweets. We demonstrate that we achieve state-of-the-art results for SA on tweets.

    Learning to Segment Object Candidates

    Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown to be fast while achieving state-of-the-art detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied to the whole test image and generates a set of segmentation masks, each of them assigned a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.
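    The two-headed inference described above (a mask output plus an objectness score per patch) can be sketched as a toy. This is not the paper's network: the model here is a hand-written stand-in, but it shows how the image is scanned, how each location yields a (mask, score) pair, and how proposals are kept by score.

    ```python
    import numpy as np

    def propose(image, model, patch=8, stride=4, top_k=3):
        """Slide the model over the image; at each location it returns a
        class-agnostic mask plus an 'objectness' score; keep the top_k
        proposals by score (toy version of the two-headed inference)."""
        h, w = image.shape
        proposals = []
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                mask, score = model(image[i:i + patch, j:j + patch])
                proposals.append((score, (i, j), mask))
        proposals.sort(key=lambda t: (t[0], t[1]), reverse=True)
        return proposals[:top_k]

    # Stand-in model: mask = thresholded patch; score = how well the object
    # fills the centre of the patch ("centered on a full object").
    def toy_model(p):
        mask = (p > 0.5).astype(int)
        score = float(p[2:6, 2:6].mean())
        return mask, score
    ```

    In the real system, both heads are outputs of one discriminative convolutional network trained jointly on the two objectives.
    
    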

    Simple Image Description Generator via a Linear Phrase-based Model

    Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated by a previously trained Convolutional Neural Network) and the phrases that are used to describe the image. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the inferred phrases. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results on the recently released Microsoft COCO dataset.
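    The core of the bilinear model is a score s(x, y) = xᵀWy between an image feature x and a phrase embedding y; phrases are then ranked by that score. The sketch below uses hand-set vectors and an identity W purely for illustration (in the paper, W is learned and the phrase vocabulary comes from caption statistics).

    ```python
    import numpy as np

    def rank_phrases(image_vec, phrase_vecs, W):
        """Bilinear score s(x, y) = x^T W y between an image representation x
        and each phrase embedding y; returns phrase indices ranked by score."""
        scores = image_vec @ W @ phrase_vecs.T   # one score per phrase
        order = np.argsort(-scores)              # best phrases first
        return order, scores

    # Hypothetical 3-d image feature and two phrase embeddings.
    x = np.array([1.0, 0.0, 0.5])
    phrases = np.array([[1.0, 0.0, 0.0],    # e.g. "a brown dog"
                        [0.0, 1.0, 0.0]])   # e.g. "a red bus"
    W = np.eye(3)                            # identity metric, for illustration
    ```

    A downstream language model would then assemble the top-ranked phrases into a sentence following caption syntax statistics.
    
    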

    Monocarboxylate transporter 4 (MCT4) and CD147 overexpression is associated with poor prognosis in prostate cancer

    BACKGROUND. Monocarboxylate transporters (MCTs) are transmembrane proteins involved in the transport of monocarboxylates across the plasma membrane and appear to play an important role in solid tumours; however, the role of MCTs in prostate cancer is largely unknown. The aim of the present work was to evaluate the clinico-pathological value of the expression of monocarboxylate transporters (MCTs), namely MCT1, MCT2 and MCT4, together with CD147 and gp70 as the MCT1/4 and MCT2 chaperones, respectively, in prostate carcinoma. METHODS. Prostate tissues were obtained from 171 patients who underwent radical prostatectomy and 14 patients who underwent cystoprostatectomy. Samples and clinico-pathological data were retrieved and organized into tissue microarray (TMA) blocks. Protein expression was evaluated by immunohistochemistry in neoplastic tissue (n = 171), adjacent non-neoplastic tissue (n = 135), PIN lesions (n = 40) and normal prostatic tissue (n = 14), and was correlated with the patients' clinico-pathological characteristics. RESULTS. In the present study, a significant increase of MCT2 and MCT4 expression in the cytoplasm of tumour cells and a significant decrease in both MCT1 and CD147 expression in prostate tumour cells were observed when compared to normal tissue. All MCT isoforms and CD147 were expressed in PIN lesions. Importantly, for MCT2 and MCT4 the expression levels in PIN lesions were between those of normal and tumour tissue, which might indicate a role for these MCTs in the malignant transformation. Associations were found between MCT1, MCT4 and CD147 expression and markers of poor prognosis; importantly, MCT4 and CD147 overexpression correlated with higher PSA levels, Gleason score and pT stage, as well as with perineural invasion and biochemical recurrence. CONCLUSIONS. Our data provide novel evidence for the involvement of MCTs in prostate cancer.
    According to our results, we consider that MCT2 should be further explored as a tumour marker, and both MCT4 and CD147 as markers of poor prognosis, in prostate cancer. NPG, CP and VMG received fellowships from the Portuguese Foundation for Science and Technology (FCT), refs. SFRH/BD/61027/2009, SFRH/BPD/69479/2010 and SFRH/BI/33503/2008, respectively. This work was supported by the FCT grant ref. PTDC/SAU-FCF/104347/2008, under the scope of the "Programa Operacional Temático Factores de Competitividade" (COMPETE) of Quadro Comunitário de Apoio III, and co-financed by Fundo Comunitário Europeu FEDER.

    An intriguing shift occurs in the novel protein phosphatase 1 binding partner, TCTEX1D4: evidence of positive selection in a pika model

    T-complex testis expressed protein 1 domain containing 4 (TCTEX1D4) contains the canonical phosphoprotein phosphatase 1 (PPP1) binding motif, composed of the amino acid sequence RVSF. We identified and validated the binding of TCTEX1D4 to PPP1 and demonstrated that this protein is indeed a novel PPP1-interacting protein. Analyses of twenty-one mammalian species available in public databases, and of seven Lagomorpha sequences obtained in this work, showed that the PPP1 binding motif 90RVSF93 is present in all of them and is flanked by a palindromic sequence, PLGS, except in three species of pikas (Ochotona princeps, O. dauurica and O. pusilla). Furthermore, in the Ochotona species an extra glycosylation site, motif 96NLS98, and the loss of the palindromic sequence were observed. Comparison with other lagomorphs suggests that this event happened before the Ochotona radiation. The dN/dS for the sequence region comprising the PPP1 binding motif and the flanking palindrome strongly supports the hypothesis that, in the Ochotona species, this region has been evolving under positive selection. In addition, mutational screening shows that the ability of pika TCTEX1D4 to bind PPP1 is maintained even though the PPP1 binding motif is disrupted and the surrounding N- and C-terminal residues are also abrogated. These observations suggest the pika as an ideal model to study the regulatory mechanisms of novel PPP1 complexes.