
    Pattern Matching and Discourse Processing in Information Extraction from Japanese Text

    Information extraction is the task of automatically extracting information of interest from unconstrained text. Information of interest is usually extracted in two steps. First, sentence-level processing locates relevant pieces of information scattered throughout the text; second, discourse processing merges coreferential information to generate the output. In the first step, pieces of information are identified locally, without recognizing any relationships among them; a keyword search or simple pattern search can achieve this. The second step requires deeper knowledge in order to understand the relationships among the separately identified pieces of information. Previous information extraction systems focused on the first step, partly because they were not required to link each piece of information with the others. To link the extracted pieces of information and map them onto a structured output format, complex discourse processing is essential. This paper reports on a Japanese information extraction system that merges information using a pattern matcher and discourse processor. Evaluation results show a high level of system performance, which approaches human performance. Comment: See http://www.jair.org/ for any accompanying file
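
    The two-step structure described above lends itself to a compact illustration. The sketch below is a toy stand-in, not the paper's system: regular expressions play the role of the sentence-level pattern matcher, and a naive key-based merge plays the role of the discourse processor; the slot names, patterns, and example sentences are all invented for illustration.

```python
import re

# Step 1: sentence-level pattern matching -- locate candidate facts in the raw text.
# The slot names and regexes are illustrative assumptions, not the paper's patterns.
PATTERNS = {
    "company": re.compile(r"(?P<company>[A-Z][\w&]+(?: [A-Z][\w&]+)*) (?:Corp\.|Inc\.)"),
    "product": re.compile(r"will release (?P<product>[A-Z][\w\-]*(?: \d+)?)"),
}

def extract_candidates(sentences):
    """Return one partial record per sentence that matches any pattern."""
    records = []
    for i, sent in enumerate(sentences):
        slots = {slot: m.group(slot)
                 for slot, pat in PATTERNS.items()
                 if (m := pat.search(sent))}
        if slots:
            records.append({"sentence": i, **slots})
    return records

# Step 2: discourse processing -- merge records judged to refer to the same entity.
# Here coreference is naively approximated by an identical "company" slot.
def merge_coreferential(records):
    merged = {}
    for rec in records:
        key = rec.get("company", f"_sentence_{rec['sentence']}")
        merged.setdefault(key, {}).update(
            {k: v for k, v in rec.items() if k != "sentence"})
    return list(merged.values())

sentences = [
    "Acme Corp. announced strong quarterly results.",
    "Acme Corp. will release SuperWidget 2 next spring.",
]
print(merge_coreferential(extract_candidates(sentences)))
# [{'company': 'Acme', 'product': 'SuperWidget 2'}]
```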

    Inter- and intra-locus linkage analysis in Sordaria fimicola

    Research on Sordaria fimicola has been discontinued since the Kihara Institute for Biological Research closed in 1984

    Learning Temporal Transformations From Time-Lapse Videos

    Based on life-long observations of physical, chemical, and biological phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. These models explore several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variations of our models. Comment: ECCV 2016
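
    As a rough illustration of the recurrent prediction task mentioned above, the PyTorch sketch below conditions a small encoder-decoder on two depictions of an object and feeds each predicted frame back in to generate further future states. The layer sizes, the two-frame conditioning, and the rollout loop are illustrative assumptions, not the paper's architecture.

```python
# A minimal encoder-decoder future-frame generator; all sizes are illustrative.
import torch
import torch.nn as nn

class FutureFrameGenerator(nn.Module):
    def __init__(self, in_frames=2, hidden=64):
        super().__init__()
        # Encoder: the conditioning frame(s) are stacked along the channel axis.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * in_frames, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to an RGB depiction of the object at a future time.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden * 2, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames):                 # frames: (B, in_frames, 3, H, W)
        x = frames.flatten(1, 2)               # -> (B, in_frames * 3, H, W)
        return self.decoder(self.encoder(x))   # -> (B, 3, H, W)

model = FutureFrameGenerator(in_frames=2)
f0 = torch.rand(1, 3, 64, 64)                  # object at time t-1 (dummy data)
f1 = torch.rand(1, 3, 64, 64)                  # object at time t (dummy data)
window = torch.stack([f0, f1], dim=1)

# Recursive rollout: feed each prediction back in to generate further future states.
for _ in range(3):
    nxt = model(window)                        # predicted frame at the next time step
    window = torch.stack([window[:, 1], nxt], dim=1)
print(nxt.shape)                               # torch.Size([1, 3, 64, 64])
```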

    Predicting Future Instance Segmentation by Forecasting Convolutional Features

    Anticipating future events is an important prerequisite towards intelligent behavior. Video forecasting has been studied as a proxy task towards this goal. Recent work has shown that to predict semantic segmentation of future frames, forecasting at the semantic level is more effective than forecasting RGB frames and then segmenting these. In this paper we consider the more challenging problem of future instance segmentation, which additionally segments out individual objects. To deal with a varying number of output labels per image, we develop a predictive model in the space of fixed-size convolutional features of the Mask R-CNN instance segmentation model. We apply the "detection head" of Mask R-CNN to the predicted features to produce the instance segmentation of future frames. Experiments show that this approach significantly improves over strong baselines based on optical flow and repurposed instance segmentation architectures
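
    The core idea, forecasting convolutional features and reusing the existing detection head, can be sketched as follows. The PyTorch snippet below is an assumed, simplified forecaster over FPN-style feature maps; the layer sizes and the two-frame input are illustrative, and in the full system the paper's actual (frozen) Mask R-CNN head would be applied to the predicted features.

```python
# Forecasting Mask R-CNN features instead of RGB frames -- a simplified sketch.
import torch
import torch.nn as nn

FPN_CHANNELS = 256   # typical FPN feature width in Mask R-CNN

class FeatureForecaster(nn.Module):
    """Predict the feature map of frame t+1 from the maps of frames t-1 and t."""
    def __init__(self, n_past=2, channels=FPN_CHANNELS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_past * channels, 512, 3, padding=1), nn.ReLU(),
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(),
            nn.Conv2d(512, channels, 3, padding=1),
        )

    def forward(self, past_feats):             # (B, n_past, C, H, W)
        return self.net(past_feats.flatten(1, 2))

forecaster = FeatureForecaster()
past = torch.rand(1, 2, FPN_CHANNELS, 32, 32)  # features of two past frames (dummy)
future_feat = forecaster(past)                 # predicted features of the future frame

# A frozen Mask R-CNN detection/mask head applied to `future_feat` would then yield
# the instance segmentation of the future frame.
print(future_feat.shape)                       # torch.Size([1, 256, 32, 32])
```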

    Forecasting Human-Object Interaction: Joint Prediction of Motor Attention and Actions in First Person Video

    We address the challenging task of anticipating human-object interaction in first person videos. Most existing methods ignore how the camera wearer interacts with the objects, or simply consider body motion as a separate modality. In contrast, we observe that intentional hand movement reveals critical information about the future activity. Motivated by this, we adopt intentional hand movement as a future representation and propose a novel deep network that jointly models and predicts the egocentric hand motion, interaction hotspots and future action. Specifically, we consider the future hand motion as the motor attention, and model this attention using latent variables in our deep model. The predicted motor attention is further used to characterize the discriminative spatio-temporal visual features for predicting actions and interaction hotspots. We present extensive experiments demonstrating the benefit of the proposed joint model. Importantly, our model produces new state-of-the-art results for action anticipation on both the EGTEA Gaze+ and EPIC-Kitchens datasets. Our project page is available at https://aptx4869lm.github.io/ForecastingHOI
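
    A much-simplified sketch of the joint structure described above: a spatial motor-attention map is predicted from clip features and then used to re-weight those features for the interaction-hotspot and action-anticipation heads. The feature dimensions, head designs, and soft-attention pooling below are illustrative assumptions, not the authors' network.

```python
# Joint prediction of motor attention, interaction hotspots, and future actions -- sketch.
import torch
import torch.nn as nn

class JointAnticipationModel(nn.Module):
    def __init__(self, feat_dim=256, n_actions=100):   # n_actions is a placeholder
        super().__init__()
        # A spatial "motor attention" map standing in for future hand motion.
        self.motor_attention = nn.Conv2d(feat_dim, 1, 1)
        # Interaction-hotspot head: a per-pixel probability map.
        self.hotspot_head = nn.Conv2d(feat_dim, 1, 1)
        # Action-anticipation head on attention-pooled features.
        self.action_head = nn.Linear(feat_dim, n_actions)

    def forward(self, feats):                           # feats: (B, C, H, W)
        attn = torch.sigmoid(self.motor_attention(feats))          # (B, 1, H, W)
        hotspots = torch.sigmoid(self.hotspot_head(feats * attn))  # (B, 1, H, W)
        pooled = (feats * attn).mean(dim=(2, 3))        # attention-weighted pooling
        action_logits = self.action_head(pooled)        # (B, n_actions)
        return attn, hotspots, action_logits

model = JointAnticipationModel()
feats = torch.rand(2, 256, 14, 14)                      # dummy clip features
attn, hotspots, logits = model(feats)
print(attn.shape, hotspots.shape, logits.shape)
```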

    Knowledge transfer for scene-specific motion prediction

    Given a single frame of a video, humans can not only interpret the content of the scene but also forecast the near future. This ability is mostly driven by their rich prior knowledge about the visual world, both in terms of (i) the dynamics of moving agents and (ii) the semantics of the scene. In this work we exploit the interplay between these two key elements to predict scene-specific motion patterns. First, we extract patch descriptors encoding the probability of moving to the adjacent patches, and the probability of being in that particular patch or changing behavior. Then, we introduce a Dynamic Bayesian Network which exploits this scene-specific knowledge for trajectory prediction. Experimental results demonstrate that our method is able to accurately predict trajectories and transfer predictions to a novel scene characterized by similar elements
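
    The notion of scene-specific motion priors can be illustrated with a toy rollout: each grid patch stores a distribution over moves to its neighbours, and trajectories are sampled from those local distributions. This is a deliberate simplification of the paper's Dynamic Bayesian Network; the grid size, the random transition table, and the sampling loop are assumptions made only for illustration.

```python
# Toy scene-specific motion prior: per-patch transition probabilities on a grid.
import numpy as np

rng = np.random.default_rng(0)
H, W = 20, 20
MOVES = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]   # 8 neighbours + stay

# Patch descriptors: a distribution over moves for every grid cell. Random here;
# in practice they would be learned from trajectories observed in the scene.
transitions = rng.dirichlet(np.ones(len(MOVES)), size=(H, W))

def rollout(start, steps=15):
    """Sample a trajectory by repeatedly drawing a move from the current patch."""
    y, x = start
    path = [(y, x)]
    for _ in range(steps):
        dy, dx = MOVES[rng.choice(len(MOVES), p=transitions[y, x])]
        y = int(np.clip(y + dy, 0, H - 1))
        x = int(np.clip(x + dx, 0, W - 1))
        path.append((y, x))
    return path

print(rollout(start=(10, 10)))
```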

    Production of scFv-Conjugated Affinity Silk Powder by Transgenic Silkworm Technology

    Bombyx mori (silkworm) silk proteins are being utilized as unique biomaterials for medical applications. Chemical modification or post-conjugation of bioactive ligands expands the applicability of silk proteins; however, these processes are elaborate and costly. In this study, we used transgenic silkworm technology to develop single-chain variable fragment (scFv)-conjugated silk fibroin. The cocoons of the transgenic silkworm contain fibroin L-chain linked to scFv as a fusion protein. After dissolving the cocoons in lithium bromide, the silk solution was dialyzed, concentrated, freeze-dried, and crushed into powder. Immunoprecipitation analyses demonstrated that the scFv domain retains its specific binding activity to the target molecule after these multiple processing steps. These results strongly suggest that scFv-conjugated silk fibroin is a promising alternative affinity reagent that can be manufactured using transgenic silkworm technology at lower cost than traditional affinity carriers

    Genome Sequence of Kitasatospora setae NBRC 14216T: An Evolutionary Snapshot of the Family Streptomycetaceae

    Kitasatospora setae NBRC 14216T (=KM-6054T) is known to produce setamycin (bafilomycin B1), which possesses antitrichomonal activity. The genus Kitasatospora is morphologically similar to the genus Streptomyces, although the two are distinguishable on the basis of cell wall composition and the 16S rDNA sequence. We have determined the complete genome sequence of K. setae NBRC 14216T as the first Streptomycetaceae genome outside the genus Streptomyces. The genome is a single linear chromosome of 8,783,278 bp with terminal inverted repeats of 127,148 bp, predicted to encode 7,569 protein-coding genes, 9 rRNA operons, 1 tmRNA gene and 74 tRNA genes. Although these features resemble those of Streptomyces, genome-wide comparison of orthologous genes between K. setae and Streptomyces revealed a smaller extent of synteny. Multilocus phylogenetic analysis based on amino acid sequences unequivocally placed K. setae outside the genus Streptomyces. Although many of the genes related to morphological differentiation identified in Streptomyces were highly conserved in K. setae, there were some differences, such as the apparent absence of the AmfS (SapB) class of surfactant protein and differences in the copy number and variation of paralogous components involved in cell wall synthesis

    Anti-Prion Activity of Brilliant Blue G

    BACKGROUND: Prion diseases are fatal neurodegenerative disorders with no effective therapy currently available. Accumulating evidence has implicated over-activation of the P2X7 ionotropic purinergic receptor (P2X7R) in the progression of neuronal loss in several neurodegenerative diseases. This has led to the speculation that simultaneous blockade of this receptor and of prion replication could be an effective therapeutic strategy for prion diseases. We focused on Brilliant Blue G (BBG), a well-known P2X7R antagonist whose chemical structure is expected to confer anti-prion activity, and examined its inhibitory effect on the accumulation of pathogenic isoforms of prion protein (PrPres) in a cellular and a mouse model of prion disease in order to determine its therapeutic potential. PRINCIPAL FINDINGS: BBG prevented PrPres accumulation in infected MG20 microglial and N2a neural cells at 50% inhibitory concentrations of 14.6 and 3.2 µM, respectively. Administration of BBG in vivo also reduced PrPres accumulation in the brains of mice with prion disease. However, it did not appear to slow the disease progression compared to the vehicle-treated controls, implying a complex role of P2X7R in the neuronal degeneration seen in prion diseases. SIGNIFICANCE: These results provide novel insights into the pathophysiology of prion diseases and have important implications for the treatment of these disorders