14 research outputs found

    A combination of a DNA-chimera siRNA against PLK-1 and zoledronic acid suppresses the growth of malignant mesothelioma cells in vitro.

    Although novel agents effective against malignant mesothelioma (MM) have been developed, the prognosis of patients with MM remains poor. We generated a DNA-chimeric siRNA against polo-like kinase-1 (PLK-1), which was more stable in human serum than the non-chimeric siRNA. The chimeric PLK-1 siRNA inhibited MM cell proliferation through the induction of apoptosis. Next, we investigated the effects of zoledronic acid (ZOL) on MM cells and found that ZOL also induced apoptosis in MM cells. Furthermore, ZOL augmented the inhibitory effects of the PLK-1 siRNA. In conclusion, combining a PLK-1 siRNA with ZOL treatment is an attractive strategy against MM.

    Improving topic modeling through homophily for legal documents

    Topic modeling that can automatically assign topics to legal documents is important in computational law. The relevance of the modeled topics depends strongly on the legal context in which they are used. At the same time, references to statutes and prior cases are key elements that judges rely on to rule on a case. Taken together, these references form a network whose structure can be analysed with network analysis. However, the content of the referenced documents is not always accessible. Even in that case, the reference structure itself shows that documents share latent similar characteristics. We propose to use this latent structure to improve topic modeling of law cases through document homophily. In this paper, we explore the use of homophily networks extracted from two types of references, prior cases and statute laws, to enhance topic modeling on legal case documents. We conduct a detailed analysis on a dataset of rich legal cases, the COLIEE dataset, to create these networks. The homophily networks consist of nodes for legal cases and weighted edges for the two families of references between case nodes. We further propose models that use the edge weights for topic modeling. In particular, we propose a cutting model and a weighting model to improve the relational topic model (RTM). The cutting model uses edges with weights above a threshold as document links in RTM; the weighting model uses the edge weights to weight the link probability function in RTM. The weights can be obtained either from co-citations or from the cosine similarity of an embedding of the homophily networks. Experiments show that using the homophily networks for topic modeling significantly outperforms previous studies, and that the weighting model is more effective than the cutting model.
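
    As a purely illustrative sketch (not code from the paper), the following Python snippet shows how a co-citation homophily network could be built over a handful of toy cases and then used in the two proposed ways: the cutting model keeps only edges above a threshold as binary document links for RTM, while the weighting model keeps the raw edge weights to scale the link probability. The case identifiers, citations, and threshold value are all assumptions.

    from collections import defaultdict
    from itertools import combinations

    # Toy legal cases with the statutes and prior cases they cite.
    # The identifiers are made up for illustration only.
    cases = {
        "case_A": {"Act_12_s3", "case_X", "Act_7_s1"},
        "case_B": {"Act_12_s3", "case_X"},
        "case_C": {"Act_7_s1"},
        "case_D": {"case_X", "Act_9_s2"},
    }

    # Homophily network: nodes are cases, edge weights are co-citation
    # counts (number of references shared by a pair of cases).
    weights = defaultdict(float)
    for a, b in combinations(cases, 2):
        shared = len(cases[a] & cases[b])
        if shared:
            weights[(a, b)] = float(shared)

    # Cutting model: edges above a threshold become binary links in RTM.
    THRESHOLD = 1.0  # assumed value; in practice tuned on the dataset
    binary_links = [pair for pair, w in weights.items() if w > THRESHOLD]

    # Weighting model: keep the weights themselves; they would scale the
    # link probability term when fitting RTM (data preparation only here).
    weighted_links = dict(weights)

    print("cutting model links:", binary_links)
    print("weighting model links:", weighted_links)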

    Information Extraction from Public Meeting Articles

    Public meeting articles are key to understanding the history of public opinion and the public sphere in Australia. Information extraction from these articles can yield new insights into Australian history. In this paper, we create an information extraction dataset in the public meeting domain. We manually annotate the date and time, place, purpose, people who requested the meeting, people who convened the meeting, and people who were convened, for 1,258 public meeting articles. We further present an information extraction system that formulates information extraction from public meeting articles as a machine reading comprehension task. Experiments indicate that our system achieves an F1 score of 74.98% for information extraction from public meeting articles.
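
    To make the machine reading comprehension formulation concrete, here is a minimal sketch using an off-the-shelf extractive QA model as a stand-in for the paper's system; the checkpoint, the example article, and the question wordings for each annotated field are assumptions, not details taken from the paper.

    from transformers import pipeline

    # Off-the-shelf extractive QA model as a stand-in for the MRC system.
    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

    article = (
        "A public meeting will be held at the Town Hall on Tuesday, "
        "14 March at 7 pm, convened by the Mayor at the request of "
        "local ratepayers, to discuss the proposed railway extension."
    )

    # One question per annotated field (date and time, place, purpose,
    # people who requested the meeting, people who convened it, ...).
    questions = {
        "date_time": "When will the meeting be held?",
        "place": "Where will the meeting be held?",
        "purpose": "What is the purpose of the meeting?",
        "convener": "Who convened the meeting?",
        "requesters": "Who requested the meeting?",
    }

    for field, question in questions.items():
        answer = qa(question=question, context=article)
        print(f"{field}: {answer['answer']} (score={answer['score']:.2f})")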

    Contextualized Word Representations for Multi-Sense Embedding

    Distributed word representations are used in many natural language processing tasks. When dealing with ambiguous words, it is desirable to generate multi-sense embeddings, i.e., multiple representations per word. Several methods have therefore been proposed to generate different word representations based on part of speech or topic, but these methods tend to be too coarse-grained to handle ambiguity. In this paper, we propose methods that generate multiple representations for each word based on dependency structure relations. To cope with the data sparseness caused by the increased vocabulary size, the initial value of each word representation is set using pre-trained word representations. The representations of low-frequency words are expected to remain in the vicinity of their initial values, which in turn reduces the negative effects of data sparseness. Extensive evaluation confirms the effectiveness of our methods, which significantly outperform state-of-the-art methods for multi-sense embeddings. Detailed analysis shows that the data sparseness problem is resolved thanks to the pre-training.
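
    The initialisation idea can be sketched as follows, assuming sense-specific vectors are keyed by a word together with a dependency relation and are seeded from pre-trained, sense-agnostic vectors; the dimensionality, noise scale, and example relations are assumptions for illustration, not the paper's actual setup.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 100  # embedding dimensionality (assumed)

    # Pre-trained, sense-agnostic word vectors; a tiny random stand-in here.
    pretrained = {w: rng.normal(size=DIM) for w in ["bank", "run", "deposit"]}

    # Sense-specific vectors keyed by (word, dependency relation).
    sense_vectors = {}

    def get_sense_vector(word, dep_relation):
        key = (word, dep_relation)
        if key not in sense_vectors:
            # Seed each sense from the pre-trained vector of its word, with
            # a little noise (scale assumed), so that low-frequency senses
            # stay close to this starting point during training.
            sense_vectors[key] = pretrained[word] + rng.normal(scale=0.01, size=DIM)
        return sense_vectors[key]

    # Example: "bank" as the object of "deposit" vs. the subject of "erode"
    # gets two vectors that both start near pretrained["bank"].
    v1 = get_sense_vector("bank", "dobj:deposit")
    v2 = get_sense_vector("bank", "nsubj:erode")
    print(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))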

    Subdividing Word Embeddings into Sense Embeddings Based on Dependency Structure

    Distributed word representations are used in many natural language processing tasks. However, approaches that assign a single representation to each word mix together the information of the different senses of a polysemous word. To address this problem, previous work assigned different representations per part of speech or per topic, but these methods handle polysemy at too coarse a granularity. In this work, we propose a method that assigns multiple representations to each word using dependency relations between words as cues. While the proposed method can handle polysemy at a finer granularity than previous work, it raises concerns about data sparseness. We address this by using representations pre-trained without distinguishing senses as the initial values of the sense representations of polysemous words. Experiments on a semantic similarity estimation task and a lexical substitution task show that the proposed method outperforms previous methods that assign multiple representations to each word. An analysis of the effect of word frequency also confirms that using pre-trained representations is effective for resolving the data sparseness problem.