
    TechMiner: Extracting Technologies from Academic Publications

    In recent years we have seen the emergence of a variety of scholarly datasets. Typically these capture ‘standard’ scholarly entities and their connections, such as authors, affiliations, venues, publications, and citations. However, as the repositories grow and the technology improves, researchers are adding new entities to these repositories to develop a richer model of the scholarly domain. In this paper we introduce TechMiner, a new approach that combines NLP, machine learning, and semantic technologies to mine technologies from research publications and to generate an OWL ontology describing their relationships with other research entities. The resulting knowledge base can support a number of tasks, such as richer semantic search, which can exploit the technology dimension to support better retrieval of publications; richer expert search; monitoring the emergence and impact of new technologies, both within and across scientific fields; and studying the scholarly dynamics associated with the emergence of new technologies. TechMiner was evaluated on a manually annotated gold standard and the results indicate that it significantly outperforms alternative NLP approaches and that its semantic features yield substantial gains in both recall and precision.
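    The abstract does not spell out TechMiner's pipeline, but the general idea of mining candidate technology mentions and describing them in an OWL ontology can be sketched as follows. This is a minimal illustrative example, not the authors' method: the cue-word heuristic, the extract_candidate_technologies helper, and the example.org namespace are all assumptions, with spaCy and rdflib used only as convenient stand-ins.

```python
# Minimal sketch (not the TechMiner pipeline): extract candidate technology
# mentions with spaCy and emit them as OWL individuals with rdflib.
# All names (EX namespace, extract_candidate_technologies) are illustrative.
import spacy
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/techminer#")  # hypothetical namespace

def extract_candidate_technologies(text: str) -> list[str]:
    """Very rough candidate extraction: noun chunks containing cue words."""
    nlp = spacy.load("en_core_web_sm")
    cues = {"algorithm", "framework", "system", "tool", "method"}
    return [chunk.text for chunk in nlp(text).noun_chunks
            if any(tok.lemma_.lower() in cues for tok in chunk)]

def build_ontology(paper_id: str, abstract: str) -> Graph:
    """Link a publication to the technologies mentioned in its abstract."""
    g = Graph()
    g.bind("ex", EX)
    paper = EX[paper_id]
    g.add((paper, RDF.type, EX.Publication))
    for i, tech in enumerate(extract_candidate_technologies(abstract)):
        node = EX[f"{paper_id}_tech{i}"]
        g.add((node, RDF.type, OWL.NamedIndividual))
        g.add((node, RDF.type, EX.Technology))
        g.add((node, RDFS.label, Literal(tech)))
        g.add((paper, EX.mentionsTechnology, node))
    return g

if __name__ == "__main__":
    abstract = "We present a deep learning framework for citation recommendation."
    print(build_ontology("paper42", abstract).serialize(format="turtle"))
```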

    Property rights and externalities: The uneasy case of knowledge

    Drawing from Coase's methodological lesson, this article discusses the specific case of knowledge, which was for a long time chiefly governed by exchange mechanisms lying outside the market and has only recently been brought into the market. Its recent, heavy "colonization" by the property paradigm has progressively elicited criticism from commentators who, for various reasons, believe that the market can play only a limited role in pursuing efficiency in the knowledge domain. The article agrees with this thesis and tries to explain it by showing that, in specific circumstances, property rights can produce distinct market failures that affect the social cost and can consequently prevent the attainment of social welfare. In particular, the arguments set forth here concern three distinct externalities that arise when enforcing a property-rights system over knowledge. First, the existence of a property right may itself alter individual preferences and social norms, thus causing specific changes in individuals' behaviour. Second, the idiosyncratic nature of knowledge, as a collective and inherently indivisible entity, means that its full propertization can be expected to produce significant harm. Third, property rights can cause endogenous drifts in the market structure arising from the exclusive power granted to the right holder: though generally intended as a necessary mechanism for extracting a price from the consumer, in the knowledge domain property rights can become a device for extracting rents from the market.
    Keywords: property rights, knowledge, invention, externalities, efficiency

    Facilitating Technology Transfer by Patent Knowledge Graph

    Technologies are one of the most important driving forces of societal development, and realizing their value depends heavily on technology transfer. Given the importance of technologies and technology transfer, an increasingly large amount of money has been invested to encourage technological innovation and technology transfer worldwide. However, while numerous innovative technologies are invented, most of them remain latent and un-transferred. The comprehension of technical documents and the identification of appropriate technologies for given needs are challenging problems in technology transfer, due to information asymmetry and information overload. There is a lack of a common knowledge base that can reveal the technical details of technical documents and assist with the identification of suitable technologies. To bridge this gap, this research proposes constructing a knowledge graph to facilitate technology transfer. A case study is conducted to show the construction of a patent knowledge graph and to illustrate its benefit for finding relevant patents, the most common and important form of technology.
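    As a rough illustration of the kind of patent knowledge graph described above (the paper's own construction is given in its case study, not here), the sketch below links patents to technology terms and ranks other patents by the terms they share; the patent IDs, terms, and helper names are invented for the example.

```python
# Illustrative toy knowledge graph: patents connected to extracted technology
# terms, with related patents retrieved via shared terms. Not the paper's method.
import networkx as nx

def build_patent_graph(patents: dict[str, set[str]]) -> nx.Graph:
    """patents: patent id -> set of technology terms extracted from its text."""
    g = nx.Graph()
    for pid, terms in patents.items():
        g.add_node(pid, kind="patent")
        for term in terms:
            g.add_node(term, kind="technology")
            g.add_edge(pid, term, relation="mentions")
    return g

def related_patents(g: nx.Graph, pid: str) -> list[tuple[str, int]]:
    """Rank other patents by the number of technology terms shared with pid."""
    scores: dict[str, int] = {}
    for term in g.neighbors(pid):
        for other in g.neighbors(term):
            if other != pid and g.nodes[other]["kind"] == "patent":
                scores[other] = scores.get(other, 0) + 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    g = build_patent_graph({
        "US1": {"lithium battery", "anode coating"},
        "US2": {"lithium battery", "solid electrolyte"},
        "US3": {"image sensor"},
    })
    print(related_patents(g, "US1"))  # [('US2', 1)]
```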

    Template Mining for Information Extraction from Digital Documents

    Published or submitted for publication.

    THE TECHNOLOGY FORECASTING OF NEW MATERIALS: THE EXAMPLE OF NANOSIZED CERAMIC POWDERS

    New materials have been recognized as significant drivers of corporate growth and profitability in today's fast-changing environments, and nanosized ceramic powders now play an important part in the new materials field. However, little has been done to discuss technology forecasting for new materials development. Accordingly, this study applied the growth curve method to investigate the technology performance of nanosized ceramic powders. We adopted bibliometric analysis of the EI database and the United States Patent and Trademark Office (USPTO) database to obtain the data for this work. The results show that nanosized ceramic powders are still in the initial growth period of their technological life cycles. The technology performance of nanosized ceramic powders derived from the EI and USPTO databases was similar, with each source verifying the other, and there is partial substitution between traditional and nanosized ceramic powders. Bibliometric analysis is proposed as a simple and efficient tool to link science and technology activities and to obtain quantitative, historical data that can help researchers in technology forecasting, especially in fields with little available historical data, such as new materials.
    Keywords: new materials, bibliometric analysis, technology forecasting
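    The growth curve method mentioned above typically amounts to fitting an S-shaped curve to cumulative publication or patent counts over time. The example below is a hedged sketch rather than the authors' actual analysis: it fits a logistic curve with SciPy to made-up yearly counts, and the fitted parameters K (saturation level), r (growth rate), and t0 (inflection year) indicate where a technology sits in its life cycle.

```python
# Sketch of growth-curve fitting for technology forecasting. The yearly counts
# below are invented; the paper's EI/USPTO data are not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative output: K / (1 + exp(-r * (t - t0)))."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(1995, 2006)
cumulative = np.array([3, 5, 9, 15, 24, 40, 58, 85, 110, 140, 165], dtype=float)

# Initial guesses: K ~ 2x current maximum, moderate growth rate, midpoint near the mean year.
(K, r, t0), _ = curve_fit(logistic, years, cumulative,
                          p0=[2 * cumulative.max(), 0.5, years.mean()],
                          maxfev=10000)
print(f"Estimated saturation K={K:.0f}, growth rate r={r:.2f}, inflection year t0={t0:.1f}")
print(f"Current stage: {cumulative[-1] / K:.0%} of estimated saturation")
```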

    Natural Language Processing in-and-for Design Research

    We review the scholarly contributions that utilise Natural Language Processing (NLP) methods to support the design process. Using a heuristic approach, we collected 223 articles published in 32 journals between 1991 and the present. We present the state of the art of NLP in-and-for design research by reviewing these articles according to the type of natural language text source: internal reports, design concepts, discourse transcripts, technical publications, consumer opinions, and others. Upon summarizing these contributions and identifying their gaps, we utilise an existing design innovation framework to identify the applications that are currently supported by NLP. We then propose several methodological and theoretical directions for future NLP in-and-for design research.

    Automated data analysis of unstructured grey literature in health research: A mapping review

    © 2023 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. The amount of grey literature and ‘softer’ intelligence from social media or websites is vast. Given the long lead-times of producing high-quality peer-reviewed health information, this is causing a demand for new ways to provide prompt input for secondary research. To our knowledge, this is the first review of automated data extraction methods or tools for health-related grey literature and soft data, with a focus on (semi)automating horizon scans, health technology assessments (HTA), evidence maps, or other literature reviews. We searched six databases to cover both health- and computer-science literature. After deduplication, 10% of the search results were screened by two reviewers; the remainder was single-screened up to an estimated 95% sensitivity, and screening was stopped early after screening an additional 1000 results with no new includes. All full texts were retrieved, screened, and extracted by a single reviewer and 10% were checked in duplicate. We included 84 papers covering automation for health-related social media, internet fora, news, patents, government agencies and charities, or trial registers. From each paper, we extracted data about important functionalities for users of the tool or method; information about the level of support and reliability; and about practical challenges and research gaps. Poor availability of code, data, and usable tools leads to low transparency regarding performance and duplication of work. Financial implications, scalability, integration into downstream workflows, and meaningful evaluations should be carefully planned before starting to develop a tool, given the vast amounts of data and opportunities those tools offer to expedite research.
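    One concrete element of the review's method, stopping the single screening after 1000 further results yielded no new includes, can be illustrated with a simple stopping-rule loop. The sketch below is an assumption about how such a rule might be coded, not the authors' protocol, and the demo data and function names are invented.

```python
# Sketch of a consecutive-excludes stopping heuristic for priority screening.
from typing import Callable, Iterable

def screen_with_stopping_rule(records: Iterable[str],
                              is_relevant: Callable[[str], bool],
                              stop_after: int = 1000) -> list[str]:
    """Return included records, stopping after `stop_after` consecutive excludes."""
    includes: list[str] = []
    consecutive_excludes = 0
    for record in records:
        if is_relevant(record):
            includes.append(record)
            consecutive_excludes = 0
        else:
            consecutive_excludes += 1
            if consecutive_excludes >= stop_after:
                break  # no new includes for a long stretch: stop screening
    return includes

if __name__ == "__main__":
    demo = ["grey literature HTA tool", "cat photos", "patent horizon scan"] + ["noise"] * 2000
    hits = screen_with_stopping_rule(demo, lambda r: "scan" in r or "HTA" in r, stop_after=1000)
    print(len(hits), "includes before stopping")
```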

    Transformer Models: From Model Inspection to Applications in Patents

    Get PDF
    Natural Language Processing is used to address several tasks, both linguistic ones, e.g. part-of-speech tagging and dependency parsing, and downstream tasks, e.g. machine translation and sentiment analysis. To tackle these tasks, dedicated approaches have been developed over time. A methodology that increases performance on all tasks in a unified manner is language modelling: a model is pre-trained to replace masked tokens in large amounts of text, either randomly within chunks of text or sequentially one after the other, in order to develop general-purpose representations that can be used to improve performance on many downstream tasks at once. The neural network architecture that currently performs this task best is the transformer; moreover, model size and data scale are essential to the development of information-rich representations. The availability of large-scale datasets and the use of models with billions of parameters are currently the most effective path towards better representations of text. However, with large models comes difficulty in interpreting the output they provide. Therefore, several studies have been carried out to investigate the representations provided by transformer models trained on large-scale datasets. In this thesis I investigate these models from several perspectives. I study the linguistic properties of the representations provided by BERT, a language model trained mostly on English Wikipedia, to understand whether the information it encodes is localized within specific entries of the vector representation. In doing so, I identify special weights that show high relevance to several distinct linguistic probing tasks. Subsequently, I investigate the cause of these special weights and link them to token distribution and special tokens. To complement this general-purpose analysis and extend it to more specific use cases, given the wide range of applications for language models, I study their effectiveness on technical documentation, specifically patents. I use both general-purpose and dedicated models to identify domain-specific entities, such as users of the inventions and technologies, or to segment patent text. Throughout, I complement performance analysis with careful measurements of data and model properties to understand whether the conclusions drawn for general-purpose models also hold in this context.
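    The masked-token pre-training objective described above can be illustrated with the Hugging Face transformers fill-mask pipeline and a pretrained BERT model; this only demonstrates what replacing masked tokens means, and is unrelated to the thesis's probing or patent experiments.

```python
# Minimal illustration of the masked language modelling objective:
# ask BERT to fill in a masked token and show its top predictions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("The patent claims a novel [MASK] for energy storage."):
    print(f"{pred['token_str']:>12s}  score={pred['score']:.3f}")
```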
