
    Innovation through pertinent patents research based on physical phenomena involved

    Innovative solutions to complex industrial problems can be found by mining the knowledge contained in patents. Traditional keyword search in patent databases has been widely used, and computational methods that limit human intervention have more recently been developed. We aim to define a method that improves the search for relevant patents for solving industrial problems and, specifically, for deducing evolution opportunities. Non-automatic, semi-automatic, and automatic search methods all rely on keywords. For a detailed keyword search, we propose basing the keywords on a functional decomposition of the product and on an analysis of the physical phenomena involved in achieving the function to fulfill. The search for solutions for the design of a bi-phasic separator in deep offshore conditions illustrates the method presented in this paper.
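
    A minimal sketch of the kind of keyword construction this abstract describes: each sub-function from the functional decomposition is paired with the physical phenomena that could achieve it, and the pairs are turned into Boolean queries for a patent database. The example functions, phenomena, and query syntax are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: build patent-search queries from a functional
# decomposition and the physical phenomena that could realize each function.
# The example entries and the Boolean query format are assumptions, not the
# paper's actual vocabulary.

functional_decomposition = {
    # sub-function             : candidate physical phenomena
    "separate gas from liquid": ["gravity settling", "centrifugal force", "coalescence"],
    "control liquid level":     ["hydrostatic pressure", "float displacement"],
}

def build_queries(decomposition):
    """Turn (function, phenomenon) pairs into Boolean keyword queries."""
    queries = []
    for function, phenomena in decomposition.items():
        for phenomenon in phenomena:
            queries.append(f'("{function}") AND ("{phenomenon}")')
    return queries

for query in build_queries(functional_decomposition):
    print(query)
```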

    Template Mining for Information Extraction from Digital Documents

    Published or submitted for publication.

    Natural language processing

    Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems, such as text summarization, information extraction, and information retrieval, including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) the evaluation of NLP systems.

    The Patent Spiral

    Examination — the process of reviewing a patent application and deciding whether to grant the requested patent — improves patent quality in two ways. It acts as a substantive screen, filtering out meritless applications and improving meritorious ones. It also acts as a costly screen, discouraging applicants from seeking low-value patents. Yet despite these dual roles, the patent system has a substantial quality problem: it is both too easy to get a patent (because examiners grant invalid patents that should be filtered out by a substantive screen) and too cheap to do so (because examiners grant low-value nuisance patents that should be filtered out by a costly screen). This Article argues that these flaws in patent screening are both worse and better than has been recognized. The flaws are worse because they are not static, but dynamic, interacting to reinforce each other. This interaction leads to a vicious cycle of more and more patents that should never have been granted. When patents are too easily obtained, that undermines the costly screen, because even a plainly invalid patent has a nuisance value greater than its cost. And when patents are too cheaply obtained, that undermines the substantive screen, because there will be more patent applications, and the examination system cannot scale indefinitely without sacrificing accuracy. The result is a cycle of more and more applications, screened less and less accurately, yielding more and more low-quality patents. And although it is hard to test directly whether the quality of patent examination is falling, there is evidence suggesting that this cycle is affecting the patent system. At the same time, these flaws are not as bad as they seem, because this cycle may be surprisingly easy to break. The cycle gives policymakers substantial flexibility in designing patent reforms, because the effect of a reform on one piece of the cycle will propagate to the rest of the cycle. Reformers can concentrate on the easiest places to make reforms (like the litigation system) instead of trying to do the impossible (like eliminating examination errors). Such reforms would not only have local effects, but could help make the entire patent system work better.

    Community Detection and Growth Potential Prediction from Patent Citation Networks

    The scoring of patents is useful for technology management analysis, which creates a need for citation-network clustering and for the prediction of future citations to support practical patent scoring. In this paper, we propose a community detection method based on Node2vec, and, in order to analyze growth potential, we compare three time series analysis methods: Long Short-Term Memory (LSTM), the ARIMA model, and the Hawkes process. In our experiments, we could identify common technical points within the clusters found by Node2vec. Furthermore, we found that the prediction accuracy of the ARIMA model was higher than that of the other models. Comment: arXiv admin note: text overlap with arXiv:1607.00653 by other authors.
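
    The sketch below is a rough illustration of the pipeline this abstract outlines, not the paper's implementation: it substitutes plain uniform random walks plus Word2Vec for Node2vec, uses k-means to form communities, and fits an ARIMA model to a synthetic yearly citation series. The graph, the series, and all parameters are assumptions chosen for illustration.

```python
# Illustrative sketch (not the paper's pipeline): embed a citation graph with
# Word2Vec over random walks (a simplified stand-in for Node2vec), cluster the
# embeddings to detect communities, and forecast yearly citation counts with ARIMA.
import random
from collections import Counter

import networkx as nx
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from statsmodels.tsa.arima.model import ARIMA

# Synthetic citation network (patents citing earlier patents).
G = nx.gnm_random_graph(200, 600, seed=0, directed=True)

def random_walks(graph, walks_per_node=10, walk_length=20):
    """Uniform random walks over the undirected view of the graph."""
    g = graph.to_undirected()
    walks = []
    for _ in range(walks_per_node):
        for start in g.nodes():
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(g.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(n) for n in walk])
    return walks

# Embed nodes, then cluster the embeddings into candidate communities.
model = Word2Vec(random_walks(G), vector_size=64, window=5, min_count=0, sg=1, epochs=5)
nodes = [str(n) for n in G.nodes()]
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(
    [model.wv[n] for n in nodes])
print("community sizes:", sorted(Counter(labels).values(), reverse=True))

# Forecast a community's yearly citation counts with ARIMA (synthetic series).
yearly_citations = [12, 15, 21, 25, 31, 40, 44, 52, 61, 70]
forecast = ARIMA(yearly_citations, order=(1, 1, 1)).fit().forecast(steps=3)
print("3-year citation forecast:", forecast)
```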

    Forecasting the Spreading of Technologies in Research Communities

    Technologies such as algorithms, applications and formats are an important part of the knowledge produced and reused in the research process. Typically, a technology is expected to originate in the context of a research area and then spread and contribute to several other fields. For example, Semantic Web technologies have been successfully adopted by a variety of fields, e.g., Information Retrieval, Human Computer Interaction, Biology, and many others. Unfortunately, the spreading of technologies across research areas may be a slow and inefficient process, since it is easy for researchers to be unaware of potentially relevant solutions produced by other research communities. In this paper, we hypothesise that it is possible to learn typical technology propagation patterns from historical data and to exploit this knowledge (i) to anticipate where a technology may be adopted next and (ii) to alert relevant stakeholders about emerging and relevant technologies in other fields. To do so, we propose the Technology-Topic Framework, a novel approach which uses a semantically enhanced technology-topic model to forecast the propagation of technologies to research areas. A formal evaluation of the approach on a set of technologies in the Semantic Web and Artificial Intelligence areas has produced excellent results, confirming the validity of our solution.
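
    To make the idea of learning propagation patterns from historical data concrete, here is a toy sketch. It is not the Technology-Topic Framework itself: it simply counts, over a made-up adoption history, how often one research area adopted a technology after another area did, and ranks candidate areas for a new technology by those counts. All technology names, areas, and records below are hypothetical.

```python
# Toy propagation-pattern predictor (illustration only, not the paper's model).
from collections import defaultdict
from itertools import combinations

# (technology, research area, year of first adoption) -- synthetic history.
adoptions = [
    ("ontologies", "Semantic Web", 2001), ("ontologies", "Information Retrieval", 2004),
    ("ontologies", "Biology", 2006),
    ("word embeddings", "NLP", 2013), ("word embeddings", "Information Retrieval", 2015),
    ("word embeddings", "Biology", 2017),
]

# Count how often area B adopted a technology after area A already had.
follows = defaultdict(int)
by_tech = defaultdict(list)
for tech, area, year in adoptions:
    by_tech[tech].append((year, area))
for tech, events in by_tech.items():
    events.sort()
    for (y1, a1), (y2, a2) in combinations(events, 2):
        follows[(a1, a2)] += 1

def predict_next_area(current_areas, candidate_areas):
    """Rank candidate areas by how often they followed the current adopters."""
    return max(candidate_areas,
               key=lambda c: sum(follows[(a, c)] for a in current_areas))

# A hypothetical new technology already adopted in these areas:
print(predict_next_area({"Semantic Web", "Information Retrieval"},
                        {"Biology", "Human Computer Interaction"}))
```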

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, fundamental design decisions in big data systems include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution to a specific real-world problem, big data systems are no exception. As far as storage is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the technology that best fulfills its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage also needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution, which brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation of most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
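
    As a concrete reference point for the four data models compared in the paper, the sketch below expresses the same made-up record in document-oriented, key-value, wide-column, and graph form using plain Python structures. The field names and layouts are illustrative assumptions, not tied to any particular NoSQL product.

```python
# Illustrative sketch: one (made-up) sensor reading expressed in the four
# NoSQL data models compared in the paper. Field names and layouts are
# assumptions chosen for illustration.

reading = {"sensor": "s-17", "ts": "2024-01-01T00:00:00Z", "temp_c": 21.5, "site": "plant-3"}

# 1. Document-oriented: the whole record is one self-contained, JSON-like document.
document = {"_id": "s-17/2024-01-01T00:00:00Z", **reading}

# 2. Key-value: an opaque value looked up by a single composite key.
key_value = {("s-17", "2024-01-01T00:00:00Z"): reading}

# 3. Wide-column: rows keyed by sensor, with one column per timestamped field.
wide_column = {"s-17": {("2024-01-01T00:00:00Z", "temp_c"): 21.5,
                        ("2024-01-01T00:00:00Z", "site"): "plant-3"}}

# 4. Graph: entities as nodes, relationships as labeled edges.
nodes = [("s-17", {"type": "sensor"}), ("plant-3", {"type": "site"})]
edges = [("s-17", "plant-3", {"rel": "LOCATED_AT"})]

print(document, key_value, wide_column, nodes, edges, sep="\n")
```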