
    Predictive Analytics for Fantasy Football: Predicting Player Performance Across the NFL

    The goal of this research is to develop a quantitative method of ranking and listing players in terms of performance. These rankings can then be used to evaluate players prior to and during a fantasy football draft. To produce these rankings, we develop a methodology for forecasting the performance of each individual player (on different metrics) for the upcoming season (16 games) and use these forecasts to estimate player fantasy football scores for the 2018 season. More specifically, this work answers the following: In what order should players be drafted in a 2018 fantasy football draft, and why? Which players can be expected to perform the best at their given position (Quarterback, Running Back, Wide Receiver, Kicker, Team Defense) in 2018, and which players should we expect to perform poorly?
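    A minimal sketch of the ranking idea described above: project each player's per-game fantasy points for the upcoming season and order the draft board by projected 16-game totals. The column names, toy data, and recency-weighting scheme are illustrative assumptions, not the paper's actual forecasting model.

    # Project per-game fantasy points from prior seasons and rank players.
    import pandas as pd

    # Hypothetical historical data: one row per player-season.
    history = pd.DataFrame({
        "player":   ["QB A", "QB A", "RB B", "RB B", "WR C", "WR C"],
        "position": ["QB", "QB", "RB", "RB", "WR", "WR"],
        "season":   [2016, 2017, 2016, 2017, 2016, 2017],
        "points_per_game": [19.2, 21.5, 14.1, 12.8, 10.4, 13.0],
    })

    # Weight recent seasons more heavily (simple exponential decay by recency).
    history["weight"] = 0.6 ** (2017 - history["season"])

    projection = (
        history.assign(weighted=history["points_per_game"] * history["weight"])
               .groupby(["player", "position"])
               .apply(lambda g: g["weighted"].sum() / g["weight"].sum())
               .rename("proj_ppg")
               .reset_index()
    )
    projection["proj_season_total"] = projection["proj_ppg"] * 16  # 16-game season

    # Draft order: highest projected season total first.
    draft_board = projection.sort_values("proj_season_total", ascending=False)
    print(draft_board)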

    Patent Citation Dynamics Modeling via Multi-Attention Recurrent Networks

    Modeling and forecasting forward citations to a patent is a central task for the discovery of emerging technologies and for measuring the pulse of inventive progress. Conventional methods for forecasting these forward citations cast the problem as analysis of temporal point processes which rely on the conditional intensity of previously received citations. Recent approaches model the conditional intensity with a chain of recurrent neural networks to capture memory dependency, in the hope of relaxing the restrictions imposed by the parametric form of the intensity function. For the problem of patent citations, we observe that forecasting a patent's chain of citations benefits not only from the patent's own history but also from the historical citations of assignees and inventors associated with that patent. In this paper, we propose a sequence-to-sequence model which employs an attention-of-attention mechanism to capture the dependencies among these multiple time sequences. Furthermore, the proposed model is able to forecast both the timestamp and the category of a patent's next citation. Extensive experiments on a large patent citation dataset collected from the USPTO demonstrate that the proposed model outperforms state-of-the-art models at forward citation forecasting.
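    A minimal PyTorch sketch of the general idea: encode a patent's citation history with a recurrent network, attend over the encoded steps, and jointly predict the time to the next citation and its category. The layer sizes, the single-sequence input (the paper also attends over assignee and inventor histories via attention-of-attention), and the loss choices are simplifying assumptions.

    import torch
    import torch.nn as nn

    class CitationForecaster(nn.Module):
        def __init__(self, n_categories, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(n_categories, 16)           # category of each past citation
            self.rnn = nn.GRU(16 + 1, hidden, batch_first=True)   # +1 for inter-citation time gap
            self.attn = nn.Linear(hidden, 1)                      # additive attention scores
            self.time_head = nn.Linear(hidden, 1)                 # time until next citation
            self.cat_head = nn.Linear(hidden, n_categories)       # category of next citation

        def forward(self, cats, gaps):
            # cats: (batch, seq) int64 category ids; gaps: (batch, seq) float32 time gaps
            x = torch.cat([self.embed(cats), gaps.unsqueeze(-1)], dim=-1)
            h, _ = self.rnn(x)                                    # (batch, seq, hidden)
            w = torch.softmax(self.attn(h), dim=1)                # attention weights over steps
            ctx = (w * h).sum(dim=1)                              # context vector
            return self.time_head(ctx).squeeze(-1), self.cat_head(ctx)

    model = CitationForecaster(n_categories=8)
    cats = torch.randint(0, 8, (4, 10))        # toy batch: 4 patents, 10 past citations each
    gaps = torch.rand(4, 10)
    pred_gap, cat_logits = model(cats, gaps)
    loss = nn.functional.mse_loss(pred_gap, torch.rand(4)) + \
           nn.functional.cross_entropy(cat_logits, torch.randint(0, 8, (4,)))
    loss.backward()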

    THE TECHNOLOGY FORECASTING OF NEW MATERIALS: THE EXAMPLE OF NANOSIZED CERAMIC POWDERS

    New materials have been recognized as significant drivers for corporate growth and profitability in today’s fast-changing environments, and nanosized ceramic powders now play an important part in the new-materials field. However, little has been done to discuss technology forecasting for new-materials development. Accordingly, this study applied the growth curve method to investigate the technology performance of nanosized ceramic powders. We adopted bibliometric analysis of the EI database and the US Patent and Trademark Office (USPTO) database to obtain the data for this work. The analysis showed that nanosized ceramic powders were all in the initial growth periods of their technological life cycles. The technology performance of nanosized ceramic powders measured through the EI and USPTO databases was similar, each corroborating the other, and there was partial substitution between traditional and nanosized ceramic powders. Bibliometric analysis is proposed as a simple and efficient tool to link science and technology activities and to obtain quantitative historical data that can help researchers in technology forecasting, especially in fields where little historical data is available, such as new materials. Keywords: new materials, bibliometric analysis, technology forecasting.
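    A sketch of the growth-curve approach named above: fit a logistic (Pearl) curve to cumulative publication or patent counts over time to estimate where a technology sits on its life cycle. The counts below are made up for illustration, and the logistic form is only one of the growth curves commonly used.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, L, k, t0):
        # Pearl growth curve: L is the saturation level, k the growth rate, t0 the inflection year.
        return L / (1.0 + np.exp(-k * (t - t0)))

    years = np.arange(1995, 2006)
    cumulative_records = np.array([3, 5, 9, 15, 24, 38, 60, 90, 130, 180, 240], float)

    (L, k, t0), _ = curve_fit(logistic, years, cumulative_records,
                              p0=[500.0, 0.5, 2005.0], maxfev=10000)

    # A small fraction of the estimated saturation level suggests the technology
    # is still in the initial growth period of its life cycle.
    print(f"saturation L={L:.0f}, growth rate k={k:.2f}, inflection year t0={t0:.1f}")
    print(f"current maturity: {cumulative_records[-1] / L:.1%}")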

    Innovation, skills and performance in the downturn: an analysis of the UK innovation survey 2011

    The link between firms’ innovation performance and economic cycles, especially major downturns such as that of 2008-10, is a matter of great policy significance, but is relatively under-researched, at least at the level of micro data on business behaviour. It is, for example, often argued that economies need to ‘innovate out of recessions’, since innovation is positively associated with improvements in productivity that then lead to growth and better employment (Nesta, 2009). The issues of how individual firms respond to downturns through their investment in innovation, and how this impacts on innovation outputs and ultimately business performance and growth during and after downturns, have been less studied because relevant data have not been readily available. The UK Innovation Survey (UKIS) 2011 now makes this possible. The UKIS 2011, with reference period 2008 to 2010, covers the downturn in economic activity generated by the global financial crash. The build-up of panels over the life of the UKIS also supports analysis of the longer-term interactions between innovation and the business cycle. This report analyses the last four waves of the surveys. Further, the latest survey includes questions on whether firms employ a specific set of skills, which adds materially to the ability to research the role of skills and human capital in innovation at the micro level.

    Exploratory topic modeling with distributional semantics

    As we continue to collect and store textual data in a multitude of domains, we are regularly confronted with material whose largely unknown thematic structure we want to uncover. With unsupervised, exploratory analysis, no prior knowledge about the content is required and highly open-ended tasks can be supported. In the past few years, probabilistic topic modeling has emerged as a popular approach to this problem. Nevertheless, the representation of the latent topics as aggregations of semi-coherent terms limits their interpretability and level of detail. This paper presents an alternative approach to topic modeling that maps topics as a network for exploration, based on distributional semantics using learned word vectors. From the granular level of terms and their semantic similarity relations, global topic structures emerge as clustered regions and gradients of concepts. Moreover, the paper discusses the visual interactive representation of the topic map, which plays an important role in supporting its exploration. Comment: The Fourteenth International Symposium on Intelligent Data Analysis (IDA 2015).
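    A minimal sketch of the mapping idea: learn word vectors from a corpus, then cluster terms by vector similarity so that topic-like regions emerge from the term level upward. The toy corpus, the gensim 4.x API, and the use of k-means (rather than the paper's actual clustering and map layout) are assumptions for illustration.

    from gensim.models import Word2Vec
    from sklearn.cluster import KMeans

    corpus = [
        ["patent", "citation", "forecast", "innovation"],
        ["neural", "network", "attention", "sequence"],
        ["ceramic", "powder", "material", "nanosized"],
        ["patent", "innovation", "technology", "forecast"],
        ["word", "vector", "topic", "cluster", "semantics"],
    ]

    # Learn distributional word vectors from the (toy) corpus.
    model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=200)

    terms = list(model.wv.index_to_key)
    vectors = model.wv[terms]                  # (n_terms, 50) matrix of word vectors

    # Clustered regions of the term space act as topic candidates.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
    for cluster in range(3):
        print(cluster, [t for t, l in zip(terms, labels) if l == cluster])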

    Prediction of Emerging Technologies Based on Analysis of the U.S. Patent Citation Network

    The network of patents connected by citations is an evolving graph which provides a representation of the innovation process. A patent citing another implies that the cited patent reflects a piece of previously existing knowledge that the citing patent builds upon. The methodology presented here (i) identifies actual clusters of patents, i.e. technological branches, and (ii) gives predictions about the temporal changes in the structure of the clusters. A predictor, called the citation vector, is defined for characterizing technological development by showing how a patent cited by other patents belongs to various industrial fields. The clustering technique adopted is able to detect new emerging recombinations and predicts emerging new technology clusters. The predictive ability of our new method is illustrated on the example of USPTO subcategory 11, Agriculture, Food, Textiles. A cluster of patents determined from citation data up to 1991 shows significant overlap with class 442, formed at the beginning of 1997. These new tools of predictive analytics could support policy decision-making processes in science and technology and help formulate recommendations for action.
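    A sketch of the two ingredients named above, under simplified assumptions: (i) detect clusters of patents (candidate technological branches) in a citation graph, and (ii) build a citation vector per patent, here taken to be the distribution of its citing patents over technology classes. The toy edges and class labels are illustrative, and greedy modularity stands in for the paper's actual clustering technique.

    from collections import Counter
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Directed edge (a, b) means patent a cites patent b.
    citations = [("P1", "P2"), ("P3", "P2"), ("P3", "P1"),
                 ("P4", "P5"), ("P6", "P5"), ("P6", "P4")]
    tech_class = {"P1": "442", "P2": "442", "P3": "442",
                  "P4": "800", "P5": "800", "P6": "800"}

    G = nx.DiGraph(citations)

    # (i) clusters of patents, i.e. candidate technological branches
    branches = greedy_modularity_communities(G.to_undirected())
    print("branches:", [sorted(b) for b in branches])

    # (ii) citation vector: how a patent's citing patents spread over classes
    def citation_vector(patent):
        citing = [u for u, v in G.in_edges(patent)]
        counts = Counter(tech_class[p] for p in citing)
        total = sum(counts.values()) or 1
        return {cls: n / total for cls, n in counts.items()}

    print("citation vector of P2:", citation_vector("P2"))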

    Patents and the Survival of Internet-related IPOs

    We examine the effect of patenting on the survival prospects of 356 internet-related firms that IPO'd at the height of the stock market bubble of the late 1990s. By March 2005, nearly two-thirds of these firms had delisted from the NASDAQ exchange. Although changes in the legal environment in the US in the 1990s made it much easier to obtain patents on software and, ultimately, on business methods, fewer than half of the firms in this sample obtained, or attempted to obtain, patents. For those that did, we hypothesize that patents conferred competitive advantages that translate into a higher probability of survival, though they may also simply be a signal of firm quality. Controlling for age, venture-capital backing, financial characteristics, and stock market conditions, patenting is positively associated with survival. Quite different processes appear to govern exit via acquisition compared to exit via delisting from the exchange due to business failure. Firms that applied for more patents were less likely to be acquired, though obtaining unusually highly cited patents may make them more attractive acquisition targets. These findings do not hold for business method patents, which do not appear to confer a survival advantage.
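    A hedged sketch of the kind of survival analysis described above: regress time to delisting on patenting and controls with a Cox proportional-hazards model. The toy data frame and the use of the lifelines package are illustrative assumptions; the paper's controls and its separate treatment of exit via acquisition versus failure are richer than this.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical firm-level data: listing duration, exit indicator, covariates.
    firms = pd.DataFrame({
        "months_listed": [18, 60, 72, 24, 66, 30, 70, 40, 55, 12],
        "delisted":      [1, 0, 0, 1, 0, 1, 0, 1, 0, 1],   # 1 = exited the exchange
        "has_patent":    [0, 1, 1, 0, 1, 1, 0, 0, 1, 0],
        "vc_backed":     [1, 1, 0, 0, 1, 0, 1, 1, 0, 1],
    })

    cph = CoxPHFitter()
    cph.fit(firms, duration_col="months_listed", event_col="delisted")
    cph.print_summary()   # a negative coefficient on has_patent means a lower exit hazard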

    Prospect patents, data markets, and the commons in data-driven medicine : openness and the political economy of intellectual property rights

    Scholars who point to political influences and the regulatory function of patent courts in the USA have long questioned the courts’ subjective interpretation of what ‘things’ can be claimed as inventions. The present article sheds light on a different but related facet: the role of the courts in regulating knowledge production. I argue that the recent cases decided by the US Supreme Court and the Federal Circuit, which made diagnostics and software very difficult to patent and which attracted criticism for a wealth of different reasons, are fine case studies of the current debate over the proper role of the state in regulating the marketplace and knowledge production in the emerging information economy. The article explains that these patents are prospect patents that may be used by a monopolist to collect data that everybody else needs in order to compete effectively. As such, they raise familiar concerns about failures of coordination that emerge when a monopolist controls a resource, such as datasets, that others need and cannot replicate. In effect, the courts regulated the market, primarily focusing on ensuring the free flow of data in the emerging marketplace, very much in the spirit of the ‘free the data’ language in various policy initiatives, yet at the same time with an eye to boosting downstream innovation. In doing so, these decisions essentially endorse practices of personal information processing which constitute a new type of public domain: a source of raw materials which are there for the taking and which have become most important inputs to commercial activity. From this vantage point, the legal interpretation of the private and the shared legitimizes a model of data extraction from individuals, the raw material of information capitalism, that will fuel the next generation of data-intensive therapeutics in the field of data-driven medicine.