
    Simulation in manufacturing and business: A review

    Copyright © 2009 Elsevier B.V. This paper reports the results of a review of simulation applications published in the peer-reviewed literature between 1997 and 2006, providing an up-to-date picture of the role of simulation techniques within manufacturing and business. The review is characterised by three factors: wide coverage, a broad scope of simulation techniques, and a focus on real-world applications. A structured methodology was followed to narrow the search from around 20,000 papers down to 281. The results reveal interesting trends and patterns. For instance, although discrete-event simulation is the most popular technique, it shows lower stakeholder engagement than other techniques such as system dynamics or gaming; this is highly correlated with modelling lead time and purpose. Across application areas, modelling is most often used for scheduling. Finally, the review shows an increasing interest in hybrid modelling as an approach to coping with complex enterprise-wide systems.

    Selection Bias in News Coverage: Learning it, Fighting it

    News entities must select and filter the coverage they broadcast through their respective channels, since the set of world events is too large to be treated exhaustively. The subjective nature of this filtering induces biases due to, among other things, resource constraints, editorial guidelines, ideological affinities, or even the fragmented nature of the information at a journalist's disposal. The magnitude and direction of these biases are, however, widely unknown. The absence of ground truth, the sheer size of the event space, and the lack of an exhaustive set of absolute features to measure make it difficult to observe the bias directly, to characterize the leaning's nature, and to factor it out to ensure neutral news coverage. In this work, we introduce a methodology to capture the latent structure of media's decision process on a large scale. Our contribution is threefold. First, we show media coverage to be predictable using personalization techniques, and evaluate our approach on a large set of events collected from the GDELT database. We then show that a personalized and parametrized approach not only exhibits higher accuracy in coverage prediction but also provides an interpretable representation of the selection bias. Last, we propose a method able to select a set of sources by leveraging the latent representation. These selected sources provide more diverse and egalitarian coverage, all while retaining the most actively covered events.
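The "personalization" framing above can be made concrete with a small sketch: give each source a latent vector and a bias, give each event a latent vector, and model the probability that the source covers the event as a logistic score of their dot product. All names, dimensions, and the plain-SGD training loop here are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch of coverage prediction as personalization:
# P(source s covers event e) = sigmoid(b_s + u_s . v_e).
# Assumed structure for demonstration only, not the paper's model.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class CoverageModel:
    def __init__(self, n_sources, n_events, dim=8, seed=0):
        rng = random.Random(seed)
        # Small random init breaks symmetry between latent dimensions.
        self.u = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_sources)]
        self.v = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_events)]
        self.b = [0.0] * n_sources  # per-source base coverage rate

    def predict(self, s, e):
        return sigmoid(self.b[s] + sum(a * b for a, b in zip(self.u[s], self.v[e])))

    def fit(self, triples, lr=0.3, epochs=500):
        """triples: (source, event, covered in {0, 1}); SGD on log-loss."""
        for _ in range(epochs):
            for s, e, y in triples:
                g = self.predict(s, e) - y  # gradient of log-loss wrt the score
                self.b[s] -= lr * g
                for k in range(len(self.u[s])):
                    us, ve = self.u[s][k], self.v[e][k]
                    self.u[s][k] -= lr * g * ve
                    self.v[e][k] -= lr * g * us
```

The learned per-source vectors, not the predictions themselves, are what would carry the interpretable picture of selection bias in a setup like this.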

    Projections of epidemic transmission and estimation of vaccination impact during an ongoing Ebola virus disease outbreak in Northeastern Democratic Republic of Congo, as of Feb. 25, 2019.

    Background: As of February 25, 2019, 875 cases of Ebola virus disease (EVD) had been reported in North Kivu and Ituri Provinces, Democratic Republic of Congo. Since the beginning of October 2018, the outbreak has largely shifted into regions of active armed conflict, in which EVD cases and their contacts have been difficult for health workers to reach. We used available data on the current outbreak, together with case-count time series from prior outbreaks, to project its short-term and long-term course.
    Methods: For short- and long-term projections, we modeled Ebola virus transmission using a stochastic branching process that assumes gradually quenching transmission rates estimated from past EVD outbreaks, with outbreak trajectories conditioned on agreement with the course of the current outbreak, and with multiple levels of vaccination coverage. We also used two regression models over similar projection periods: short-term projections were estimated with negative binomial autoregression, and long-term projections with Theil-Sen regression. In addition, we used Gott's rule to produce a baseline minimum-information projection. We then constructed an ensemble of forecasts to be compared and recorded for future evaluation against final outcomes. From August 20, 2018 to February 25, 2019, short-term model projections were validated against known case counts.
    Results: During validation of short-term projections, at horizons of one to four weeks, the models consistently scored higher on shorter-term forecasts. Based on case counts as of February 25, the stochastic model projected a median count of 933 cases by February 18 (95% prediction interval: 872-1054) and 955 cases by March 4 (95% prediction interval: 874-1105), while the autoregression model projected median counts of 889 (95% prediction interval: 876-933) and 898 (95% prediction interval: 877-983) cases for those dates, respectively. Projected median final counts range from 953 to 1,749. Although the outbreak is already larger than all past Ebola outbreaks other than the 2013-2016 outbreak of over 26,000 cases, our models do not project that it is likely to grow to that scale. The stochastic model estimates that vaccination coverage in this outbreak is lower than reported in its trial setting in Sierra Leone.
    Conclusions: Our projections are concentrated in a range of up to about 300 cases beyond those already reported. While a catastrophic outbreak is not projected, it is not ruled out, and prevention and vigilance are warranted. Prospective validation of our models in real time allowed us to generate more accurate short-term forecasts, and this process may prove useful for future real-time short-term forecasting. We estimate that transmission rates are higher than would be seen under target levels of 62% coverage by contact tracing and vaccination, and this model estimate may offer a surrogate indicator for the challenges of the outbreak response.
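Two of the baseline tools named in the abstract are simple enough to sketch: Theil-Sen regression is the median of all pairwise slopes (robust to noisy case counts), and Gott's rule turns the current cumulative count alone into a wide minimum-information interval. These are rough illustrations of the general techniques, not the paper's calibrated models.

```python
# Rough sketches of Theil-Sen regression and Gott's rule, two of the
# baseline projection tools named above. Illustrative only; not the
# paper's calibrated models.
import statistics
from itertools import combinations

def theil_sen(xs, ys):
    """Median slope over all point pairs, plus a matching intercept."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2)
              if x2 != x1]
    slope = statistics.median(slopes)
    intercept = statistics.median(y - slope * x for x, y in zip(xs, ys))
    return slope, intercept

def gott_final_size(current_cases, ci=0.95):
    """Gott's delta-t argument applied to cumulative counts: with
    probability ci, future cases f satisfy n/r <= f <= n*r, where
    r = (1 + ci) / (1 - ci), so the final total lies in
    [n * (1 + 1/r), n * (1 + r)]."""
    r = (1 + ci) / (1 - ci)  # r = 39 at the 95% level
    return current_cases * (1 + 1 / r), current_cases * (1 + r)
```

Applied to the 875 cases reported as of February 25, the Gott interval is deliberately uninformative (roughly 897 to 35,000), which is why the paper uses it only as a minimum-information baseline for the ensemble.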

    Measuring internet activity: a (selective) review of methods and metrics

    Two decades after the birth of the World Wide Web, more than two billion people around the world are Internet users. The digital landscape is littered with hints that the affordances of digital communications are being leveraged to transform life in profound and important ways. The reach and influence of digitally mediated activity grow by the day and touch upon all aspects of life, from health, education, and commerce to religion and governance. This trend demands that we seek answers to the biggest questions about how digitally mediated communication changes society, and about the role of different policies in helping or hindering the beneficial aspects of these changes. Yet despite the profusion of data the digital age has brought upon us—we now have access to a flood of information about the movements, relationships, purchasing decisions, interests, and intimate thoughts of people around the world—the distance between the great questions of the digital age and our understanding of the impact of digital communications on society remains large. A number of ongoing policy questions have emerged that call for better empirical data and analyses on which to base wider and more insightful perspectives on the mechanics of social, economic, and political life online. This paper describes the conceptual and practical impediments to measuring and understanding digital activity and highlights a sample of the many efforts to fill the gap between our incomplete understanding of digital life and the formidable policy questions related to developing a vibrant and healthy Internet that serves the public interest and contributes to human wellbeing. Our primary focus is on efforts to measure Internet activity, as we believe obtaining robust, accurate data is a necessary and valuable first step that will lead us closer to answering the vitally important questions of the digital realm. Even this step is challenging: the Internet is difficult to measure and monitor, and there is no simple aggregate measure of Internet activity—no GDP, no HDI. In the following section we present a framework for assessing efforts to document digital activity. The next three sections offer a summary and description of many of the ongoing projects that document digital activity, with two final sections devoted to discussion and conclusions.

    A Dynamic Embedding Model of the Media Landscape

    Information about world events is disseminated through a wide variety of news channels, each with specific considerations in the choice of its reporting. Although the multiplicity of these outlets should ensure a variety of viewpoints, recent reports suggest that the rising concentration of media ownership may void this assumption. This observation motivates the study of the impact of ownership on the global media landscape and its influence on the coverage an actual viewer receives. To this end, the selection of reported events has been shown to be informative about the high-level structure of the news ecosystem. However, existing methods only provide a static view into an inherently dynamic system, yielding underperforming statistical models and hindering our understanding of the media landscape as a whole. In this work, we present a dynamic embedding method that learns to capture the decision process of individual news sources in their selection of reported events, while also enabling the systematic detection of large-scale transformations in the media landscape over prolonged periods of time. In an experiment covering over 580M real-world event mentions, we show our approach to outperform static embedding methods in predictive terms. We demonstrate the potential of the method for news monitoring applications and investigative journalism by shedding light on important changes in programming induced by mergers and acquisitions, policy changes, or network-wide content diffusion. These findings offer evidence of strong content convergence trends inside large broadcasting groups, influencing the news ecosystem in a time of increasing media ownership concentration.

    Natural language processing

    Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems - text summarization, information extraction, information retrieval, etc., including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.

    Recommender Systems

    The ongoing rapid expansion of the Internet greatly increases the necessity of effective recommender systems for filtering the abundant information. Extensive research on recommender systems is conducted by a broad range of communities, including social and computer scientists, physicists, and interdisciplinary researchers. Despite substantial theoretical and practical achievements, unification and comparison of different approaches are lacking, which impedes further advances. In this article, we review recent developments in recommender systems and discuss the major challenges. We compare and evaluate available algorithms and examine their roles in future developments. In addition to algorithms, physical aspects are described to illustrate macroscopic behavior of recommender systems. Potential impacts and future directions are discussed. We emphasize that recommendation has great scientific depth and combines diverse research fields, which makes it of interest to physicists as well as interdisciplinary researchers. (97 pages, 20 figures; to appear in Physics Reports.)
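As a concrete anchor for the algorithm comparisons the review discusses, one of the classic approaches, user-based collaborative filtering with cosine similarity, fits in a few lines. The data and names below are invented for illustration and are not drawn from the article.

```python
# Minimal user-based collaborative filtering: score a user's unseen items
# by the similarity-weighted ratings of other users. Illustrative sketch
# of one classic technique surveyed in the review, not its code.
import math
from collections import defaultdict

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts {item: rating}."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(ratings, user, k=10):
    """Rank items the user has not rated by similarity-weighted scores."""
    scores, weights = defaultdict(float), defaultdict(float)
    for other, r in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], r)
        if sim <= 0:
            continue  # ignore dissimilar (or orthogonal) users
        for item, val in r.items():
            if item not in ratings[user]:
                scores[item] += sim * val
                weights[item] += sim
    ranked = sorted(((s / weights[i], i) for i, s in scores.items()), reverse=True)
    return [i for _, i in ranked[:k]]
```

Example: with ratings = {"ann": {"x": 5, "y": 3}, "bob": {"x": 4, "y": 3, "z": 5}, "cat": {"x": 1, "z": 2}}, recommend(ratings, "ann") surfaces "z", the only item ann has not rated, weighted mostly by bob's high rating because bob's tastes align with ann's.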

    An Internet Heartbeat

    Obtaining sound inferences over remote networks via active or passive measurements is difficult. Active measurement campaigns face challenges of load, coverage, and visibility. Passive measurements require a privileged vantage point. Even networks under our own control too often remain poorly understood and hard to diagnose. As a step toward the democratization of Internet measurement, we consider the inferential power possible were the network to include a constant and predictable stream of dedicated lightweight measurement traffic. We posit an Internet "heartbeat," which nodes periodically send to random destinations, and show how aggregating heartbeats facilitates introspection into parts of the network that are today largely opaque. We explore the design space of an Internet heartbeat, potential use cases, incentives, and paths to deployment.
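The heartbeat idea lends itself to a small sketch: each node periodically packs a tiny self-describing probe and addresses it to a uniformly random routable destination. The wire format, field sizes, and magic value below are assumptions made for illustration; the paper does not fix a format.

```python
# Hypothetical sketch of the "heartbeat" idea: a fixed-size probe
# (magic, node id, sequence number, timestamp) sent to a random
# destination. Format and fields are illustrative assumptions.
import ipaddress
import random
import struct
import time

MAGIC = 0x4842  # "HB" -- assumed marker so collectors can filter heartbeats

def random_destination(rng):
    """Pick a random globally routable IPv4 address (skip reserved space)."""
    while True:
        addr = ipaddress.IPv4Address(rng.getrandbits(32))
        if addr.is_global and not addr.is_multicast:
            return addr

def make_heartbeat(node_id, seq, now=None):
    """Pack an 18-byte heartbeat: magic, node id, sequence, timestamp."""
    now = time.time() if now is None else now
    return struct.pack("!HIIQ", MAGIC, node_id, seq, int(now))

def parse_heartbeat(payload):
    """Unpack a heartbeat; reject payloads without the magic marker."""
    magic, node_id, seq, ts = struct.unpack("!HIIQ", payload)
    if magic != MAGIC:
        raise ValueError("not a heartbeat")
    return node_id, seq, ts
```

A real deployment would emit these over UDP on some agreed port at a fixed, rate-limited cadence; the sketch stops at packet construction, since the aggregation side only needs to filter on the magic value and join on node id and sequence.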