131 research outputs found

    A computer vision system for detecting and analysing critical events in cities

    Whether for commuting or leisure, cycling is a growing transport mode in many cities worldwide. However, it is still perceived as a dangerous activity. Although serious cycling incidents leading to major injuries are rare, the fear of being hit or falling hinders the expansion of cycling as a major transport mode. Indeed, it has been shown that focusing on serious injuries only touches the tip of the iceberg: near miss data can provide much more information about potential problems and how to avoid risky situations that may lead to serious incidents. Unfortunately, there is a gap in knowledge when it comes to identifying and analysing near misses, which hinders drawing statistically significant conclusions and proposing built-environment measures that make streets safer for people on bikes. In this research, we develop a method to detect and analyse near misses and their risk factors using artificial intelligence. This is accomplished by analysing video streams linked to near miss incidents within a novel framework relying on deep learning and computer vision. The framework automatically detects near misses and extracts their risk factors from video streams before analysing their statistical significance. It also provides practical solutions, implemented in a camera with embedded AI (URBAN-i Box) and a cloud-based service (URBAN-i Cloud), for tackling the stated issue in real-world settings by researchers, policy-makers, or citizens. The research aims to provide human-centred evidence that may enable policy-makers and planners to deliver a safer built environment for cycling in London or elsewhere. More broadly, this research aims to contribute to the scientific literature with the theoretical and empirical foundations of a computer vision system that can be utilised for detecting and analysing other critical events in a complex environment. Such a system can be applied to a wide range of events, such as traffic incidents, crime, or overcrowding.
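    The abstract describes the pipeline only at a high level. The sketch below is a hypothetical, minimal illustration of the kind of frame-level logic such a framework might use: read a video stream, detect road users, and flag frames where a cyclist and a vehicle come unusually close. `detect_road_users`, the class labels, and the pixel threshold are assumptions introduced for illustration, not part of the URBAN-i implementation.

```python
import cv2  # OpenCV for reading video streams

NEAR_MISS_PIXEL_THRESHOLD = 80  # assumed value, for illustration only


def detect_road_users(frame):
    """Placeholder detector: return a list of (label, (cx, cy)) tuples.
    In practice this would be a deep-learning object detector."""
    return []


def centre_distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def scan_stream(path):
    """Return indices of frames that look like candidate near misses."""
    cap = cv2.VideoCapture(path)
    candidates = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detections = detect_road_users(frame)
        cyclists = [pos for label, pos in detections if label == "cyclist"]
        vehicles = [pos for label, pos in detections if label == "vehicle"]
        # Flag the frame if any cyclist-vehicle pair is closer than the threshold.
        if any(centre_distance(c, v) < NEAR_MISS_PIXEL_THRESHOLD
               for c in cyclists for v in vehicles):
            candidates.append(frame_idx)
        frame_idx += 1
    cap.release()
    return candidates
```

    A real system would of course reason in world coordinates rather than pixels and track objects across frames; the point here is only the detect-then-threshold structure.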

    Robustness of Defenses against Deception Attacks


    Theory and Application of Dynamic Spatial Time Series Models

    Stochastic economic processes are often characterized by dynamic interactions between variables that are dependent in both space and time. Analyzing these processes raises a number of practically and theoretically interesting questions about the econometric methods used. This work studies econometric approaches for analyzing spatial data that evolve dynamically over time. The book provides a background on least squares and maximum likelihood estimators and discusses some of the limits of basic econometric theory. It then discusses the importance of addressing spatial heterogeneity in policies. The subsequent chapters cover parametric modeling of linear and nonlinear spatial time series, non-parametric modeling of nonlinearities in panel data, modeling of multiple spatial time series variables that exhibit long and short memory, and probabilistic causality in spatial time series settings.
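    As a point of reference for the parametric chapters, a canonical dynamic spatial lag specification can be sketched as follows; this is the standard textbook form, not an equation reproduced from the book:

\[
y_t = \tau\, y_{t-1} + \rho\, W y_t + \eta\, W y_{t-1} + X_t \beta + \varepsilon_t,
\qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2 I_N),
\]

    where \(y_t\) is the \(N \times 1\) vector of observations across locations at time \(t\), \(W\) is a pre-specified \(N \times N\) spatial weights matrix, \(\rho\) captures contemporaneous spatial dependence, \(\tau\) temporal dependence, and \(\eta\) spatio-temporal diffusion.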

    MODELING THE LEADERSHIP OF LANGUAGE CHANGE FROM DIACHRONIC TEXT

    Natural languages constantly change over time. These changes are modulated by social factors, such as influence, that are not always directly observable. However, large-scale computational modeling of language change using timestamped text can uncover the latent organization and social structure. In turn, the social dynamics of language change can potentially illuminate our understanding of innovation, influence, and identity: Who leads? Who follows? Who diverges? This thesis contributes to the growing body of research on using computational methods to model language change, with a focus on quantifying the linguistic leadership of change. A series of studies highlights the unique contributions of this thesis: methods that scale to huge volumes of data; measures that quantify leadership at the level of individuals or in aggregate; and analyses that link linguistic leadership to other forms of influence. First, temporal and predictive models of event cascades on a network of millions of Twitter users are used to show that lexical change spreads in the form of a contagion and that influence from densely embedded ties is crucial for the adoption of non-standard terms. A Granger-causal test for detecting social influence in event cascades on a network is then presented, which is robust to the presence of confounds such as homophily and can be applied to model both linguistic and non-linguistic change in a network. Next, a novel scheme to score and identify documents that lead semantic change in progress is introduced. This linguistic measure of influence is strongly predictive of the documents' influence in terms of the number of citations they receive, for both US court opinions and scientific articles. Subsequently, a measure of lead on any semantic change between a pair of document sources (e.g. newspapers) and a method to aggregate multiple lead-lag relationships into a network are presented. Analysis of an induced network of nineteenth-century abolitionist newspapers, following the proposed method, reveals the important yet understated role of women and Black editors in shaping the discourse on abolitionism. Finally, a method to induce an aggregate semantic leadership network using contextual word representations is proposed to investigate the link between semantic leadership and influence in the form of citations among publication venues that are part of the Association for Computational Linguistics. Taken together, these studies illustrate the utility of finding leaders of language change for gaining insights in sociolinguistics and for applications in social science and the digital humanities.
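    As a toy illustration of the lead-lag idea, and not the thesis's actual measures (which are defined over semantic change and aggregated into a network), one way to score which of two sources adopts a change earlier is to find the time shift that best aligns their usage series. The function name and the cross-correlation criterion below are illustrative assumptions.

```python
import numpy as np


def lead_lag(series_a, series_b, max_lag=10):
    """Return the lag (in time steps) at which series_a best aligns with
    series_b; a positive value means source A leads source B."""
    a = np.asarray(series_a, dtype=float)
    b = np.asarray(series_b, dtype=float)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()

    def corr_at(lag):
        # Correlate a at time t with b at time t + lag.
        if lag > 0:
            return np.corrcoef(a[:-lag], b[lag:])[0, 1]
        if lag < 0:
            return np.corrcoef(a[-lag:], b[:lag])[0, 1]
        return np.corrcoef(a, b)[0, 1]

    return max(range(-max_lag, max_lag + 1), key=corr_at)
```

    Repeating such a pairwise score over many changes and many source pairs, and keeping only reliable leads, conveys the intuition behind aggregating lead-lag relationships into a leadership network.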

    Analyzing Granger causality in climate data with time series classification methods

    Attribution studies in climate science aim to scientifically ascertain the influence of climatic variations on natural or anthropogenic factors. Many of these studies adopt the concept of Granger causality to infer statistical cause-effect relationships, while utilizing traditional autoregressive models. In this article, we investigate the potential of state-of-the-art time series classification techniques to enhance causal inference in climate science. We conduct a comparative experimental study of different types of algorithms on a large test suite that comprises a unique collection of datasets from the area of climate-vegetation dynamics. The results indicate that specialized time series classification methods are able to improve existing inference procedures. Substantial differences are observed among the methods that were tested.
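    For context, the classical autoregressive Granger test that such attribution studies traditionally start from can be sketched as follows: the candidate cause x is said to Granger-cause y if adding lagged values of x significantly improves an autoregressive prediction of y. The code below is a minimal illustration of that baseline with an arbitrary lag order; it is not the authors' implementation, and the article's point is precisely to enhance this step with time series classification methods.

```python
import numpy as np
from scipy import stats


def granger_f_test(y, x, p=3):
    """F-test comparing an AR(p) model of y against a model that also
    includes p lags of the candidate cause x."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(y) - p
    target = y[p:]
    # Lag matrices: column k holds the series shifted back by k steps.
    lags_y = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    ones = np.ones((n, 1))

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return float(resid @ resid)

    rss_restricted = rss(np.hstack([ones, lags_y]))   # y lags only
    design_full = np.hstack([ones, lags_y, lags_x])   # plus x lags
    rss_full = rss(design_full)

    df_full = n - design_full.shape[1]
    f_stat = ((rss_restricted - rss_full) / p) / (rss_full / df_full)
    p_value = 1.0 - stats.f.cdf(f_stat, p, df_full)
    return f_stat, p_value
```

    A small p-value indicates that the lags of x carry predictive information about y beyond y's own history, which is the operational meaning of Granger causality used as the baseline here.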

    A New Framework for Decomposing Multivariate Information

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much-criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. This thesis presents a new framework for information decomposition that is based upon the decomposition of pointwise mutual information rather than mutual information. The framework is derived in two separate ways. The first derivation is based upon a modified version of the original axiomatic approach taken by Williams and Beer. However, to overcome the difficulty associated with signed pointwise mutual information, the decomposition is applied separately to the unsigned entropic components of pointwise mutual information, which are referred to as the specificity and the ambiguity. This yields a separate redundancy lattice for each component. Based upon an operational interpretation of redundancy, measures of redundant specificity and redundant ambiguity are defined, which enables one to evaluate the partial information atoms separately for each lattice. These separate atoms can then be recombined to yield the sought-after multivariate information decomposition. This framework is applied to canonical examples from the literature, and the results and various properties of the decomposition are discussed. In particular, the pointwise decomposition using specificity and ambiguity is shown to satisfy a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. The second approach begins by considering the distinct ways in which two marginal observers can share their information with a non-observing third party. Several novel measures of information content are introduced, namely the union, intersection, and unique information contents. Next, the algebraic structure of these new measures of shared marginal information is explored, and it is shown that the structure of shared marginal information is that of a distributive lattice. Furthermore, by using the fundamental theorem of distributive lattices, it is shown that these new measures are isomorphic to a ring of sets. Finally, by combining this structure with the semi-lattice of joint information, the redundancy lattice from partial information decomposition is found to be embedded within this larger algebraic structure. However, since this structure considers information contents, it is actually equivalent to the specificity lattice from the first derivation of the pointwise partial information decomposition. The thesis then closes with a discussion about whether or not one should combine the information contents from the specificity and ambiguity lattices.
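    For readers unfamiliar with the terminology, the split that the first derivation builds on can be written with the standard pointwise identity; the labels follow the abstract, and the identity itself is elementary:

\[
i(s;t) \;=\; \log\frac{p(s\mid t)}{p(s)} \;=\; \underbrace{h(s)}_{\text{specificity}} \;-\; \underbrace{h(s\mid t)}_{\text{ambiguity}},
\qquad h(s) = -\log p(s), \quad h(s\mid t) = -\log p(s\mid t).
\]

    Both \(h(s)\) and \(h(s\mid t)\) are non-negative, whereas \(i(s;t)\) can be negative; decomposing the two unsigned components over their own lattices is what sidesteps the sign problem mentioned above.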

    Feature Papers of Forecasting

    Nowadays, forecasting applications are receiving unprecedented attention thanks to their capability to improve decision-making processes by providing useful indications. A large number of forecasting approaches, tailored to different forecast horizons and to the specific problem to be predicted, have been proposed in the recent scientific literature, ranging from physical models to data-driven statistical and machine learning approaches. This Special Issue collects the most recent high-quality research on forecasting. A total of nine papers have been selected to represent a wide range of applications, from weather and environmental predictions to economic and management forecasts. Finally, applications related to forecasting the different phases of COVID-19 in Spain and photovoltaic power production are presented.