
    Architectures and GPU-Based Parallelization for Online Bayesian Computational Statistics and Dynamic Modeling

    Recent work demonstrates that coupling Bayesian computational statistics with dynamic models can facilitate the analysis of complex systems associated with diverse time series, including those involving social and behavioural dynamics. Particle Markov Chain Monte Carlo (PMCMC) methods constitute a particularly powerful class of Bayesian methods combining aspects of batch Markov Chain Monte Carlo (MCMC) and the sequential Monte Carlo method of Particle Filtering (PF). PMCMC can flexibly combine theory-capturing dynamic models with diverse empirical data. Online machine learning is a subcategory of machine learning algorithms characterized by sequential, incremental execution as new data arrives, yielding updated results and predictions as the sequence of available incoming data grows. While many machine learning and statistical methods have been adapted to online algorithms, PMCMC is one of the many methods whose compatibility with, and adaptation to, online learning remains unclear.

    In this thesis, I proposed a data-streaming solution supporting PF and PMCMC methods with dynamic epidemiological models and demonstrated several successful applications. By constructing an automated, easy-to-use streaming system, analytic applications and simulation models gain access to real-time data as it arrives, shortening the gap between data and the resulting model-supported insight. The well-defined architecture emerging from the thesis substantially expands the potential of traditional simulation models by allowing such models to be offered as continually updated services. Contingent on sufficiently fast execution, simulation models within this framework can consume incoming empirical data in real time and generate informative predictions on an ongoing basis as new data points arrive.

    In a second line of work, I investigated the platform's flexibility and capability by extending this system to support a powerful class of PMCMC algorithms with dynamic models while ameliorating such algorithms' traditionally severe performance limitations. Specifically, this work designed and implemented a GPU-enabled parallel version of a PMCMC method with dynamic simulation models. The resulting codebase has enabled researchers to readily adapt their models to state-of-the-art statistical inference methods and ensures that the computation-heavy PMCMC method can perform significant sampling between the successive arrivals of new data points. Investigating this method's impact with several realistic PMCMC application examples showed that GPU-based acceleration allows for up to a 160x speedup compared to a corresponding CPU-based version not exploiting parallelism. The GPU-accelerated PMCMC and the stream-processing system complement each other, jointly providing researchers with a powerful toolset to greatly accelerate learning and secure additional insight from the high-velocity data increasingly prevalent within social and behavioural spheres. The design philosophy applied supports a platform with broad generalizability and potential for ready future extension. The thesis discusses common barriers and difficulties in designing and implementing such systems and offers solutions to mitigate them.
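    To make the PF/PMCMC machinery concrete, the following minimal sketch implements a bootstrap particle filter for a simple stochastic SIR model. The model, parameters, and Poisson observation scheme are illustrative assumptions, not the thesis codebase; the point is that the per-particle propagation and weighting steps are independent, which is exactly the structure that maps onto GPU threads and underlies the reported speedups.

```python
import numpy as np

def propagate(state, beta, gamma, dt, rng):
    """Advance every particle one stochastic SIR step (vectorized over particles)."""
    S, I, R = state[:, 0], state[:, 1], state[:, 2]
    N = S + I + R
    p_inf = 1.0 - np.exp(-beta * I / N * dt)   # per-susceptible infection prob.
    p_rec = 1.0 - np.exp(-gamma * dt)          # per-infective recovery prob.
    new_inf = rng.binomial(S.astype(np.int64), p_inf)
    new_rec = rng.binomial(I.astype(np.int64), p_rec)
    return np.stack([S - new_inf, I + new_inf - new_rec, R + new_rec], axis=1)

def pf_loglik(obs, n_particles, beta, gamma, dt=1.0, seed=0):
    """Bootstrap particle filter returning the log-likelihood estimate
    that a PMCMC outer loop would plug into its Metropolis-Hastings ratio."""
    rng = np.random.default_rng(seed)
    state = np.tile([990.0, 10.0, 0.0], (n_particles, 1))  # initial S, I, R
    loglik = 0.0
    for y in obs:                                # each newly arrived data point
        state = propagate(state, beta, gamma, dt, rng)
        lam = np.maximum(state[:, 1], 1e-9)      # Poisson mean = current infectives
        logw = y * np.log(lam) - lam             # Poisson log-weight (constant dropped)
        shift = logw.max()
        w = np.exp(logw - shift)
        loglik += shift + np.log(w.mean())       # running log-likelihood estimate
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())  # resample
        state = state[idx]
    return loglik

# Example: score one hypothetical (beta, gamma) pair against weekly case counts
print(pf_loglik(obs=[12, 25, 48, 60, 41], n_particles=1000, beta=0.5, gamma=0.2))
```

    In a full PMCMC run, this estimator would be evaluated once per proposed parameter pair inside a Metropolis-Hastings loop, so parallelizing the inner particle loop directly shortens the time between successive data arrivals and updated posteriors.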

    Software Engineering for Big Data Systems

    Software engineering is the application of a systematic approach to designing, operating and maintaining software systems, and the study of all the activities involved in achieving the same. The software engineering discipline and research into software systems flourished with the advent of computers and the technological revolution ushered in by the World Wide Web and the Internet. Software systems have grown dramatically to the point of becoming ubiquitous. They have a significant impact on the global economy and on how we interact and communicate with each other and with computers using software in our daily lives. However, there have been major changes in the types of software systems developed over the years. In the past decade, owing to breakthrough advances in cloud and mobile computing technologies, unprecedented volumes of hitherto inaccessible data, referred to as big data, have become available to technology companies and business organizations farsighted and discerning enough to use them to create new products and services, generating astounding profits.

    The advent of big data and of software systems utilizing big data has presented a new sphere of growth for the software engineering discipline. Researchers, entrepreneurs and major corporations are all looking into big data systems to extract the maximum value from the data available to them. Software engineering for big data systems is an emergent field that is starting to witness a great deal of important research activity. This thesis investigates the application of software engineering knowledge areas and standard practices, established over the years by the software engineering research community, to developing big data systems by:

    - surveying the existing software engineering literature on applying software engineering principles to developing and supporting big data systems;
    - identifying the fields of application for big data systems;
    - investigating the software engineering knowledge areas that have seen research related to big data systems;
    - revealing the gaps in those knowledge areas that require more focus for big data systems development; and
    - determining the open research challenges in each software engineering knowledge area that need to be met.

    The analysis and results obtained in this thesis reveal that recent advances in distributed computing, non-relational databases, and machine learning applications have led the software engineering research and business communities to focus primarily on the system design and architecture of big data systems. Despite the instrumental role played by big data systems in the success of several business organizations and technology companies, transforming them into market leaders, developing and maintaining stable, robust, and scalable big data systems remains a distant milestone. This can be attributed to the paucity of much-deserved research attention to more fundamental and equally important software engineering activities such as requirements engineering, testing, and establishing good quality assurance practices for big data systems.

    We need to go deeper: measuring electoral violence using convolutional neural networks and social media

    Electoral violence is conceived of as violence that occurs contemporaneously with elections and that would not have occurred in the absence of an election. While measuring the temporal aspect of this phenomenon is straightforward, measuring whether occurrences of violence are truly related to elections is more difficult. Using machine learning, we measure electoral violence across three elections using disaggregated reporting in social media. We demonstrate that our methodology is more than 30 percent more accurate in measuring electoral violence than previously utilized models. Additionally, we show that our measures of electoral violence conform to theoretical expectations of this form of conflict more closely than those in the event datasets commonly utilized to measure electoral violence, including ACLED, ICEWS, and SCAD. Finally, we demonstrate the validity of our data by developing a qualitative coding ontology.
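    As a concrete illustration of the modeling approach described above, the sketch below builds a small convolutional text classifier of the kind commonly used for short social-media posts (a Kim-style CNN in Keras). The architecture, vocabulary size, and filter widths are illustrative assumptions rather than the authors' published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_text_cnn(vocab_size=20000, seq_len=60, embed_dim=128):
    """Binary classifier: does a tokenized post report electoral violence?"""
    inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    # Parallel convolutions capture 3-, 4-, and 5-token n-gram features,
    # letting the model spot short violent-event phrases anywhere in a post
    convs = [layers.GlobalMaxPooling1D()(
                 layers.Conv1D(100, k, activation="relu")(x))
             for k in (3, 4, 5)]
    x = layers.Concatenate()(convs)
    x = layers.Dropout(0.5)(x)                       # regularize the small model
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_text_cnn()
model.summary()
```

    Pooling over position-invariant n-gram filters suits the noisy, fragmentary language of social media, where the relevant phrase can appear anywhere in a short post.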

    Towards a National Security Analysis Approach via Machine Learning and Social Media Analytics

    Various severe threats at the national and international level, such as health crises, radicalisation, or organised crime, have the potential to unbalance a nation's stability. Such threats impact directly on elements linked to people's security, known in the literature as human security components. Protecting citizens from such risks is the primary objective of the various organisations tasked with safeguarding the legitimacy, stability and security of the state. Given the importance of maintaining security and stability, governments across the globe have been developing a variety of strategies to diminish or negate the devastating effects of the aforementioned threats. Technological progress plays a pivotal role in the evolution of these strategies. Most recently, artificial intelligence has enabled the examination of large volumes of data and the creation of bespoke analytical tools able to perform complex analyses of multiple scenarios, tasks that would usually require significant amounts of human resources.

    Several research projects have already proposed and studied the use of artificial intelligence to analyse crucial problems that impact national security components, such as violence or ideology. However, all of this prior research focused on examining isolated components, whereas understanding national security issues requires studying and analysing a multitude of closely interrelated elements and constructing a holistic view of the problem. The work documented in this thesis aims to fill this gap. Its main contribution is a complete pipeline for constructing a big picture that helps understand national security problems. The proposed pipeline covers several stages, beginning with the analysis of an unfolding event, which produces timely detection points indicating that society might be heading toward a disruptive situation. A further examination based on machine learning techniques then enables the interpretation of an already confirmed crisis in terms of high-level national security concepts. Apart from widely accepted national security theoretical constructs developed over years of social and political research, the second pillar of the approach is modern computational paradigms, especially machine learning and its applications in natural language processing.
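    A minimal sketch of the kind of text-to-concept step the pipeline's later stage performs: a TF-IDF baseline classifier mapping crisis-related posts onto human security components. The component labels and example posts below are hypothetical placeholders, not the thesis's taxonomy or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical labelled examples: post text -> human security component
posts = [
    "clinics overwhelmed as cases surge in the capital",
    "armed group seizes checkpoint near the border town",
]
labels = ["health", "personal"]  # illustrative component taxonomy

# TF-IDF features plus a linear classifier: a simple, interpretable
# baseline for tagging text with high-level national security concepts
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(posts, labels)
print(pipeline.predict(["hospital beds running out across the region"]))
```

    In practice such a baseline would be trained on a much larger labelled corpus and likely replaced by the NLP models the thesis discusses, but it captures the structure of the stage: raw event text in, a human security component out.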