Advances on Concept Drift Detection in Regression Tasks using Social Networks Theory
Mining data streams is one of the main areas of study in machine learning due
to its applications across many knowledge areas. One of the major challenges in
mining data streams is concept drift, which requires the learner to discard the
current concept and adapt to a new one. Ensemble-based drift detection
algorithms have been applied successfully to classification tasks but usually
maintain a fixed-size ensemble of learners, risking needless expenditure of
processing time and memory. In this paper we present improvements to
the Scale-free Network Regressor (SFNR), a dynamic ensemble-based method for
regression that employs social network theory. To detect concept drift, SFNR
uses the Adaptive Windowing (ADWIN) algorithm. Results show
improvements in accuracy, especially in concept drift situations, and better
performance compared to other state-of-the-art algorithms on both real and
synthetic data.
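The ADWIN idea referenced above can be illustrated with a minimal, self-contained sketch (not the SFNR implementation): keep a sliding window over the incoming values and flag a drift when the means of its older and newer halves diverge beyond a Hoeffding-style bound. The window size and delta below are illustrative choices.

```python
import math
from collections import deque

def detect_drift(stream, window=60, delta=0.002):
    """Flag concept drift when the means of the older and newer halves of a
    sliding window differ by more than a Hoeffding-style bound (ADWIN-like)."""
    buf = deque(maxlen=window)
    drifts = []
    for i, x in enumerate(stream):
        buf.append(x)
        if len(buf) == window:
            half = window // 2
            old, new = list(buf)[:half], list(buf)[half:]
            mean_old = sum(old) / half
            mean_new = sum(new) / half
            # Hoeffding-style bound for the difference of two sample means
            eps = math.sqrt((1.0 / (2 * half)) * math.log(4.0 / delta))
            if abs(mean_old - mean_new) > eps:
                drifts.append(i)
                buf.clear()  # discard the outdated concept, as the abstract describes
    return drifts
```

On a stream whose mean jumps from 0.2 to 0.9 at index 200, the detector fires shortly after the change point.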
Ensuring Access to Safe and Nutritious Food for All Through the Transformation of Food Systems
A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms
Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data.
A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting. This could benefit complex sectors which only have scarce data to predict business viability.
To begin executing the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which are organised into a risk taxonomy. Labour was the most commonly reported top challenge; research was therefore conducted into lean principles to improve productivity.
A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and assessed the viability of the UK and Japan cases.
The environmental impact assessment model was developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably-powered VPF can reduce carbon emissions compared to field-based agriculture when considering the land-use change.
The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
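The kind of probabilistic viability assessment described above can be sketched as a small Monte Carlo simulation: draw uncertain yield, price, and operating cost, then discount the annual cash flows. All parameter ranges below are hypothetical placeholders, not data from the thesis.

```python
import random

def simulate_npv(n_sims=10_000, years=10, rate=0.08, seed=42):
    """Monte Carlo sketch of vertical-farm viability.
    Returns (mean NPV, probability of a loss). All ranges are illustrative."""
    rng = random.Random(seed)
    capex = 1_000_000                              # up-front investment (illustrative)
    npvs = []
    for _ in range(n_sims):
        yield_kg = rng.uniform(40_000, 60_000)     # annual crop yield (kg)
        price = rng.uniform(8.0, 12.0)             # sale price per kg
        opex = rng.uniform(250_000, 450_000)       # annual operating cost
        cash = yield_kg * price - opex
        npv = -capex + sum(cash / (1 + rate) ** t for t in range(1, years + 1))
        npvs.append(npv)
    p_loss = sum(npv < 0 for npv in npvs) / n_sims
    return sum(npvs) / n_sims, p_loss
```

The loss probability, rather than a single point estimate, is the kind of risk-empowered output the proposed DSS is meant to surface.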
Modelling uncertainties for measurements of the H → γγ channel with the ATLAS Detector at the LHC
The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this
final state has yielded some of the most precise measurements of the particle. As
measurements of the Higgs boson become increasingly precise, greater import is
placed on the factors that constitute the uncertainty. Reducing the effects of these
uncertainties requires an understanding of their causes. The research presented
in this thesis aims to illuminate how uncertainties on simulation modelling are
determined and proffers novel techniques for deriving them.
The upgrade of the FastCaloSim tool, used for simulating events in the ATLAS
calorimeter at a rate far exceeding that of the nominal detector simulation,
Geant4, is described. The integration of a method that allows the toolbox to
emulate the accordion geometry of the liquid-argon calorimeters is detailed. This tool allows
for the production of larger samples while using significantly fewer computing
resources.
A measurement of the total Higgs boson production cross-section multiplied
by the diphoton branching ratio (σ × Bγγ) is presented, where this value was
determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement
with the Standard Model prediction. The signal and background shape modelling
is described; the contribution of the background modelling uncertainty to the
total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production
mechanism.
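As a small worked example, the statistical and systematic components quoted above combine in quadrature, assuming they are uncorrelated, to give the total uncertainty on the measurement:

```python
import math

def total_uncertainty(stat, syst):
    """Combine independent statistical and systematic uncertainties in quadrature."""
    return math.sqrt(stat ** 2 + syst ** 2)

# For the measurement quoted above, 127 ± 7 (stat.) ± 7 (syst.) fb:
total = total_uncertainty(7, 7)  # ≈ 9.9 fb
```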
A method for estimating the number of events required in a Monte Carlo
background sample to model the shape is detailed. It was found that the
nominal γγ background sample required enlargement by a factor of 3.60 to
adequately model the background at a confidence level of 68 %, or by a
factor of 7.20 at a confidence level of 95 %. Based on this estimate,
0.5 billion additional simulated events were produced, substantially reducing the
background modelling uncertainty.
A technique is detailed for emulating the effects of Monte Carlo event generator
differences using multivariate reweighting. The technique is used to estimate the
event generator uncertainty on the signal modelling of tHqb events, improving the
reliability of the tHqb production cross-section estimate. The same multivariate
reweighting technique is then used, for the first time, to estimate the generator
modelling uncertainties on background V γγ samples. The estimated uncertainties
were found to be covered by the currently assumed background modelling uncertainty.
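The per-bin ratio idea behind such reweighting can be sketched in one dimension. The thesis's technique is multivariate; this simplified version only illustrates deriving per-event weights from the ratio of two generators' distributions of a single observable (a genuinely multivariate version would typically use a classifier over several observables).

```python
import numpy as np

def reweight(nominal, alternative, bins=20):
    """Per-bin ratio weights that morph the nominal sample's distribution of an
    observable into the alternative generator's distribution (1D illustration)."""
    lo = min(nominal.min(), alternative.min())
    hi = max(nominal.max(), alternative.max())
    edges = np.linspace(lo, hi, bins + 1)
    h_nom, _ = np.histogram(nominal, bins=edges, density=True)
    h_alt, _ = np.histogram(alternative, bins=edges, density=True)
    # Weight = alternative density / nominal density; 1.0 where nominal is empty
    ratio = np.divide(h_alt, h_nom, out=np.ones_like(h_alt), where=h_nom > 0)
    idx = np.clip(np.digitize(nominal, edges) - 1, 0, bins - 1)
    return ratio[idx]
```

After reweighting, the nominal sample reproduces the alternative generator's distribution of the observable, so the residual difference serves as a generator-modelling uncertainty estimate.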
Moduli Stabilisation and the Statistics of Low-Energy Physics in the String Landscape
In this thesis we present a detailed analysis of the statistical properties of the type IIB flux landscape of string theory. We focus primarily on models constructed via the Large Volume Scenario (LVS) and KKLT and study the distribution of various phenomenologically relevant quantities. First, we compare our considerations with previous results and point out the importance of Kähler moduli stabilisation, which has so far been neglected in this context. We perform different moduli stabilisation procedures and compare the resulting distributions. To this end, we derive expressions for the gravitino mass, various quantities related to axion physics, and other phenomenologically interesting quantities in terms of the fundamental flux-dependent quantities and of the parameter which specifies the nature of the non-perturbative effects. Exploiting our knowledge of the distribution of these fundamental parameters, we can derive a distribution for all the quantities we are interested in. For models stabilised via LVS we find a logarithmic distribution, whereas for KKLT and perturbatively stabilised models we find a power-law distribution. We continue by investigating the statistical significance of a newly found class of KKLT vacua and present a search algorithm for such constructions. We conclude by presenting an application of our findings. Given the mild preference for higher-scale supersymmetry breaking, we present a model of the early universe which allows for additional periods of early matter domination and ultimately leads to rather sharp predictions for the dark matter mass, which we find to lie in the very heavy range.
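The qualitative origin of a logarithmic distribution can be illustrated numerically: a quantity that depends exponentially on a roughly uniformly distributed parameter is itself log-uniformly distributed. The parameter name and range below are purely illustrative, not the thesis's flux variables.

```python
import numpy as np

# Illustrative sketch: draw a hypothetical "flux-like" parameter c uniformly
# and form a scale m that depends exponentially on it, e.g. m ~ exp(-c).
rng = np.random.default_rng(1)
c = rng.uniform(5.0, 50.0, size=100_000)   # hypothetical uniform parameter
m = np.exp(-c)                             # derived scale, spanning many decades

# log(m) = -c is uniform, so m itself is log-uniformly ("logarithmically")
# distributed: every decade of m is equally populated.
log_m = np.log(m)
```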
Building body identities - exploring the world of female bodybuilders
This thesis explores how female bodybuilders seek to develop and maintain a viable sense of self despite being stigmatized by the gendered foundations of what Erving Goffman (1983) refers to as the 'interaction order': the unavoidable presentational context in which identities are forged during the course of social life. Placed in the context of an overview of the historical treatment of women's bodies, and a concern with the development of bodybuilding as a specific form of body modification, the research draws upon a unique two-year ethnographic study based in the South of England, complemented by interviews with twenty-six female bodybuilders, all of whom live in the U.K. By mapping these extraordinary women's lives, the research illuminates the pivotal spaces and essential lived experiences that make up the female bodybuilder. Whilst the women appear to be embarking on an 'empowering' radical body project for themselves, the consequences of their activity remain culturally ambivalent. This research exposes the 'Janus-faced' nature of female bodybuilding, exploring the ways in which the women negotiate, accommodate and resist pressures to engage in more orthodox and feminine activities and appearances.
The temporality of rhetoric: the spatialization of time in modern criticism
Every conception of criticism conceals a notion of time which informs the manner in which the critic conceives of history, representation and criticism itself. This thesis reveals the philosophies of time inherent in certain key modern critical concepts: allegory, irony and the sublime. Each concept opens a breach in time, a disruption of chronology. In each case this gap or aporia is emphatically closed, elided or denied. Taking the philosophy of time elaborated by Giorgio Agamben as an introductory proposition, my argument turns in Chapter One to the allegorical temporality which Walter Benjamin sees as the time of photography. The second chapter examines the aesthetics of the sublime as melancholic or mournful untimeliness. In Chapter Three, Paul de Man's conception of irony provides an exemplary instance of the denial of this troubling temporal predicament. In opposition to the foreclosure of the disturbing temporalities of criticism, history and representation, the thesis proposes a fundamental rethinking of the philosophy of time as it relates to these categories of reflection. In a reading of an inaugural meditation on the nature of time, and in examining certain key contemporary philosophical and critical texts, I argue for a critical attendance to that which eludes those modes of thought that attempt to map time as a recognizable and essentially spatial field. The Confessions of Augustine provide, in the fourth chapter, a model for thinking through the problems set up earlier: Augustine affords us, precisely, a means of conceiving of the gap or the interim. In the final chapter, this concept is developed with reference to the criticism of Arnold and Eliot, the fiction of Virginia Woolf and the philosophy of cinema derived from Deleuze and Lyotard. In conclusion, the philosophical implications of the thesis are placed in relation to a conception of the untimeliness of death.
A model for automated support of the recognition, extraction, customization, and reconstruction of static charts
Data charts are widely used in our daily lives, being present in regular media
such as newspapers, magazines, web pages, books, and many others. A
well-constructed data chart leads to an intuitive understanding of its underlying
data; likewise, when data charts embody poor design choices, a redesign of these
representations might be needed. However, in most cases these charts are shown
as static images, which means that the original data are not usually available.
Automatic methods could therefore be applied to extract the underlying data from
the chart images to allow these changes. The task of recognizing charts and
extracting data from them is complex, largely due to the variety of chart types
and their visual characteristics.
Computer vision techniques for image classification and object detection are
widely used for the problem of recognizing charts, but only on images without
any disturbance. Other features of real-world images that can make this task
difficult, such as photographic distortion, noise, and misalignment, are absent
from most works in the literature. Two computer vision techniques that can
assist this task, and that have been little explored in this context, are
perspective detection and correction. These methods transform a distorted,
noisy chart into a clean chart, with its type identified and ready for data
extraction or other uses. The task of reconstructing data is straightforward:
as long as the data are available, the visualization can be reconstructed.
Reconstructing it in the same context, however, is complex.
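Perspective correction of a photographed chart reduces to estimating a homography from four detected corner correspondences and warping the image with it (in practice via OpenCV's getPerspectiveTransform and warpPerspective). A minimal numpy sketch of the underlying estimation, with illustrative corner coordinates:

```python
import numpy as np

def find_homography(src, dst):
    """Direct Linear Transform: solve for the 3x3 homography H mapping each
    src point (x, y) to the corresponding dst point (u, v)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right-singular vector)
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply H to a 2D point in homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Mapping the detected corners of a skewed chart photo onto a fronto-parallel rectangle with this homography yields the "clean chart" the text describes.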
Using a Visualization Grammar for this scenario is a key component, as these
grammars usually have extensions for interaction, chart layers, and multiple
views without requiring extra development effort.
This work presents a model for automated support of custom recognition and
reconstruction of charts in images. The model automatically performs the
process steps, such as reverse engineering a static chart back into its data
table for later reconstruction, while allowing the user to make modifications
in case of uncertainty. This work also features a model-based architecture
along with prototypes for various use cases. Validation is performed step by
step, with methods inspired by the literature. Three use cases provide proof
of concept and validation of the model.
The first use case applies chart recognition methods to documents in the real
world; the second focuses on the vocalization of charts, using a visualization
grammar to reconstruct a chart in audio format; and the third presents an
Augmented Reality application that recognizes and reconstructs charts in the
same context (a piece of paper), overlaying the new chart and interaction
widgets. The results showed that, with slight changes, chart recognition and
reconstruction methods are now ready for real-world charts when time, accuracy,
and precision are taken into consideration.
Programa Doutoral em Engenharia Informática
Digital asset management via distributed ledgers
Distributed ledgers rose to prominence with the advent of Bitcoin, the first provably secure protocol to solve consensus in an open-participation setting. Following, active research and engineering efforts have proposed a multitude of applications and alternative designs, the most prominent being Proof-of-Stake (PoS). This thesis expands the scope of secure and efficient asset management over a distributed ledger around three axes: i) cryptography; ii) distributed systems; iii) game theory and economics. First, we analyze the security of various wallets. We start with a formal model of hardware wallets, followed by an analytical framework of PoS wallets, each outlining the unique properties of Proof-of-Work (PoW) and PoS respectively. The latter also provides a rigorous design to form collaborative participating entities, called stake pools. We then propose Conclave, a stake pool design which enables a group of parties to participate in a PoS system in a collaborative manner, without a central operator. Second, we focus on efficiency. Decentralized systems are aimed at thousands of users across the globe, so a rigorous design for minimizing memory and storage consumption is a prerequisite for scalability. To that end, we frame ledger maintenance as an optimization problem and design a multi-tier framework for designing wallets which ensure that updates increase the ledger’s global state only to a minimal extent, while preserving the security guarantees outlined in the security analysis. Third, we explore incentive-compatibility and analyze blockchain systems from a micro and a macroeconomic perspective. We enrich our cryptographic and systems' results by analyzing the incentives of collective pools and designing a state efficient Bitcoin fee function. 
We then analyze the Nash dynamics of distributed ledgers, introducing a formal model that evaluates whether rational, utility-maximizing participants are disincentivized from exhibiting undesirable infractions, and highlighting the differences between PoW- and PoS-based ledgers, both in a standalone setting and under external parameters, like market price fluctuations. We conclude by introducing a macroeconomic principle, cryptocurrency egalitarianism, and then describing two mechanisms for enabling taxation in blockchain-based currency systems.
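The notion of a state-efficient fee function can be sketched as follows. The rates and exact form below are hypothetical illustrations of the idea of charging for net growth of the ledger's global state (here, the UTXO set), not the function designed in the thesis:

```python
def state_efficient_fee(tx_bytes, n_inputs, n_outputs,
                        byte_rate=1, state_rate=50):
    """Hypothetical fee: pay per byte, plus a surcharge for each net new
    unspent output the transaction adds to the ledger's global state.
    Transactions that shrink the UTXO set receive the surcharge as a discount."""
    state_delta = n_outputs - n_inputs      # net change in UTXO-set size
    fee = tx_bytes * byte_rate + state_delta * state_rate
    return max(fee, 0)                      # fee never goes negative
```

Under such a rule, consolidating many inputs into few outputs is cheaper than fragmenting coins, aligning individual incentives with minimal global-state growth.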
Robustness against adversarial attacks on deep neural networks
While deep neural networks have been successfully applied in several different domains, they exhibit vulnerabilities to artificially-crafted perturbations in data. Moreover, these perturbations have been shown to be transferable: the same perturbations can be transferred between different models. In response to this problem, many robust learning approaches have emerged. Adversarial training is regarded as the mainstream approach to enhancing the robustness of deep neural networks with respect to norm-constrained perturbations. However, adversarial training requires a large number of perturbed examples (e.g., over 100,000 for the MNIST dataset) before robustness is considerably enhanced, which is problematic due to the large computational cost of obtaining attacks. Developing computationally efficient approaches that retain robustness against norm-constrained perturbations remains a challenge in the literature.
In this research we present two novel robust training algorithms based on Monte-Carlo Tree Search (MCTS) [1] to enhance robustness under norm-constrained perturbations [2, 3]. The first algorithm searches potential candidates with the Scale-Invariant Feature Transform method and makes decisions with Monte-Carlo Tree Search [2]. The second algorithm adopts the Decision Tree Search method (DTS) to accelerate the search process while maintaining efficiency [3]. Our overarching objective is to provide computationally efficient approaches that can be deployed to train deep neural networks robust to perturbations in data. We illustrate the robustness of these algorithms by studying their resistance to adversarial examples in the context of the MNIST and CIFAR10 datasets. For MNIST, the results showed an average saving in training effort of 21.1\% compared to Projected Gradient Descent (PGD) and 28.3\% compared to the Fast Gradient Sign Method (FGSM). For CIFAR10, we obtained an average efficiency improvement of 9.8\% compared to PGD and 13.8\% compared to FGSM. The results suggest that the two methods introduced here are not only robust to norm-constrained perturbations but also efficient during training.
Regarding the transferability of defences, our experiments [4] reveal that across different network architectures, across a variety of attack methods from white-box to black-box, and across various datasets including MNIST and CIFAR10, our algorithms outperform other state-of-the-art methods, e.g., PGD and FGSM. Furthermore, the derived attacks and robust models obtained within our framework are reusable, in the sense that the same norm-constrained perturbations can facilitate robust training across different networks. Lastly, we investigate the robustness of intra-technique and cross-technique transferability and its relation to different impact factors, from adversarial strength to network capacity. The results suggest that known attacks on the resulting models are less transferable than on models trained by other state-of-the-art attack algorithms.
Our results suggest that exploiting these tree search frameworks can yield significant improvements in the robustness of deep neural networks while saving computational cost during robust training. This paves the way for several future directions, both algorithmic and theoretical, as well as numerous applications to establish the robustness of deep neural networks with increasing trust and safety.
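For reference, the FGSM baseline mentioned above admits a very compact sketch for a logistic classifier (illustrative only; this is the standard attack, not the MCTS/DTS algorithms of this research):

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.1):
    """Fast Gradient Sign Method for a logistic classifier p = sigmoid(w.x + b):
    perturb x by eps in the sign of the cross-entropy loss gradient, giving an
    L-infinity-bounded (norm-constrained) adversarial example."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w                  # d(cross-entropy) / dx
    return x + eps * np.sign(grad_x)
```

Each coordinate moves by exactly eps, so the perturbation satisfies the L-infinity norm constraint while maximally increasing the loss to first order.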