129 research outputs found

    Inferring Air Quality from Traffic Data using Transferable Neural Network Models

    This work presents a neural network-based model for inferring air quality from traffic measurements. Information on air quality in urban environments is needed to meet legislative and policy requirements, but measurement equipment tends to be expensive to purchase and maintain, so a model-based approach capable of accurately determining pollution levels is highly beneficial. The objective of this study was to develop a neural network model to accurately infer pollution levels from existing data sources in Leicester, UK. Neural networks are models made of several highly interconnected processing elements, which process information by their dynamic state response to inputs; problems that are not solvable by traditional algorithmic approaches can frequently be solved using neural networks. This paper shows that, using a simple neural network with traffic and meteorological data as inputs, air quality can be estimated with a good level of generalisation and in near real time. By applying these models to links rather than nodes, the methodology can be used directly to inform traffic engineers and to direct traffic management decisions towards enhancing local air quality and traffic management simultaneously. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
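
    A minimal sketch of the kind of model the abstract describes: a small one-hidden-layer feed-forward network mapping traffic and meteorological inputs to a pollutant concentration. The feature set (flow, speed, wind speed, temperature), the NO2-like target, and all data below are illustrative assumptions, not the study's actual Leicester inputs.

```python
# Minimal sketch: small feed-forward network regressing a pollutant level
# from traffic and meteorological inputs. All data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical inputs per road link: [traffic flow, mean speed, wind speed, temperature]
X = rng.uniform([0, 5, 0, -5], [2000, 60, 15, 30], size=(1000, 4))
# Hypothetical NO2-like concentration with noise (placeholder relationship only).
y = 0.02 * X[:, 0] - 0.3 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 5, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer, mirroring the "simple neural network" the abstract mentions.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```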

    Visual Field Endpoints Based on Subgroups of Points May Be Useful in Glaucoma Clinical Trials: A Study With the Humphrey Field Analyzer and Compass Perimeter

    PRECIS: Visual field endpoints based on the average deviation of specific subsets of points rather than all points may offer a more homogeneous dataset without necessarily worsening test-retest variability, and so may be useful in clinical trials. PURPOSE: To characterize outcome measures encompassing particular subsets of visual field points and compare them as obtained with the Humphrey (HVF) and Compass perimeters. METHODS: 30 patients with imaging-based glaucomatous neuropathy performed a pair of 24-2 tests with each of 2 perimeters. Non-weighted mean deviation (MD) was calculated for the whole field and for separate vertical hemifields, and again after censoring points with low sensitivity (MDc) and subsequently including only "abnormal" points with total deviation probability of <5% (MDc5%) or <2% (MDc2%). Test-retest variability was assessed using Bland-Altman 95% limits of agreement (95% LoA). RESULTS: For the whole field, using HVF, MD was -7.5±6.9 dB, MDc -3.6±2.8 dB, MDc5% -6.4±1.7 dB and MDc2% -7.3±1.5 dB. With Compass, MD was -7.5±6.6 dB, MDc -2.9±1.7 dB, MDc5% -6.3±1.5 dB, and MDc2% -7.9±1.6 dB. The respective 95% LoA were 5.5, 5.3, 4.6 and 5.6 with HVF, and 4.8, 3.7, 7.1 and 7.1 with Compass. The respective numbers of eligible points were 52, 42±12, 20±11 and 15±9 with HVF, and 52, 41.2±12.6, 10±7 and 7±5 with Compass. With both machines, the standard deviation (SD) and 95% LoA increased in hemifields compared with the total field, but this increase was mitigated after censoring. CONCLUSIONS: Restricting analysis to particular subsets of points of interest in the visual field after censoring points with low sensitivity, as compared with using the familiar total-field mean deviation, can provide outcome measures with a broader range of mean deviation and a markedly reduced SD, and therefore a more homogeneous dataset, without necessarily worsening test-retest variability.
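
    A minimal numerical sketch of the endpoint construction described above, under assumed thresholds: non-weighted mean deviation over all points, over points surviving a low-sensitivity censor, and over censored "abnormal" points only, plus the Bland-Altman 95% limits of agreement for a test-retest pair. The censoring cut-off (10 dB) and all values are placeholders, not the study's data.

```python
import numpy as np

def mean_deviation(td, sensitivity, p_value, censor_db=None, p_cutoff=None):
    """Non-weighted mean of total-deviation (TD) values over a chosen subset of points."""
    mask = np.ones_like(td, dtype=bool)
    if censor_db is not None:            # censor points with low measured sensitivity
        mask &= sensitivity >= censor_db
    if p_cutoff is not None:             # keep only points flagged abnormal at this p-level
        mask &= p_value < p_cutoff
    return td[mask].mean() if mask.any() else np.nan

def loa95_halfwidth(test1, test2):
    """Half-width of the Bland-Altman 95% limits of agreement for a test-retest pair."""
    diff = np.asarray(test1) - np.asarray(test2)
    return 1.96 * diff.std(ddof=1)

# Illustrative 24-2 field: 52 points with total deviation (dB), sensitivity (dB), TD p-values.
rng = np.random.default_rng(1)
td, sens, pval = rng.normal(-7, 5, 52), rng.normal(20, 6, 52), rng.uniform(0, 1, 52)

md   = mean_deviation(td, sens, pval)
mdc  = mean_deviation(td, sens, pval, censor_db=10)                 # assumed censoring threshold
mdc5 = mean_deviation(td, sens, pval, censor_db=10, p_cutoff=0.05)  # "abnormal" points only
print(md, mdc, mdc5)

# Test-retest variability would be computed from one endpoint value per visit per patient:
print(loa95_halfwidth([-6.1, -7.3, -5.8], [-6.5, -6.9, -6.2]))
```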

    Genetic Classification of Populations using Supervised Learning

    There are many instances in genetics in which we wish to determine whether two candidate populations are distinguishable on the basis of their genetic structure. Examples include populations which are geographically separated, case-control studies, and quality control (when participants in a study have been genotyped at different laboratories). This latter application is of particular importance in the era of large-scale genome-wide association studies, when collections of individuals genotyped at different locations are being merged to provide increased power. The traditional method for detecting structure within a population is some form of exploratory technique, such as principal components analysis. Such methods, which do not utilise our prior knowledge of the membership of the candidate populations, are termed unsupervised. Supervised methods, on the other hand, are able to utilise this prior knowledge when it is available. In this paper we demonstrate that in such cases modern supervised approaches are a more appropriate tool for detecting genetic differences between populations. We apply two such methods (neural networks and support vector machines) to the classification of three populations (two from Scotland and one from Bulgaria). The sensitivity exhibited by both of these methods is considerably higher than that attained by principal components analysis and in fact comfortably exceeds a recently conjectured theoretical limit on the sensitivity of unsupervised methods. In particular, our methods can distinguish between the two Scottish populations, where principal components analysis cannot. We suggest, on the basis of our results, that a supervised learning approach should be the method of choice when classifying individuals into pre-defined populations, particularly in quality control for large-scale genome-wide association studies. Comment: Accepted by PLOS ONE
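
    A minimal sketch of the supervised approach described above: a support vector machine trained to separate two candidate populations from genotype data, with cross-validated accuracy as a proxy for distinguishability. The simulated genotype matrix, 0/1/2 minor-allele coding, and kernel choice are illustrative assumptions rather than the paper's setup.

```python
# Minimal sketch: SVM-based classification of individuals into two candidate
# populations from simulated genotypes (0/1/2 minor-allele counts per SNP).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_pop, n_snps = 100, 500

# Two populations whose allele frequencies differ slightly at a subset of SNPs.
freq_a = rng.uniform(0.1, 0.5, n_snps)
freq_b = freq_a.copy()
freq_b[:50] += 0.05                        # small difference a supervised method may detect

geno_a = rng.binomial(2, freq_a, size=(n_per_pop, n_snps))
geno_b = rng.binomial(2, freq_b, size=(n_per_pop, n_snps))
X = np.vstack([geno_a, geno_b]).astype(float)
y = np.array([0] * n_per_pop + [1] * n_per_pop)

# Cross-validated accuracy well above 0.5 indicates the populations are distinguishable.
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```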

    Neural Network Parameterizations of Electromagnetic Nucleon Form Factors

    The electromagnetic nucleon form factor data are studied with artificial feed-forward neural networks. As a result, unbiased, model-independent form factor parametrizations are evaluated together with their uncertainties. The Bayesian approach for neural networks is adapted to a chi2-like error function and applied to the data analysis. A sequence of feed-forward neural networks with one hidden layer of units is considered, where each network represents a particular form factor parametrization. The so-called evidence (a measure of how much the data favor a given statistical model) is computed within the Bayesian framework and used to determine the best form factor parametrization. Comment: The revised version is divided into 4 sections. The discussion of the prior assumptions is added. The manuscript contains 4 new figures and 2 new tables (32 pages, 15 figures, 2 tables)
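
    A minimal sketch of the model-selection idea described above: fit one-hidden-layer networks of increasing size to form factor pseudo-data and rank them by a complexity-penalised fit. A BIC-style score is used here as a crude stand-in for the full Bayesian evidence computed in the paper, and the dipole-shaped pseudo-data are placeholders, not the experimental dataset.

```python
# Minimal sketch: compare one-hidden-layer network parametrizations of G(Q^2)
# using chi2 plus a complexity penalty (a rough stand-in for Bayesian evidence).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
q2 = np.linspace(0.05, 3.0, 60).reshape(-1, 1)        # illustrative Q^2 grid (GeV^2)
g_true = 1.0 / (1.0 + q2.ravel() / 0.71) ** 2         # dipole-like shape as placeholder "truth"
sigma = 0.02
g_obs = g_true + rng.normal(0, sigma, g_true.shape)   # pseudo-data with Gaussian errors

def penalised_fit(hidden_units):
    net = MLPRegressor(hidden_layer_sizes=(hidden_units,), max_iter=5000,
                       random_state=0, tol=1e-6)
    net.fit(q2, g_obs)
    resid = g_obs - net.predict(q2)
    chi2 = np.sum((resid / sigma) ** 2)
    n_params = sum(c.size for c in net.coefs_) + sum(b.size for b in net.intercepts_)
    return chi2 + n_params * np.log(len(g_obs))        # lower score = preferred parametrization

for h in (1, 2, 4, 8):
    print(h, "hidden units -> score", round(penalised_fit(h), 1))
```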

    Constraints on fNL from Wilkinson Microwave Anisotropy Probe 7-year data using a neural network classifier

    We present a multi-class neural network (NN) classifier as a method to measure non-Gaussianity, characterised by the local non-linear coupling parameter fNL, in maps of the cosmic microwave background (CMB) radiation. The classifier is trained on simulated non-Gaussian CMB maps with a range of known fNL values by providing it with wavelet coefficients of the maps; we consider both the HealPix wavelet (HW) and the spherical Mexican hat wavelet (SMHW). When applied to simulated test maps, the NN classifier produces results in very good agreement with those obtained using standard chi2 minimization. The standard deviations of the fNL estimates for WMAP-like simulations were σ = 22 and σ = 33 for the SMHW and the HW, respectively, which are extremely close to those obtained using classical statistical methods in Curto et al. and Casaponsa et al. Moreover, the NN classifier does not require the inversion of a large covariance matrix, thus avoiding any need to regularise the matrix when it is not directly invertible, and is considerably faster. Comment: Accepted for publication in MNRAS, 9 pages, 5 figures, 1 table
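
    A minimal sketch of the classification setup described above: a multi-class neural network trained on per-map statistics, each map labelled by the fNL value used to generate it. The random features below merely stand in for SMHW/HW wavelet coefficients, and the fNL grid and network size are assumptions for illustration only.

```python
# Minimal sketch: multi-class NN classifier assigning a simulated "map",
# summarised by placeholder wavelet-like statistics, to one of several fNL classes.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fnl_classes = np.array([-100, -50, 0, 50, 100])   # illustrative grid of training fNL values
n_sims, n_features = 200, 30                      # simulations per class, statistics per map

X, y = [], []
for fnl in fnl_classes:
    # Placeholder: a class-dependent shift mimics the effect of fNL on the statistics.
    X.append(rng.normal(fnl / 100.0, 1.0, size=(n_sims, n_features)))
    y.append(np.full(n_sims, fnl))
X, y = np.vstack(X), np.concatenate(y)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, y)

# A "measured" map is assigned the fNL class with the highest predicted probability.
test_map = rng.normal(0.5, 1.0, size=(1, n_features))
print("estimated fNL class:", clf.predict(test_map)[0])
```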

    On the Bounds of Function Approximations

    Within machine learning, the subfield of Neural Architecture Search (NAS) has recently garnered research attention due to its ability to improve upon human-designed models. However, the computational requirements for finding an exact solution to this problem are often intractable, and the design of the search space still requires manual intervention. In this paper we attempt to establish a formalized framework from which we can better understand the computational bounds of NAS in relation to its search space. To this end, we first reformulate the function approximation problem in terms of sequences of functions, calling it the Function Approximation (FA) problem; we then show that it is computationally infeasible to devise a procedure that solves FA for all functions to zero error, regardless of the search space. We also show that this error will be minimal if a specific class of functions is present in the search space. Subsequently, we show that machine learning as a mathematical problem is a solution strategy for FA, albeit not an effective one, and further describe a stronger version of this approach: the Approximate Architectural Search Problem (a-ASP), which is the mathematical equivalent of NAS. We leverage the framework from this paper and results from the literature to describe the conditions under which a-ASP can potentially solve FA as well as an exhaustive search, but in polynomial time. Comment: Accepted as a full paper at ICANN 2019. The final, authenticated publication will be available at https://doi.org/10.1007/978-3-030-30487-4_3

    Towards Comprehensive Foundations of Computational Intelligence

    Although computational intelligence (CI) covers a vast variety of different methods, it still lacks an integrative theory. Several proposals for CI foundations are discussed: computing and cognition as compression, meta-learning as search in the space of data models, (dis)similarity-based methods providing a framework for such meta-learning, and a more general approach based on chains of transformations. Many useful transformations that extract information from features are discussed. Heterogeneous adaptive systems are presented as a particular example of transformation-based systems, and the goal of learning is redefined to facilitate the creation of simpler data models. The need to understand data structures leads to techniques for logical and prototype-based rule extraction and to the generation of multiple alternative models, while the need to increase the predictive power of adaptive models leads to committees of competent models. Learning from partial observations is a natural extension towards reasoning based on perceptions, and an approach to intuitive solving of such problems is presented. Throughout the paper, neurocognitive inspirations are frequently used and are especially important in modeling of the higher cognitive functions. Promising directions such as liquid and laminar computing are identified and many open problems are presented.