
    Guaranteeing generalisation in neural networks

    Neural networks need to be able to guarantee their intrinsic generalisation abilities if they are to be used reliably. Mitchell's concept and version spaces technique is able to guarantee generalisation in the symbolic concept-learning environment in which it is implemented. Generalisation, according to Mitchell, is guaranteed when, given the user's bias, no concept other than the current one is consistent with all the examples presented so far. Mitchell uses a form of bidirectional convergence to recognise when this no-alternative situation has been reached. His technique, however, suffers from search and storage feasibility problems in its symbolic environment. This thesis aims to show that by evolving the technique further in a neural environment, these problems can be overcome. Firstly, the biasing factors which affect the kind of concept that can be learned are explored in a neural network context. Secondly, approaches for abstracting the underlying features of the symbolic technique that enable recognition of the no-alternative situation are discussed. The discussion generates neural techniques for guaranteeing generalisation and culminates in a neural technique which is able to recognise when the best-fit neural weight state has been found for a given set of data and topology.
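    As a reference point for the symbolic technique the thesis starts from, below is a minimal sketch of Mitchell's candidate-elimination idea for conjunctive concepts over nominal attributes; the convergence test at the end corresponds to the no-alternative situation described above. The hypothesis encoding and names are illustrative, not taken from the thesis.

```python
# Minimal candidate-elimination sketch: hypotheses are attribute tuples
# where '?' matches any value. S is the most specific consistent
# hypothesis, G the set of most general ones.

def matches(h, x):
    """A hypothesis matches an example if every non-'?' slot agrees."""
    return all(hv in ('?', xv) for hv, xv in zip(h, x))

def candidate_elimination(examples, n_attrs):
    S = None                      # most specific consistent hypothesis
    G = {('?',) * n_attrs}        # most general consistent hypotheses
    for x, positive in examples:
        x = tuple(x)
        if positive:
            # Generalise S minimally to cover x; drop G members that miss x.
            S = x if S is None else tuple(
                sv if sv == xv else '?' for sv, xv in zip(S, x))
            G = {g for g in G if matches(g, x)}
        else:
            # Specialise each g just enough to exclude x while still
            # covering S (specialisations restricted to values in S).
            new_G = set()
            for g in G:
                if not matches(g, x):
                    new_G.add(g)
                    continue
                for i, gv in enumerate(g):
                    if gv == '?' and S is not None and S[i] != x[i]:
                        new_G.add(g[:i] + (S[i],) + g[i + 1:])
            G = new_G
    # "No-alternative" convergence: the boundaries have met, so within
    # this bias only one consistent concept remains.
    return S, G, (S is not None and G == {S})

# Toy run: the concept "sky == sunny" converges after three examples.
examples = [(('sunny', 'warm'), True),
            (('rainy', 'warm'), False),
            (('sunny', 'cold'), True)]
S, G, converged = candidate_elimination(examples, 2)   # converged == True
```

    The search and storage feasibility problems the abstract mentions show up here as the potential blow-up of the G boundary set as attributes and examples accumulate.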

    Machine Learning (ML) Methods in Assessing the Intensity of Damage Caused by High-Energy Mining Tremors in Traditional Development of LGOM Mining Area

    The paper presents a comparative analysis of Machine Learning (ML) methods for assessing the risk of mining damage to traditional masonry buildings located in the mining area of the Legnica-Głogów Copper District (LGOM) as a result of intense mining tremors. The analysis was based on a database of damage reports compiled after the tremors of 20 February 2002, 16 May 2004 and 21 May 2006. From these data, classification models were built using a Probabilistic Neural Network (PNN) and the Support Vector Machine (SVM) method. The results of previous research studies made it possible to include the structural and geometric features of the buildings, as well as their protective measures against mining tremors, in the model. The probabilistic form of the model makes it possible to assess the probability of damage effectively when analysing large groups of building structures located in areas of paraseismic impact. The results of the conducted analyses confirm the thesis that the proposed methodology can estimate, with an appropriate probability, the financial outlays that the mining plant should set aside for repairing the expected damage to the traditional development of the LGOM mining area.
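    As an illustration of the modelling setup described, the sketch below fits a probability-emitting SVM and a Parzen-window (PNN-style) classifier to hypothetical building features; the feature set, data values and bandwidth are invented stand-ins, and scikit-learn's SVC and KernelDensity substitute for the authors' implementations.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features per building: [wall thickness m, age in years,
# tremor peak ground acceleration, anti-tremor protection 0/1];
# label 1 = damage reported. Real inputs would come from the
# damage-report database described in the abstract.
X = np.array([[0.40, 80, 0.12, 0], [0.50, 30, 0.05, 1],
              [0.30, 95, 0.15, 0], [0.60, 10, 0.04, 1],
              [0.45, 60, 0.11, 0], [0.55, 20, 0.06, 1],
              [0.35, 90, 0.14, 0], [0.65, 15, 0.05, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# SVM with Platt-scaled probability outputs.
svm = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, y)
print("SVM P(damage):", svm.predict_proba([[0.35, 70, 0.10, 0]])[0, 1])

# PNN-style estimate: Parzen-window class densities times class priors.
scaler = StandardScaler().fit(X)

def pnn_proba(x_new, bandwidth=0.7):
    xs = scaler.transform([x_new])
    post = []
    for c in (0, 1):
        kde = KernelDensity(bandwidth=bandwidth).fit(
            scaler.transform(X[y == c]))
        post.append(np.exp(kde.score_samples(xs))[0] * np.mean(y == c))
    return post[1] / (post[0] + post[1])   # P(damage | x_new)

print("PNN P(damage):", pnn_proba([0.35, 70, 0.10, 0]))
```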

    Online Ensemble Learning of Sensorimotor Contingencies

    Forward models play a key role in cognitive agents by providing predictions of the sensory consequences of motor commands, also known as sensorimotor contingencies (SMCs). In continuously evolving environments, the ability to anticipate is fundamental in distinguishing cognitive from reactive agents, and it is particularly relevant for autonomous robots, which must be able to adapt their models in an online manner. Online learning skills, highly accurate forward models and multiple-step-ahead predictions are needed to enhance the robots' anticipation capabilities. We propose an online heterogeneous ensemble learning method for building accurate forward models of SMCs relating motor commands to effects in the robot's sensorimotor system, in particular considering proprioception and vision. Our method achieves up to 98% higher accuracy in both short- and long-term predictions, compared to single predictors and to other online and offline homogeneous ensembles. The method is validated on two different humanoid robots, namely the iCub and the Baxter.
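    The sketch below illustrates one plausible shape of such an online heterogeneous ensemble for a single sensory dimension: several incremental regressors are trained on each new (state, motor command) -> effect pair, and their predictions are blended with weights that favour recently accurate models. The base learners and the error-decay weighting are assumptions, not the paper's actual design.

```python
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import PassiveAggressiveRegressor, SGDRegressor

class OnlineEnsemble:
    """Weighted blend of incremental regressors for one sensory value."""

    def __init__(self, decay=0.9):
        self.models = [SGDRegressor(), PassiveAggressiveRegressor()]
        self.errors = np.ones(len(self.models))   # running error per model
        self.decay = decay

    def predict(self, x):
        preds = []
        for m in self.models:
            try:
                preds.append(m.predict([x])[0])
            except NotFittedError:
                preds.append(0.0)                  # cold start
        w = 1.0 / (self.errors + 1e-9)             # trust low-error models
        return float(np.dot(w, preds) / w.sum())

    def update(self, x, y):
        """Incorporate one observed (input, effect) pair online."""
        for i, m in enumerate(self.models):
            try:
                err = abs(m.predict([x])[0] - y)
            except NotFittedError:
                err = 1.0
            self.errors[i] = (self.decay * self.errors[i]
                              + (1 - self.decay) * err)
            m.partial_fit([x], [y])
```

    Multiple-step-ahead prediction would then feed the ensemble's predict() output back into the next input vector alongside the planned motor command.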

    THE USE OF NEURAL NETWORKS IN THE SPATIAL ANALYSIS OF PROPERTY VALUES

    The real-estate market is "where" a multiplicity of economic, cultural, social and demographic factors are synthesised with respect to choices regarding the qualitative and locational aspects of a property. The spatial analysis of the real-estate market and, in particular, of the factors which contribute to determining prices, is a very useful instrument in outlining the geography of the economic development of vast areas. The aim of the paper is the construction of a simulation model, on a spatial level, of real-estate values with reference to the housing market in the urban area of the city of Treviso (I). The model was built using a neural network, which makes it possible to analyse the marginal contribution of single real-estate characteristics independently of an a priori choice of interpolation function; at the same time, it works well even in the presence of statistical correlation among the explanatory variables, a serious drawback in multiple regression models. The work is divided into several parts. First, a synthetic picture of the real-estate market of the area studied is drawn up with reference to the main conditioning factors. Then the problem of selecting a neural network model for the appraisal of property values is presented. Finally, the procedure for spatialising the results obtained from the neural model to define a map of values is described. The results show the notable interpretative and predictive capacity of the neural model, which seems very useful in appraisals. Furthermore, the mapping of value fluctuations enables first-hand verification of the "goodness" of the assessed model and its capacity to portray the real situation. The general approach presented therefore seems useful both as an instrument of support for urban and territorial planning and as a permanent monitoring system of the real-estate market, with the aim of creating an informative system of support for the analysis of real-estate investment.
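    A minimal sketch of this kind of neural hedonic model, assuming hypothetical characteristics and synthetic prices: an MLP regressor is fitted to property data, and the marginal contribution of one characteristic is read off by varying it while the others are held at their sample means. Nothing here reproduces the paper's actual network or Treviso data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical characteristics: [floor area m2, age yrs, km to centre];
# synthetic prices in thousands of euros with a known linear ground truth.
rng = np.random.default_rng(0)
X = rng.uniform([40, 0, 0], [200, 100, 10], (200, 3))
price = 2.0 * X[:, 0] - 0.3 * X[:, 1] - 5.0 * X[:, 2] + 50

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
).fit(X, price)

# Marginal response to floor area, other characteristics at their means.
grid = np.linspace(40, 200, 9)
probe = np.tile(X.mean(axis=0), (len(grid), 1))
probe[:, 0] = grid
for area, p in zip(grid, model.predict(probe)):
    print(f"area {area:6.1f} m2 -> predicted value {p:8.1f} k")
```

    Evaluating the fitted network over a grid of locations, rather than a grid of floor areas, is what would produce the map of values the abstract describes.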

    The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning

    In this work, we assess the theoretical limitations of determining guaranteed stability and accuracy of neural networks in classification tasks. We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk, potentially subject to some weight regularisation. We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks in the above settings is extremely challenging, if possible at all, even when such ideal solutions exist within the given class of neural architectures.
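    For concreteness, the training setting the paper analyses, empirical risk minimisation with weight regularisation, can be written as the sketch below for a simple logistic classifier; this illustrates the objective only and is not the paper's construction.

```python
import numpy as np

def train_erm(X, y, lam=0.01, lr=0.1, steps=500):
    """Minimise (1/n) * sum log-loss + lam * ||w||^2 by gradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid predictions
        grad_w = X.T @ (p - y) / n + 2 * lam * w  # risk + penalty gradient
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```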