
    Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks

    Computational models are useful tools for studying the biomechanics of human joints. Their predictive performance depends heavily on bony anatomy and soft tissue properties. Imaging data provides the anatomical requirements, while approximate tissue properties are taken from the literature when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter in terms of mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.
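    The abstract does not include code; as a rough illustration of the feedforward branch of this approach, the sketch below fits a small multilayer perceptron that maps simulated joint-response features to a vector of ligament stiffness values. The feature dimensions, synthetic data and network size are assumptions for illustration, not values from the study; the radial basis function variant is omitted.

```python
# Minimal sketch, not the authors' model: a feedforward network regressing
# ligament stiffness values from simulated joint-response features.
# All data below is synthetic and the shapes are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                        # hypothetical joint-response features
W = rng.normal(size=(12, 8))
y = X @ W + 0.05 * rng.normal(size=(500, 8))          # 8 hypothetical ligament stiffness outputs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ffnn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ffnn.fit(X_tr, y_tr)
print("feedforward test MSE:", mean_squared_error(y_te, ffnn.predict(X_te)))
```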

    Robust sound event detection in bioacoustic sensor networks

    Bioacoustic sensors, sometimes known as autonomous recording units (ARUs), can record sounds of wildlife over long periods of time in scalable and minimally invasive ways. Deriving per-species abundance estimates from these sensors requires detection, classification, and quantification of animal vocalizations as individual acoustic events. Yet, variability in ambient noise, both over time and across sensors, hinders the reliability of current automated systems for sound event detection (SED), such as convolutional neural networks (CNN) in the time-frequency domain. In this article, we develop, benchmark, and combine several machine listening techniques to improve the generalizability of SED models across heterogeneous acoustic environments. As a case study, we consider the problem of detecting avian flight calls from a ten-hour recording of nocturnal bird migration, recorded by a network of six ARUs in the presence of heterogeneous background noise. Starting from a CNN yielding state-of-the-art accuracy on this task, we introduce two noise adaptation techniques, respectively integrating short-term (60 milliseconds) and long-term (30 minutes) context. First, we apply per-channel energy normalization (PCEN) in the time-frequency domain, which applies short-term automatic gain control to every subband in the mel-frequency spectrogram. Second, we replace the last dense layer in the network with a context-adaptive neural network (CA-NN) layer. Combining them yields state-of-the-art results that are unmatched by artificial data augmentation alone. We release a pre-trained version of our best performing system under the name of BirdVoxDetect, a ready-to-use detector of avian flight calls in field recordings.
    Comment: 32 pages, in English. Submitted to PLOS ONE in February 2019; revised August 2019; published October 2019.
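    PCEN is the component of this pipeline that is easiest to show compactly. The sketch below is a generic implementation of per-channel energy normalization on a mel spectrogram, using commonly cited default parameters rather than the settings from the paper; librosa also ships a reference implementation (librosa.pcen).

```python
# Generic PCEN sketch (not the BirdVoxDetect code): per-channel energy
# normalization of a mel spectrogram E with shape (n_mels, n_frames).
# Parameter values are commonly used defaults, assumed rather than taken
# from the paper.
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    # M is a first-order IIR low-pass of E over time: this smoother provides
    # the short-term automatic gain control applied to every frequency band.
    M = np.zeros_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1.0 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

# usage with a random non-negative "spectrogram" standing in for real audio features
E = np.abs(np.random.default_rng(0).normal(size=(64, 200))) ** 2
P = pcen(E)
print(P.shape)
```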

    Predictive Performance Of Machine Learning Algorithms For Ore Reserve Estimation In Sparse And Imprecise Data

    Thesis (Ph.D.), University of Alaska Fairbanks, 2006.
    Traditional geostatistical estimation techniques have been used predominantly in the mining industry for the purpose of ore reserve estimation. Determination of mineral reserves has always posed a considerable challenge to mining engineers due to the geological complexities generally associated with ore body formation. Considerable research over the years has resulted in a number of state-of-the-art methods for predictive spatial mapping tasks such as ore reserve estimation. Recent advances in machine learning algorithms (MLA) have provided a new approach to this age-old problem. This thesis therefore focuses on the use of two MLA, the neural network (NN) and the support vector machine (SVM), for ore reserve estimation. Application of the MLA is demonstrated on two complex drill hole datasets. The first is placer gold drill hole data characterized by a high degree of spatial variability, sparseness and noise, while the second comes from a continuous lode deposit. The success of models developed using these MLA depends to a large extent on the data subsets on which they are trained and, subsequently, on the selection of appropriate model parameters. Model data subsets obtained by random data division are not desirable in sparse data conditions, as random division usually produces statistically dissimilar subsets, thereby reducing their applicability. A technique better suited to data subdivision under these conditions is therefore suggested in the thesis. Additionally, issues pertaining to optimum model development are also discussed. To investigate the accuracy and applicability of the MLA for ore reserve estimation, their generalization ability was compared with the geostatistical ordinary kriging (OK) method. Analysis of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Error (ME) and coefficient of determination (R2) as indices of model performance indicated that the MLA may significantly improve predictive ability and thereby reduce the inherent risk in ore reserve estimation.
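    As a hedged illustration of the kind of comparison described above, the sketch below fits a neural network, an SVM regressor and a Gaussian process regressor (used only as a loose stand-in for ordinary kriging) on synthetic grade data and reports the MSE, MAE, ME and R2 indices named in the abstract. None of the data, models or settings come from the thesis.

```python
# Sketch only: comparing a neural network, an SVM and (as a loose analogue of
# ordinary kriging) a Gaussian process regressor on synthetic "grade" data,
# reporting the MSE, MAE, ME and R^2 indices named in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 3))                              # hypothetical normalized drill hole coordinates
y = np.sin(6 * X[:, 0]) + X[:, 1] + 0.1 * rng.normal(size=300)    # synthetic ore grade

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=1),
    "SVM": SVR(C=10.0, epsilon=0.01),
    "GP (kriging analogue)": GaussianProcessRegressor(alpha=1e-2, normalize_y=True),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "MSE=%.4f" % mean_squared_error(y_te, pred),
          "MAE=%.4f" % mean_absolute_error(y_te, pred),
          "ME=%.4f" % np.mean(y_te - pred),                        # mean error (bias)
          "R2=%.4f" % r2_score(y_te, pred))
```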

    Data Mining by Soft Computing Methods for The Coronary Heart Disease Database

    To improve data mining technology, the advantages and disadvantages of individual data mining methods should be compared under the same conditions. For this purpose, the Coronary Heart Disease database (CHD DB) was developed in 2004, and a data mining competition was held at the International Conference on Knowledge-Based Intelligent Information and Engineering Systems (KES). In the competition, two methods based on soft computing were presented. In this paper, we give an overview of the CHD DB and the soft computing methods, and discuss the features of each method by comparing the experimental results.

    Field-aware Calibration: A Simple and Empirically Strong Method for Reliable Probabilistic Predictions

    It is often observed that the probabilistic predictions given by a machine learning model can disagree with averaged actual outcomes on specific subsets of data, an issue known as miscalibration. It is responsible for the unreliability of practical machine learning systems. For example, in online advertising, an ad can receive a click-through rate prediction of 0.1 over some population of users where its actual click rate is 0.15. In such cases, the probabilistic predictions have to be fixed before the system can be deployed. In this paper, we first introduce a new evaluation metric named field-level calibration error that measures the bias in predictions over the sensitive input field that concerns the decision-maker. We show that existing post-hoc calibration methods yield limited improvement on this new field-level metric and on other non-calibration metrics such as the AUC score. To address this, we propose Neural Calibration, a simple yet powerful post-hoc calibration method that learns to calibrate by making full use of the field-aware information over the validation set. We present extensive experiments on five large-scale datasets. The results show that Neural Calibration significantly improves over uncalibrated predictions in common metrics such as the negative log-likelihood, Brier score and AUC, as well as in the proposed field-level calibration error.
    Comment: WWW 2020.
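    The sketch below shows one plausible reading of a field-level calibration error (the paper's exact definition may differ): for each value of a chosen input field, compare the average predicted probability with the observed positive rate, then average the absolute gaps weighted by how often each field value occurs. The data is synthetic and mirrors the 0.10-predicted / 0.15-actual example above.

```python
# One plausible reading of a field-level calibration error, written for
# illustration (the paper's exact definition may differ). Data is synthetic.
import numpy as np

def field_level_calibration_error(field_values, y_prob, y_true):
    field_values = np.asarray(field_values)
    y_prob = np.asarray(y_prob, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    err, n = 0.0, len(y_true)
    for v in np.unique(field_values):
        mask = field_values == v
        # gap between average predicted probability and observed positive rate
        gap = abs(y_prob[mask].mean() - y_true[mask].mean())
        err += (mask.sum() / n) * gap
    return err

rng = np.random.default_rng(0)
field = rng.integers(0, 5, size=10_000)        # a hypothetical categorical input field
y_true = rng.binomial(1, 0.15, size=10_000)    # actual click rate around 0.15
y_prob = np.full(10_000, 0.10)                 # predictions stuck at 0.10
print(field_level_calibration_error(field, y_prob, y_true))  # roughly 0.05
```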

    A Modified Kennard-Stone Algorithm for Optimal Division of Data for Developing Artificial Neural Network Models

    This paper proposes a method, MDKS (a Kennard-Stone algorithm based on Mahalanobis distance), to divide data into training and testing subsets for developing artificial neural network (ANN) models. The method is a modified version of the Kennard-Stone (KS) algorithm. It achieves better data splitting, both in terms of data representation and in the performance of the resulting ANN models. Compared with the standard KS algorithm and another improved KS algorithm (the data division based on joint x-y distances (SPXY) method), the proposed method also shows better performance. The proposed technique can therefore be used as an advantageous alternative to existing data-splitting methods for developing ANN models. Care should be taken with large datasets, since they may increase the computational load of MDKS due to its variance-covariance matrix calculations.
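    A minimal sketch of a Mahalanobis-distance Kennard-Stone split, written from the description above rather than from the authors' code, is given below. It seeds the training set with the two most distant samples and then repeatedly adds the sample farthest (in Mahalanobis distance) from the already selected set; the O(n^2) pairwise distance matrix is also where the computational load mentioned above comes from.

```python
# Hedged sketch of a Mahalanobis-distance Kennard-Stone split (MDKS as
# described in the abstract, not the authors' implementation).
import numpy as np

def mdks_split(X, n_train):
    X = np.asarray(X, dtype=float)
    n = len(X)
    VI = np.linalg.pinv(np.cov(X, rowvar=False))               # inverse variance-covariance matrix
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt(np.einsum("ijk,kl,ijl->ij", diff, VI, diff))   # pairwise Mahalanobis distances

    i, j = np.unravel_index(np.argmax(D), D.shape)             # seed with the two most distant samples
    selected = [i, j]
    remaining = [k for k in range(n) if k not in selected]
    while len(selected) < n_train:
        # distance of each remaining sample to its nearest already-selected sample
        d_min = D[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(np.argmax(d_min))))
    return np.array(selected), np.array(remaining)             # train indices, test indices

train_idx, test_idx = mdks_split(np.random.default_rng(0).normal(size=(100, 5)), n_train=70)
print(len(train_idx), len(test_idx))
```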

    Analysis of the Correlation Between Majority Voting Error and the Diversity Measures in Multiple Classifier Systems

    Combining classifiers by majority voting (MV) has recently emerged as an effective way of improving the performance of individual classifiers. However, the benefit of applying MV is not always observed and depends on the distribution of classification outputs in a multiple classifier system (MCS). Evaluating the majority voting error (MVE) for all combinations of classifiers in an MCS is a process of exponential complexity. This complexity can be reduced provided an explicit relationship is found between the MVE and some less complex function operating on the classifier outputs. Diversity measures operating on binary classification outputs (correct/incorrect) are studied in this paper as potential candidates for such functions. Their correlation with the MVE, interpreted as the quality of a measure, is thoroughly investigated using artificial and real-world datasets. Moreover, we propose a new diversity measure that efficiently exploits information from the whole MCS rather than only the part to which it is applied.
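    As an illustration of the quantities being correlated, the sketch below computes the majority voting error and one simple pairwise diversity measure (disagreement) from a synthetic binary matrix of correct/incorrect classifier outputs; the new measure proposed in the paper itself is not reproduced here.

```python
# Illustrative computation (not from the paper): majority-voting error and the
# pairwise disagreement diversity measure, both evaluated on a binary matrix of
# correct(1)/incorrect(0) classifier outputs. The random "oracle" matrix is synthetic.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
outputs = (rng.random((7, 1000)) < 0.7).astype(int)   # 7 classifiers, ~70% individual accuracy

# majority voting error: the ensemble is wrong whenever fewer than half of the
# classifiers are correct on a given sample
votes_correct = outputs.sum(axis=0)
mve = np.mean(votes_correct <= outputs.shape[0] // 2)

# disagreement measure, averaged over all classifier pairs
pairs = list(combinations(range(outputs.shape[0]), 2))
disagreement = np.mean([np.mean(outputs[i] != outputs[j]) for i, j in pairs])

print("majority voting error:", mve, "mean pairwise disagreement:", disagreement)
```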