
    Significant Subgraph Mining with Multiple Testing Correction

    The problem of finding itemsets that are statistically significantly enriched in a class of transactions is complicated by the need to correct for multiple hypothesis testing. Pruning untestable hypotheses was recently proposed as a strategy for this task of significant itemset mining. It was shown to lead to greater statistical power, that is, the discovery of more truly significant itemsets, than the standard Bonferroni correction on real-world datasets. An open question, however, is whether this strategy of excluding untestable hypotheses also leads to greater statistical power in subgraph mining, in which the number of hypotheses is much larger than in itemset mining. Here we answer this question by an empirical investigation on eight popular graph benchmark datasets. We propose a new efficient search strategy, which always returns the same solution as the state-of-the-art approach and is approximately two orders of magnitude faster. Moreover, we exploit the dependence between subgraphs by considering the effective number of tests and thereby further increase the statistical power.

    Comment: 18 pages, 5 figures, accepted to the 2015 SIAM International Conference on Data Mining (SDM15)
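
    The testability-pruning idea behind this line of work (Tarone's trick) is compact enough to sketch. Below is a minimal conceptual illustration in Python, assuming a one-sided Fisher exact test; it is not the paper's efficient search strategy, and `supports`, `n` and `n1` (the pattern support counts, the number of graphs, and the number of positive-class graphs) are illustrative names.

```python
from scipy.stats import hypergeom

def min_attainable_pvalue(x, n, n1):
    """Smallest p-value a pattern with support x can possibly achieve
    under a one-sided Fisher exact test, given n transactions of which
    n1 are in the positive class (Tarone's psi function)."""
    a_hi = min(x, n1)             # all x occurrences in the positive class
    a_lo = max(0, x - (n - n1))   # as few occurrences there as possible
    return min(hypergeom.pmf(a_hi, n, n1, x),
               hypergeom.pmf(a_lo, n, n1, x))

def tarone_threshold(supports, n, n1, alpha=0.05):
    """Shrink the hypothesis count k until the k smallest psi values all
    satisfy psi <= alpha / k; alpha / k is then the corrected level."""
    psi = sorted(min_attainable_pvalue(x, n, n1) for x in supports)
    k = len(psi)
    while k > 0 and psi[k - 1] > alpha / k:
        k -= 1
    return (alpha / k if k else alpha), k
```

    Patterns whose minimum attainable p-value exceeds the corrected level can never reach significance, and since support only shrinks as a subgraph grows, whole branches of the search can be pruned; this is the source of the power gain over a plain Bonferroni factor.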

    k-Nearest Neighbour Classifiers: 2nd Edition (with Python examples)

    Perhaps the most straightforward classifier in the arsenal of machine learning techniques is the Nearest Neighbour Classifier -- classification is achieved by identifying the nearest neighbours to a query example and using those neighbours to determine the class of the query. This approach to classification is of particular importance because issues of poor run-time performance are not such a problem these days with the computational power that is available. This paper presents an overview of techniques for Nearest Neighbour classification, focusing on: mechanisms for assessing similarity (distance), computational issues in identifying nearest neighbours, and mechanisms for reducing the dimension of the data. This paper is the second edition of a paper previously published as a technical report. Sections on similarity measures for time-series, retrieval speed-up and intrinsic dimensionality have been added. An Appendix is included providing access to Python code for the key methods.

    Comment: 22 pages, 15 figures; an updated edition of an older tutorial on kNN
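
    The Python code the Appendix points to is not reproduced in the abstract; purely as a stand-in, here is a minimal brute-force sketch of the core idea (the function name and toy data are illustrative, not the paper's):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    examples under Euclidean distance (brute force, no speed-up)."""
    dists = np.linalg.norm(X_train - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y = np.array(['a', 'a', 'b', 'b'])
print(knn_predict(X, y, np.array([0.05, 0.1])))  # -> 'a'
```

    The techniques the paper surveys (similarity measures, retrieval speed-up structures, dimension reduction) all slot into this skeleton: they change how `dists` is computed, how `nearest` is found, or what `X_train` looks like.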

    Memristor Emulator Circuit Design and Applications

    This chapter introduces a design guide for memristor emulator circuits, from conceptual idea to experimental tests. Three topologies of memristor emulator circuits, in their incremental and decremental versions, are analysed and designed at low and high frequency. The behavioural model of each topology is derived and programmed in SIMULINK under the MATLAB environment. An offset compensation technique is also described in order to achieve a frequency-dependent pinched hysteresis loop that remains pinched at the origin when the memristor emulator circuit is operating at high frequency. Furthermore, from these topologies, a technique to transform normal non-linear resistors into inverse non-linear resistors is also addressed. HSPICE numerical simulations for each topology are also shown. Finally, three real analogue applications based on memristors are analysed and explained at the behavioural level of abstraction.
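
    For readers who want a feel for the behavioural level of abstraction the chapter works at, here is a minimal Python sketch of the well-known HP linear-drift memristor model. It is not one of the chapter's three emulator topologies, and all parameter values are assumptions; it merely reproduces the frequency-dependent pinched hysteresis the chapter discusses.

```python
import numpy as np

def simulate_memristor(freq, t_end=2.0, dt=1e-4):
    """Euler integration of a linear-drift (HP-style) behavioural memristor
    model driven by a 1 V sinusoid. At low frequency the i-v trajectory is
    a pinched hysteresis loop through the origin; as the drive frequency
    rises, the loop collapses toward a straight line."""
    Ron, Roff = 100.0, 16e3   # on/off resistances in ohms (assumed)
    D, mu = 10e-9, 1e-14      # device thickness (m), dopant mobility (m^2/(V*s))
    w = 0.5                   # normalised internal state in [0, 1]
    t = np.arange(0.0, t_end, dt)
    v = np.sin(2 * np.pi * freq * t)
    i = np.empty_like(t)
    for k, vk in enumerate(v):
        M = Ron * w + Roff * (1.0 - w)      # memristance for the current state
        i[k] = vk / M
        w += (mu * Ron / D**2) * i[k] * dt  # linear dopant drift
        w = min(max(w, 0.0), 1.0)           # hard window on the state
    return v, i

v_lo, i_lo = simulate_memristor(freq=1.0)    # wide pinched loop
v_hi, i_hi = simulate_memristor(freq=100.0)  # loop collapses at high frequency
```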

    PointMap: A real-time memory-based learning system with on-line and post-training pruning

    Also published in the International Journal of Hybrid Intelligent Systems, Volume 1, January 2004.

    A memory-based learning system called PointMap is a simple and computationally efficient extension of Condensed Nearest Neighbor that allows the user to limit the number of exemplars stored during incremental learning. PointMap evaluates the information value of coding nodes during training, and uses this index to prune uninformative nodes either on-line or after training. These pruning methods allow the user to control both a priori code size and sensitivity to detail in the training data, as well as to determine the code size necessary for accurate performance on a given data set. Coding and pruning computations are local in space, with only the nearest coded neighbor available for comparison with the input; and in time, with only the current input available during coding. Pruning helps solve common problems of traditional memory-based learning systems: large memory requirements, their accompanying slow on-line computations, and sensitivity to noise. PointMap copes with the curse of dimensionality by considering multiple nearest neighbors during testing without increasing the complexity of the training process or the stored code. The performance of PointMap is compared to that of a group of sixteen nearest-neighbor systems on benchmark problems.

    This research was supported by grants from the Air Force Office of Scientific Research (AFOSR F49620-98-1-0108, F49620-01-1-0397, and F49620-01-1-0423) and the Office of Naval Research (ONR N00014-01-1-0624).
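
    PointMap's own information-value index is not given in the abstract, but the scheme it extends, Condensed Nearest Neighbor, fits in a few lines. A minimal sketch of Hart's classic algorithm, assuming Euclidean distance (PointMap's on-line and post-training pruning are not reproduced):

```python
import numpy as np

def condensed_nn(X, y, seed=0):
    """Hart's Condensed Nearest Neighbour: keep only exemplars that the
    1-NN rule over the stored set misclassifies, repeating passes until a
    full pass stores nothing new. The survivors form the condensed code."""
    order = np.random.default_rng(seed).permutation(len(X))
    store = [order[0]]                 # seed the code with one example
    changed = True
    while changed:
        changed = False
        for idx in order:
            d = np.linalg.norm(X[store] - X[idx], axis=1)
            if y[store[np.argmin(d)]] != y[idx]:
                store.append(idx)      # misclassified, so store it
                changed = True
    return X[store], y[store]
```

    PointMap's contribution, per the abstract, is to bound the size of such a store a priori and to rank stored nodes by information value so the least informative can be pruned.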

    Instance selection of linear complexity for big data

    Over recent decades, database sizes have grown considerably. Larger sizes present new challenges, because machine learning algorithms are not prepared to process such large volumes of information. Instance selection methods can alleviate this problem when the size of the data set is medium to large. However, even these methods face similar problems with very large-to-massive data sets. In this paper, two new algorithms with linear complexity for instance selection purposes are presented. Both algorithms use locality-sensitive hashing to find similarities between instances. While the complexity of conventional methods (usually quadratic, O(n²), or log-linear, O(n log n)) means that they are unable to process large-sized data sets, the new proposal shows competitive results in terms of accuracy. Even more remarkably, it shortens execution time, as the proposal manages to reduce complexity and make it linear with respect to the data set size. The new proposal has been compared with some of the best known instance selection methods for testing and has also been evaluated on large data sets (up to a million instances).

    Supported by the Research Projects TIN 2011-24046 and TIN 2015-67534-P from the Spanish Ministry of Economy and Competitiveness.
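
    The abstract does not spell the two algorithms out, but the central device, a single locality-sensitive-hashing pass over the data, can be sketched. The following is a hypothetical minimal version, not the paper's methods: hash each instance once with random projections and keep one representative per (bucket, class), which is linear in the number of instances.

```python
import numpy as np

def lsh_instance_selection(X, y, n_projections=8, bucket_width=1.0, seed=0):
    """One-pass instance selection: random-projection LSH assigns each
    instance to a bucket; near-duplicates tend to share a bucket, and only
    the first instance per (bucket, class) is kept. Cost is O(n)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_projections))       # projection matrix
    b = rng.uniform(0.0, bucket_width, size=n_projections)
    keys = np.floor((X @ W + b) / bucket_width).astype(int)
    kept = {}
    for idx, key in enumerate(map(tuple, keys)):
        kept.setdefault((key, y[idx]), idx)   # first instance wins the bucket
    selected = sorted(kept.values())
    return X[selected], y[selected]
```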

    Temporal impulse and step responses of the human eye obtained psychophysically by means of a drift-correcting perturbation technique

    Internal impulse and step responses are derived from the thresholds of short probe flashes by means of a drift-correcting perturbation technique. The approach is based on only two postulated system properties: quasi-linearity and peak detection. A special feature of the technique is its strong reduction of the concealing effect of sensitivity drift within and between sessions. Results were found to be repeatable, even after about one year. For a 1° foveal disk at 1200 td stationary level, impulse responses of increments and decrements were found to be mirror-symmetrical. They were equal to the derivatives of the measured step responses. As a consequence, the threshold of any fast-changing retinal illumination should be predictable. This will be tested in a subsequent paper. The transfer function of the system responding to a 1° stimulus shows a band-pass filter type of processing for transients, confirming quantitatively earlier findings. In contrast, a foveal point source on an extended background of 1200 td, to which impulse and step responses also appear to be linearly related, gives rise to low-pass filter action of the system.
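
    The reported equality between impulse responses and the derivatives of the measured step responses is the standard identity for a linear time-invariant system, h(t) = ds(t)/dt. A small numerical illustration with an assumed, band-pass-like step response (not the paper's measured data):

```python
import numpy as np

tau = 0.05                        # assumed time constant, seconds
t = np.linspace(0.0, 0.5, 5001)
s = t * np.exp(-t / tau)          # modelled step response (returns to zero:
                                  # band-pass behaviour, no DC response)
h_numeric = np.gradient(s, t)                 # h(t) = ds/dt, numerically
h_exact = (1.0 - t / tau) * np.exp(-t / tau)  # analytic derivative of s(t)
print(np.max(np.abs(h_numeric - h_exact)))    # small: the two curves agree
```

    Under the paper's quasi-linearity assumption, this is exactly why the threshold of any fast-changing retinal illumination becomes predictable once either response is known.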