
    Complexity in the Case against Accuracy Estimation

    Some authors have repeatedly pointed out that the use of the accuracy, in particular for comparing classifiers, is not adequate. The main argument concerns some assumptions of self-validity or correctness underlying the use of this criterion. In this paper, we study the computational burden of the accuracy's replacement for building and comparing classifiers, using the framework of Inductive Logic Programming. Replacement is investigated in three ways: completion of the accuracy with an additional requirement, replacement of the accuracy with a bi-criterion recently introduced from statistical decision theory, the Receiver Operating Characteristic analysis, and replacement of the accuracy by a single criterion. We prove very hard results for most of the possible replacements. A first result shows that the arbitrary approximation of classifiers appears to be totally useless; "arbitrary" is to be taken in its broadest meaning, in particular exponential. The second point is the sudden appearance of the negative result, which is not a function of the criteria's demands. The third point is the equivalence in difficulty of all these different criteria. In contrast, the single accuracy's optimization appears to be tractable in this framework. © 2002 Published by Elsevier Science B.V.
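The Receiver Operating Characteristic analysis named in the abstract as the bi-criterion alternative to accuracy can be sketched as follows. This is a generic ROC/AUC computation, not code from the paper; the labels and scores are illustrative.

```python
def roc_points(labels, scores):
    """(FPR, TPR) pairs obtained by sweeping a threshold over the scores,
    from the highest score down. `labels` are 0/1 ground-truth classes."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

A classifier that ranks every positive above every negative reaches AUC 1.0 regardless of any accuracy threshold, which is why ROC analysis is a two-dimensional (bi-criterion) view of performance rather than a single number like accuracy.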

    Autoregressive time series prediction by means of fuzzy inference systems using nonparametric residual variance estimation

    We propose an automatic methodology framework for short- and long-term prediction of time series by means of fuzzy inference systems. In this methodology, fuzzy techniques and statistical techniques for nonparametric residual variance estimation are combined in order to build autoregressive predictive models implemented as fuzzy inference systems. Nonparametric residual variance estimation plays a key role in driving the identification and learning procedures. Concrete criteria and procedures within the proposed methodology framework are applied to a number of time series prediction problems. The learn-from-examples method introduced by Wang and Mendel (W&M) is used for identification. The Levenberg–Marquardt (L–M) optimization method is then applied for tuning. The W&M method produces compact and potentially accurate inference systems when applied after a proper variable selection stage. The L–M method yields the best compromise between accuracy and interpretability of results, among a set of alternatives. Delta test based residual variance estimations are used in order to select the best subset of inputs to the fuzzy inference systems as well as the number of linguistic labels for the inputs. Experiments on a diverse set of time series prediction benchmarks are compared against least-squares support vector machines (LS-SVM), optimally pruned extreme learning machine (OP-ELM), and k-NN based autoregressors. The advantages of the proposed methodology are shown in terms of linguistic interpretability, generalization capability and computational cost. Furthermore, fuzzy models are shown to be consistently more accurate for prediction in the case of time series coming from real-world applications. Ministerio de Ciencia e Innovación TEC2008-04920; Junta de Andalucía P08-TIC-03674, IAC07-I-0205:33080, IAC08-II-3347:5626
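The Delta test that drives input selection in the abstract estimates the residual (noise) variance as half the mean squared difference between each output and the output of its nearest neighbour in input space. A minimal pure-Python sketch, with illustrative data rather than the paper's benchmarks:

```python
def delta_test(X, y):
    """Delta test noise-variance estimate: (1 / 2N) * sum over i of
    (y[nn(i)] - y[i])**2, where nn(i) is the nearest neighbour of X[i]
    (squared Euclidean distance) among the other input vectors."""
    n = len(X)
    total = 0.0
    for i in range(n):
        nn = min((j for j in range(n) if j != i),
                 key=lambda j: sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
        total += (y[i] - y[nn]) ** 2
    return total / (2.0 * n)
```

Input selection then amounts to evaluating this estimate for candidate input subsets and keeping the subset that minimises it: a lower Delta test value suggests the chosen inputs explain more of the output's variation.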

    Optimization of fuzzy analogy in software cost estimation using linguistic variables

    One of the most important objectives of the software engineering community has been the development of useful models that accurately explain the development life cycle and precisely estimate the effort of software cost estimation. Although there are innumerable methods to estimate cost, the analogy concept is deficient in handling datasets containing categorical variables. Due to the nature of the software engineering domain, project attributes are often measured in terms of linguistic values such as very low, low, high and very high. The imprecise nature of such values introduces uncertainty and vagueness into their interpretation. However, there is no efficient method that can directly deal with categorical variables and tolerate such imprecision and uncertainty without resorting to classical intervals and numeric value approaches. In this paper, a new optimization approach based on fuzzy logic, linguistic quantifiers and analogy-based reasoning is proposed to improve effort estimation for software projects described by either numerical or categorical data. The performance of the proposed method is validated empirically on the historical NASA dataset. The results were analyzed using the prediction criterion and indicate that the proposed method can produce more explainable results than other machine learning methods. Comment: 14 pages, 8 figures; Journal of Systems and Software, 2011. arXiv admin note: text overlap with arXiv:1112.3877 by other author
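The core idea of fuzzy analogy over linguistic values can be sketched as follows: represent each linguistic label as a triangular fuzzy number, measure distances between projects through those fuzzy numbers, and estimate effort from the closest historical project. The membership triples and the 1-NN rule below are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical triangular fuzzy numbers (lower, mode, upper) on a [0, 1]
# scale for the linguistic labels mentioned in the abstract.
FUZZY = {
    "very low":  (0.0, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "high":      (0.5, 0.75, 1.0),
    "very high": (0.75, 1.0, 1.0),
}

def fuzzy_distance(a, b):
    """Root-mean-square distance between two triangular fuzzy labels."""
    return (sum((x - y) ** 2 for x, y in zip(FUZZY[a], FUZZY[b])) / 3) ** 0.5

def analogy_estimate(project, history):
    """Effort of the single most similar historical project (1-NN analogy);
    `project` is a list of linguistic labels, `history` a list of dicts with
    'attrs' (labels) and 'effort' (person-months, illustrative)."""
    def dist(p):
        return sum(fuzzy_distance(a, b) for a, b in zip(project, p["attrs"]))
    return min(history, key=dist)["effort"]
```

This is the sense in which the approach "directly deals with categorical variables": the distance is computed on the fuzzy representation of the labels, with no intermediate conversion to crisp intervals or numbers.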

    Video Classification With CNNs: Using The Codec As A Spatio-Temporal Activity Sensor

    We investigate video classification via a two-stream convolutional neural network (CNN) design that directly ingests information extracted from compressed video bitstreams. Our approach begins with the observation that all modern video codecs divide the input frames into macroblocks (MBs). We demonstrate that selective access to MB motion vector (MV) information within compressed video bitstreams can also provide for selective, motion-adaptive, MB pixel decoding (a.k.a., MB texture decoding). This in turn allows for the derivation of spatio-temporal video activity regions at extremely high speed in comparison to conventional full-frame decoding followed by optical flow estimation. In order to evaluate the accuracy of a video classification framework based on such activity data, we independently train two CNN architectures on MB texture and MV correspondences and then fuse their scores to derive the final classification of each test video. Evaluation on two standard datasets shows that the proposed approach is competitive to the best two-stream video classification approaches found in the literature. At the same time: (i) a CPU-based realization of our MV extraction is over 977 times faster than GPU-based optical flow methods; (ii) selective decoding is up to 12 times faster than full-frame decoding; (iii) our proposed spatial and temporal CNNs perform inference at 5 to 49 times lower cloud computing cost than the fastest methods from the literature. Comment: Accepted in IEEE Transactions on Circuits and Systems for Video Technology. Extension of ICIP 2017 conference paper
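The late fusion step described in the abstract (two independently trained streams whose scores are combined per test video) can be sketched as a weighted average of per-class softmax scores. The equal 0.5 weighting and the logits below are assumptions for illustration; the paper's exact fusion rule may differ:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(texture_logits, mv_logits, w=0.5):
    """Weighted average of the two streams' per-class softmax scores
    (texture stream and motion-vector stream)."""
    t = softmax(texture_logits)
    m = softmax(mv_logits)
    return [w * a + (1 - w) * b for a, b in zip(t, m)]

def classify(texture_logits, mv_logits):
    """Final class index for one test video after score fusion."""
    scores = fuse(texture_logits, mv_logits)
    return max(range(len(scores)), key=scores.__getitem__)
```

Because each stream is trained independently, fusion happens only at the score level; this is what allows the MV stream to run on cheap, selectively decoded bitstream data while the texture stream sees decoded pixels.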

    Evaluating Example-based Pose Estimation: Experiments on the HumanEva Sets

    We present an example-based approach to pose recovery, using histograms of oriented gradients as image descriptors. Tests on the HumanEva-I and HumanEva-II data sets provide us with insight into the strengths and limitations of an example-based approach. We report mean relative 3D errors of approximately 65 mm per joint on HumanEva-I, and 175 mm on HumanEva-II. We discuss our results using single and multiple views. Also, we perform experiments to assess the algorithm's generalization to unseen subjects, actions and viewpoints. We plan to incorporate the temporal aspect of human motion analysis to reduce orientation ambiguities, and increase the pose recovery accuracy.
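Example-based pose recovery reduces to a nearest-neighbour lookup: match the query image descriptor against a database of exemplars with known 3D poses, and report the stored pose of the best match. A minimal sketch, with the evaluation metric (mean per-joint Euclidean error, as reported in millimetres on HumanEva) alongside; the tiny descriptors and poses are illustrative, not HOG features from the paper:

```python
def nearest_pose(descriptor, examples):
    """Return the stored 3D pose of the exemplar whose image descriptor is
    closest to `descriptor` (squared Euclidean distance)."""
    def dist(ex):
        return sum((a - b) ** 2 for a, b in zip(descriptor, ex["descriptor"]))
    return min(examples, key=dist)["pose"]

def mean_joint_error(pred, truth):
    """Mean Euclidean error per joint; `pred` and `truth` are lists of
    (x, y, z) joint positions in the same units (e.g. mm)."""
    errs = [sum((a - b) ** 2 for a, b in zip(p, t)) ** 0.5
            for p, t in zip(pred, truth)]
    return sum(errs) / len(errs)
```

The strengths and limitations the abstract mentions follow directly from this design: accuracy is bounded by how densely the exemplar database covers the space of subjects, actions and viewpoints, which is why generalization to unseen conditions is tested explicitly.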

    FastDepth: Fast Monocular Depth Estimation on Embedded Systems

    Depth sensing is a critical function for robotic tasks such as localization, mapping and obstacle detection. There has been a significant and growing interest in depth estimation from a single RGB image, due to the relatively low cost and size of monocular cameras. However, state-of-the-art single-view depth estimation algorithms are based on fairly complex deep neural networks that are too slow for real-time inference on an embedded platform, for instance, mounted on a micro aerial vehicle. In this paper, we address the problem of fast depth estimation on embedded systems. We propose an efficient and lightweight encoder-decoder network architecture and apply network pruning to further reduce computational complexity and latency. In particular, we focus on the design of a low-latency decoder. Our methodology demonstrates that it is possible to achieve similar accuracy as prior work on depth estimation, but at inference speeds that are an order of magnitude faster. Our proposed network, FastDepth, runs at 178 fps on an NVIDIA Jetson TX2 GPU and at 27 fps when using only the TX2 CPU, with active power consumption under 10 W. FastDepth achieves close to state-of-the-art accuracy on the NYU Depth v2 dataset. To the best of the authors' knowledge, this paper demonstrates real-time monocular depth estimation using a deep neural network with the lowest latency and highest throughput on an embedded platform that can be carried by a micro aerial vehicle. Comment: Accepted for presentation at ICRA 2019. 8 pages, 6 figures, 7 tables
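The network pruning mentioned in the abstract removes whole filters so that both compute and latency shrink. The paper's pruning procedure is more sophisticated than what fits here, so the sketch below shows the simplest common stand-in: magnitude-based filter pruning, keeping the filters with the largest L1 norm. All names and data are illustrative:

```python
def prune_filters(weights, keep_ratio):
    """Magnitude-based filter pruning: keep the `keep_ratio` fraction of
    filters with the largest L1 norm, preserving their original order.
    `weights` is a list of filters, each a flat list of floats."""
    norms = [(sum(abs(w) for w in f), i) for i, f in enumerate(weights)]
    keep = max(1, int(len(weights) * keep_ratio))
    kept = sorted(i for _, i in sorted(norms, reverse=True)[:keep])
    return [weights[i] for i in kept]
```

Pruning filters (rather than individual weights) matters on embedded hardware like the TX2: the remaining layer is simply a smaller dense layer, so the speedup is realized without sparse-kernel support.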