413,767 research outputs found

    Machine learning based characterization of nanoindentation induced acoustic events

    Get PDF
    Developed as a Bauschinger-effect monitoring technique for polycrystalline metallic materials, passive acoustic wave monitoring has become a critical health monitoring technique for pressurized tanks in the atomic energy sector, for power and structural monitoring in aeronautics and automotive, and, more recently, for spinning-disk data storage (HDD) health monitoring [1]. Passive monitoring of acoustic waves generated at the initial contact point has attracted the attention of materials scientists since the inception of nanomechanical test instruments for nanoindentation and nanoscratch applications. Conventional acoustic wave signal treatment via RMS or integrated energy values has shown that quantitative acoustic wave properties correlate well with local contact materials phenomena such as yield point initiation in W(100) [2, 3] and sapphire [4], phase transformation in shape memory alloys (SMA) [2], and differentiation of thin-film fracture modes [5]. Several attempts have been made to examine the discriminative properties of acoustic signatures via signal decomposition techniques such as wavelets [2, 6]. Although acoustic wave signatures were reconstructed, the true potential of the method was not investigated from a machine learning perspective. In this work, machine learning based signal processing of nanoindentation induced acoustic events is investigated in detail. Wavelet signal decomposition combined with information theory based signal presorting prepares the data for the machine learning step, in which Bayesian filtering and convolutional neural networks sort wavelet coefficients by their statistical significance (a code sketch of this step follows the reference list below). This creates acoustic signature libraries that are characteristic of the specific materials phenomena occurring during the indent. The hardware consists of a newly developed ultrasonic probe integrated into the nanoindentation tip, eliminating boundary effects and ensuring that only waves passing through the contact are recorded. Appropriate signal conditioning and fast data acquisition hardware are synchronized with a quasi-static nanoindentation controller. The machine learning routine, together with the wavelet decomposition and presorting algorithms, is implemented in a dedicated acoustic data evaluation kernel residing in the fast-access memory of the designated controller.
    References:
    1. A. Daugela, S. Tadepalli, Drive level acoustic defectoscopy for head disc interface (HDI) integration and manufacturing, Microsystem Technologies, Vol. 18, No. 9-10 (2012), pp. 1425-1430.
    2. A. Daugela, H. Kutomi, T. J. Wyrobek, Nanoindentation Induced Acoustic Emission Monitoring of Native Oxide Fracture and Phase Transformations, Zeitschrift für Metallkunde, Vol. 92, No. 9 (2001), pp. 1052-1056.
    3. N. Tymiak, A. Daugela, T. J. Wyrobek, O. L. Warren, Highly Localized Acoustic Emission Monitoring of Nanoscale Indentation Contacts, Journal of Materials Research, Vol. 18, No. 4 (2003), pp. 1-13.
    4. N. I. Tymiak, A. Daugela, T. J. Wyrobek, O. L. Warren, Acoustic emission monitoring of the earliest stages of contact induced plasticity in sapphire, Acta Materialia, Vol. 52 (2004), pp. 553-563.
    5. N. Faisal, R. Ahmed, R. L. Reuben, Indentation testing and its acoustic emission response: applications and emerging trends, International Materials Reviews, Vol. 56, No. 2 (2011).
    6. N. Tymiak, D. Chrobak, W. W. Gerberich, O. Warren, R. Nowak, Role of competition between slip and twinning in nanoscale deformation of sapphire, Physical Review B, Vol. 79, No. 17 (2009).
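    The decomposition-and-presorting step described in the abstract can be illustrated with a minimal sketch, assuming the PyWavelets (pywt) library. The wavelet family ('db4'), the energy-quantile threshold, and the synthetic input trace below are illustrative assumptions, not the authors' acoustic data evaluation kernel.

```python
# Minimal sketch: wavelet decomposition of an acoustic-emission trace plus a
# crude information-theoretic presorting of the coefficients. Assumptions:
# PyWavelets available, 'db4' wavelet, energy-quantile threshold.
import numpy as np
import pywt

def presort_wavelet_coefficients(acoustic_trace, wavelet="db4", level=4, keep_fraction=0.1):
    """Decompose a trace and keep only its highest-energy wavelet coefficients."""
    # Multi-level discrete wavelet decomposition of the raw acoustic signal.
    coeffs = pywt.wavedec(acoustic_trace, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)

    # Coefficient energies, used for thresholding and for a simple
    # significance score (Shannon entropy of the energy distribution).
    energy = flat ** 2
    p = energy / energy.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    # Keep only the top fraction of coefficients by energy; zero the rest.
    flat[energy < np.quantile(energy, 1.0 - keep_fraction)] = 0.0
    sparse = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return sparse, entropy

# Synthetic decaying burst standing in for one recorded acoustic event.
trace = np.random.randn(1000) * np.exp(-np.linspace(0.0, 5.0, 1000))
sparse_coeffs, score = presort_wavelet_coefficients(trace)
```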

    The teaching size: computable teachers and learners for universal languages

    Full text link
    The theoretical hardness of machine teaching has usually been analyzed for a range of concept languages under several variants of the teaching dimension: the minimum number of examples that a teacher needs to figure out so that the learner identifies the concept. However, for languages where concepts have structure (and hence size), such as Turing-complete languages, a low teaching dimension can be achieved at the cost of using very large examples, which are hard for the learner to process. In this paper we introduce the teaching size, a more intuitive way of assessing the theoretical feasibility of teaching concepts for structured languages. In the most general case of universal languages, we show that by focusing on the total size of a witness set rather than its cardinality, we can teach all total functions that are computable within some fixed time bound. We complement the theoretical results with a range of experimental results on a simple Turing-complete language, showing how teaching dimension and teaching size differ in practice (a toy sketch of this distinction is given below). Quite remarkably, we found that witness sets are usually smaller than the programs they identify, which is an illuminating justification of why machine teaching from examples makes sense at all.
    We would like to thank the anonymous referees for their helpful comments. This work was supported by the EU (FEDER) and the Spanish MINECO under grant RTI2018-094403-B-C32, and by the Generalitat Valenciana under PROMETEO/2019/098. This work was done while the first author visited Universitat Politecnica de Valencia and while the third author visited the University of Bergen (covered by Generalitat Valenciana BEST/2018/027 and the University of Bergen). J. Hernandez-Orallo is also funded by FLI grant RFP2-152.
    Telle, J. A.; Hernández-Orallo, J.; Ferri Ramírez, C. (2019). The teaching size: computable teachers and learners for universal languages. Machine Learning, 108(8-9), 1653-1675. https://doi.org/10.1007/s10994-019-05821-2
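    The distinction the abstract draws between teaching dimension and teaching size can be illustrated with a toy brute-force sketch. The concept class, the domain, and the size measure (input-string length) below are illustrative assumptions, not the paper's Turing-complete setting.

```python
# Toy illustration: teaching dimension = minimum NUMBER of examples in a
# witness set; teaching size = minimum TOTAL SIZE of a witness set.
# The three hand-made concepts below are hypothetical.
from itertools import combinations

DOMAIN = ["0", "1", "0000000000"]          # inputs of size 1, 1 and 10
CONCEPTS = {
    "target":  {"0": True,  "1": True,  "0000000000": True},
    "other_a": {"0": False, "1": True,  "0000000000": False},
    "other_b": {"0": True,  "1": False, "0000000000": False},
}

def identifies(name, witness):
    """True iff CONCEPTS[name] is the only concept consistent with every labelled example."""
    return all(any(CONCEPTS[other][x] != y for x, y in witness)
               for other in CONCEPTS if other != name)

def teaching_dimension_and_size(name):
    examples = [(x, CONCEPTS[name][x]) for x in DOMAIN]
    dim, best = None, None
    for k in range(1, len(examples) + 1):
        for w in combinations(examples, k):
            if identifies(name, w):
                if dim is None:
                    dim = k                              # smallest cardinality found first
                total = sum(len(x) for x, _ in w)        # total size of the inputs used
                best = total if best is None else min(best, total)
    return dim, best

# The single example ("0000000000", True) identifies "target", so its teaching
# dimension is 1, but that witness has size 10; the pair ("0", True), ("1", True)
# has total size 2, so the teaching size is 2 and is attained by a LARGER witness set.
print(teaching_dimension_and_size("target"))    # -> (1, 2)
```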

    A review of the state of the art in Machine Learning on the Semantic Web: Technical Report CSTR-05-003

    Get PDF

    One-Class Classification: Taxonomy of Study and Review of Techniques

    Full text link
    One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled, or not well defined. This situation constrains the learning of efficient classifiers to defining the class boundary with knowledge of the positive class alone. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper we present a unified view of the general problem of OCC through a taxonomy of study for OCC problems, based on the availability of training data, the algorithms used, and the application domains. We further delve into each category of the proposed taxonomy and present a comprehensive literature review of OCC algorithms, techniques, and methodologies, with a focus on their significance, limitations, and applications. We conclude by discussing some open research problems in the field of OCC and presenting our vision for future research.
    Comment: 24 pages + 11 pages of references, 8 figures
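    As a hedged illustration of the setting this survey covers, the sketch below fits a boundary on positive examples only, using scikit-learn's OneClassSVM as one representative OCC technique; the synthetic data and hyperparameters (nu, gamma) are illustrative choices.

```python
# Minimal one-class classification sketch: train on the positive class only,
# then flag points outside the learned boundary as novelties/outliers.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Training data: samples from the positive class only; no negatives available.
X_pos = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

# nu upper-bounds the fraction of training points allowed outside the boundary.
occ = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_pos)

# At prediction time, +1 marks inliers and -1 marks novelties/outliers.
X_test = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(5, 2)),   # resembles the positive class
    rng.normal(loc=6.0, scale=0.5, size=(5, 2)),   # far from the positive class
])
print(occ.predict(X_test))
```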

    Survey of data mining approaches to user modeling for adaptive hypermedia

    Get PDF
    The ability of an adaptive hypermedia system to create tailored environments depends mainly on the amount and accuracy of information stored in each user model. Some of the difficulties that user modeling faces are the amount of data available to create user models, the adequacy of that data, the noise within it, and the need to capture the imprecise nature of human behavior. Data mining and machine learning techniques can handle large amounts of data and process uncertainty. These characteristics make them suitable for the automatic generation of user models that simulate human decision making. This paper surveys different data mining techniques that can be used to efficiently and accurately capture user behavior. The paper also presents guidelines that show which techniques may be used more efficiently according to the task implemented by the application.
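    One family of techniques such a survey covers can be sketched briefly: clustering per-user interaction features into coarse stereotypes that an adaptive hypermedia system can adapt to. The feature names, values, and cluster count below are illustrative assumptions, not data from the paper.

```python
# Hypothetical sketch: cluster user behaviour features into stereotypes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Per-user features: [pages per session, average seconds per page, searches per session]
sessions = np.array([
    [25, 10, 8],
    [30, 12, 9],
    [5,  90, 0],
    [4, 120, 1],
    [15, 40, 3],
    [14, 35, 4],
], dtype=float)

# Standardize so no single feature scale dominates the clustering.
X = StandardScaler().fit_transform(sessions)

# Each resulting cluster can be mapped to an adaptation strategy
# (e.g. navigation shortcuts vs. guided, sequential presentation).
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(model.labels_)
```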

    Unifying an Introduction to Artificial Intelligence Course through Machine Learning Laboratory Experiences

    Full text link
    This paper presents work on a collaborative project funded by the National Science Foundation that incorporates machine learning as a unifying theme to teach fundamental concepts typically covered in introductory Artificial Intelligence courses. The project involves the development of an adaptable framework for the presentation of core AI topics. This is accomplished through the development, implementation, and testing of a suite of adaptable, hands-on laboratory projects that can be closely integrated into the AI course. Through the design and implementation of learning systems that enhance commonly deployed applications, our model acknowledges that intelligent systems are best taught through their application to challenging problems. The goals of the project are to (1) enhance the student learning experience in the AI course, (2) increase student interest and motivation to learn AI by providing a framework for the presentation of the major AI topics that emphasizes the strong connection between AI and computer science and engineering, and (3) highlight the bridge that machine learning provides between AI technology and modern software engineering.