494 research outputs found

    Towards adaptive anomaly detection systems using boolean combination of hidden Markov models

    Anomaly detection monitors for significant deviations from normal system behavior. Hidden Markov Models (HMMs) have been successfully applied in many intrusion detection applications, including anomaly detection from sequences of operating system calls. In practice, anomaly detection systems (ADSs) based on HMMs typically generate false alarms because they are designed using limited representative training data and prior knowledge. However, since new data may become available over time, an important feature of an ADS is the ability to accommodate newly acquired data incrementally, after it has been trained and deployed for operations. Incremental re-estimation of HMM parameters raises several challenges. HMM parameters should be updated from new data without requiring access to the previously learned training data, and without corrupting previously learned models of normal behavior. Standard techniques for training HMM parameters involve iterative batch learning, and hence must observe the entire training data prior to updating HMM parameters. Given new training data, these techniques must restart the training procedure using all (new and previously accumulated) data. Moreover, a single HMM system for incremental learning may not adequately approximate the underlying data distribution of the normal process, due to the many local maxima in the solution space. Ensemble methods have been shown to alleviate knowledge corruption by combining the outputs of classifiers trained independently on successive blocks of data. This thesis makes contributions at the HMM and decision levels towards improved accuracy, efficiency and adaptability of HMM-based ADSs. It first presents a survey of techniques found in the literature that may be suitable for incremental learning of HMM parameters, and assesses the challenges these techniques face in incremental learning scenarios in which the new training data is either limited or abundant. An efficient alternative to the Forward-Backward algorithm is then proposed to reduce the memory complexity of HMM parameter estimation from fixed-size abundant data without increasing the computational overhead. Improved techniques for incremental learning of HMM parameters are then proposed to accommodate new data over time, while maintaining a high level of performance. However, knowledge corruption caused by a single HMM with a fixed number of states remains an issue. To overcome such limitations, this thesis presents an efficient system that accommodates new data using a learn-and-combine approach at the decision level. When a new block of training data becomes available, a new pool of base HMMs is generated from the data using different numbers of HMM states and random initializations. The responses of the newly trained HMMs are then combined with those of the previously trained HMMs in receiver operating characteristic (ROC) space using novel Boolean combination (BC) techniques. The learn-and-combine approach makes it possible to select a diversified ensemble of HMMs (EoHMMs) from the pool, adapts the Boolean fusion functions and thresholds for improved performance, and prunes redundant base HMMs. The proposed system is capable of changing its desired operating point during operations, and this point can be adjusted to changes in prior probabilities and costs of errors.
    During simulations of incremental learning from successive data blocks, using both synthetic and real-world system-call data sets, the proposed learn-and-combine approach has been shown to achieve a higher level of accuracy than all related techniques. In particular, it sustains a significantly higher level of accuracy than re-estimating the parameters of a single best HMM for each new block of data, whether with the reference batch learning technique or with the proposed incremental learning techniques. It also outperforms static fusion techniques such as majority voting for combining the responses of new and previously generated pools of HMMs. Ensemble selection techniques have been shown to form compact EoHMMs for operations by selecting diverse and accurate base HMMs from the pool, while maintaining or improving overall system accuracy. Pruning has been shown to prevent pool sizes from increasing indefinitely with the number of data blocks acquired over time. The storage space required for HMM parameters and the computational cost of the selection techniques are therefore reduced, without negatively affecting overall system performance. The proposed techniques are general in that they can be employed to adapt HMM-based systems to new data within a wide range of application domains. More importantly, the proposed Boolean combination techniques can be employed to combine diverse responses from any set of crisp or soft one- or two-class classifiers trained on different data or features, trained with different parameters, or from different detectors trained on the same data. In particular, they can be applied effectively when training data is limited and test data is imbalanced.
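    A minimal Python sketch of the decision-level idea behind Boolean combination (not the thesis's exact algorithm): crisp responses from two detectors are fused with Boolean functions, and each fusion yields a new point in ROC space that may dominate the points of the individual detectors. All scores, thresholds and names below are invented for illustration.

```python
import numpy as np

def roc_point(decisions, labels):
    """Return (false-positive rate, true-positive rate) for crisp 0/1 decisions."""
    decisions, labels = np.asarray(decisions), np.asarray(labels)
    fpr = decisions[labels == 0].mean()
    tpr = decisions[labels == 1].mean()
    return fpr, tpr

# Illustrative anomaly scores from two independently trained detectors (synthetic data).
rng = np.random.default_rng(0)
labels = np.r_[np.zeros(500, dtype=int), np.ones(500, dtype=int)]  # 0 = normal, 1 = anomalous
scores_a = rng.normal(loc=labels * 1.0, scale=1.0)
scores_b = rng.normal(loc=labels * 0.8, scale=1.0)

# Crisp responses obtained by thresholding each detector's score.
resp_a = (scores_a > 0.5).astype(int)
resp_b = (scores_b > 0.4).astype(int)

# Boolean combination of the two responses: each Boolean function gives a new ROC point.
for name, combined in [("A AND B", resp_a & resp_b),
                       ("A OR B ", resp_a | resp_b),
                       ("A alone", resp_a),
                       ("B alone", resp_b)]:
    fpr, tpr = roc_point(combined, labels)
    print(f"{name}  FPR={fpr:.3f}  TPR={tpr:.3f}")
```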

    Domain Adaptation Techniques for Machine Translation and Their Evaluation in a Real-World Setting

    Statistical Machine Translation (SMT) is currently used in real-time and commercial settings to quickly produce initial translations for a document, which can later be edited by a human. SMT models specialized for one domain often perform poorly when applied to other domains: the typical assumption that both training and testing data are drawn from the same distribution no longer applies. This paper evaluates domain adaptation techniques for SMT systems in the context of end-user feedback in a real-world application. We present our experiments using two adaptive techniques, one relying on log-linear models and the other using mixture models. We describe our experimental results on legal and government data, and present the human evaluation effort for post-editing in addition to traditional automated scoring techniques (BLEU scores). The human effort is based primarily on the amount of time and the number of edits required by a professional post-editor to improve the quality of machine-generated translations to meet industry standards. The experimental results in this paper show that the domain adaptation techniques can yield a significant increase in BLEU score (up to four points) and a significant reduction in post-editing time of about one second per word.
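    As a rough sketch of the mixture-model flavour of adaptation mentioned above (not the authors' actual SMT system), the snippet below linearly interpolates an in-domain and a general translation model; the phrase tables, probabilities and interpolation weight are invented for illustration.

```python
# Minimal sketch of mixture-model adaptation: interpolate an in-domain and a
# general phrase-translation model with a weight lam tuned on in-domain data.
# All phrases and probabilities below are invented for illustration.
general_model = {("contract", "contrat"): 0.6, ("act", "loi"): 0.2}
in_domain_model = {("contract", "contrat"): 0.8, ("act", "loi"): 0.7}

def mixture_prob(src_tgt, lam=0.7):
    """P(tgt | src) as a convex combination of the two component models."""
    p_in = in_domain_model.get(src_tgt, 0.0)
    p_gen = general_model.get(src_tgt, 0.0)
    return lam * p_in + (1.0 - lam) * p_gen

print(mixture_prob(("act", "loi")))  # 0.55 with lam = 0.7
```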

    ROC curves for regression

    NOTICE: this is the author’s version of a work that was accepted for publication in Pattern Recognition. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition, Volume 46, Issue 12, December 2013, Pages 3395–3411, DOI: 10.1016/j.patcog.2013.06.014.
    Receiver Operating Characteristic (ROC) analysis is one of the most popular tools for the visual assessment and understanding of classifier performance. In this paper we present a new representation of regression models in the so-called regression ROC (RROC) space. The basic idea is to represent over-estimation against under-estimation. The curves are drawn by adjusting a shift, a constant that is added to (or subtracted from) the predictions and plays a role similar to that of a threshold in classification. From here, we develop the notions of optimal operating condition, convexity and dominance, and explore several evaluation metrics that can be shown graphically, such as the area over the RROC curve (AOC). In particular, we show a novel and significant result: the AOC is equivalent to the error variance. We illustrate the application of RROC curves to resource estimation, namely the estimation of software project effort.
    I would like to thank Peter Flach and Nicolas Lachiche for some very useful comments and corrections on earlier versions of this paper, especially the suggestion of drawing normalised curves (dividing the x-axis and y-axis by n). This work was supported by the MEC/MINECO projects CONSOLIDER-INGENIO CSD2007-00022 and TIN 2010-21062-C02-02, GVA project Prometeo/2008/051, the COST - European Cooperation in the field of Scientific and Technical Research IC0801 AT, and the REFRAME project granted by the European Coordinated Research on Long-term Challenges in Information and Communication Sciences & Technologies ERA-Net (CHIST-ERA) and funded by the respective national research councils and ministries.
    Hernández-Orallo, J. (2013). ROC curves for regression. Pattern Recognition, 46(12), 3395–3411. https://doi.org/10.1016/j.patcog.2013.06.014
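    A small sketch, under the paper's stated construction, of how RROC points can be computed: a shift added to the predictions is swept, and total over-estimation (x-axis) is plotted against total under-estimation (y-axis). The data and helper names are illustrative, not taken from the paper.

```python
import numpy as np

def rroc_points(y_true, y_pred, shifts):
    """For each shift s, return (total over-estimation, total under-estimation)
    of the shifted predictions, i.e. the coordinates of one RROC curve point."""
    points = []
    for s in shifts:
        err = (y_pred + s) - y_true
        over = err[err > 0].sum()    # x-axis: OVER (>= 0)
        under = err[err < 0].sum()   # y-axis: UNDER (<= 0)
        points.append((over, under))
    return points

# Synthetic regression example (illustrative only).
rng = np.random.default_rng(1)
y_true = rng.normal(size=200)
y_pred = y_true + rng.normal(scale=0.5, size=200)  # an imperfect regressor

shifts = np.linspace(-3, 3, 61)
for over, under in rroc_points(y_true, y_pred, shifts)[::20]:
    print(f"OVER={over:8.2f}  UNDER={under:8.2f}")
```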

    Novel Approach for Modeling Wireless Fading Channels using a Finite State Markov Chain

    Empirical modeling of wireless fading channels using common schemes such as autoregression and the finite-state Markov chain (FSMC) is investigated. The conceptual background of both channel structures and the establishment of their mutual dependence in a confined manner are presented. The novel contribution lies in the proposal of a new approach, borrowed from the economics literature, for deriving the state transition probabilities; this approach has not previously been studied for the modeling of FSMC wireless fading channels. The proposed approach is based on equal partitioning of the received signal-to-noise ratio, realized by using an alternative probability construction that was initially highlighted by Tauchen. The associated statistical procedure shows that a first-order FSMC with a limited number of channel states can satisfactorily approximate fading. The computational overheads of the proposed technique are analyzed and proven to be less demanding than those of the conventional FSMC approach based on the level-crossing rate. Simulations confirm the analytical results and the promising performance of the new channel model based on the Tauchen approach, without extra complexity costs.
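    A hedged sketch of a Tauchen-style discretization such as the one referenced above: an AR(1) process (standing in for the correlated SNR/fading process) is mapped onto a finite, equally spaced state grid, and transition probabilities are obtained from the Gaussian innovation CDF. The function name, parameters and use of SciPy are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def tauchen(n_states, rho, sigma, m=3.0):
    """Discretize an AR(1) process x' = rho*x + eps, eps ~ N(0, sigma^2),
    into n_states equally spaced states (Tauchen's construction). Returns
    the state grid and the row-stochastic transition matrix."""
    std_x = sigma / np.sqrt(1.0 - rho**2)              # stationary std of the AR(1)
    grid = np.linspace(-m * std_x, m * std_x, n_states)
    step = grid[1] - grid[0]
    P = np.zeros((n_states, n_states))
    for i, x in enumerate(grid):
        z = (grid - rho * x) / sigma                   # standardized distances to each state
        P[i, 0] = norm.cdf(z[0] + step / (2 * sigma))
        P[i, -1] = 1.0 - norm.cdf(z[-1] - step / (2 * sigma))
        for j in range(1, n_states - 1):
            P[i, j] = norm.cdf(z[j] + step / (2 * sigma)) - norm.cdf(z[j] - step / (2 * sigma))
    return grid, P

grid, P = tauchen(n_states=8, rho=0.9, sigma=0.2)
print(np.round(P, 3))  # each row sums to 1, i.e. a valid FSMC transition matrix
```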

    Evaluation in artificial intelligence: From task-oriented to ability-oriented measurement

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10462-016-9505-7.
    The evaluation of artificial intelligence systems and components is crucial for the progress of the discipline. In this paper we describe and critically assess the different ways AI systems are evaluated, and the role of components and techniques in these systems. We first focus on the traditional task-oriented evaluation approach. We identify three kinds of evaluation: human discrimination, problem benchmarks and peer confrontation. We describe some of the limitations of the many evaluation schemes and competitions in these three categories, and follow the progression of some of these tests. We then focus on a less customary (and challenging) ability-oriented evaluation approach, where a system is characterised by its (cognitive) abilities rather than by the tasks it is designed to solve. We discuss several possibilities: the adaptation of cognitive tests used for humans and animals, the development of tests derived from algorithmic information theory, and more integrated approaches under the perspective of universal psychometrics. We analyse some evaluation tests from AI that are better positioned for an ability-oriented evaluation and discuss how their problems and limitations could be addressed with some of the tools and ideas that appear within the paper. Finally, we enumerate a series of lessons learnt and generic guidelines to be used when an AI evaluation scheme is under consideration.
    I thank the organisers of the AEPIA Summer School On Artificial Intelligence, held in September 2014, for giving me the opportunity to give a lecture on 'AI Evaluation'. This paper was born out of and evolved through that lecture. The information about many benchmarks and competitions discussed in this paper has been contrasted with information from and discussions with many people: M. Bedia, A. Cangelosi, C. Dimitrakakis, I. García-Varea, Katja Hofmann, W. Langdon, E. Messina, S. Mueller, M. Siebers and C. Soares. Figure 4 is courtesy of F. Martínez-Plumed. Finally, I thank the anonymous reviewers, whose comments have helped to significantly improve the balance and coverage of the paper. This work has been partially supported by the EU (FEDER) and the Spanish MINECO under Grants TIN 2013-45732-C4-1-P and TIN 2015-69175-C4-1-R, and by Generalitat Valenciana PROMETEOII/2015/013.
    Hernández-Orallo, J. (2016). Evaluation in artificial intelligence: From task-oriented to ability-oriented measurement. Artificial Intelligence Review, 1–51. https://doi.org/10.1007/s10462-016-9505-7

    Test of lepton universality in $b \rightarrow s \ell^+ \ell^-$ decays

    The first simultaneous test of muon-electron universality using $B^{+}\rightarrow K^{+}\ell^{+}\ell^{-}$ and $B^{0}\rightarrow K^{*0}\ell^{+}\ell^{-}$ decays is performed, in two ranges of the dilepton invariant-mass squared, $q^{2}$. The analysis uses beauty mesons produced in proton-proton collisions collected with the LHCb detector between 2011 and 2018, corresponding to an integrated luminosity of 9 $\mathrm{fb}^{-1}$. Each of the four lepton universality measurements reported is either the first in the given $q^{2}$ interval or supersedes previous LHCb measurements. The results are compatible with the predictions of the Standard Model.
    Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-046.html (LHCb public pages).

    Observation of Cabibbo-suppressed two-body hadronic decays and precision mass measurement of the $\Omega_{c}^{0}$ baryon

    The first observation of the singly Cabibbo-suppressed $\Omega_{c}^{0}\to\Omega^{-}K^{+}$ and $\Omega_{c}^{0}\to\Xi^{-}\pi^{+}$ decays is reported, using proton-proton collision data at a centre-of-mass energy of $13\,{\rm TeV}$, corresponding to an integrated luminosity of $5.4\,{\rm fb}^{-1}$, collected with the LHCb detector between 2016 and 2018. The branching fraction ratios are measured to be $\frac{\mathcal{B}(\Omega_{c}^{0}\to\Omega^{-}K^{+})}{\mathcal{B}(\Omega_{c}^{0}\to\Omega^{-}\pi^{+})}=0.0608\pm0.0051({\rm stat})\pm 0.0040({\rm syst})$ and $\frac{\mathcal{B}(\Omega_{c}^{0}\to\Xi^{-}\pi^{+})}{\mathcal{B}(\Omega_{c}^{0}\to\Omega^{-}\pi^{+})}=0.1581\pm0.0087({\rm stat})\pm0.0043({\rm syst})\pm0.0016({\rm ext})$. In addition, using the $\Omega_{c}^{0}\to\Omega^{-}\pi^{+}$ decay channel, the $\Omega_{c}^{0}$ baryon mass is measured to be $M(\Omega_{c}^{0})=2695.28\pm0.07({\rm stat})\pm0.27({\rm syst})\pm0.30({\rm ext})\,{\rm MeV}/c^{2}$, improving the precision of the previous world average by a factor of four.
    Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2023-011.html (LHCb public pages).

    Precision measurement of $CP$ violation in the penguin-mediated decay $B_s^{0}\rightarrow\phi\phi$

    A flavor-tagged time-dependent angular analysis of the decay $B_s^{0}\rightarrow\phi\phi$ is performed using $pp$ collision data collected by the LHCb experiment at a center-of-mass energy of $\sqrt{s}=13$ TeV, corresponding to an integrated luminosity of 6 $\mathrm{fb}^{-1}$. The $CP$-violating phase and direct $CP$-violation parameter are measured to be $\phi_{s\bar{s}s} = -0.042 \pm 0.075 \pm 0.009$ rad and $|\lambda|=1.004\pm 0.030 \pm 0.009$, respectively, assuming the same values for all polarization states of the $\phi\phi$ system. In these results, the first uncertainties are statistical and the second systematic. These parameters are also determined separately for each polarization state, showing no evidence for polarization dependence. The results are combined with previous LHCb measurements using $pp$ collisions at center-of-mass energies of 7 and 8 TeV, yielding $\phi_{s\bar{s}s} = -0.074 \pm 0.069$ rad and $|\lambda|=1.009 \pm 0.030$. This is the most precise study of time-dependent $CP$ violation in a penguin-dominated $B$ meson decay. The results are consistent with $CP$ symmetry and with the Standard Model predictions.
    Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2023-001.html (LHCb public pages).

    Measurement of $Z$ boson production cross-section in $pp$ collisions at $\sqrt{s} = 5.02$ TeV

    The first measurement of the $Z$ boson production cross-section at a centre-of-mass energy of $\sqrt{s} = 5.02$ TeV in the forward region is reported, using $pp$ collision data collected by the LHCb experiment in 2017, corresponding to an integrated luminosity of $100 \pm 2\,\rm{pb^{-1}}$. The production cross-section is measured for final-state muons in the pseudorapidity range $2.0<\eta<4.5$ with transverse momentum above $20\,{\rm GeV/}c$. The integrated cross-section is determined to be $\sigma_{Z \rightarrow \mu^{+}\mu^{-}} = 39.6 \pm 0.7\,(\rm{stat}) \pm 0.6\,(\rm{syst}) \pm 0.8\,(\rm{lumi})$ pb for di-muon invariant mass in the range $60<M_{\mu\mu}<120\,{\rm GeV/}c^{2}$. This result and the differential cross-section results are in good agreement with theoretical predictions at next-to-next-to-leading order in the strong coupling. Based on a previous LHCb measurement of the $Z$ boson production cross-section in $p{\rm Pb}$ collisions at $\sqrt{s_{NN}}=5.02$ TeV, the nuclear modification factor $R_{p\rm{Pb}}$ is measured for the first time at this energy. The measured values are $1.2^{+0.5}_{-0.3}\,(\rm{stat}) \pm 0.1\,(\rm{syst})$ in the forward region ($1.53<y^*_{\mu}<4.03$) and $3.6^{+1.6}_{-0.9}\,(\rm{stat}) \pm 0.2\,(\rm{syst})$ in the backward region ($-4.97<y^*_{\mu}<-2.47$), where $y^*_{\mu}$ denotes the muon rapidity in the centre-of-mass frame.
    Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2023-010.html (LHCb public pages).

    Study of charmonium decays to $K^0_S K \pi$ in the $B \to (K^0_S K \pi) K$ channels

    A study of the $B^+\to K^0_SK^+K^-\pi^+$ and $B^+\to K^0_SK^+K^+\pi^-$ decays is performed using proton-proton collisions at center-of-mass energies of 7, 8 and 13 TeV at the LHCb experiment. The $K^0_SK\pi$ invariant mass spectra from both decay modes reveal a rich content of charmonium resonances. New precise measurements of the $\eta_c$ and $\eta_c(2S)$ resonance parameters are performed, and branching fraction measurements are obtained for $B^+$ decays to the $\eta_c$, $J/\psi$, $\eta_c(2S)$ and $\chi_{c1}$ resonances. In particular, the first observation and branching fraction measurement of $B^+ \to \chi_{c0} K^0 \pi^+$ is reported, as well as first measurements of the $B^+\to K^0K^+K^-\pi^+$ and $B^+\to K^0K^+K^+\pi^-$ branching fractions. Dalitz plot analyses of the $\eta_c \to K^0_SK\pi$ and $\eta_c(2S) \to K^0_SK\pi$ decays are performed. A new measurement of the amplitude and phase of the $K\pi$ $S$-wave as functions of the $K\pi$ mass is performed, together with measurements of the $K^*_0(1430)$, $K^*_0(1950)$ and $a_0(1700)$ parameters. Finally, the branching fractions of $\chi_{c1}$ decays to $K^*$ resonances are also measured.
    Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-051.html (LHCb public pages).