52 research outputs found
Energetical analysis of two different configurations of a liquid-gas compressed energy storage
To support the spread of renewable energy sources in the Italian electric power market, to promote self-production, and to reduce the mismatch between energy production and consumption, energy storage solutions are gaining ground. At present, small-scale electric storage batteries are a fairly widespread technology, while liquid-piston compressed-air energy storage solutions are used at large scale. The goal of this paper is the development of a numerical model of an environmentally sustainable small-scale storage system that exploits the higher efficiency of liquid pumping to compress air. Two different solutions were analyzed to improve system efficiency and to exploit the heat produced during the gas compression phase. The study was performed with a numerical model implemented in Matlab, analyzing the variation of the thermodynamic parameters during the compression and expansion phases and making an energy assessment of the whole system. The results show a good overall efficiency, making the system competitive with the smallest-size storage batteries.
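The efficiency advantage of near-isothermal compression (which liquid pumping approximates) over fast adiabatic compression can be illustrated with textbook ideal-gas formulas. This is a minimal sketch with assumed pressures and an ideal-gas model, not the paper's Matlab model:

```python
import math

def isothermal_work(p1, p2, V1):
    """Work (J) to compress an ideal gas isothermally from p1 to p2:
    W = p1*V1*ln(p2/p1)."""
    return p1 * V1 * math.log(p2 / p1)

def adiabatic_work(p1, p2, V1, gamma=1.4):
    """Work (J) for a reversible adiabatic compression of an ideal gas:
    W = p1*V1/(gamma-1) * ((p2/p1)**((gamma-1)/gamma) - 1)."""
    return p1 * V1 / (gamma - 1) * ((p2 / p1) ** ((gamma - 1) / gamma) - 1)

# Assumed operating point: 1 bar -> 70 bar, 1 m^3 of air at inlet conditions.
p1, p2, V1 = 1e5, 70e5, 1.0
w_iso = isothermal_work(p1, p2, V1)
w_adi = adiabatic_work(p1, p2, V1)
print(f"isothermal: {w_iso/1e3:.0f} kJ, adiabatic: {w_adi/1e3:.0f} kJ")
```

At this pressure ratio the isothermal path needs roughly 30% less input work, which is the margin a liquid-piston system tries to capture (and why recovering the compression heat matters).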
Performance evaluation of Attribute-Based Encryption on constrained IoT devices
The Internet of Things (IoT) is enabling a new generation of innovative services based on the seamless integration of smart objects into information systems. This raises new security and privacy challenges that require novel cryptographic methods. Attribute-Based Encryption (ABE) is a type of public-key encryption that enforces fine-grained access control on encrypted data based on flexible access policies. The feasibility of ABE adoption in fully-fledged computing systems, e.g., smartphones or embedded systems, has been demonstrated in recent works. In this paper, we consider IoT devices characterized by strong limitations in terms of computing, storage, and power. Specifically, we assess the performance of ABE on typical constrained IoT devices. We evaluate the performance of three representative ABE schemes, configured for the worst-case scenario, on two popular IoT platforms, namely ESP32 and RE-Mote. Our results show that, assuming up to 10 attributes in ciphertexts and leveraging hardware cryptographic acceleration, ABE can indeed be adopted on devices with very limited memory and computing power, while obtaining a satisfactory battery lifetime. In our experiments, as in other works in the literature, we consider only the worst-case configuration, which, however, might not be fully representative of the real working conditions of sensors employing ABE. For this reason, we complete our evaluation by proposing a novel benchmark method that complements the experiments by evaluating the average performance. We show that, by always considering the worst case, the current literature significantly overestimates the processing time and the energy consumption.
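The worst-case-versus-average-case gap the abstract describes can be sketched with a toy benchmark. The linear per-attribute cost model and all constants below are assumptions for illustration, not measurements from the paper:

```python
import random
import statistics

# Hypothetical cost model: ABE encryption time grows roughly linearly with
# the number of attributes in the ciphertext policy. The constants are
# assumed for illustration; real costs depend on scheme and hardware.
T_BASE_MS = 50.0    # fixed overhead per encryption (assumed)
T_ATTR_MS = 120.0   # added cost per attribute (assumed)

def encrypt_time_ms(n_attributes):
    return T_BASE_MS + T_ATTR_MS * n_attributes

# Worst case: every ciphertext carries the maximum of 10 attributes.
worst_case = encrypt_time_ms(10)

# Average case: attribute counts drawn uniformly from 1..10.
random.seed(0)
samples = [encrypt_time_ms(random.randint(1, 10)) for _ in range(10_000)]
average = statistics.mean(samples)

print(f"worst case: {worst_case:.0f} ms, average: {average:.0f} ms, "
      f"overestimation: {worst_case / average:.2f}x")
```

Under this assumed workload the worst-case figure overstates the typical cost by well over 1.5x, which is the kind of overestimation the proposed benchmark method is meant to expose.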
Efficient energy storage in residential buildings integrated with RESHeat system
The Renewable Energy System for Residential Building Heating and Electricity Production (RESHeat) system has been developed for heating and cooling residential buildings. Its main components are a heat pump, photovoltaic modules, sun-tracking solar collectors and photovoltaic/thermal modules, an underground thermal energy storage unit, and a ground heat exchanger. One of the main novelties of the RESHeat system is efficient ground regeneration thanks to the underground energy storage unit. During a heating season, a large amount of heat is taken from the ground. The underground energy storage unit allows the ground's heating capability to be restored and the heat pump's coefficient of performance (COP) to be kept as high as possible over consecutive years. The paper presents an energy analysis for a residential building that serves as a RESHeat demo site, along with the integration of the RESHeat system with the building. The experimentally validated component models were coupled with the building model in TRNSYS software to obtain the system performance. The results show that, thanks to the underground energy storage unit, the yearly average COP of the heat pump is 4.85. The RESHeat system is able to fully cover the heating demand of the building using renewable energy sources and an efficient underground energy storage system.
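A yearly (seasonal) average COP is an energy-weighted ratio, not the arithmetic mean of monthly COPs. A minimal sketch with illustrative numbers (assumed, not measurements from the RESHeat demo site):

```python
# Monthly heat delivered by the heat pump (kWh) and compressor electricity
# used (kWh) -- illustrative values only, not data from the paper.
heat_delivered = [5200, 4800, 4100, 2600, 1200, 900]
electricity_used = [1050, 980, 850, 540, 260, 200]

# COP for each month: useful heat out per unit of electricity in.
monthly_cop = [q / w for q, w in zip(heat_delivered, electricity_used)]

# Seasonal COP: total heat over total electricity (energy-weighted average).
seasonal_cop = sum(heat_delivered) / sum(electricity_used)

print(f"monthly COP: {[round(c, 2) for c in monthly_cop]}")
print(f"seasonal COP: {seasonal_cop:.2f}")
```

Weighting by energy matters because cold months dominate both heat demand and electricity use; a simple mean of monthly COPs would overweight the mild shoulder months.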
Comparison of different heating generator systems to reduce energy consumption in social housing in a Mediterranean climate
This study analyses the energy consumption of a social housing building constructed in the 1980s. This building typology deteriorates over time, with increasing energy consumption for air conditioning and indoor comfort well below the standard. The typology is also widespread in the city's building stock, especially in its suburbs. Thus, the energy efficiency of public social housing is a major concern on the Italian national scene, and its improvement is an effort of critical importance. However, public funding is significantly reduced compared to the past. In addition, it is often difficult to act on passive systems, for example by installing thermal insulation or replacing terminal units inside apartments. In these cases, as an energy retrofit, it may be appropriate to evaluate the possibility of preserving as much of the existing distribution and supply system as possible, while modifying the thermal energy generation system. In general, where the boiler is not obsolete, the idea is to propose a hybrid generation system with the inclusion of a heat pump (HHP), which could be complemented with renewable energy equipment properly installed in the building. The main goal of the present work was to evaluate, through dynamic analysis, different HVAC scenarios in order to identify the optimal system configuration for residential use. The results show that a hybrid system can lower primary energy consumption by up to 28%, thus enabling the use of renewable energy within the social housing building stock.
Planning Machine Activity Between Manufacturing Operations: Maintaining Accuracy While Reducing Energy Consumption
There has recently been an increased emphasis on reducing energy consumption in manufacturing, largely because fluctuations in energy costs cause uncertainty. Increased competition between manufacturers means that even a slight change in energy consumption can affect their profit margin or the competitiveness of their quotes. Furthermore, there is a drive from policy-makers to audit the environmental impact of manufactured goods from cradle to grave. The understanding, and potential reduction, of machine tool energy consumption has therefore received significant interest, as machine tools require large amounts of energy to perform either subtractive or additive manufacturing tasks.
One area that has received relatively little interest, yet could harness great potential, is reducing energy consumption by optimally planning machine activities while the machine is not in operation. The intuitive option is to turn off all non-essential energy-consuming processes. However, manufacturing processes such as milling often release large amounts of heat into the machine's structure, causing the machine tool's actual cutting position to deviate from the commanded one, a phenomenon known as thermal deformation. A rapid change in temperature can increase the deformation, which can deteriorate the machine's manufacturing capability, potentially producing scrap parts with the associated commercial and environmental repercussions. It is therefore necessary to consider the relationship between energy consumption, thermal deformation, machining accuracy and time when planning the machine's activity while it is idle or about to resume machining.
In this paper, we investigate the applicability of automated planning techniques for planning machine activities between subtractive manufacturing operations, while remaining sufficiently general to be extended to additive processes. The aim is to reduce energy consumption while maintaining machine accuracy. Specifically, a novel domain model is presented in which the machine's energy consumption, thermal stability, and their relationship to the machine's overall accuracy are encoded. Experimental analysis then demonstrates the effectiveness of the proposed approach using a case study based on real-world data.
Optimized energy and air quality management of shared smart buildings in the COVID-19 scenario
Worldwide increasing awareness of energy sustainability issues has been the main driver in developing the concept of (Nearly) Zero Energy Buildings, whose reduced energy consumption is (nearly) fully covered by power locally generated from renewable sources. At the same time, recent advances in Internet of Things technologies are among the main enablers of Smart Homes and Buildings. The transition of conventional buildings into active environments that process, elaborate and react to online measured environmental quantities is being accelerated by COVID-19-related concerns, most notably air exchange and the monitoring of occupant density. In this paper, we address the problem of maximizing energy efficiency and the comfort perceived by occupants, defined in terms of thermal comfort, visual comfort and air quality. The case study of the University of Pisa is considered as a practical example to show preliminary results of the aggregation of environmental data.
Possible use of Digital Variance Angiography in Liver Transarterial Chemoembolization: A Retrospective Observational Study
Purpose: Digital variance angiography (DVA), a recently developed image processing technology, provided a higher contrast-to-noise ratio (CNR) and better image quality (IQ) than digital subtraction angiography (DSA) during lower limb interventions. Our aim was to investigate whether this quality improvement can also be observed during liver transarterial chemoembolization (TACE). Materials and Methods: We retrospectively compared the CNR and IQ parameters of DSA and DVA images from 25 patients (65% male, mean +/- SD age: 67.5 +/- 11.2 years) who underwent TACE at our institute. CNR was calculated on 50 images. The IQ of every image set was evaluated by 5 experts using 4-grade Likert scales. Both single-image evaluation and paired image comparison were performed in a blinded and randomized manner. The diagnostic value was evaluated based on the possibility of identifying lesions and feeding arteries. Results: DVA provided a significantly higher CNR (the mean CNR_DVA/CNR_DSA ratio was 1.33). DVA images received significantly higher individual Likert scores (mean +/- SEM 3.34 +/- 0.08 vs. 2.89 +/- 0.11, Wilcoxon signed-rank p < 0.001) and proved superior also in paired comparisons (median comparison score 1.60 [IQR: 2.40], one-sample Wilcoxon p < 0.001 against equal quality). DSA could not detect the lesion and the feeding artery in 28% and 36% of cases, and allowed clear detection in only 22% and 16%, respectively. In contrast, DVA failed in only 8% and 18% and clearly revealed lesions and feeding arteries in 32% and 26%, respectively. Conclusion: In our study, DVA provided higher-quality images and better diagnostic insight than DSA; therefore, DVA could represent a useful tool in liver TACE interventions.
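A common way to compute the contrast-to-noise ratio compared here is the difference between mean signal and mean background intensity, divided by the background noise. CNR definitions vary between studies, and the pixel values below are hypothetical, not taken from the study's images:

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio of an image region:
    |mean(signal) - mean(background)| / std(background).
    One common definition; the study may use a variant."""
    mu_s = statistics.mean(signal_roi)
    mu_b = statistics.mean(background_roi)
    sigma_b = statistics.stdev(background_roi)
    return abs(mu_s - mu_b) / sigma_b

# Toy pixel intensities for a contrast-filled vessel ROI and a background ROI.
vessel = [180, 175, 190, 185, 170]
background = [100, 105, 95, 102, 98]
print(f"CNR = {cnr(vessel, background):.1f}")
```

Because the same anatomy is imaged by both methods, a ratio such as CNR_DVA/CNR_DSA cancels patient-dependent contrast and isolates the processing method's effect, which is why the paper reports a mean ratio (1.33) rather than absolute CNR values.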
Automated Planning Techniques for Robot Manipulation Tasks Involving Articulated Objects
The goal-oriented manipulation of articulated objects plays an important role in real-world robot tasks. Current approaches typically make a number of simplifying assumptions to reason about how to reach an articulated object's goal configuration, and exploit ad hoc algorithms. The consequence is twofold: firstly, it is difficult to generalise the obtained solutions (in terms of actions a robot can execute) to different target object configurations and, more broadly, to objects with different physical characteristics; secondly, the representation and the reasoning layers are tightly coupled and inter-dependent.
In this paper we investigate the use of automated planning techniques for dealing with articulated object manipulation tasks. Such techniques allow for a clear separation between knowledge and reasoning, as advocated in Knowledge Engineering. We introduce two PDDL formulations of the task, which rely on conceptually different representations of the orientation of the objects. Experiments involving several planners and objects of increasing size demonstrate the effectiveness of the proposed models, and confirm their exploitability when embedded in a real-world robot software architecture.
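The knowledge/reasoning separation described above can be sketched with a toy planner: a declarative action model (knowledge) and a generic search procedure (reasoning) that knows nothing about articulated objects. The encoding of the object as link orientations in 90-degree steps is an assumption for illustration, not the paper's PDDL formulation:

```python
from collections import deque

N_LINKS = 3  # assumed number of links in the articulated object

def actions(state):
    """Declarative action model: rotate any single link by +/- 90 degrees.
    States are tuples of link orientations in quarter-turn units (0..3)."""
    for i in range(N_LINKS):
        for delta in (+1, -1):
            new = list(state)
            new[i] = (new[i] + delta) % 4
            yield f"rotate(link{i}, {delta * 90})", tuple(new)

def plan(initial, goal):
    """Generic breadth-first planner: shortest action sequence, or None.
    It only queries the action model; it encodes no domain knowledge."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, succ in actions(state):
            if succ not in visited:
                visited.add(succ)
                frontier.append((succ, steps + [name]))
    return None

p = plan((0, 0, 0), (1, 0, 3))
print(p)
```

Swapping in a different object (more links, different joint limits) only changes the action model, while the planner stays untouched; that decoupling is what the PDDL formulations provide at scale.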
Evaluation in artificial intelligence: From task-oriented to ability-oriented measurement
The final publication is available at Springer via http://dx.doi.org/10.1007/s10462-016-9505-7.
The evaluation of artificial intelligence systems and components is crucial for the progress of the discipline. In this paper we describe and critically assess the different ways AI systems are evaluated, and the role of components and techniques in these systems. We first focus on the traditional task-oriented evaluation approach. We identify three kinds of evaluation: human discrimination, problem benchmarks and peer confrontation. We describe some of the limitations of the many evaluation schemes and competitions in these three categories, and follow the progression of some of these tests. We then focus on a less customary (and challenging) ability-oriented evaluation approach, where a system is characterised by its (cognitive) abilities, rather than by the tasks it is designed to solve. We discuss several possibilities: the adaptation of cognitive tests used for humans and animals, the development of tests derived from algorithmic information theory or more integrated approaches under the perspective of universal psychometrics. We analyse some evaluation tests from AI that are better positioned for an ability-oriented evaluation and discuss how their problems and limitations can possibly be addressed with some of the tools and ideas that appear within the paper. Finally, we enumerate a series of lessons learnt and generic guidelines to be used when an AI evaluation scheme is under consideration.
- …