The foundations of the human cultural niche
abstract: Technological innovations have allowed humans to settle in habitats for which they are poorly suited biologically. However, our understanding of how humans produce complex technologies is limited. We used a computer-based experiment, involving humans and learning bots, to investigate how reasoning abilities, social learning mechanisms and population structure affect the production of virtual artefacts. We found that humans' reasoning abilities play an important role in the production of innovations, but that groups of individuals are able to produce artefacts that are more complex than any isolated individual can produce during the same amount of time. We show that this group-level ability to produce complex innovations is maximized when social information is easy to acquire and when individuals are organized into large and partially connected populations. These results suggest that the transition to behavioural modernity could have been triggered by a change in ancestral between-group interaction patterns. The final version of this article, as published in Nature Communications, can be viewed online at: https://www.nature.com/articles/ncomms939
Supporting learning activities in virtual worlds: methods, tools and evaluation
2011 - 2012
Continuing advances and reduced costs in computational power, graphics and network bandwidth let 3D immersive multi-user Virtual Worlds (VWs) become increasingly accessible while offering an improved and engaging quality of experience.
Excited at the prospects of engaging their Net Generation students, educators worldwide are attempting to exploit the affordances of three-dimensional (3D) VWs. Environments such as Second Life (SL) are increasingly used in education, often for their flexibility in facilitating student-directed, self-paced learning and for their communication features.
Research on the educational value of VWs has revealed their potential as learning platforms. However, further studies are needed to assess their effectiveness, user satisfaction and social engagement, not only in the general didactic use of the environment, but also for each specific learning subject, activity and modality.
A major question in using VWs in education is finding appropriate value-added educational applications.
The main challenge is to determine learning approaches in which learning in a VW presents added value with respect to traditional education, and to effectively utilize the third dimension to avoid using the environment simply as a communication platform.
In addition, educational VW activities have become increasingly sophisticated, evolving from early ones based only on displaying information and teaching resources to simulated laboratories and scenarios. The more complex the learning activities, the greater the challenge of guiding students along their learning trajectories, and the greater the need to provide them with appropriate support and guidance.
The main contributions of this thesis are summarized as follows: (i) we propose an appropriate value-added educational application that supports individual learning activities by effectively exploiting the third dimension. In particular, we adopt a VW to support the learning of engineering practices based on technical drawing. The proposed system, called VirtualHOP, trains students through a learning-by-doing methodology to build the required 3D objects; (ii) we enhance a help system with an avatar appearance and AI to support the exploration of environments and access to distance didactic activities in SL; (iii) we empirically evaluate the didactic value and the user perceptions concerning both the learning setting and the avatar-based virtual assistant. The results demonstrate the usefulness of the didactic experiences offered in SL and a positive attitude of the learners in terms of enjoyment and ease-of-use. [edited by author]
Understanding the behaviour and influence of automated social agents
Soft-bound submitted: Fri 23 Feb 2018
Corrections submitted: Mon 30 Jul 2018
Corrections approved: Tue 7 Aug 2018
Apollo submitted: Wed 22 Aug 2018
Hard-bound submitted: Fri 24 Aug 2018

Online social networks (OSNs) have seen a remarkable rise in the presence of automated social agents, or social bots. Social bots are the new computational virus: surreptitious and clever. What facilitates the creation of social agents is the massive human user-base and business-supportive operating model of social networks. These automated agents are injected by agencies, brands, individuals, and corporations to serve their work and purpose; they are utilised for news and emergency communication, marketing, social activism, political campaigning, and even spam and the spreading of malicious content. Their influence was recently substantiated by coordinated social hacking and computational political propaganda. The thesis of my dissertation argues that automated agents exercise a profound impact on OSNs that transforms into an array of influence on our society and systems. However latent or veiled, these agents can be successfully detected through measurement, feature extraction and finely tuned supervised learning models. The various types of automated agents can be further unravelled through unsupervised machine learning and natural language processing, to formally inform the populace of their existence and impact.

Sep'14-Aug'17, Marie Curie ITN METRICS, Early-Stage Researcher
Sep'17, UMobile, Research Associate
Oct'17-Mar'18, EPSRC Global Challenges Research Fund, Research Associate
Symbiotic interaction between humans and robot swarms
Comprising a potentially large team of autonomous cooperative robots that locally interact and communicate with each other, robot swarms provide a natural diversity of parallel and distributed functionalities, high flexibility, potential for redundancy, and fault-tolerance. The use of autonomous mobile robots is expected to increase in the future, and swarm robotic systems are envisioned to play important roles in tasks such as search and rescue (SAR) missions, transportation of objects, surveillance, and reconnaissance operations. To robustly deploy robot swarms in the field with humans, this research addresses the fundamental problems in the relatively new field of human-swarm interaction (HSI). Four core classes of problems have been addressed for proximal interaction between humans and robot swarms: interaction and communication; swarm-level sensing and classification; swarm coordination; and swarm-level learning. The primary contribution of this research aims to develop a bidirectional human-swarm communication system for non-verbal interaction between humans and heterogeneous robot swarms. The guiding field of application is SAR missions. The core challenges and issues in HSI include: How can human operators interact and communicate with robot swarms? Which interaction modalities can be used by humans? How can human operators instruct and command robots from a swarm? Which mechanisms can be used by robot swarms to convey feedback to human operators? Which types of feedback can swarms convey to humans? In this research, to start answering these questions, hand gestures have been chosen as the interaction modality for humans, since gestures are simple to use, easily recognized, and possess spatial-addressing properties.
To facilitate bidirectional interaction and communication, a dialogue-based interaction system is introduced which consists of: (i) a grammar-based gesture language with a vocabulary of non-verbal commands that allows humans to efficiently provide mission instructions to swarms, and (ii) a swarm coordinated multi-modal feedback language that enables robot swarms to robustly convey swarm-level decisions, status, and intentions to humans using multiple individual and group modalities. The gesture language allows humans to: select and address single and multiple robots from a swarm, provide commands to perform tasks, specify spatial directions and application-specific parameters, and build iconic grammar-based sentences by combining individual gesture commands. Swarms convey different types of multi-modal feedback to humans using on-board lights, sounds, and locally coordinated robot movements. The swarm-to-human feedback conveys to humans the swarm's understanding of the recognized commands, allows swarms to assess their decisions (i.e., to correct mistakes made by humans in providing instructions and errors made by swarms in recognizing commands), and guides humans through the interaction process. The second contribution of this research addresses swarm-level sensing and classification: How can robot swarms collectively sense and recognize hand gestures given as visual signals by humans? Distributed sensing, cooperative recognition, and decision-making mechanisms have been developed to allow robot swarms to collectively recognize visual instructions and commands given by humans in the form of gestures. These mechanisms rely on decentralized data fusion strategies and multi-hop message-passing algorithms to robustly build swarm-level consensus decisions. Measures have been introduced in the cooperative recognition protocol which provide a trade-off between the accuracy of swarm-level consensus decisions and the time taken to build swarm decisions.
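The swarm-level consensus idea described above can be illustrated with a toy majority-vote scheme. This is a minimal sketch under stated assumptions, not the thesis's actual protocol: the ring topology, robot ids, vote labels, and the fixed number of message-passing rounds are all illustrative.

```python
from collections import Counter

def swarm_consensus(local_votes, neighbours, rounds=3):
    """Iteratively replace each robot's vote with the majority vote
    among itself and its communication neighbours (toy consensus)."""
    votes = dict(local_votes)
    for _ in range(rounds):
        updated = {}
        for robot, vote in votes.items():
            # Ballot: the robot's own current vote plus its neighbours' votes.
            ballot = [vote] + [votes[n] for n in neighbours[robot]]
            updated[robot] = Counter(ballot).most_common(1)[0][0]
        votes = updated  # all robots update synchronously each round
    return votes

# Four robots in a ring; robot 2 misrecognised the gesture as "stop".
decisions = swarm_consensus(
    {0: "go", 1: "go", 2: "stop", 3: "go"},
    {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]},
)
```

The point of the sketch is the trade-off mentioned in the text: more rounds of message passing raise the chance of a correct swarm-level decision but increase the time to reach it.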
The third contribution of this research addresses swarm-level cooperation: How can humans select spatially distributed robots from a swarm, and how can the robots understand that they have been selected? How can robot swarms be spatially deployed for proximal interaction with humans? With the introduction of spatially-addressed instructions (pointing gestures), humans can robustly address and select spatially-situated individuals and groups of robots from a swarm. A cascaded classification scheme is adopted in which the robot swarm first identifies the selection command (e.g., individual or group selection), and the robots then coordinate with each other to identify whether they have been selected. To obtain better views of gestures issued by humans, distributed mobility strategies have been introduced for the coordinated deployment of heterogeneous robot swarms (i.e., ground and flying robots) and to reshape the spatial distribution of swarms. The fourth contribution of this research addresses the notion of collective learning in robot swarms. The questions that are answered include: How can robot swarms learn about the hand gestures given by human operators? How can humans be included in the loop of swarm learning? How can robot swarms cooperatively learn as a team? Online incremental learning algorithms have been developed which allow robot swarms to learn individual gestures and grammar-based gesture sentences in real-time, supervised by human instructors. Humans provide different types of feedback (i.e., full or partial feedback) to swarms for improving swarm-level learning. To speed up the learning rate of robot swarms, cooperative learning strategies have been introduced which enable individual robots in a swarm to intelligently select locally sensed information and share (exchange) selected information with other robots in the swarm.
The final contribution is a systemic one: it aims at building a complete HSI system for potential use in real-world applications by integrating the algorithms, techniques, mechanisms, and strategies discussed in the contributions above. The effectiveness of the global HSI system is demonstrated in the context of a number of interactive scenarios using emulation tests (i.e., performing simulations using gesture images acquired by a heterogeneous robotic swarm) and by performing experiments with real ground and flying robots.
Dynamic adversarial mining - effectively applying machine learning in adversarial non-stationary environments.
While the understanding of machine learning and data mining is still in its budding stages, their engineering applications have found immense acceptance and success. Cybersecurity applications such as intrusion detection systems, spam filtering, and CAPTCHA authentication have all begun adopting machine learning as a viable technique to deal with large-scale adversarial activity. However, the naive usage of machine learning in an adversarial setting is prone to reverse engineering and evasion attacks, as most of these techniques were designed primarily for a static setting. The security domain is a dynamic landscape, with an ongoing, never-ending arms race between the system designer and the attackers. Any solution designed for such a domain needs to take into account an active adversary and needs to evolve over time in the face of emerging threats. We term this the "Dynamic Adversarial Mining" problem, and the presented work provides the foundation for this new interdisciplinary area of research, at the crossroads of Machine Learning, Cybersecurity, and Streaming Data Mining. We start with a white-hat analysis of the vulnerabilities of classification systems to exploratory attacks. The proposed "Seed-Explore-Exploit" framework provides characterization and modeling of attacks, ranging from simple random evasion attacks to sophisticated reverse engineering. It is observed that even systems with prediction accuracy close to 100% can be easily evaded with more than 90% precision. This evasion can be performed without any information about the underlying classifier, training dataset, or domain of application. Attacks on machine learning systems cause the data to exhibit non-stationarity (i.e., the training and the testing data have different distributions). It is necessary to detect these changes in distribution, called concept drift, as they could cause the prediction performance of the model to degrade over time.
However, the detection cannot overly rely on labeled data to compute performance explicitly and monitor a drop, as labeling is expensive and time-consuming, and at times may not be possible at all. As such, we propose the "Margin Density Drift Detection (MD3)" algorithm, which can reliably detect concept drift from unlabeled data only. MD3 provides high detection accuracy with a low false alarm rate, making it suitable for cybersecurity applications, where excessive false alarms are expensive and can lead to loss of trust in the warning system. Additionally, MD3 is designed as a classifier-independent, streaming algorithm for usage in a variety of continuous, never-ending learning systems. We then propose a "Dynamic Adversarial Mining" based learning framework for learning in non-stationary and adversarial environments, which provides "security by design". The proposed "Predict-Detect" classifier framework aims to provide robustness against attacks, ease of attack detection using unlabeled data, and swift recovery from attacks. Ideas of feature hiding and obfuscation of feature importance are proposed as strategies to enhance the learning framework's security. Metrics for evaluating the dynamic security of a system and its recoverability after an attack are introduced to provide a practical way of measuring the efficacy of dynamic security strategies. The framework is developed as a streaming data methodology, capable of continually functioning with limited supervision and effectively responding to adversarial dynamics. The developed ideas, methodology, algorithms, and experimental analysis aim to provide a foundation for future work in the area of "Dynamic Adversarial Mining", wherein a holistic approach to machine learning based security is motivated.
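The margin-density intuition behind MD3 can be sketched in a few lines: track the fraction of unlabeled samples that fall inside the classifier's margin, and flag drift when that fraction deviates from its reference value. The toy linear scoring function, the margin width, and the tolerance threshold below are illustrative assumptions, not the published algorithm.

```python
def margin_density(samples, weights, bias, margin=1.0):
    """Fraction of unlabeled samples whose |score| falls inside the margin
    of a linear classifier score = w . x + b (toy stand-in classifier)."""
    inside = sum(
        1 for x in samples
        if abs(sum(w * xi for w, xi in zip(weights, x)) + bias) < margin
    )
    return inside / len(samples)

def drift_signalled(reference_density, current_density, tolerance=0.15):
    """Flag drift when margin density deviates beyond a tolerance band."""
    return abs(current_density - reference_density) > tolerance

# One sample sits deep inside the margin, one far outside it.
density = margin_density([[0.1], [5.0]], weights=[1.0], bias=0.0)
```

Note that no labels appear anywhere in the sketch; only the classifier's scores on incoming data are needed, which is the property that makes the approach attractive when labeling is expensive or unavailable.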
A Transparency Index Framework for Machine Learning powered AI in Education
The increase in the use of AI systems in our daily lives brings calls for more ethical AI development from different sectors, including finance, the judiciary and, to an increasing extent, education. A number of AI ethics checklists and frameworks have been proposed focusing on different dimensions of ethical AI, such as fairness, explainability and safety. However, the abstract nature of these existing ethical AI guidelines often makes them difficult to operationalise in real-world contexts. The inadequacy of the existing situation with respect to ethical guidance is further complicated by the paucity of work to develop transparent machine learning powered AI systems for real-world use. This is particularly true for AI applied in education and training.
In this thesis, a Transparency Index Framework is presented as a tool to foreground the importance of transparency and aid the contextualisation of ethical guidance for the education and training sector. The Transparency Index Framework presented here has been developed in three iterative phases.
In phase one, an extensive literature review of real-world AI development pipelines was conducted. In phase two, an AI-powered tool for use in an educational and training setting was developed. The initial version of the Transparency Index Framework was prepared after phase two. In phase three, a revised version of the Transparency Index Framework was co-designed, integrating learning from phases one and two. The co-design process engaged a range of different AI in education stakeholders, including educators, ed-tech experts and AI practitioners.
The Transparency Index Framework presented in this thesis maps the requirements of transparency for different categories of AI in education stakeholders, and shows how transparency considerations can be ingrained throughout the AI development process, from initial data collection to deployment in the world, including continuing iterative improvements. Transparency is shown to enable the implementation of other ethical AI dimensions, such as interpretability, accountability and safety. The
optimisation of transparency from the perspective of end-users and ed-tech companies who are developing AI systems is discussed, and the importance of conceptualising transparency in developing AI powered ed-tech products is highlighted. In particular, the potential for transparency to bridge the gap between the machine learning and learning science communities is noted, for example through the use of datasheets, model cards and factsheets adapted and contextualised for education through a range of stakeholder perspectives, including educators, ed-tech experts and AI practitioners.
Artificial intelligence is ineffective and potentially harmful for fact checking
Fact checking can be an effective strategy against misinformation, but its
implementation at scale is impeded by the overwhelming volume of information
online. Recent artificial intelligence (AI) language models have shown
impressive ability in fact-checking tasks, but how humans interact with
fact-checking information provided by these models is unclear. Here we
investigate the impact of fact checks generated by a popular AI model on belief
in, and sharing intent of, political news in a preregistered randomized control
experiment. Although the AI performs reasonably well in debunking false
headlines, we find that it does not significantly affect participants' ability
to discern headline accuracy or share accurate news. However, the AI
fact-checker is harmful in specific cases: it decreases beliefs in true
headlines that it mislabels as false and increases beliefs for false headlines
that it is unsure about. On the positive side, the AI increases sharing intents
for correctly labeled true headlines. When participants are given the option to
view AI fact checks and choose to do so, they are significantly more likely to
share both true and false news but only more likely to believe false news. Our
findings highlight an important source of potential harm stemming from AI
applications and underscore the critical need for policies to prevent or
mitigate such unintended consequences.
AVATAR - Machine Learning Pipeline Evaluation Using Surrogate Model
© 2020, The Author(s). The evaluation of machine learning (ML) pipelines is essential during automatic ML pipeline composition and optimisation. Previous methods, such as the Bayesian-based and genetic-based optimisation implemented in Auto-Weka, Auto-sklearn and TPOT, evaluate pipelines by executing them. Therefore, the pipeline composition and optimisation of these methods requires a tremendous amount of time, which prevents them from exploring complex pipelines to find better predictive models. To further explore this research challenge, we have conducted experiments showing that many of the generated pipelines are invalid, and that it is unnecessary to execute them to find out whether they are good pipelines. To address this issue, we propose a novel method to evaluate the validity of ML pipelines using a surrogate model (AVATAR). AVATAR accelerates automatic ML pipeline composition and optimisation by quickly discarding invalid pipelines. Our experiments show that AVATAR is more efficient in evaluating complex pipelines in comparison with traditional evaluation approaches that require their execution.
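The surrogate-evaluation idea can be illustrated as follows: instead of executing a pipeline, propagate abstract data properties through its components and reject the pipeline as soon as a step cannot accept the data it would receive. The component names and property sets below are illustrative assumptions, not the actual AVATAR knowledge base.

```python
# Each component declares which abstract data properties it accepts
# and which it produces (hypothetical table for illustration).
COMPONENTS = {
    "Imputer":     ({"numeric", "missing"}, {"numeric"}),
    "Scaler":      ({"numeric"},            {"numeric"}),
    "OneHotEnc":   ({"categorical"},        {"numeric"}),
    "LinearModel": ({"numeric"},            {"prediction"}),
}

def is_valid(pipeline, input_props):
    """Check pipeline validity without executing it: every step must
    accept all properties of the data it would receive."""
    props = set(input_props)
    for step in pipeline:
        accepts, produces = COMPONENTS[step]
        if not props <= accepts:
            return False  # this step cannot handle the incoming data
        props = set(produces)  # the step transforms the data's properties
    return True

# Data with missing values must pass through the Imputer first.
ok = is_valid(["Imputer", "Scaler", "LinearModel"], {"numeric", "missing"})
```

The check is a constant-time table lookup per step, which is why discarding invalid candidates this way is so much cheaper than fitting each pipeline on real data.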
An investigation of innovation and knowledge creation in virtual worlds
The Internet and World Wide Web have had, and continue to have, an incredible
impact on our civilization. These technologies have radically influenced the way
that society is organised and the manner in which people around the world
communicate and interact. The structure and function of individual, social,
organisational, economic and political life begin to resemble the digital network
architectures upon which they are increasingly reliant. It is increasingly difficult
to imagine how our 'offline' world would look or function without the 'online'
world; it is becoming less meaningful to distinguish between the 'actual' and the
'virtual'. Thus, the major architectural project of the twenty-first century is to
'imagine, build, and enhance an interactive and ever changing cyberspace' (Lévy,
1997, p. 10). Virtual worlds are at the forefront of this evolving digital landscape.
Virtual worlds have 'critical implications for business, education, social sciences,
and our society at large' (Messinger et al., 2009, p. 204).
This study focuses on the possibilities of virtual worlds in terms of
communication, collaboration, innovation and creativity. The concept of
knowledge creation is at the core of this research. The study shows that scholars
increasingly recognise that knowledge creation, as a socially enacted process,
goes to the very heart of innovation. However, efforts to build upon these insights
have struggled to escape the influence of the information processing paradigm of
old and have failed to move beyond the persistent but problematic
conceptualisation of knowledge creation in terms of tacit and explicit knowledge.
Based on these insights, the study leverages extant research to develop the
conceptual apparatus necessary to carry out an investigation of innovation and
knowledge creation in virtual worlds. The study derives and articulates a set of
definitions (of virtual worlds, innovation, knowledge and knowledge creation) to
guide research. The study also leverages a number of extant theories in order to
develop a preliminary framework to model knowledge creation in virtual worlds.
Using a combination of participant observation and six case studies of innovative
educational projects in Second Life, the study yields a range of insights into the
process of knowledge creation in virtual worlds and into the factors that affect it.
The study's contributions to theory are expressed as a series of propositions and
findings and are represented as a revised and empirically grounded theoretical
framework of knowledge creation in virtual worlds. These findings highlight the
importance of prior related knowledge and intrinsic motivation in terms of
shaping and stimulating knowledge creation in virtual worlds. At the same time,
they highlight the importance of meta-knowledge (knowledge about knowledge)
in terms of guiding the knowledge creation process, whilst revealing the diversity
of behavioural approaches actually used to create knowledge in virtual
worlds. This theoretical framework is itself one of the chief contributions of the study
and the analysis explores how it can be used to guide further research in virtual
worlds and on knowledge creation. The study's contributions to practice are
presented as an actionable guide to stimulate knowledge creation in virtual worlds.
This guide utilises a theoretically based classification of four knowledge-creator
archetypes (the sage, the lore master, the artisan, and the apprentice) and derives
an actionable set of behavioural prescriptions for each archetype. The study
concludes with a discussion of its implications for future research.
Tune your brown clustering, please
Brown clustering, an unsupervised hierarchical clustering technique based on n-gram mutual information, has proven useful in many NLP applications. However, most uses of Brown clustering employ the same default configuration; the appropriateness of this configuration has gone predominantly unexplored. Accordingly, we present information for practitioners on the behaviour of Brown clustering in order to assist hyper-parameter tuning, in the form of a theoretical model of Brown clustering utility. This model is then evaluated empirically in two sequence labelling tasks over two text types. We explore the dynamic between the input corpus size, the chosen number of classes, and the quality of the resulting clusters, which has an impact on any approach using Brown clustering. In every scenario that we examine, our results reveal that the values most commonly used for the clustering are sub-optimal.
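The quantity being tuned here is the class-bigram average mutual information (AMI) that Brown clustering greedily maximises. A minimal sketch of that objective, for a given token-to-class assignment on a toy corpus, is shown below; it illustrates the criterion only and is not a full clustering implementation.

```python
import math
from collections import Counter

def class_bigram_ami(tokens, assignment):
    """Average mutual information between adjacent class bigrams,
    the objective Brown clustering greedily maximises (sketch)."""
    pairs = list(zip(tokens, tokens[1:]))
    uni = Counter(assignment[t] for t in tokens)          # class unigram counts
    bi = Counter((assignment[a], assignment[b]) for a, b in pairs)
    n_uni, n_bi = len(tokens), len(pairs)
    ami = 0.0
    for (c1, c2), n in bi.items():
        p_bi = n / n_bi
        p1, p2 = uni[c1] / n_uni, uni[c2] / n_uni
        ami += p_bi * math.log(p_bi / (p1 * p2))          # pointwise MI, weighted
    return ami

# Alternating toy corpus: a two-class split captures the bigram structure,
# while collapsing everything into one class yields zero information.
score = class_bigram_ami(["a", "b", "a", "b"], {"a": 0, "b": 1})
```

The number of classes enters through the assignment: with too few classes, distinct distributional behaviours are merged and AMI drops, which is one way to see why the default class count can be sub-optimal for a given corpus size.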