
    Alternative Metrics for the Evaluation of Scholarly Activities: An Analysis of Articles Authored by Greek Researchers

    Recently, altmetrics have emerged as alternative means of measuring scholarly impact, aiming to improve and complement both traditional and web-based metrics. The present study contributes to the altmetrics literature by providing an overview of the coverage of altmetrics sources for Aristotle University of Thessaloniki (AUTh) publications. We used Scopus to collect all research articles listing AUTh as the affiliation of at least one author and published from 2010 to 2016; the altmetric data originated from Altmetric Explorer, a service provided by Altmetric.com. Only 17% of all publications retrieved from Scopus had mentions of some kind, although there was a clear increasing trend over the years. The presence of altmetrics differed across Altmetric.com attention sources: around 81% of all mentions came from Twitter, Facebook was a distant second, followed by news outlets, and all other sources had very low or negligible coverage. The overwhelming majority of tweets had been posted by members of the public, who do not link to scholarly literature. Medical Sciences had by far the highest number of publications with altmetric scores, followed, at a distance, by Sciences; however, Arts, Humanities and Social Sciences publications also exhibited significant altmetric activity. More research is needed to gain better insight into the altmetric landscape in Greece and to develop an understanding of the kind of influence altmetrics measure and of the relationship, if any, between altmetric indicators and scientific impact.

    Application of Neural Networks for Intelligent Video Game Character Artificial Intelligences

    Much of today’s gaming culture pushes for increased realism and believability. While this movement has led to much more realistic graphics, the behavior of a game’s artificial intelligence characters must be kept in mind as well. Neural networks are complex data structures that have shown the potential to learn and interpret sophisticated behavior. This research analyzes the application of neural networks as primary controllers for video game characters’ artificial intelligence. The paper considers existing artificial intelligence techniques and existing uses of neural networks, then describes a project in which a video game was created to serve as a case study for analyzing neural-network-controlled game characters. The paper discusses the design of the game, in which the player communicates with an artificial intelligence astronaut; the way the player phrases the messages sent to the astronaut helps determine whether the astronaut survives. The paper then analyzes how effectively the astronaut character exhibits intelligent behavior, and discusses potential future work in better demonstrating effective neural-network-controlled artificial intelligences.
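
    As a rough illustration of the approach described above, here is a minimal sketch (not the paper's implementation) of a feedforward network acting as a character controller: hand-crafted features of the player's message go in, an action for the astronaut comes out. The feature set, the action names, and the untrained random weights are all hypothetical.

```python
# Minimal sketch: a tiny feedforward network as a game-character
# controller. Features, actions, and weights are illustrative only;
# a real agent would train these weights on gameplay data.
import numpy as np

ACTIONS = ["stay_calm", "panic", "follow_instruction"]

def message_features(message: str) -> np.ndarray:
    words = message.lower().split()
    return np.array([
        len(words) / 20.0,                                      # message length
        float(sum(w in {"please", "careful"} for w in words)),  # reassuring tone
        float(sum(w in {"now", "hurry", "quick"} for w in words)),  # urgency
    ])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)   # hidden layer (untrained)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # output layer (untrained)

def controller(message: str) -> str:
    h = np.tanh(W1 @ message_features(message) + b1)
    return ACTIONS[int(np.argmax(W2 @ h + b2))]

print(controller("Please move carefully to the airlock"))
```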

    Reinforcement learning with value advice

    The problem we consider in this paper is reinforcement learning with value advice. In this setting, the agent is given limited access to an oracle that can tell it the expected return (value) of any state-action pair with respect to the optimal policy. The agent must use this advice to learn an explicit policy that performs well in the environment. We provide an algorithm called RLAdvice, based on the imitation learning algorithm DAgger, and illustrate its effectiveness in the Arcade Learning Environment on three different games, using value estimates from UCT as advice.
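
    The following is a minimal sketch of the DAgger-style loop that value advice builds on: run the current policy, query the oracle for the value-greedy action in every visited state, aggregate those pairs, and refit the policy on the aggregate. The toy chain environment and oracle below stand in for the paper's Atari games and UCT estimates, and the lookup-table "fit" stands in for training a real classifier.

```python
# Minimal sketch of learning from a value oracle with DAgger-style
# data aggregation. ChainEnv and oracle_q are illustrative stand-ins.
import random

class ChainEnv:
    """Toy 5-state chain: action 1 moves right, action 0 stays; state 4 ends."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.s + a, 4)
        return self.s, self.s == 4

def oracle_q(s, a):
    # Hypothetical oracle for Q*(s, a): moving right is always better here.
    return float(a)

def rl_with_value_advice(env, actions, iterations=5, horizon=20):
    dataset = []          # aggregated (state, oracle-greedy action) pairs
    policy = {}           # the explicit policy being learned
    for _ in range(iterations):
        s = env.reset()
        for _ in range(horizon):
            dataset.append((s, max(actions, key=lambda a: oracle_q(s, a))))
            s, done = env.step(policy.get(s, random.choice(actions)))
            if done:
                break
        policy = dict(dataset)   # "fit" the policy to all data so far
    return policy

print(rl_with_value_advice(ChainEnv(), actions=[0, 1]))
```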

    Deep Reinforcement Learning with Interactive Feedback in a Human-Robot Environment

    Robots are extending their presence in domestic environments every day, and it is becoming more common to see them carrying out tasks in home scenarios. In the future, robots are expected to perform increasingly complex tasks and, therefore, should be able to acquire experience from different sources as quickly as possible. A plausible approach to this issue is interactive feedback, in which a trainer advises a learner on which actions should be taken in specific states to speed up the learning process. Moreover, deep reinforcement learning has recently been widely used in robotics to learn the environment and acquire new skills autonomously. However, an open issue when using deep reinforcement learning is the excessive time needed to learn a task from raw input images. In this work, we propose a deep reinforcement learning approach with interactive feedback to learn a domestic task in a human-robot scenario. We compare three learning methods using a simulated robotic arm on the task of organizing different objects: (i) deep reinforcement learning (DeepRL); (ii) interactive deep reinforcement learning using a previously trained artificial agent as an advisor (agent-IDeepRL); and (iii) interactive deep reinforcement learning using a human advisor (human-IDeepRL). We demonstrate that the interactive approaches provide advantages for the learning process: a learner agent using either agent-IDeepRL or human-IDeepRL completes the given task earlier and makes fewer mistakes than the autonomous DeepRL approach. (Comment: in press, journal Applied Sciences.)
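
    A minimal sketch of the interactive-feedback loop described above, with a tabular Q-learner standing in for the deep RL agent and a scripted function standing in for the advisor (trained agent or human). The toy environment, the advising probability, and the interfaces are illustrative assumptions rather than the paper's exact protocol.

```python
# Minimal sketch: an advisor occasionally overrides the learner's
# action; the learner still updates from the executed action.
import random

class QLearner:
    """Tabular stand-in for the paper's deep RL learner."""
    def __init__(self, actions, alpha=0.5, gamma=0.9, eps=0.2):
        self.q, self.actions = {}, actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
    def act(self, s):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((s, a), 0.0))
    def update(self, s, a, r, s2, done):
        nxt = 0.0 if done else max(self.q.get((s2, b), 0.0) for b in self.actions)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * nxt - old)

class LineWorld:
    """1-D corridor: reach position 3 for reward 1; actions are -1 / +1."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = max(min(self.s + a, 3), 0)
        return self.s, float(self.s == 3), self.s == 3

def run_episode(env, learner, advisor=None, advise_prob=0.3):
    s, done, total = env.reset(), False, 0.0
    while not done:
        a = learner.act(s)
        if advisor is not None and random.random() < advise_prob:
            a = advisor(s)                    # interactive feedback
        s2, r, done = env.step(a)
        learner.update(s, a, r, s2, done)     # learn from the executed action
        s, total = s2, total + r
    return total

learner = QLearner(actions=[-1, 1])
print([run_episode(LineWorld(), learner, advisor=lambda s: 1) for _ in range(5)])
```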

    Problem Solving for Industry

    This project seeks to use reinforcement learning to develop AI agents that control NPCs in video game worlds and are capable of mastering decision tasks in their game environments. Our job is to develop algorithms and methods that can effectively train these agents using reinforcement learning and that apply across gaming environments and scenarios such as racing games and first-person shooters. We then market the agents to video game developers for use in their game worlds. A developer can use our agents as-is, without modification, or train them further using our algorithms to tune the agents’ behaviours and capabilities, with minimal or no need to write code themselves. With reinforcement learning, our AI agents learn by trial and error, with rewards used to provide feedback to the AI. Over time the AI will master its environment, other AI, and possibly even interaction with the human gamer. This will produce AI-controlled NPCs that behave and interact convincingly with their environments and with the player, promoting player immersion while reducing developer workload.
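
    The trial-and-error loop sketched above can be made concrete with a minimal tabular Q-learning example: a toy NPC learns a one-dimensional "track" purely from reward feedback. A production game agent would use deep RL and a far richer state, but the reward-driven update is the same idea; all values below are illustrative.

```python
# Minimal sketch: an NPC learns a toy 1-D track by trial and error,
# guided only by rewards (+1 at the goal, a small step penalty).
import random

N, GOAL = 8, 7                      # track positions 0..7, goal at 7
q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
alpha, gamma, eps = 0.5, 0.95, 0.2

for _ in range(500):                # episodes of trial and error
    s = 0
    while s != GOAL:
        a = (random.choice((-1, 1)) if random.random() < eps
             else max((-1, 1), key=lambda x: q[(s, x)]))
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.01
        q[(s, a)] += alpha * (r + gamma * (s2 != GOAL) *
                              max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2

# Greedy action per position: the learned "drive right" behaviour.
print([max((-1, 1), key=lambda x: q[(s, x)]) for s in range(N)])
```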

    A novel differentially private advising framework in cloud server environment

    Due to the rapid development of the cloud computing environment, it is widely accepted that cloud servers are important for users to improve their work efficiency. Users need to know servers' capabilities in order to make optimal decisions when selecting the best available servers for their tasks. We consider the process by which users learn servers' capabilities as a multiagent reinforcement learning process. Learning speed and efficiency in reinforcement learning can be improved by sharing learning experience among agents, which is defined as advising. However, existing advising frameworks are limited by the requirement that, during advising, all learning agents in a reinforcement learning environment must have exactly the same actions. To address this limitation, this article proposes a novel differentially private advising framework for multiagent reinforcement learning. Our approach can significantly broaden the applicability of conventional advising frameworks when agents have one differing action; it can also widen the applicable field of advising and speed up reinforcement learning by triggering more potential advising processes among agents with different actions.
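
    The article's specific mechanism is not reproduced here, but the following sketch shows one standard differential-privacy ingredient such a framework can build on: sharing an advised action through the classic randomized-response mechanism, which is ε-differentially private with respect to the advisor's true choice. The action names and ε value are illustrative.

```python
# Minimal sketch: k-ary randomized response for private action advice.
# The advisor reports its true advice with probability
# e^eps / (e^eps + k - 1) and a uniformly random other action otherwise.
import math
import random

def dp_advise(true_action, action_set, eps=1.0):
    k = len(action_set)
    p_truth = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_truth:
        return true_action
    return random.choice([a for a in action_set if a != true_action])

advice = [dp_advise("left", ["left", "right", "jump"]) for _ in range(10)]
print(advice)   # mostly "left", with occasional random noise for privacy
```

    A student receiving such advice trades some accuracy for a formal privacy guarantee over what the advisor's policy reveals.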

    Accelerating Deep Reinforcement Learning via Action Advising

    Deep Reinforcement Learning (RL) algorithms can successfully solve complex sequential decision-making tasks. However, they suffer from the major drawbacks of poor sample efficiency and long training times, which can often be tackled by knowledge reuse. Action advising is a promising knowledge exchange mechanism that adopts the teacher-student paradigm to leverage legacy knowledge through a budget-limited number of interactions, in the form of action advice, between peers. In this thesis, we studied action advising techniques, particularly in the deep RL domain, in both single-agent and multi-agent scenarios. We proposed a heuristic-based, jointly-initiated action advising method suitable for the multi-agent deep RL setting, for the first time in the literature. By adopting Random Network Distillation (RND), we devised a measurement that allows agents to assess their confidence in any given state and initiate teacher-student dynamics with no prior role assumptions. We also used RND as an advice novelty metric to construct more robust student-initiated advice query strategies in single-agent deep RL. Moreover, we addressed the absence of advice utilisation mechanisms beyond collection by employing a behavioural cloning module to imitate the teacher's advice. We also proposed a method to automatically tune the relevant hyperparameters of these components on the fly, making our action advising algorithms capable of adapting to any domain with minimal human intervention. Finally, we extended our advice-reuse-via-imitation technique to construct a unified student-initiated approach that addresses both advice collection and advice utilisation. Experiments conducted across a range of deep RL domains showed that our proposals provide significant contributions: our deep RL-compatible action advising techniques achieved a state-of-the-art level of performance, and their practical attributes make domain adaptation and implementation straightforward, an important step towards applying action advising to real-world problems.
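
    A minimal sketch of the RND-style trigger described above: a fixed random "target" network and an online-trained predictor, where large prediction error marks a state as novel and therefore worth spending advice budget on. The linear networks, the threshold, and the budget handling are simplifications of the thesis's deep RL setup.

```python
# Minimal sketch: RND novelty as an advice-query trigger.
import numpy as np

rng = np.random.default_rng(0)
D, H = 4, 8                               # state / feature dimensions
W_target = rng.normal(size=(H, D))        # fixed random "target" network
W_pred = np.zeros((H, D))                 # predictor, trained online

def novelty(s):
    """Predictor error against the fixed target: high = unfamiliar state."""
    return float(np.mean((W_target @ s - W_pred @ s) ** 2))

def should_ask_teacher(s, budget_left, threshold=0.5, lr=0.01):
    global W_pred
    ask = budget_left > 0 and novelty(s) > threshold
    err = W_pred @ s - W_target @ s          # one SGD step on the RND loss,
    W_pred = W_pred - lr * np.outer(err, s)  # so revisited states grow familiar
    return ask

s = rng.normal(size=D)
before = novelty(s)
for _ in range(200):
    should_ask_teacher(s, budget_left=10)
print(f"novelty: {before:.3f} -> {novelty(s):.3f}")  # falls with familiarity
```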

    Rule-based interactive assisted reinforcement learning

    Reinforcement Learning (RL) has seen increasing interest over the past few years, partially owing to breakthroughs in the digestion and application of external information. The use of external information results in improved learning speeds and in solutions to more complex domains. This thesis, a collection of five key contributions, demonstrates that performance gains comparable to those of existing Interactive Reinforcement Learning methods can be achieved using less data, sourced during operation, and without prior verification and validation of the information's integrity. First, this thesis introduces Assisted Reinforcement Learning (ARL), a collective term for RL methods that utilise external information to leverage the learning process, and provides a non-exhaustive review of current ARL methods. Second, two advice delivery methods common in ARL, evaluative and informative, are compared through human trials. The comparison highlights how human engagement, accuracy of advice, agent performance, and advice utility differ between the two methods. Third, this thesis introduces simulated users as a methodology for testing and comparing ARL methods. Simulated users enable the testing and comparison of ARL systems without costly and time-consuming human trials; while not a replacement for well-designed human trials, they offer a cheap and robust approach to ARL design and comparison. Fourth, the concept of persistence is introduced to Interactive Reinforcement Learning: the retention and reuse of advice maximises its utility and can lead to improved performance and reduced human demand. Finally, this thesis presents rule-based interactive RL, an iterative method for providing advice to an agent. Existing interactive RL methods rely on constant human supervision and evaluation, requiring a substantial commitment from the advice-giver. Rule-based advice can be provided proactively and generalised over the state space while remaining flexible enough to handle potentially inaccurate or irrelevant information. Ultimately, the thesis contributions are validated empirically and clearly show that rule-based advice significantly reduces human guidance requirements while improving agent performance. (Doctor of Philosophy)
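
    A minimal sketch of the rule-based advising idea: the trainer supplies rules (predicate → suggested action) once, the agent retains them across episodes, and it consults them before falling back to its own policy. The state fields, predicates, and actions are hypothetical examples, not the thesis's actual rule language.

```python
# Minimal sketch: persistent rule-based advice consulted before the
# agent's own policy. Rules are given once and reused every step.
import random

ACTIONS = ["forward", "left", "right", "brake"]

# Proactively supplied rules; they persist across episodes.
RULES = [
    (lambda s: s["obstacle_ahead"], "brake"),
    (lambda s: s["target_bearing"] < -0.2, "left"),
    (lambda s: s["target_bearing"] > 0.2, "right"),
]

def act(state, policy=None):
    for predicate, advised in RULES:
        if predicate(state):
            return advised                 # retained advice is reused
    # No rule fires: fall back to the learned policy (random here).
    return policy(state) if policy else random.choice(ACTIONS)

print(act({"obstacle_ahead": False, "target_bearing": 0.5}))   # -> "right"
```

    Because each rule generalises over the state space, a single rule can substitute for many step-by-step interventions, which is where the reduced human guidance requirement comes from.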