
    Remote Access to a Prototyping Laboratory

    There is a growing global demand for continuing adult higher education, particularly in science and engineering subjects. New technologies are emerging that would enable the development of a Remote Access Laboratory for rapid prototyping of Artificial Intelligence, as a learning environment for mechatronic engineering, in which high-precision electromechanical devices are designed to exhibit autonomous behaviour. Secondary research investigated the learning theories relevant to a Remote Access Laboratory and current practices in distance learning, including groupware in shared-activity 'collaboratories'. Having determined that the laboratory would need a multi-user interactive environment architecture that could adapt to rapid developments, a distributed software architecture was selected, and the laboratory design was subsequently argued to be best served by Intelligent Agents in a Multi-Agent System. The aims of the research were to establish the viability of a Remote Access Laboratory for mechatronic experimentation and to evaluate the technologies required to implement such a laboratory environment for rapid prototyping. These aims were achieved by developing a novel user interface, based on a multi-functional screen layout, together with a graphical specification facility for robotic navigation that is intuitive to use and does not require text-based programming. The research investigated the prototyping of robotic behaviour, using Programming by Demonstration as an innovative technique to prototype robot navigation. The method of designing behaviours met an anticipated need to allow the robot to interact with an environment and achieve goals under conditions of uncertainty, while requiring a level of abstraction in the behaviour design. The interface structured a composite of the designed behaviours into prototype Artificial Intelligence using a hierarchical behaviour architecture that complied with the principles of Object Oriented programming, yielding a new and original programming method to facilitate rapid prototyping of Artificial Intelligence design and structuring. Experimentation involved 20 participants attempting a series of tasks using both the prototype interface and an existing text-based robot programming system. The participants were profiled by their formal qualifications, knowledge and experience. The experimental data were used to establish a comparative measure of the prototype interface's success against an existing distance-learning home experiment kit, in the form of a small controllable model vehicle. The data provided strong evidence for the hypothesis that a Programming by Demonstration based system for rapid prototyping is more flexible and easier to use than the pre-existing text-based distance-learning system. The Programming by Demonstration system showed great promise, being quicker for prototyping and more intuitive. The learning interface design pioneered new techniques and technologies for rapid prototyping of Artificial Intelligence in a Mechatronics Remote Access Laboratory.
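
    The thesis code is not published with this abstract, but the idea it describes, structuring behaviours captured by demonstration into a hierarchical, Object Oriented composite, can be sketched roughly as below. All names (Behaviour, RecordedMove, CompositeBehaviour, Robot) and the sample drive commands are hypothetical illustrations, not the actual interface.

        from abc import ABC, abstractmethod

        class Behaviour(ABC):
            # Every behaviour, primitive or composite, exposes the same run() interface.
            @abstractmethod
            def run(self, robot):
                ...

        class RecordedMove(Behaviour):
            # A primitive behaviour captured by demonstration: (left, right, seconds)
            # wheel-speed samples recorded while the user drives the robot.
            def __init__(self, samples):
                self.samples = samples

            def run(self, robot):
                for left, right, seconds in self.samples:
                    robot.drive(left, right, seconds)

        class CompositeBehaviour(Behaviour):
            # A composite runs its children in sequence, so demonstrated primitives
            # can be assembled into higher-level behaviours, which can themselves be
            # nested -- one plausible reading of a hierarchical behaviour architecture.
            def __init__(self, children):
                self.children = list(children)

            def run(self, robot):
                for child in self.children:
                    child.run(robot)

        class Robot:
            # Stand-in for the model vehicle; a real system would send motor commands.
            def drive(self, left, right, seconds):
                print(f"drive left={left} right={right} for {seconds}s")

        approach = RecordedMove([(0.5, 0.5, 2.0)])   # demonstrated: straight for 2 s
        turn = RecordedMove([(0.3, -0.3, 1.0)])      # demonstrated: pivot for 1 s
        navigate_doorway = CompositeBehaviour([approach, turn])
        navigate_doorway.run(Robot())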

    A group learning management method for intelligent tutoring systems

    In this paper we propose a group management specification and execution method that seeks a compromise between simple course design and complex adaptive group interaction. This is achieved through an authoring method that offers predefined scenarios to the author. These scenarios already include complex learning interaction protocols, in which the use and updating of student and group models are automatically included. The method adopts ontologies to represent the domain and student models, and object Petri nets to specify the group interaction protocols. During execution, the method is supported by a multi-agent architecture.
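
    As a rough illustration of how a Petri-net-specified interaction protocol executes, the sketch below implements a plain place/transition net (not an object Petri net, and not the paper's implementation): places hold tokens, here student identifiers, and a transition fires only when every one of its input places is marked. The places, tokens and transition names are invented for the example.

        class PetriNet:
            def __init__(self):
                self.marking = {}      # place name -> set of tokens
                self.transitions = {}  # name -> (input places, output places)

            def add_place(self, name, tokens=()):
                self.marking[name] = set(tokens)

            def add_transition(self, name, inputs, outputs):
                self.transitions[name] = (inputs, outputs)

            def enabled(self, name):
                inputs, _ = self.transitions[name]
                return all(self.marking[p] for p in inputs)

            def fire(self, name):
                # Consume one token from each input place, deposit them downstream.
                if not self.enabled(name):
                    raise RuntimeError(f"transition {name!r} is not enabled")
                inputs, outputs = self.transitions[name]
                moved = {p: self.marking[p].pop() for p in inputs}
                for p in outputs:
                    self.marking[p].update(moved.values())

        # Toy protocol: discussion starts only once both students have answered.
        net = PetriNet()
        net.add_place("answered_A", {"alice"})
        net.add_place("answered_B", {"bob"})
        net.add_place("in_discussion")
        net.add_transition("start_discussion",
                           inputs=["answered_A", "answered_B"],
                           outputs=["in_discussion"])
        net.fire("start_discussion")
        print(net.marking["in_discussion"])  # {'alice', 'bob'}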

    An Agent-based Modelling Framework for Driving Policy Learning in Connected and Autonomous Vehicles

    Due to the complexity of the natural world, a programmer cannot foresee all possible situations a connected and autonomous vehicle (CAV) will face during its operation; hence, CAVs will need to learn to make decisions autonomously. Through sensing of its surroundings and information exchanged with other vehicles and road infrastructure, a CAV will have access to large amounts of useful data. While different control algorithms have been proposed for CAVs, the benefits brought about by the connectedness of autonomous vehicles to other vehicles and to the infrastructure, and its implications for policy learning, have not been investigated in the literature. This paper investigates a data-driven driving policy learning framework through an agent-based modelling approach. The contributions of the paper are two-fold. First, a dynamic programming framework is proposed for in-vehicle policy learning with and without connectivity to neighbouring vehicles. The simulation results indicate that while a CAV can learn to make autonomous decisions, vehicle-to-vehicle (V2V) communication of information improves this capability. Second, to overcome the limitations of sensing in a CAV, the paper proposes a novel concept for infrastructure-led policy learning and communication with autonomous vehicles: road-side infrastructure senses and captures successful vehicle manoeuvres, learns an optimal policy from those temporal sequences, and communicates that policy to a CAV as it approaches the road-side unit. A deep imitation learning methodology is proposed to develop such an infrastructure-led policy learning framework.
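
    As a rough, hypothetical illustration of the in-vehicle dynamic-programming component, the sketch below runs tabular value iteration over a toy car-following MDP. The states, transitions and rewards are invented for the example, not taken from the paper; connectivity (V2V or infrastructure-led) would enter by enriching these transition and reward estimates with communicated observations.

        # States: distance-to-leader buckets; actions: longitudinal control choices.
        states = ["far", "near", "close"]
        actions = ["accelerate", "hold", "brake"]

        # Hypothetical deterministic transition model.
        transition = {
            ("far",   "accelerate"): "near",
            ("far",   "hold"):       "far",
            ("far",   "brake"):      "far",
            ("near",  "accelerate"): "close",
            ("near",  "hold"):       "near",
            ("near",  "brake"):      "far",
            ("close", "accelerate"): "close",
            ("close", "hold"):       "close",
            ("close", "brake"):      "near",
        }
        reward = {
            ("near",  "hold"):       1.0,    # keeping an efficient gap pays off
            ("far",   "hold"):       0.5,    # lagging far behind is wasteful
            ("close", "accelerate"): -10.0,  # closing in further is unsafe
        }

        gamma = 0.9
        V = {s: 0.0 for s in states}
        for _ in range(100):  # value iteration; 100 sweeps converge here
            V = {s: max(reward.get((s, a), 0.0) + gamma * V[transition[(s, a)]]
                        for a in actions)
                 for s in states}

        # Greedy policy with respect to the converged value function.
        policy = {s: max(actions, key=lambda a: reward.get((s, a), 0.0)
                                  + gamma * V[transition[(s, a)]])
                  for s in states}
        print(policy)  # {'far': 'accelerate', 'near': 'hold', 'close': 'brake'}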

    MACS: Multi-agent COTR system for Defense Contracting

    The field of intelligent multi-agent systems has expanded rapidly in the recent past. Multi-agent architectures and systems are being investigated and continue to develop. To date, however, little has been accomplished in applying multi-agent systems to the defense acquisition domain. This paper describes the design, development, and related considerations of a multi-agent system in the area of procurement and contracting for the defense acquisition community.

    Reinforcement Learning for UAV Attitude Control

    Autopilot systems are typically composed of an "inner loop" providing stability and control, and an "outer loop" responsible for mission-level objectives, e.g. waypoint navigation. Autopilot systems for UAVs are predominantly implemented using Proportional-Integral-Derivative (PID) control systems, which have demonstrated exceptional performance in stable environments; however, more sophisticated control is required to operate in unpredictable and harsh environments. Intelligent flight control is an active area of research addressing the limitations of PID control, most recently through the use of reinforcement learning (RL), which has had success in other applications such as robotics. However, previous work has focused primarily on using RL at the mission-level controller. In this work, we investigate the performance and accuracy of the inner control loop providing attitude control when using intelligent flight control systems trained with the state-of-the-art RL algorithms Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO). To investigate these unknowns, we first developed an open-source high-fidelity simulation environment to train a flight controller for attitude control of a quadrotor through RL. We then used our environment to compare the performance of these algorithms to that of a PID controller, to identify whether using RL is appropriate in high-precision, time-critical flight control.
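
    The paper's open-source environment is not reproduced here; the sketch below only illustrates the contrast the abstract draws, pairing a one-axis PID attitude controller with a generic gym-style rollout loop into which a trained RL policy could be dropped. The environment object, its reset()/step() signature, the observation layout and the reward convention are all assumptions for the example.

        class PID:
            # One-axis PID controller: maps attitude error to a motor command.
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, error):
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        def run_episode(env, act):
            # Roll out one episode with `act`, which may wrap either the PID
            # controller or a trained RL policy; assumes a gym-style env whose
            # step() returns (obs, reward, done, info) and whose reward
            # penalises attitude error.
            obs, total_error = env.reset(), 0.0
            done = False
            while not done:
                obs, reward, done, _ = env.step(act(obs))
                total_error += -reward
            return total_error

        pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
        # run_episode(env, lambda obs: pid.step(obs[0]))  # PID baseline
        #   (assumes obs[0] is the angular error on the controlled axis)
        # run_episode(env, lambda obs: policy(obs))       # trained RL policy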