Online cooperation learning environment : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at Massey University, Albany, New Zealand
This project aims to create an online cooperation learning environment for students who study the same paper. First, the whole class is divided into several tutorial peer groups, each consisting of five to seven students. Students can discuss the material with the other members of their study group, which is assigned by the lecturer. This is achieved via an online cooperation learning environment application (OCLE), which consists of a web-based J2EE application and a peer-to-peer (P2P) Java application, the cooperative learning tool (CLT). The P2P design can reduce web server traffic significantly during online tutorial discussion time.
Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes are to make
optimized decisions from a set of accessible strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
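As a concrete illustration of one standard MDP solution method the survey compares, a minimal value-iteration solver might look as follows; the two-state, two-action transition and reward numbers are invented toy values (loosely suggesting a sensor node's sleep/transmit trade-off), not data from the survey.

```python
# Minimal sketch of value iteration for a finite MDP.
# Transition tensor P and reward matrix R are illustrative assumptions.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: (A, S, S) transition probabilities; R: (A, S) expected rewards.
    Returns the optimal value function and a greedy stationary policy."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy 2-state, 2-action example (hypothetical sleep/transmit decision)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0 (sleep)
              [[0.5, 0.5], [0.1, 0.9]]])  # action 1 (transmit)
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V, policy = value_iteration(P, R)
```

The fixed-point iteration converges geometrically because the Bellman operator is a gamma-contraction; policy iteration or linear programming, also covered by such surveys, would reach the same solution.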
A Mobile-Based Group Quiz System to Promote Collaborative Learning and Facilitate Instant Feedback
In this paper we develop and evaluate a mobile-based questioning-answering system (MQAS) that complements traditional learning which can be used as a tool to encourage teachers to give their students mobile-based weekly group quizzes. These quizzes can provide teachers with valid information about the progress of their students and can also motivate students to work in a collaborative manner in order to facilitate the integration of their knowledge. We describe the architecture and experiences with the system
QD-Learning: A Collaborative Distributed Strategy for Multi-Agent Reinforcement Learning Through Consensus + Innovations
The paper considers a class of multi-agent Markov decision processes (MDPs),
in which the network agents respond differently (as manifested by the
instantaneous one-stage random costs) to a global controlled state and the
control actions of a remote controller. The paper investigates a distributed
reinforcement learning setup with no prior information on the global state
transition and local agent cost statistics. Specifically, with the agents'
objective consisting of minimizing a network-averaged infinite horizon
discounted cost, the paper proposes a distributed version of Q-learning,
QD-learning, in which the network agents collaborate by means of
local processing and mutual information exchange over a sparse (possibly
stochastic) communication network to achieve the network goal. Under the
assumption that each agent is only aware of its local online cost data and the
inter-agent communication network is \emph{weakly} connected, the proposed
distributed scheme is almost surely (a.s.) shown to yield asymptotically the
desired value function and the optimal stationary control policy at each
network agent. The analytical techniques developed in the paper to address the
mixed time-scale stochastic dynamics of the \emph{consensus + innovations}
form, which arise as a result of the proposed interactive distributed scheme,
are of independent interest.
Comment: Submitted to the IEEE Transactions on Signal Processing, 33 pages
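In the spirit of the update described above, a toy consensus + innovations Q-update can be sketched as follows: each agent mixes a consensus term (disagreement with neighbors' Q-estimates) with an innovation term (its local temporal-difference error), under decaying step sizes where consensus decays more slowly. The agent costs, line-graph topology, step-size schedules, and random transitions are illustrative assumptions, not the paper's exact setup.

```python
# Toy sketch of a consensus + innovations distributed Q-update.
# All numbers and the topology are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, S, A = 3, 4, 2                 # agents, states, actions
gamma = 0.9
Q = np.zeros((N, S, A))           # one Q-table per agent
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # sparse line-graph network

def local_cost(i, s, a):
    # each agent observes a different noisy one-stage cost
    return (i + 1) * 0.1 * (s + a) + rng.normal(scale=0.01)

s = 0
for t in range(1, 5001):
    a = int(rng.integers(A))            # exploratory action
    s_next = int(rng.integers(S))       # toy random state transition
    alpha = 1.0 / t                     # innovation step size
    beta = 0.3 / t**0.55                # consensus step size (decays slower)
    Q_old = Q.copy()
    for i in range(N):
        consensus = sum(Q_old[i, s, a] - Q_old[j, s, a] for j in neighbors[i])
        innovation = (local_cost(i, s, a)
                      + gamma * Q_old[i, s_next].min() - Q_old[i, s, a])
        Q[i, s, a] = Q_old[i, s, a] - beta * consensus + alpha * innovation
    s = s_next

# after many exchanges the agents' estimates should roughly agree
disagreement = np.abs(Q - Q.mean(axis=0)).max()
```

The mixed time scale (beta decaying slower than alpha) is what drives the agents' local estimates toward agreement while the averaged innovations steer the common limit toward the network-optimal value function.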
Developing an Adaptive Strategy for Connected Eco-Driving Under Uncertain Traffic and Signal Conditions
The Eco-Approach and Departure (EAD) application has been shown to be environmentally efficient for Connected and Automated Vehicle (CAV) systems. In real-world traffic, traffic conditions and signal timings are usually dynamic and uncertain due to mixed vehicle types, varied driving behaviors, and limited sensing range, which makes EAD development challenging. This research proposes an adaptive strategy for connected eco-driving toward a signalized intersection under real-world conditions. Stochastic graph models are built to link the vehicle data with external (e.g., traffic, signal) data, and dynamic programming is applied to identify the optimal speed for each vehicle state efficiently. From an energy perspective, the adaptive strategy using traffic data could double the effective sensor range in eco-driving. A hybrid reinforcement learning framework is also developed for EAD in mixed traffic conditions, using both short-term and long-term benefit as the action reward. Micro-simulation is conducted in Unity to validate the method, showing over 20% energy savings.
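The dynamic-programming step described above can be illustrated with a minimal sketch: backward DP over a discretized (position, speed) graph approaching the stop line. The energy model, speed grid, and acceleration limit below are invented assumptions, not the paper's calibrated model; a zero terminal cost stands in for arriving during the green window.

```python
# Toy backward dynamic program over a (distance step, speed) graph.
# Energy model and discretization are illustrative assumptions.
import math

STEPS = 10             # distance steps to the intersection
SPEEDS = range(1, 6)   # discretized speed levels (toy units)

def energy(v, v_next):
    # toy energy cost: quadratic in speed plus an acceleration penalty
    return 0.1 * v_next**2 + 0.5 * (v_next - v)**2

# cost_to_go[k][v]: minimal energy from step k at speed v to the stop line
cost_to_go = [{v: math.inf for v in SPEEDS} for _ in range(STEPS + 1)]
best_next = [{v: None for v in SPEEDS} for _ in range(STEPS)]
for v in SPEEDS:
    cost_to_go[STEPS][v] = 0.0   # terminal: crossing during the green window

for k in reversed(range(STEPS)):
    for v in SPEEDS:
        for v_next in SPEEDS:
            if abs(v_next - v) > 1:        # acceleration limit per step
                continue
            c = energy(v, v_next) + cost_to_go[k + 1][v_next]
            if c < cost_to_go[k][v]:
                cost_to_go[k][v] = c
                best_next[k][v] = v_next

# recover the optimal speed profile starting at speed level 3
profile, v = [3], 3
for k in range(STEPS):
    v = best_next[k][v]
    profile.append(v)
```

Replacing the deterministic transitions with probability-weighted successor states (and expected cost-to-go) turns this into the stochastic-graph formulation the abstract describes; the backward sweep structure stays the same.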