Youth and Digital Media: From Credibility to Information Quality
Building upon a process- and context-oriented information quality framework, this paper seeks to map and explore what we know about the ways in which young users aged 18 and under search for information online, how they evaluate information, and how their related practices of content creation, levels of new literacies, general digital media usage, and social patterns affect these activities. A review of selected literature at the intersection of digital media, youth, and information quality -- primarily works from library and information science, sociology, education, and selected ethnographic studies -- reveals patterns in youth's information-seeking behavior, but also highlights the importance of contextual and demographic factors for both search and evaluation. Looking at the phenomenon from an information-learning and educational perspective, the literature shows that youth develop competencies for personal goals that sometimes do not transfer to school, and are sometimes not appropriate for school. Thus far, educational initiatives on search, evaluation, or creation have depended greatly on local circumstances for their success or failure.
How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV
This work explores the feasibility of steering a drone with a (recurrent)
neural network, based on input from a forward-looking camera, in the context of
a high-level navigation task. We set up a generic framework for training a
network to perform navigation tasks based on imitation learning. It can be
applied to both aerial and land vehicles. As a proof of concept we apply it to
a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a
room containing a number of obstacles. So far only feedforward neural networks
(FNNs) have been used to train UAV control. To cope with more complex tasks, we
propose the use of recurrent neural networks (RNN) instead and successfully
train an LSTM (Long Short-Term Memory) network for controlling UAVs. Vision
based control is a sequential prediction problem, known for its highly
correlated input data. The correlation makes training a network hard,
especially an RNN. To overcome this issue, we investigate an alternative
sampling method during training, namely window-wise truncated backpropagation
through time (WW-TBPTT). Further, end-to-end training requires a lot of data
which often is not available. Therefore, we compare the performance of
retraining only the Fully Connected (FC) and LSTM control layers with networks
which are trained end-to-end. Performing the relatively simple task of crossing
a room already reveals important guidelines and good practices for training
neural control networks. Different visualizations help to explain the learned
behavior.
Comment: 12 pages, 30 figures
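The window-wise sampling idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `ww_tbptt_windows` and the toy trajectory are assumptions. It shows the core mechanism of window-wise truncated BPTT sampling: cut one long, highly correlated trajectory into fixed-size windows (gradients would be truncated at window boundaries) and shuffle the windows so successive training batches are less correlated.

```python
import random

def ww_tbptt_windows(sequence, window_size, stride=None, seed=0):
    """Split a long, correlated trajectory into fixed-size windows and
    shuffle them, so that consecutive training batches are decorrelated.
    In actual TBPTT training, gradients stop at window boundaries."""
    stride = stride or window_size
    windows = [sequence[i:i + window_size]
               for i in range(0, len(sequence) - window_size + 1, stride)]
    rng = random.Random(seed)
    rng.shuffle(windows)  # randomize the order windows are presented in
    return windows

# A toy 10-step "flight trajectory" of observations (here just integers).
trajectory = list(range(10))
windows = ww_tbptt_windows(trajectory, window_size=4, stride=2)
```

Overlapping strides (here 2) reuse each timestep in more than one window, which trades extra computation for more training windows from the same limited flight data.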
Safe Exploration for Optimizing Contextual Bandits
Contextual bandit problems are a natural fit for many information retrieval
tasks, such as learning to rank, text classification, recommendation, etc.
However, existing learning methods for contextual bandit problems have one of
two drawbacks: they either do not explore the space of all possible document
rankings (i.e., actions) and, thus, may miss the optimal ranking, or they
present suboptimal rankings to a user and, thus, may harm the user experience.
We introduce a new learning method for contextual bandit problems, Safe
Exploration Algorithm (SEA), which overcomes the above drawbacks. SEA starts by
using a baseline (or production) ranking system (i.e., policy), which does not
harm the user experience and, thus, is safe to execute, but has suboptimal
performance and, thus, needs to be improved. Then SEA uses counterfactual
learning to learn a new policy based on the behavior of the baseline policy.
SEA also uses high-confidence off-policy evaluation to estimate the performance
of the newly learned policy. Once the performance of the newly learned policy
is at least as good as the performance of the baseline policy, SEA starts using
the new policy to execute new actions, allowing it to actively explore
favorable regions of the action space. This way, SEA never performs worse than
the baseline policy and, thus, does not harm the user experience, while still
exploring the action space and, thus, being able to find an optimal policy. Our
experiments using text classification and document retrieval confirm the above
by comparing SEA (and a boundless variant called BSEA) to online and offline
learning methods for contextual bandit problems.
Comment: 23 pages, 3 figures
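The deployment rule described above can be sketched as follows. This is a simplified illustration under stated assumptions, not SEA itself: the paper uses high-confidence off-policy estimators, whereas this sketch substitutes a plain inverse-propensity-style estimate over deterministically logged data plus a fixed confidence margin, and the function names (`ips_estimate`, `sea_choose_policy`) are hypothetical.

```python
def ips_estimate(logs, new_policy):
    """Estimate the new policy's average reward from logged
    (context, action, reward) triples collected under the baseline.
    Deterministic logging is assumed, so the importance weight is 1
    when the new policy agrees with the logged action, else 0."""
    total = 0.0
    for context, action, reward in logs:
        if new_policy(context) == action:
            total += reward
    return total / len(logs)

def sea_choose_policy(logs, new_policy, baseline_policy, confidence_margin):
    """Safe-exploration rule: keep executing the baseline until the new
    policy's estimated value, minus a confidence margin, is at least the
    baseline's observed value; only then deploy the new policy."""
    baseline_value = sum(r for _, _, r in logs) / len(logs)
    new_value = ips_estimate(logs, new_policy)
    if new_value - confidence_margin >= baseline_value:
        return new_policy
    return baseline_policy

# Toy logs gathered by a baseline that maps even contexts to 'a', odd to 'b'.
logs = [(0, 'a', 1.0), (1, 'b', 0.0), (2, 'a', 1.0), (3, 'b', 0.0)]
baseline = lambda c: 'a' if c % 2 == 0 else 'b'
always_a = lambda c: 'a'
chosen = sea_choose_policy(logs, always_a, baseline, confidence_margin=0.1)
```

Because the new policy is only executed once its estimated value clears the baseline, the user never sees rankings worse than production, which is the safety property the abstract claims.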