A comparison of three interactive television ad formats
This study explores the effects of interacting with three current interactive television (iTV) ad formats, using an Australian audience panel. Interaction with iTV ads has positive effects on awareness and net positive thoughts, which increase purchase intentions relative to regular ads. The telescopic format performs best, likely because it makes the most of the entertainment possibilities of iTV by offering additional long-form video; its superior performance cannot readily be explained by self-selection effects. The results suggest that the effectiveness of iTV ads should be measured by their interaction rate rather than the much smaller response rate, and that iTV advertisers should consider ways to maximize both interaction and response rates.
Country differences in technology experience: The effect of teletext on iTV adoption in the United Kingdom
This study found that participants' previous teletext experience and previous iTV experience influenced their openness towards using interactive television to plan independent long-haul holidays. The study surveyed participants about their previous interactive media experience (internet, iTV and teletext) before they viewed a linear or interactive television destination promotion. Two ad models (impulse and telescopic) were tested within two program formats (a travel program segment and an ad break in a lifestyle program). These were aired on a video-on-demand network in London (UK), with 164 of 375 participants completing all steps of the study. Participants were most experienced with the internet (mean 6.29 on a 1-7 scale), 50% had experience with an interactive television provider other than the VOD network, and 70% had experience with teletext. Overall, participants felt positively towards interactive television as an information source for holiday planning. Those with teletext or iTV experience were more open to iTV than those without such experience. Furthermore, actual interaction with the treatment seemed to moderate the link between previous experience and iTV attractiveness. This demonstrates that although previous technology experiences can transfer to new media, the actual experience of using the new medium is also a powerful factor.
Skin friction measuring device for aircraft
A skin friction measuring device for measuring the resistance of an aerodynamic surface to an airstream is described. It is adapted to be mounted on an aircraft and is characterized by a friction plate disposed flush with the external surface of the aircraft, which is displaced in response to skin friction drag. As an airstream flows over the surface, a potentiometer connected to the plate provides an electrical output indicating the magnitude of the drag.
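The sensing principle above amounts to a calibrated mapping from the potentiometer's electrical output to a drag force. A minimal sketch, assuming a simple linear calibration; the function name and the parameters `v_zero` (zero-load voltage) and `volts_per_newton` (sensitivity) are illustrative assumptions, not details from the device description:

```python
def drag_from_voltage(voltage, v_zero, volts_per_newton):
    """Convert a potentiometer reading (volts) to skin-friction drag (newtons).

    Assumes a linear calibration: the plate's displacement, and hence the
    voltage change from the zero-load value, is proportional to the drag.
    """
    return (voltage - v_zero) / volts_per_newton


# Example: with a 0.5 V zero-load offset and 4 V/N sensitivity,
# a 2.5 V reading corresponds to 0.5 N of drag.
drag = drag_from_voltage(2.5, v_zero=0.5, volts_per_newton=4.0)
```

In practice the calibration constants would be determined experimentally for a given plate and potentiometer before flight.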
Practical Open-Loop Optimistic Planning
We consider the problem of online planning in a Markov Decision Process given only access to a generative model, restricted to open-loop policies - i.e. sequences of actions - and under a budget constraint. In this setting, the Open-Loop Optimistic Planning (OLOP) algorithm enjoys good theoretical guarantees but is overly conservative in practice, as we show in numerical experiments. We propose a modified version of the algorithm with tighter upper-confidence bounds, KLOLOP, that leads to better practical performance while retaining the sample complexity bound. Finally, we propose an efficient implementation that significantly improves the time complexity of both algorithms.
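The tighter bounds the abstract refers to can be illustrated with a Bernoulli KL upper-confidence bound, the kind of confidence interval KLOLOP-style algorithms substitute for looser Hoeffding-style bounds. A minimal sketch; the function names, the bisection approach, and the confidence `budget` are illustrative assumptions, not the paper's exact construction:

```python
import math

def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clipped for stability."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(p_hat, budget):
    """Largest q in [p_hat, 1] with KL(p_hat || q) <= budget, found by bisection.

    p_hat is the empirical mean reward of an action sequence; budget shrinks
    as the sequence is sampled more, tightening the optimistic bound.
    """
    lo, hi = p_hat, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2.0
        if bernoulli_kl(p_hat, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```

Because the KL interval adapts to the empirical mean (it is narrower near 0 and 1), the resulting bound is never looser than the corresponding Hoeffding bound, which is the intuition behind the improved practical behaviour.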
Sample-Efficient Model-Free Reinforcement Learning with Off-Policy Critics
Value-based reinforcement-learning algorithms provide state-of-the-art results in model-free discrete-action settings and tend to outperform actor-critic algorithms. We argue that actor-critic algorithms are limited by their need for an on-policy critic. We propose Bootstrapped Dual Policy Iteration (BDPI), a novel model-free reinforcement-learning algorithm for continuous states and discrete actions, with an actor and several off-policy critics. Off-policy critics are compatible with experience replay, ensuring high sample-efficiency without the need for off-policy corrections. The actor, by slowly imitating the average greedy policy of the critics, leads to high-quality, state-specific exploration, which we compare to Thompson sampling. Because the actor and critics are fully decoupled, BDPI is remarkably stable and unusually robust to its hyper-parameters. BDPI is significantly more sample-efficient than Bootstrapped DQN, PPO, and ACKTR on discrete, continuous and pixel-based tasks. Source code: https://github.com/vub-ai-lab/bdpi
Comment: Accepted at the European Conference on Machine Learning 2019 (ECML - …
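The actor update described in the abstract - slowly imitating the average greedy policy of the critics - can be sketched for a single state. This is a minimal illustration of that one step under assumed tabular-style probability lists and an illustrative mixing rate `lr`; it is not the full BDPI algorithm (which also trains the critics off-policy from experience replay):

```python
def greedy_policy(q_values):
    """One-hot greedy action distribution from one critic's Q-values for a state."""
    best = max(range(len(q_values)), key=lambda a: q_values[a])
    return [1.0 if a == best else 0.0 for a in range(len(q_values))]

def average_greedy(critics_q):
    """Average of the critics' greedy policies: the actor's imitation target."""
    greedy = [greedy_policy(q) for q in critics_q]
    n_actions = len(greedy[0])
    return [sum(g[a] for g in greedy) / len(greedy) for a in range(n_actions)]

def actor_update(actor_probs, critics_q, lr=0.05):
    """Slowly move the actor's distribution toward the critics' average greedy policy."""
    target = average_greedy(critics_q)
    return [(1 - lr) * p + lr * t for p, t in zip(actor_probs, target)]


# Two critics that disagree on the best of three actions: the actor drifts
# toward a mixture over both greedy actions, giving state-specific exploration.
pi = actor_update([1/3, 1/3, 1/3], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```

Averaging over several bootstrapped critics is what gives the Thompson-sampling-like behaviour: where the critics disagree, the target stays spread out and the actor keeps exploring; where they agree, it concentrates.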