Evaluating advanced search interfaces using established information-seeking models
When users have poorly defined or complex goals, search interfaces offering only keyword searching facilities provide inadequate support to help them reach their information-seeking objectives. The emergence of interfaces with more advanced capabilities, such as faceted browsing and result clustering, can go some way toward addressing such problems. The evaluation of these interfaces, however, is challenging, since they generally offer diverse and versatile search environments that introduce overwhelming numbers of independent variables into user studies; choosing the interface object as the only independent variable in a study would reveal very little about why one design outperforms another. Nonetheless, if we could effectively compare these interfaces, we would have a way to determine which was best for a given scenario and begin to learn why. In this article we present a formative framework for the evaluation of advanced search interfaces through the quantification of the strengths and weaknesses of the interfaces in supporting user tactics and varying user conditions. This framework combines established models of users, user needs, and user behaviours to achieve this. The framework is applied to evaluate three search interfaces and demonstrates the potential value of this approach to interactive IR evaluation.
Learning Visual Features from Snapshots for Web Search
When applying learning to rank algorithms to Web search, a large number of
features are usually designed to capture the relevance signals. Most of these
features are computed based on the extracted textual elements, link analysis,
and user logs. However, Web pages are not solely linked texts, but have
structured layout organizing a large variety of elements in different styles.
Such layout itself can convey useful visual information, indicating the
relevance of a Web page. For example, the query-independent layout (i.e., raw
page layout) can help identify the page quality, while the query-dependent
layout (i.e., page rendered with matched query words) can further tell rich
structural information (e.g., size, position and proximity) of the matching
signals. However, such visual layout information has seldom been utilized in
Web search in the past. In this work, we propose to learn rich visual features
automatically from the layout of Web pages (i.e., Web page snapshots) for
relevance ranking. Both query-independent and query-dependent snapshots are
considered as the new inputs. We then propose a novel visual perception model
inspired by human visual search behaviors during page viewing to extract the
visual features. This model can be learned end-to-end together with traditional
human-crafted features. We also show that such visual features can be
efficiently acquired in the online setting with an extended inverted indexing
scheme. Experiments on benchmark collections demonstrate that learning visual
features from Web page snapshots can significantly improve the performance of
relevance ranking in ad-hoc Web retrieval tasks.
Comment: CIKM 201
EntiTables: Smart Assistance for Entity-Focused Tables
Tables are among the most powerful and practical tools for organizing and
working with data. Our motivation is to equip spreadsheet programs with smart
assistance capabilities. We concentrate on one particular family of tables,
namely, tables with an entity focus. We introduce and focus on two specific
tasks: populating rows with additional instances (entities) and populating
columns with new headings. We develop generative probabilistic models for both
tasks. For estimating the components of these models, we consider a knowledge
base as well as a large table corpus. Our experimental evaluation simulates the
various stages of the user entering content into an actual table. A detailed
analysis of the results shows that the models' components are complementary and
that our methods outperform existing approaches from the literature.
Comment: Proceedings of the 40th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR '17), 201
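Generative models of this kind typically score each candidate entity by combining evidence from several sources. A minimal sketch of such a mixture score for the row-population task (the weights and probability tables are illustrative assumptions, not the paper's actual estimates):

```python
def candidate_score(entity, p_kb, p_corpus, lam=0.5):
    """Score a candidate entity for row population by linearly
    interpolating two evidence sources: probability estimates derived
    from a knowledge base (p_kb) and from a large table corpus
    (p_corpus). Entities absent from a source contribute 0."""
    return lam * p_kb.get(entity, 0.0) + (1 - lam) * p_corpus.get(entity, 0.0)

# Toy probability estimates (illustrative numbers only):
p_kb = {"Norway": 0.3, "Sweden": 0.2}
p_corpus = {"Norway": 0.1, "Denmark": 0.3}

# Rank candidates by the interpolated score:
ranked = sorted(["Norway", "Sweden", "Denmark"],
                key=lambda e: candidate_score(e, p_kb, p_corpus),
                reverse=True)
print(ranked)  # ['Norway', 'Denmark', 'Sweden']
```

An entity supported by both sources (Norway) outranks one supported by only a single source, which is the intuition behind combining a knowledge base with a table corpus.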
Altered brainstem responses to modafinil in schizophrenia: implications for adjunctive treatment of cognition.
Candidate pro-cognitive drugs for schizophrenia targeting several neurochemical systems have consistently failed to demonstrate robust efficacy. It remains untested whether concurrent antipsychotic medications exert pharmacodynamic interactions that mitigate pro-cognitive action in patients. We used functional MRI (fMRI) in a randomized, double-blind, placebo-controlled within-subject crossover test of single-dose modafinil effects in 27 medicated schizophrenia patients, interrogating brainstem regions where catecholamine systems arise to innervate the cortex, to link cellular and systems-level models of cognitive control. Modafinil effects were evaluated both within this patient group and compared to a healthy subject group. Modafinil modulated activity in the locus coeruleus (LC) and ventral tegmental area (VTA) in the patient group. However, compared to the healthy comparison group, these effects were altered as a function of task demands: the control-independent drug effect on deactivation was relatively attenuated (shallower) in the LC and exaggerated (deeper) in the VTA; in contrast, again compared to the comparison group, the control-related drug effects on positive activation were attenuated in the LC, VTA, and the cortical cognitive control network. These altered effects in the LC and VTA were significantly and specifically associated with the degree of antagonism of alpha-2 adrenergic and dopamine-2 receptors, respectively, by concurrently prescribed antipsychotics. These sources of evidence suggest interacting effects on catecholamine neurons of chronic antipsychotic treatment, which respectively increase and decrease sustained neuronal activity in the LC and VTA. This is the first direct evidence in a clinical population to suggest that antipsychotic medications alter catecholamine neuronal activity to mitigate pro-cognitive drug action on cortical circuits.
TimeMachine: Timeline Generation for Knowledge-Base Entities
We present a method called TIMEMACHINE to generate a timeline of events and
relations for entities in a knowledge base. For example, for an actor, such a
timeline should show the most important professional and personal milestones
and relationships such as works, awards, collaborations, and family
relationships. We develop three orthogonal timeline quality criteria that an
ideal timeline should satisfy: (1) it shows events that are relevant to the
entity; (2) it shows events that are temporally diverse, so they distribute
along the time axis, avoiding visual crowding and allowing for easy user
interaction, such as zooming in and out; and (3) it shows events that are
content diverse, so they contain many different types of events (e.g., for an
actor, it should show movies and marriages and awards, not just movies). We
present an algorithm to generate such timelines for a given time period and
screen size, based on submodular optimization and web co-occurrence statistics
with provable performance guarantees. A series of user studies using Mechanical
Turk shows that all three quality criteria are crucial to produce quality
timelines and that our algorithm significantly outperforms various baseline and
state-of-the-art methods.
Comment: To appear at ACM SIGKDD KDD'15. 12pp, 7 fig. With appendix. Demo and
other info available at http://cs.stanford.edu/~althoff/timemachine
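The provable guarantees mentioned above come from greedy maximization of a submodular objective, where each added event yields a diminishing marginal gain. A minimal sketch of such a greedy selection; the scoring function below is an illustrative toy combining relevance with temporal and content diversity, not the paper's actual objective:

```python
def greedy_timeline(events, k):
    """Greedily select k events to maximize a toy objective combining
    relevance with temporal and content-type diversity. Each event is a
    (relevance, year, type) tuple; penalties that grow with the chosen
    set give the diminishing marginal gains characteristic of
    submodular maximization."""
    def gain(ev, chosen):
        rel, year, etype = ev
        # Penalize events close in time to, or of the same type as,
        # events already on the timeline:
        year_penalty = sum(1.0 / (1 + abs(year - y)) for _, y, _ in chosen)
        type_penalty = sum(1.0 for _, _, t in chosen if t == etype)
        return rel - 0.5 * year_penalty - 0.5 * type_penalty

    selected, candidates = [], list(events)
    for _ in range(min(k, len(candidates))):
        best = max(candidates, key=lambda ev: gain(ev, selected))
        selected.append(best)
        candidates.remove(best)
    return selected

events = [(0.9, 2000, "movie"), (0.8, 2001, "movie"), (0.7, 2010, "award")]
print(greedy_timeline(events, 2))
```

On this toy input the second pick is the 2010 award rather than the slightly more relevant 2001 movie, illustrating how the diversity terms spread selections along the time axis and across event types.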
Predicting Audio Advertisement Quality
Online audio advertising is a particular form of advertising used abundantly
in online music streaming services. In these platforms, which tend to host tens
of thousands of unique audio advertisements (ads), providing high quality ads
ensures a better user experience and results in longer user engagement.
Therefore, the automatic assessment of these ads is an important step toward
audio ads ranking and better audio ads creation. In this paper we propose one
way to measure the quality of audio ads using a proxy metric called Long
Click Rate (LCR), defined as the amount of time a user engages with the
follow-up display ad (shown while the audio ad is playing) divided by the
number of impressions. We later focus on predicting audio ad quality using
only acoustic features such as harmony, rhythm, and timbre of the audio,
extracted from the raw waveform. We discuss how the characteristics of the
sound can be connected to concepts such as the clarity of the audio ad message,
its trustworthiness, etc. Finally, we propose a new deep learning model for
audio ad quality prediction, which outperforms the other discussed models
trained on hand-crafted features. To the best of our knowledge, this is the
first large-scale audio ad quality prediction study.
Comment: WSDM '18 Proceedings of the Eleventh ACM International Conference on
Web Search and Data Mining, 9 page
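Taking the LCR definition above literally (engagement time divided by impressions), the proxy reduces to a simple ratio over logged impressions. A minimal sketch under that reading; the log format is an illustrative assumption, not the paper's actual schema:

```python
def long_click_rate(engagement_seconds):
    """One literal reading of the LCR proxy: total time users spent
    engaging with the follow-up display ad, divided by the number of
    impressions of the audio ad. `engagement_seconds` holds one
    engagement duration per impression."""
    if not engagement_seconds:
        return 0.0
    return sum(engagement_seconds) / len(engagement_seconds)

# Four impressions of one audio ad, with display-ad engagement in seconds:
print(long_click_rate([45.0, 5.0, 60.0, 2.0]))  # 112 / 4 = 28.0
```

Aggregated per ad, such a score gives the per-impression engagement signal that the acoustic-feature models in the paper are trained to predict.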
Real-time multiframe blind deconvolution of solar images
The quality of images of the Sun obtained from the ground is severely
limited by the perturbing effect of the Earth's turbulent atmosphere. The
post-facto correction of the images to compensate for the presence of the
atmosphere requires the combination of high-order adaptive optics techniques,
fast measurements to freeze the turbulent atmosphere, and very time-consuming
blind deconvolution algorithms. Under mild seeing conditions, blind
deconvolution algorithms can produce images of astonishing quality. They can be
very competitive with those obtained from space, with the huge advantage of the
flexibility of the instrumentation thanks to the direct access to the
telescope. In this contribution we leverage deep learning techniques to
significantly accelerate the blind deconvolution process and produce corrected
images at a peak rate of ~100 images per second. We present two different
architectures that produce excellent image corrections with noise suppression
while maintaining the photometric properties of the images. As a consequence,
polarimetric signals can be obtained with standard polarimetric modulation
without any significant artifact. With the expected improvements in computer
hardware and algorithms, we anticipate that on-site real-time correction of
solar images will be possible in the near future.
Comment: 16 pages, 12 figures, accepted for publication in A&
Assessing the relevance of higher education courses
The establishment of the European Higher Education Area has involved specifying lists of professional competencies that programs are expected to develop, and with this the need for procedures to measure how every course within a higher education program is aligned with the program's competencies. We propose an instrument for characterizing this alignment, a process that we call assessing the relevance of a course. Using information from the course syllabus (objectives, contents, and assessment scheme), our instrument produces indicators for characterizing the syllabus in terms of a competence list and for assessing its coherence. Because assessment involves quality, the results obtained can also be used to revise and improve the course syllabus. We illustrate this process with an example of a methods course from a mathematics teacher education program at a Spanish university.