Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ
Scattertext is an open source tool for visualizing linguistic variation
between document categories in a language-independent way. The tool presents a
scatterplot in which each axis corresponds to the rank-frequency with which a term occurs in
a category of documents. Through a tie-breaking strategy, the tool is able to
display thousands of visible term-representing points and find space to legibly
label hundreds of them. Scattertext also lends itself to a query-based
visualization of how the use of terms with similar embeddings differs between
document categories, as well as a visualization for comparing the importance
scores of bag-of-words features to univariate metrics.
Comment: ACL 2017 Demos. 6 pages, 5 figures. See the GitHub repo
https://github.com/JasonKessler/scattertext for source code and documentation.
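The rank-frequency coordinates described in the abstract can be sketched as follows. This is an illustrative simplification, not Scattertext's actual implementation: the whitespace tokenization and the alphabetical tie-breaking are assumptions standing in for the paper's tie-breaking strategy.

```python
from collections import Counter

def rank_frequency_coords(docs_a, docs_b):
    """Assign each term an (x, y) coordinate: its frequency rank within
    category A and within category B (rank 0 = most frequent).
    Terms absent from a category fall to the bottom ranks there."""
    counts_a = Counter(t for d in docs_a for t in d.split())
    counts_b = Counter(t for d in docs_b for t in d.split())
    vocab = set(counts_a) | set(counts_b)

    def ranks(counts):
        # Sort terms by descending frequency; ties broken alphabetically
        # (a toy stand-in for the paper's tie-breaking strategy).
        ordered = sorted(vocab, key=lambda t: (-counts[t], t))
        return {t: r for r, t in enumerate(ordered)}

    ra, rb = ranks(counts_a), ranks(counts_b)
    return {t: (ra[t], rb[t]) for t in vocab}

coords = rank_frequency_coords(
    ["the cat sat", "the cat purred"],
    ["the dog ran", "the dog barked"],
)
```

Terms that land near one axis and far from the other (here "cat" and "dog") are the category-distinctive terms the scatterplot highlights; shared terms like "the" fall near the diagonal.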
Investigating the effectiveness of an efficient label placement method using eye movement data
This paper focuses on improving the efficiency and effectiveness of dynamic and interactive maps in relation to the user. A label placement method with improved algorithmic efficiency is presented. Since this algorithm influences the actual placement of name labels on the map, we tested whether the more efficient algorithm also creates more effective maps, that is, how well the map information is processed by the user. We tested 30 participants while they were working on a dynamic and interactive map display. Their task was to locate geographical names on each of the presented maps. Their eye movements were registered together with the time at which a given label was found. The gathered data reveal no difference in the users' response times, nor in the number and duration of fixations, between the two map designs. The results of this study show that the efficiency of label placement algorithms can be improved without disturbing the user's cognitive map. Consequently, we created a more efficient map without affecting its effectiveness for the user.
Reinforced Labels: Multi-Agent Deep Reinforcement Learning for Point-Feature Label Placement
In recent years, Reinforcement Learning combined with Deep Learning
techniques has proven successful at solving complex problems in various
domains, including robotics, self-driving cars, and finance. In this paper, we
introduce Reinforcement Learning (RL) to label placement, a complex task
in data visualization that seeks optimal positioning for labels to avoid
overlap and ensure legibility. Our novel point-feature label placement method
utilizes Multi-Agent Deep Reinforcement Learning to learn the label placement
strategy, making it the first machine-learning-driven labeling method, in
contrast to existing hand-crafted algorithms designed by human experts. To facilitate RL
learning, we developed an environment where an agent acts as a proxy for a
label, a short textual annotation that augments visualization. Our results show
that the strategy trained by our method significantly outperforms the random
strategy of an untrained agent and the compared methods designed by human
experts in terms of completeness (i.e., the number of placed labels). The
trade-off is increased computation time, making the proposed method slower than
the compared methods. Nevertheless, our method is ideal for scenarios where the
labeling can be computed in advance, and completeness is essential, such as
cartographic maps, technical drawings, and medical atlases. Additionally, we
conducted a user study to assess the perceived performance. The outcomes
revealed that the participants considered the proposed method to be
significantly better than the other examined methods. This indicates that the
improved completeness is not just reflected in the quantitative metrics but
also in the subjective evaluation by the participants.
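The environment described above, where an agent acts as a proxy for a label and is rewarded for completeness, can be sketched minimally as follows. Everything here is a hypothetical simplification: the four candidate offsets, the unit-square labels, and the binary reward are illustrative assumptions, not the paper's actual action space or reward function.

```python
# Candidate offsets around a point feature: the four classic label
# positions from point-feature labeling (assumed action space).
OFFSETS = [(1, 1), (-1, 1), (1, -1), (-1, -1)]

def overlaps(a, b):
    """Axis-aligned overlap test for unit-square labels centered at a, b."""
    return abs(a[0] - b[0]) < 1 and abs(a[1] - b[1]) < 1

class LabelEnv:
    """One agent acts per label; reward 1 if the chosen position
    overlaps no already-placed label (a stand-in for 'completeness')."""
    def __init__(self, points):
        self.points = points    # anchor coordinates of the point features
        self.placed = []        # positions of successfully placed labels

    def step(self, point_idx, action):
        x, y = self.points[point_idx]
        dx, dy = OFFSETS[action]
        pos = (x + dx, y + dy)
        reward = 0 if any(overlaps(pos, p) for p in self.placed) else 1
        if reward:
            self.placed.append(pos)
        return reward

env = LabelEnv([(0.0, 0.0), (1.5, 0.0)])
r1 = env.step(0, 0)  # top-right of point 0 is free -> reward 1
r2 = env.step(1, 1)  # top-left of point 1 collides with that label -> 0
r3 = env.step(1, 0)  # top-right of point 1 is free -> reward 1
```

An RL agent trained in such an environment learns which discrete position to pick per label; the paper's multi-agent setup additionally lets label agents coordinate rather than act greedily in sequence as here.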
Adaptive Layout for Interactive Documents
This thesis presents a novel approach to creating automated layouts for rich illustrative material that can adapt to the screen size and contextual requirements. The adaptation not only considers the global layout but also deals with the content and layout adaptation of individual illustrations within it. A unique solution has been developed that integrates constraint-based and force-directed techniques to create adaptive grid-based and non-grid layouts. A set of annotation layouts is developed that adapts the annotated illustrations to match the contextual requirements over time.
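The force-directed component mentioned above can be illustrated with a toy one-dimensional relaxation: annotation labels that sit closer than a minimum gap push each other apart until they are legibly separated. The gap, step count, and force strength are made-up parameters for illustration, not values from the thesis.

```python
def relax_labels(positions, min_gap=1.0, steps=50, strength=0.1):
    """Toy force-directed relaxation along one axis: each label is
    repelled by every neighbour closer than min_gap, proportionally
    to the remaining overlap (illustrative parameters)."""
    ys = list(positions)
    for _ in range(steps):
        for i in range(len(ys)):
            force = 0.0
            for j in range(len(ys)):
                if i == j:
                    continue
                d = ys[i] - ys[j]
                if abs(d) < min_gap:
                    # Push away from the too-close neighbour.
                    sign = 1.0 if d >= 0 else -1.0
                    force += strength * sign * (min_gap - abs(d))
            ys[i] += force
    return ys

ys = relax_labels([0.0, 0.1, 3.0])
```

After relaxation the two colliding labels have drifted roughly a full gap apart, while the isolated third label is untouched; constraint-based techniques would then clamp such positions to the grid or to anchoring constraints.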
RL-LABEL: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios
Labels are widely used in augmented reality (AR) to display digital
information. Ensuring the readability of AR labels requires placing them
occlusion-free while keeping their visual links legible, especially when multiple
labels exist in the scene. Although existing optimization-based methods, such
as force-based methods, are effective in managing AR labels in static
scenarios, they often struggle in dynamic scenarios with constantly moving
objects. This is due to their focus on generating layouts optimal for the
current moment, neglecting future moments and leading to sub-optimal or
unstable layouts over time. In this work, we present RL-LABEL, a deep
reinforcement learning-based method for managing the placement of AR labels in
scenarios involving moving objects. RL-LABEL considers the current and
predicted future states of objects and labels, such as positions and
velocities, as well as the user's viewpoint, to make informed decisions about
label placement. It balances the trade-offs between immediate and long-term
objectives. Our experiments on two real-world datasets show that RL-LABEL
effectively learns the decision-making process for long-term optimization,
outperforming two baselines (i.e., no view management and a force-based method)
by minimizing label occlusions, line intersections, and label movement
distance. Additionally, a user study involving 18 participants indicates that
RL-LABEL excels over the baselines in aiding users to identify, compare, and
summarize data on AR labels within dynamic scenes.
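Two of the quantitative criteria named above, leader-line intersections and label movement distance, can be computed for a single frame as follows. This is a minimal sketch with an assumed cost structure (the occlusion term is omitted and the weighting is invented); it is not RL-LABEL's reward function.

```python
def segments_intersect(p1, p2, p3, p4):
    """Proper segment intersection test via orientation signs."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def layout_cost(anchors, labels, prev_labels, w=1.0):
    """Per-frame cost: leader-line crossings plus total label movement
    since the previous frame (illustrative terms and weight w)."""
    lines = list(zip(anchors, labels))
    crossings = sum(
        segments_intersect(a1, l1, a2, l2)
        for i, (a1, l1) in enumerate(lines)
        for (a2, l2) in lines[i + 1:]
    )
    movement = sum(
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        for (x0, y0), (x1, y1) in zip(prev_labels, labels)
    )
    return crossings + w * movement
```

A purely greedy optimizer minimizes this cost frame by frame and can thrash between layouts; an RL policy that also sees predicted object states can accept a slightly worse current frame to avoid crossings and movement in future frames.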