You can't always sketch what you want: Understanding Sensemaking in Visual Query Systems
Visual query systems (VQSs) empower users to interactively search for line
charts with desired visual patterns, typically specified using intuitive
sketch-based interfaces. Despite decades of past work on VQSs, these efforts
have not translated to adoption in practice, possibly because VQSs are largely
evaluated in unrealistic lab-based settings. To remedy this gap in adoption, we
collaborated with experts from three diverse domains---astronomy, genetics, and
material science---via a year-long user-centered design process to develop a
VQS that supports their workflow and analytical needs, and evaluate how VQSs
can be used in practice. Our study results reveal that ad-hoc sketch-only
querying is not as commonly used as prior work suggests, since analysts are
often unable to precisely express their patterns of interest. In addition, we
characterize three essential sensemaking processes supported by our enhanced
VQS. We discover that participants employ all three processes, but in different
proportions, depending on the analytical needs in each domain. Our findings
suggest that all three sensemaking processes must be integrated in order to
make future VQSs useful for a wide range of analytical inquiries.
Comment: Accepted for presentation at IEEE VAST 2019, to be held October 20-25
in Vancouver, Canada. The paper will also be published in a special issue of
IEEE Transactions on Visualization and Computer Graphics (TVCG), IEEE VIS
(InfoVis/VAST/SciVis) 2019.
ACM 2012 CCS: Human-centered computing, Visualization, Visualization design
and evaluation methods
Learning Dense Correspondences between Photos and Sketches
Humans effortlessly grasp the connection between sketches and real-world
objects, even when these sketches are far from realistic. Moreover, human
sketch understanding goes beyond categorization -- critically, it also entails
understanding how individual elements within a sketch correspond to parts of
the physical world it represents. What are the computational ingredients needed
to support this ability? Towards answering this question, we make two
contributions: first, we introduce a new sketch-photo correspondence benchmark,
containing 150K annotations of 6250 sketch-photo pairs across
125 object categories, augmenting the existing Sketchy dataset with
fine-grained correspondence metadata. Second, we propose a self-supervised
method for learning dense correspondences between sketch-photo pairs, building
upon recent advances in correspondence learning for pairs of photos. Our model
uses a spatial transformer network to estimate the warp flow between latent
representations of a sketch and photo extracted by a contrastive learning-based
ConvNet backbone. We found that this approach outperformed several strong
baselines and produced predictions that were quantitatively consistent with
other warp-based methods. However, our benchmark also revealed systematic
differences between predictions of the suite of models we tested and those of
humans. Taken together, our work suggests a promising path towards developing
artificial systems that achieve more human-like understanding of visual images
at different levels of abstraction. Project page:
https://photo-sketch-correspondence.github.io
Comment: Accepted to ICML 2023.
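The core idea of the method above, matching every location in a sketch's latent feature map to its most similar location in a photo's feature map, can be sketched in a few lines. This is a minimal nearest-neighbor matcher over assumed pre-extracted features, not the paper's spatial-transformer warp estimator:

```python
import numpy as np

def dense_correspondence(feat_sketch, feat_photo):
    """Match each sketch-feature cell to its most similar photo cell.

    feat_sketch, feat_photo: (H, W, C) feature maps (e.g. from a ConvNet).
    Returns an (H, W, 2) array of (row, col) coordinates in the photo grid.
    Illustrative nearest-neighbor matching only; the paper instead regresses
    a smooth warp flow with a spatial transformer network.
    """
    H, W, C = feat_sketch.shape
    s = feat_sketch.reshape(-1, C)
    p = feat_photo.reshape(-1, C)
    # Cosine similarity between all pairs of spatial locations.
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-8)
    p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-8)
    sim = s @ p.T                      # (H*W, H*W) similarity matrix
    best = sim.argmax(axis=1)          # best photo index per sketch cell
    coords = np.stack([best // W, best % W], axis=1)
    return coords.reshape(H, W, 2)

# Sanity check: identical feature maps should map each cell to itself.
rng = np.random.default_rng(0)
f = rng.normal(size=(4, 4, 8))
corr = dense_correspondence(f, f)
rows, cols = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
assert np.array_equal(corr[..., 0], rows)
assert np.array_equal(corr[..., 1], cols)
```

A learned warp, as in the paper, additionally enforces spatial smoothness, which raw per-cell argmax matching does not.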
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
Deep neural networks have been widely adopted in recent years, exhibiting
impressive performances in several application domains. It has however been
shown that they can be fooled by adversarial examples, i.e., images altered by
a barely-perceivable adversarial noise, carefully crafted to mislead
classification. In this work, we aim to evaluate the extent to which
robot-vision systems embodying deep-learning algorithms are vulnerable to
adversarial examples, and propose a computationally efficient countermeasure to
mitigate this threat, based on rejecting classification of anomalous inputs. We
then provide a clearer understanding of the safety properties of deep networks
through an intuitive empirical analysis, showing that the mapping learned by
such networks essentially violates the smoothness assumption of learning
algorithms. We finally discuss the main limitations of this work, including the
creation of real-world adversarial examples, and sketch promising research
directions.
Comment: Accepted for publication at the ICCV 2017 Workshop on Vision in
Practice on Autonomous Robots (ViPAR).
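The countermeasure described above, rejecting classification of anomalous inputs, can be illustrated with a reject-option classifier over assumed pre-extracted deep features. The centroid-distance threshold below is a stand-in for the paper's actual rejection mechanism:

```python
import numpy as np

class RejectingClassifier:
    """Nearest-centroid classifier with a reject option.

    Inputs far from every known class centroid (e.g. adversarial or
    otherwise anomalous samples) are assigned the REJECT label instead
    of being forced into a class.
    """
    REJECT = -1

    def __init__(self, threshold):
        self.threshold = threshold

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from each sample to every class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        labels = self.classes_[d.argmin(axis=1)]
        # Reject inputs that lie far from all known classes.
        labels[d.min(axis=1) > self.threshold] = self.REJECT
        return labels

# Two tight clusters; a far-away outlier is rejected rather than classified.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
clf = RejectingClassifier(threshold=1.0).fit(X, y)
print(clf.predict(np.array([[0.05, 0.0], [10.0, -10.0]])))  # [ 0 -1]
```

The design choice is the usual accuracy/coverage trade-off: a tighter threshold rejects more adversarial inputs but also more legitimate ones.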
Array-based architecture for FET-based, nanoscale electronics
Advances in our basic scientific understanding at the molecular and atomic level place us on the verge of engineering designer structures with key features at the single nanometer scale. This offers us the opportunity to design computing systems at what may be the ultimate limits on device size. At this scale, we are faced with new challenges and a new cost structure which motivates different computing architectures than we found efficient and appropriate in conventional very large scale integration (VLSI). We sketch a basic architecture for nanoscale electronics based on carbon nanotubes, silicon nanowires, and nano-scale FETs. This architecture can provide universal logic functionality with all logic and signal restoration operating at the nanoscale. The key properties of this architecture are its minimalism, defect tolerance, and compatibility with emerging bottom-up nanoscale fabrication techniques. The architecture further supports micro-to-nanoscale interfacing for communication with conventional integrated circuits and bootstrap loading.
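The universal logic functionality claimed above rests on a standard fact that array-based logic planes exploit: a single gate such as NOR is functionally complete. A quick boolean sketch (not a device-level model) of building NOT, OR, and AND from NOR alone:

```python
# NOR is functionally complete: every boolean function can be composed
# from it, which is what "universal logic functionality" requires.

def NOR(a, b):
    return int(not (a or b))

def NOT(a):
    # NOT(a) = NOR(a, a)
    return NOR(a, a)

def OR(a, b):
    # OR is the negation of NOR.
    return NOT(NOR(a, b))

def AND(a, b):
    # AND via De Morgan: a AND b = NOT(NOT a OR NOT b) = NOR(NOT a, NOT b).
    return NOR(NOT(a), NOT(b))

# Exhaustively verify the derived gates against their truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == int(a or b)
        assert AND(a, b) == int(a and b)
        assert NOT(a) == int(not a)
```

In hardware terms, this is why a regular array of identical restoring gates suffices for arbitrary logic, provided signals can also be inverted and restored at the nanoscale as the abstract describes.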