Context-aware visual exploration of molecular databases
Facilitating the visual exploration of scientific data has received increasing attention in the past decade or so. Especially in life-science-related application areas, the amount of available data has grown at a breathtaking pace. In this paper we describe an approach that allows for visual inspection of large collections of molecular compounds. In contrast to classical visualizations of such spaces, we incorporate a specific focus of analysis, for example the outcome of a biological experiment such as high-throughput screening results. The presented method uses this experimental data to select molecular fragments of the underlying molecules that have interesting properties, and uses the resulting space to generate a two-dimensional map based on a singular value decomposition algorithm and a self-organizing map. Experiments on real datasets show that the resulting visual landscape groups molecules of similar chemical properties in densely connected regions.
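As a rough illustration of the dimensionality-reduction step described above, the sketch below projects a toy molecule-by-fragment matrix with a truncated SVD and then arranges the reduced vectors on a small self-organizing map grid. The matrix, grid size, and training schedule are illustrative stand-ins, not the authors' setup.

```python
# Minimal sketch: truncated SVD followed by a small self-organizing map (SOM).
# Fragment selection from experimental data is assumed to have happened already.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a fragment-occurrence matrix: 200 molecules x 50 fragments.
X = rng.poisson(0.3, size=(200, 50)).astype(float)

# --- Truncated SVD: keep the top-k singular directions as a compact embedding.
k = 10
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Z = U[:, :k] * s[:k]                      # (200, k) reduced representation

# --- Minimal SOM on a g x g grid of codebook vectors in the reduced space.
g = 8
codebook = rng.normal(scale=0.1, size=(g * g, k))
grid = np.array([(i, j) for i in range(g) for j in range(g)], dtype=float)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                # decaying learning rate
    sigma = max(g / 2 * (1 - epoch / 20), 0.5) # shrinking neighbourhood width
    for z in rng.permutation(Z):
        bmu = np.argmin(((codebook - z) ** 2).sum(axis=1))  # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # grid distances
        h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood weights
        codebook += lr * h[:, None] * (z - codebook)

# Each molecule lands on the grid cell of its best-matching unit: a 2D "map".
cells = np.argmin(((Z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
coords = grid[cells]
print(coords[:5])
```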
Exploring the Design Space of Immersive Urban Analytics
Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of combination methods of 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under certain circumstances by considering the visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing works, possible future research opportunities are explored and discussed.
Comment: 23 pages, 11 figures
Single-Shot Clothing Category Recognition in Free-Configurations with Application to Autonomous Clothes Sorting
This paper proposes a single-shot approach for recognising clothing
categories from 2.5D features. We propose two visual features, BSP (B-Spline
Patch) and TSD (Topology Spatial Distances) for this task. The local BSP
features are encoded by LLC (Locality-constrained Linear Coding) and fused with
three different global features. Our visual feature is robust to deformable
shapes and our approach is able to recognise the category of unknown clothing
in unconstrained and random configurations. We integrated the category
recognition pipeline with a stereo vision system, clothing instance detection,
and dual-arm manipulators to achieve an autonomous sorting system. To verify
the performance of our proposed method, we built a high-resolution RGBD
clothing dataset of 50 clothing items of 5 categories sampled in random
configurations (a total of 2,100 clothing samples). Experimental results show
that our approach is able to reach 83.2% accuracy while classifying clothing
items which were previously unseen during training. This advances beyond the
previous state-of-the-art by 36.2%. Finally, we evaluate the proposed approach
in an autonomous robot sorting system, in which the robot recognises a clothing
item from an unconstrained pile, grasps it, and sorts it into a box according
to its category. Our proposed sorting system achieves reasonable sorting
success rates with single-shot perception.
Comment: 9 pages, accepted by IROS201
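The LLC encoding mentioned above admits a compact closed-form approximation; the sketch below shows that step in isolation for a single descriptor, using a toy codebook and illustrative dimensions rather than the authors' BSP features or settings.

```python
# Hypothetical sketch of locality-constrained linear coding (LLC) for one
# local descriptor: solve a small least-squares problem over the k nearest
# codewords and renormalise the weights so they sum to one.
import numpy as np

def llc_encode(x, codebook, k=5, eps=1e-4):
    """Encode one descriptor x (d,) against a codebook (M, d)."""
    d2 = ((codebook - x) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:k]                  # k nearest codewords
    B = codebook[idx] - x                     # shift codewords to the descriptor
    G = B @ B.T                               # local Gram matrix
    G += eps * np.trace(G) * np.eye(k)        # regularisation for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()
    code = np.zeros(len(codebook))
    code[idx] = w                             # sparse code over the codebook
    return code

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 64))         # toy 256-atom codebook
x = rng.normal(size=64)                       # one local feature
print(llc_encode(x, codebook).nonzero()[0])   # indices of the k active atoms
```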
Visually Indicated Sounds
Objects make distinctive sounds when they are hit or scratched. These sounds
reveal aspects of an object's material properties, as well as the actions that
produced them. In this paper, we propose the task of predicting what sound an
object makes when struck as a way of studying physical interactions within a
visual scene. We present an algorithm that synthesizes sound from silent videos
of people hitting and scratching objects with a drumstick. This algorithm uses
a recurrent neural network to predict sound features from videos and then
produces a waveform from these features with an example-based synthesis
procedure. We show that the sounds predicted by our model are realistic enough
to fool participants in a "real or fake" psychophysical experiment, and that
they convey significant information about material properties and physical
interactions.
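To make the regression step concrete, the sketch below wires a recurrent network that maps per-frame visual features to per-frame sound features, roughly in the spirit of the abstract. The feature sizes and the choice of an LSTM are assumptions, and the example-based waveform synthesis step is omitted.

```python
# Illustrative video-to-sound-feature regressor: an LSTM over per-frame
# visual features produces a per-frame sound-feature vector.
import torch
import torch.nn as nn

class SoundFeaturePredictor(nn.Module):
    def __init__(self, frame_dim=512, hidden=256, sound_dim=42):
        super().__init__()
        self.rnn = nn.LSTM(frame_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, sound_dim)   # e.g. cochleagram-like bands

    def forward(self, frames):                     # frames: (B, T, frame_dim)
        h, _ = self.rnn(frames)
        return self.head(h)                        # (B, T, sound_dim)

model = SoundFeaturePredictor()
frames = torch.randn(4, 30, 512)                   # 4 clips, 30 frames each
pred = model(frames)
loss = nn.functional.mse_loss(pred, torch.randn_like(pred))
loss.backward()
print(pred.shape)                                  # torch.Size([4, 30, 42])
```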
Becoming the Expert - Interactive Multi-Class Machine Teaching
Compared to machines, humans are extremely good at classifying images into
categories, especially when they possess prior knowledge of the categories at
hand. If this prior information is not available, supervision in the form of
teaching images is required. To learn categories more quickly, people should
see important and representative images first, followed by less important
images later - or not at all. However, image-importance is individual-specific,
i.e. a teaching image is important to a student if it changes their overall
ability to discriminate between classes. Further, students keep learning, so
while image-importance depends on their current knowledge, it also varies with
time.
In this work we propose an Interactive Machine Teaching algorithm that
enables a computer to teach challenging visual concepts to a human. Our
adaptive algorithm chooses, online, which labeled images from a teaching set
should be shown to the student as they learn. We show that a teaching strategy
that probabilistically models the student's ability and progress, based on
their correct and incorrect answers, produces better 'experts'. We present
results using real human participants across several varied and challenging
real-world datasets.
Comment: CVPR 201
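The following toy loop illustrates the general idea of online, probabilistic teaching: a simple Beta posterior per class tracks the student's answer history, and the next teaching image is drawn from the class the model currently believes the student handles worst. It is an illustrative simplification, not the authors' algorithm; the class names and simulated student are made up.

```python
# Toy interactive teaching loop with a Beta(1, 1) prior per class.
import random

classes = ["oak", "maple", "birch"]
teaching_set = {c: [f"{c}_{i}.jpg" for i in range(20)] for c in classes}
posterior = {c: {"correct": 1, "wrong": 1} for c in classes}

def expected_accuracy(c):
    p = posterior[c]
    return p["correct"] / (p["correct"] + p["wrong"])

def next_image():
    # Teach the class with the lowest estimated student accuracy.
    c = min(classes, key=expected_accuracy)
    return c, random.choice(teaching_set[c])

def record_answer(c, was_correct):
    posterior[c]["correct" if was_correct else "wrong"] += 1

# Simulated session: the (simulated) student improves slightly over time.
for step in range(15):
    c, img = next_image()
    answered_correctly = random.random() < expected_accuracy(c) + 0.2
    record_answer(c, answered_correctly)

print({c: round(expected_accuracy(c), 2) for c in classes})
```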
Proposing a hybrid approach for emotion classification using audio and video data
Emotion recognition has been a research topic in the field of Human-Computer Interaction (HCI) during recent years. Computers have become an inseparable part of human life, and users need human-like interaction to communicate with them more naturally. Many researchers have therefore become interested in emotion recognition and classification using different sources. A hybrid approach of audio and text has recently been introduced. All such approaches aim to raise the accuracy and appropriateness of emotion classification. In this study, a hybrid approach of audio and video is applied for emotion recognition. The novelty of this approach lies in selecting audio and video characteristics and combining their features into a single specification for classification. In this research, the SVM method is used to classify the data in the SAVEE database. The experimental results show that the maximum classification accuracy for audio data alone is 91.63%, while the hybrid approach achieves 99.26%.
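A minimal sketch of the kind of early-fusion pipeline the abstract describes is given below: audio and video feature vectors for each clip are concatenated and classified with an SVM. The random arrays stand in for real SAVEE features, and all dimensions and hyperparameters are assumptions, not the authors' settings.

```python
# Early fusion of audio and video features followed by an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips = 480
audio_feats = rng.normal(size=(n_clips, 39))     # e.g. MFCC-style statistics
video_feats = rng.normal(size=(n_clips, 128))    # e.g. facial landmark features
labels = rng.integers(0, 7, size=n_clips)        # 7 emotion categories

X = np.hstack([audio_feats, video_feats])        # fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```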