Spott : on-the-spot e-commerce for television using deep learning-based video analysis techniques
Spott is an innovative second-screen mobile multimedia application which offers viewers relevant information on objects (e.g., clothing, furniture, food) they see and like on their television screens. The application enables interaction between TV audiences and brands, so producers and advertisers can offer potential consumers tailored promotions, e-shop items, and/or free samples. In line with current views on innovation management, the technological excellence of the Spott application is coupled with iterative user involvement throughout the entire development process. This article discusses both of these aspects and how they influence each other. First, we focus on the technological building blocks that facilitate the (semi-)automatic interactive tagging of objects in video streams. Most of these building blocks make extensive use of novel, state-of-the-art deep learning concepts and methodologies. We show how these deep learning-based video analysis techniques facilitate video summarization, semantic keyframe clustering, and (similar-)object retrieval. Second, we provide insights into the user tests performed to evaluate and optimize the application's user experience. The lessons learned from these open field tests have already been essential input to the technology development and will further shape future modifications to the Spott application.
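The similar-object retrieval mentioned above is commonly implemented by comparing deep feature embeddings of images with cosine similarity; the abstract does not specify the exact method, so the following is a minimal sketch under that assumption, and the function names (`build_index`, `retrieve_similar`) are purely illustrative:

```python
import numpy as np

def build_index(embeddings):
    """L2-normalize catalog embeddings so a dot product equals cosine similarity.

    In practice the embeddings would come from a CNN feature extractor
    applied to tagged product images; here they are plain vectors.
    """
    e = np.asarray(embeddings, dtype=float)
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def retrieve_similar(index, query, top_k=3):
    """Return indices of the top_k catalog items most similar to the query."""
    q = np.asarray(query, dtype=float)
    q = q / np.linalg.norm(q)
    scores = index @ q                    # cosine similarity per catalog item
    return np.argsort(-scores)[:top_k].tolist()
```

A query embedding for an object spotted on screen would then be matched against the indexed e-shop catalog to surface visually similar items.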
The role of social networks in students’ learning experiences
The aim of this research is to investigate the role of social networks in computer science education. The Internet shows great potential for enhancing collaboration between people, and the role of social software has become increasingly relevant in recent years. This research focuses on analyzing the role that social networks play in students’ learning experiences. The construction of students’ social networks, the evolution of these networks, and their effects on the students’ learning experience in a university environment are examined.
Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy
Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process in order to provide situation-aware, data-driven assistance. In the context of endoscopic video analysis, the accurate classification of organs in the field of view of the camera poses a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability, and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, the mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: The results showed that the confidence measure had a significant influence on the classification accuracy, and that MI data are better suited for anatomical structure labeling than RGB data. Significance: This work significantly enhances the state of the art in automatic labeling of endoscopic videos by introducing the confidence metric, and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data from our experiments will be released as the first in vivo MI dataset upon publication of this paper.
Comment: 7 pages, 6 images, 2 tables
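Estimating confidence from the dispersion of class probabilities can be sketched with normalized entropy: a peaked distribution yields confidence near 1, a uniform one yields 0. The abstract does not state the paper's exact dispersion measure, so this is only an illustrative stand-in, and `tag_image` with its threshold is a hypothetical helper:

```python
import numpy as np

def classification_confidence(probs):
    """Confidence from the dispersion of class probabilities.

    Uses normalized entropy as an assumed stand-in for the paper's measure:
    confidence = 1 - H(p) / log(K), i.e. 1 for a one-hot prediction
    and 0 for a uniform one over K classes.
    """
    probs = np.asarray(probs, dtype=float)
    k = probs.shape[-1]
    eps = 1e-12                                   # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1)
    return 1.0 - entropy / np.log(k)

def tag_image(superpixel_probs, threshold=0.5):
    """Keep only superpixel predictions whose confidence exceeds a threshold."""
    conf = classification_confidence(superpixel_probs)
    labels = np.argmax(superpixel_probs, axis=-1)
    return [(int(l), float(c)) for l, c in zip(labels, conf) if c >= threshold]
```

Rejecting low-confidence superpixels in this way is one plausible mechanism by which a confidence measure could raise tagging accuracy, as reported in the Results.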
Fashion Conversation Data on Instagram
The fashion industry is establishing its presence on a number of visual-centric social media platforms such as Instagram. This creates an interesting clash, as fashion brands that have traditionally practiced highly creative and editorialized image marketing now have to engage with people on a platform that epitomizes impromptu, real-time conversation. What kinds of fashion images do brands and individuals share, and what types of visual features attract likes and comments? In this research, we take both quantitative and qualitative approaches to answer these questions. We analyze the visual features of fashion posts, first via manual tagging and then via training convolutional neural networks. The classified images were examined across four types of fashion brands: mega couture, small couture, designers, and high street. We find that while product-only images make up the majority of fashion conversation in terms of volume, body snaps and face images that portray fashion items more naturally tend to receive a larger number of likes and comments from the audience. Our findings bring insights into building an automated tool for classifying or generating influential fashion information. We make our novel dataset of 24,752 labeled images on fashion conversations, containing visual and textual cues, available to the research community.
Comment: 10 pages, 6 figures, This paper will be presented at ICWSM'1
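The engagement comparison across image categories (product-only vs. body snaps vs. face images) reduces to a per-category average of likes and comments over the classified posts. The sketch below shows that aggregation step only; the tuple format and category labels are illustrative, as the paper's actual pipeline is not detailed in the abstract:

```python
from collections import defaultdict

def mean_engagement(posts):
    """Average engagement (likes + comments) per image category.

    `posts` is a list of (category, likes, comments) tuples, where the
    category comes from the manual or CNN-based image classification.
    """
    totals = defaultdict(lambda: [0, 0])          # category -> [engagement sum, count]
    for category, likes, comments in posts:
        totals[category][0] += likes + comments
        totals[category][1] += 1
    return {c: s / n for c, (s, n) in totals.items()}
```

Comparing these per-category means is one straightforward way to quantify the finding that body snaps and face images attract more engagement than product-only shots.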