Semantic multimedia remote display for mobile thin clients
Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user interactivity handling. The experimental setup considers the Linux X Window System and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft-RDP), by considering text editing and WWW browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB or SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to the VNC-HEXTILE case.
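To make the first metric concrete: the PSNR figures quoted above follow from the mean squared error between a reference frame and its client-side rendering. A minimal sketch (illustrative only, not the paper's benchmarking harness; the frame size and noise level are assumptions):

```python
import numpy as np

def psnr(reference: np.ndarray, rendered: np.ndarray) -> float:
    """Peak signal-to-noise ratio for 8-bit images (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((255.0 ** 2) / mse)

# Example: a frame with mild synthetic noise lands in the 30-42 dB band reported above.
ref = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
noisy = np.clip(ref.astype(np.int16) + np.random.randint(-8, 9, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```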
Learning from Multiple Sources for Video Summarisation
Many visual surveillance tasks, e.g. video summarisation, are conventionally accomplished through analysing imagery-based features. Relying solely on visual cues for public surveillance video understanding is unreliable, since visual observations obtained from public-space CCTV video data are often not sufficiently trustworthy and events of interest can be subtle. On the other hand, non-visual data sources such as weather reports and traffic sensory signals are readily accessible but are not explored jointly to complement visual data for video content analysis and summarisation. In this paper, we present a novel unsupervised framework to learn jointly from both visual and independently-drawn non-visual data sources for discovering the meaningful latent structure of surveillance video data. In particular, we investigate ways to cope with discrepant dimensions and representations whilst associating these heterogeneous data sources, and derive an effective mechanism to tolerate missing and incomplete data from different sources. We show that the proposed multi-source learning framework not only achieves better video content clustering than state-of-the-art methods, but is also capable of accurately inferring missing non-visual semantics from previously unseen videos. In addition, a comprehensive user study is conducted to validate the quality of video summarisation generated using the proposed multi-source model.
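As a loose, hypothetical analogue of the multi-source idea (not the paper's framework): z-score each source separately so blocks of discrepant dimensionality contribute comparably, impute the missing non-visual readings, and cluster the fused representation. The data, feature shapes, and the choice of k-means below are all assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
visual = rng.normal(size=(n, 64))        # stand-in imagery-based descriptors
weather = rng.normal(size=(n, 4))        # stand-in non-visual sensor features
weather[rng.random(n) < 0.3] = np.nan    # ~30% of clips lack non-visual data

# Impute missing non-visual readings, then normalise each source separately
# so that blocks with discrepant dimensions contribute comparably.
weather_filled = SimpleImputer(strategy="mean").fit_transform(weather)
joint = np.hstack([
    StandardScaler().fit_transform(visual),
    StandardScaler().fit_transform(weather_filled),
])

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(joint)
print(np.bincount(labels))  # cluster sizes over the joint representation
```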
UG^2: a Video Benchmark for Assessing the Impact of Image Restoration and Enhancement on Automatic Visual Recognition
Advances in image restoration and enhancement techniques have led to discussion about how such algorithms can be applied as a pre-processing step to improve automatic visual recognition. In principle, techniques like deblurring and super-resolution should yield improvements by de-emphasizing noise and increasing signal in an input image. But the historically divergent goals of the computational photography and visual recognition communities have created a significant need for more work in this direction. To facilitate new research, we introduce a new benchmark dataset called UG^2, which contains three difficult real-world scenarios: uncontrolled videos taken by UAVs and manned gliders, as well as controlled videos taken on the ground. Over 160,000 annotated frames for hundreds of ImageNet classes are available, which are used for baseline experiments that assess the impact of known and unknown image artifacts and other conditions on common deep learning-based object classification approaches. Further, current image restoration and enhancement techniques are evaluated by determining whether or not they improve baseline classification performance. Results show that there is plenty of room for algorithmic innovation, making this dataset a useful tool going forward.
Comment: Supplemental material: https://goo.gl/vVM1xe, Dataset: https://goo.gl/AjA6En, CVPR 2018 Prize Challenge: ug2challenge.or
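The evaluation protocol reduces to one question: does running enhancement before classification raise top-1 accuracy? A minimal, hypothetical harness is sketched below; the `classify` and `enhance` callables stand in for real models and are not part of the UG^2 release:

```python
from typing import Callable, List, Sequence

def top1_accuracy(preds: Sequence[int], truths: Sequence[int]) -> float:
    """Fraction of frames whose top-1 predicted class matches ground truth."""
    return sum(p == t for p, t in zip(preds, truths)) / len(truths)

def restoration_gain(frames: List, truths: Sequence[int],
                     classify: Callable, enhance: Callable) -> float:
    """Accuracy delta from applying `enhance` before `classify` (hypothetical API)."""
    baseline = top1_accuracy([classify(f) for f in frames], truths)
    enhanced = top1_accuracy([classify(enhance(f)) for f in frames], truths)
    return enhanced - baseline  # positive => the pre-processing step helped
```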
Video semantic clustering with sparse and incomplete tags
Clustering tagged videos into semantic groups is important but challenging due to the need for jointly learning correlations between heterogeneous visual and tag data. The task is made more difficult by inherently sparse and incomplete tag labels. In this work, we develop a method for accurately clustering tagged videos based on a novel Hierarchical Multi-Label Random Forest model capable of correlating structured visual and tag information. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations, and thus discovers more accurate semantic correlations among different video data, even with highly sparse/incomplete tags.
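A rough analogue of the tag-completion step (a flat multi-label forest, not the hierarchical model described above) can be sketched with scikit-learn, which accepts a 2-D indicator matrix as a multi-label target; all data below is synthetic stand-in material:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, d, n_tags = 300, 32, 10
features = rng.normal(size=(n, d))                   # stand-in visual descriptors
tags = (rng.random((n, n_tags)) < 0.15).astype(int)  # sparse binary tag matrix

# scikit-learn forests handle a 2-D indicator matrix as a multi-label target.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(features[:250], tags[:250])

# Predict full tag vectors for previously unseen videos from features alone.
predicted_tags = forest.predict(features[250:])
print(predicted_tags.shape)  # (50, 10) binary tag predictions
```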