4,766 research outputs found
Creation of virtual worlds from 3D models retrieved from content aware networks based on sketch and image queries
The recent emergence of user-generated content requires new content creation tools that are both easy to learn and easy to use. These new tools should enable the user to construct new high-quality content with minimum effort; it is essential to allow existing multimedia content to be reused as building blocks when creating new content. In this work we present a new tool for automatically constructing virtual worlds with minimum user intervention. Users can create these worlds by drawing a simple sketch, or by using interactively segmented 2D objects from larger images. The system receives the sketch or the segmented image as a query, and uses it to find similar 3D models stored in a Content Centric Network. The user selects a suitable model from the retrieved models, and the system uses it to automatically construct a virtual 3D world.
A multimedia package for patient understanding and rehabilitation of non-contact anterior cruciate ligament injuries
Non-contact anterior cruciate ligament (ACL) injury is one of the most common ligament injuries in the body. Many patients receive graft surgery to repair the damage, but have to undertake an extensive period of rehabilitation. However, non-compliance and a lack of understanding of the injury, the healing process and rehabilitation mean that patients return to activities before effective structural integrity of the graft has been reached. When clinicians educate the patient to encourage compliance with treatment and rehabilitation, the only tools currently in wide use are static plastic models, line diagrams and pamphlets. As modern technology grows in use in anatomical education, we have developed a unique educational and training package for patients to use in gaining a better understanding of their injury and treatment plan. We have combined cadaveric dissections of the knee (captured with high-resolution digital images) with reconstructed 3D models from the Visible Human dataset, computer-generated animations, and images to produce a multimedia package that can be used to educate patients about their knee anatomy, the injury, the healing process and their rehabilitation, and how these link to key stages of improving graft integrity. It is hoped that this will improve patient compliance with their rehabilitation programme and lead to a better long-term prognosis for returning to normal or near-normal activities. Feedback from healthcare professionals about this package has been positive and encouraging for its long-term use.
Video Data Visualization System: Semantic Classification And Personalization
We present in this paper an intelligent video data visualization tool, based on semantic classification, for retrieving and exploring a large-scale corpus of videos. Our work builds on the semantic classes resulting from semantic analysis of the videos; the obtained classes are projected into the visualization space as a graph of nodes and edges, where the nodes are the keyframes of the video documents and the edges are the relations between documents and document classes. Finally, we construct the user's profile, based on interaction with the system, to better adapt the system to the user's preferences.
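The keyframe/class graph this abstract describes can be sketched as a simple bipartite structure. The class names and keyframe identifiers below are invented for illustration; the paper's actual classes come from its semantic analysis step.

```python
from collections import defaultdict

# Hypothetical sketch of the graph described above: nodes are video
# keyframes plus semantic classes; an edge connects each keyframe
# (document) to the class it was assigned. IDs are illustrative only.
edges = [
    ("kf_001", "sports"),
    ("kf_002", "sports"),
    ("kf_003", "news"),
]

# Adjacency list for the bipartite keyframe/class graph.
adj = defaultdict(set)
for keyframe, cls in edges:
    adj[keyframe].add(cls)
    adj[cls].add(keyframe)

# Keyframes sharing a class cluster together in the visualization
# space: they are two hops apart via their shared class node.
print(sorted(adj["sports"]))  # keyframes assigned to "sports"
```

In a visualization front end, class nodes would act as hubs, so a force-directed layout naturally pulls keyframes of the same class into one cluster.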
Piloting mobile mixed reality simulation in paramedic distance education
New pedagogical methods delivered through mobile mixed reality (via a user-supplied mobile phone incorporating 3D printing and augmented reality) are becoming possible in distance education, shifting pedagogy from 2D images, words and videos to interactive simulations and immersive mobile skill-training environments. This paper presents insights from the implementation and testing of a mobile mixed reality intervention in an Australian distance paramedic science classroom. The context of this mobile simulation study is skills acquisition in airway management, focusing on direct laryngoscopy with foreign body removal. The intervention aims to assist distance education learners in practising skills prior to attending mandatory residential schools, and helps establish a baseline of equality between students who study face to face and those who study at a distance. Outcomes from the pilot study showed improvements in several key performance indicators among the distance learners, but also revealed problems to overcome in the pedagogical method.
Video Watermarking Based on Interactive Detection of Feature Regions
Video watermarking is very important in many areas of activity, especially in multimedia applications. The security of video streams has therefore recently become a major concern and has attracted increasing attention in both the research and industrial domains. Several video watermarking approaches have been proposed but, to our knowledge, no method achieves a satisfactory compromise between invisibility and robustness against all usual attacks. In our previous work, we proposed a new video watermarking approach based on feature regions generated from a mosaic frame and multi-frequential embedding. This approach achieved good invisibility and robustness against most usual attacks. In future work, we propose to optimize the choice of the region of interest by using a crowdsourcing technique. Crowdsourcing is an emerging field of knowledge management that involves analyzing the behavior of users when the
Multimedia broadcast and internet satellite system design and user trial results
The EU-funded project System for Advanced Multimedia Broadcast and IT Services (SAMBITS) has created an enhanced, synchronised multimedia terminal for merging satellite broadcast and internet telecommunication services in a way that efficiently combines the large bandwidth of the broadcast channel with the interactivity of the internet. This paper proposes a novel broadcast and internet service concept, illustrates this concept with two service scenarios, and develops a system architecture to demonstrate the range of key benefits provided by these new technologies. It then describes the interactive multimedia terminal that was used for consuming this new service concept. Finally, the results of the user trials on the terminal are presented and discussed.
'Breaking the glass': preserving social history in virtual environments
New media technologies play an important role in the evolution of our society. Traditional museums and heritage sites have evolved from the 'cabinets of curiosity' that focused mainly on the authority of the voice organising content, to places that offer interactivity as a means to experience historical and cultural events of the past. They attempt to break down the division between visitors and historical artefacts, employing modern technologies that allow the audience to perceive a range of perspectives on a historical event. In this paper, we discuss virtual reconstruction and interactive storytelling techniques as a research methodology and as educational and presentation practices for cultural heritage sites. We present the Narrating the Past project as a case study, in order to illustrate recent changes in the preservation of social history and in guided tourist trails that aim to make the visitor's experience more than just an architectural walk-through.
Text-based Editing of Talking-head Video
Editing talking-head video to change the speech content or to remove filler words is challenging. We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified, while maintaining a seamless audio-visual flow (i.e. no jump cuts). Our method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression and scene illumination per frame. To edit a video, the user only has to edit the transcript, and an optimization strategy then chooses segments of the input corpus as base material. The annotated parameters corresponding to the selected segments are seamlessly stitched together and used to produce an intermediate video representation in which the lower half of the face is rendered with a parametric face model. Finally, a recurrent video generation network transforms this representation into a photorealistic video that matches the edited transcript. We demonstrate a large variety of edits, such as the addition, removal, and alteration of words, as well as convincing language translation and full sentence synthesis.
- …