
    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of Chorus and establishing the existing landscape in multimedia search engines, we identified and analyzed gaps within the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements and technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps: core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Semantic multimedia remote display for mobile thin clients

    Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user interactivity handling. The experimental setup considers the Linux X Window System and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft RDP) by considering text editing and WWW browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB or SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.
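    As a rough illustration of the two visual-quality metrics cited above (PSNR and SSIM), the following sketch compares a reference frame with a remotely rendered frame using NumPy and scikit-image. The frame contents, sizes and noise model are illustrative assumptions, not the paper's benchmark setup.

        import numpy as np
        from skimage.metrics import structural_similarity

        def psnr(reference: np.ndarray, rendered: np.ndarray, max_value: float = 255.0) -> float:
            """Peak signal-to-noise ratio in dB between two same-shaped frames."""
            mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")  # identical frames
            return 10.0 * np.log10((max_value ** 2) / mse)

        # Hypothetical grayscale captures of the reference display and the client-rendered display.
        reference = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        noise = np.random.randint(-2, 3, reference.shape)           # small rendering error
        rendered = np.clip(reference.astype(np.int16) + noise, 0, 255).astype(np.uint8)

        print("PSNR (dB):", psnr(reference, rendered))
        print("SSIM:", structural_similarity(reference, rendered, data_range=255))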

    Towards a multimedia remote viewer for mobile thin clients

    Consider a traditional mobile user who wants to connect to a remote multimedia server. In order to allow them to enjoy remotely the same user experience (play, interact, edit, store and share capabilities) as in a traditional fixed LAN environment, several deadlocks must be dealt with: (1) heavy and heterogeneous content should be sent through a bandwidth-constrained network; (2) the displayed content should be of good quality; (3) user interaction should be processed in real time; and (4) the complexity of the practical solution should not exceed the capabilities of the mobile client in terms of CPU, memory and battery. The present paper takes up this challenge and presents a fully operational MPEG-4 BiFS solution.

    Virtual Reference for Video Collections: System Infrastructure, User Interface and Pilot User Study

    A new video-based Virtual Reference (VR) tool called VideoHelp was designed and developed to support video navigation escorting, a function that enables librarians to co-navigate a digital video with patrons in a web-based environment. A client/server infrastructure was adopted for the VideoHelp system, and timestamps were used to achieve video synchronization between librarians and patrons. A pilot usability study of the VideoHelp prototype in video seeking was conducted, and the preliminary results demonstrated that the system is easy to learn and use, and that real-time assistance from virtual librarians in video navigation is desirable on a conditional basis.
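    Timestamp-based co-navigation of this kind could look roughly like the sketch below, in which the librarian's playback position is periodically pushed to the patron's client, which seeks whenever it has drifted too far; the names and drift threshold are assumptions for illustration, not the VideoHelp implementation.

        from dataclasses import dataclass

        DRIFT_THRESHOLD_S = 0.5  # assumed tolerance before forcing a seek

        @dataclass
        class PlayerState:
            position_s: float  # current playback position in seconds
            playing: bool

        def sync_patron(patron: PlayerState, librarian_position_s: float) -> PlayerState:
            """Align the patron's player with the librarian's broadcast timestamp."""
            if abs(patron.position_s - librarian_position_s) > DRIFT_THRESHOLD_S:
                # Drifted too far: jump to the librarian's position (a seek).
                return PlayerState(position_s=librarian_position_s, playing=patron.playing)
            return patron

        # Example: the librarian is escorting the patron through minute 2 of the video.
        patron = PlayerState(position_s=118.7, playing=True)
        patron = sync_patron(patron, librarian_position_s=120.0)
        print(patron.position_s)  # 120.0, the patron's player has been re-synchronized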

    From Keyword Search to Exploration: How Result Visualization Aids Discovery on the Web

    A key to the Web's success is the power of search. The elegant way in which search results are returned is usually remarkably effective. However, for exploratory search, in which users need to learn, discover, and understand novel or complex topics, there is substantial room for improvement. Human-computer interaction researchers and web browser designers have developed novel strategies to improve Web search by enabling users to conveniently visualize, manipulate, and organize their Web search results. This monograph offers fresh ways to think about search-related cognitive processes and describes innovative design approaches to browsers and related tools. For instance, while keyword search presents users with results for specific information (e.g., what is the capital of Peru), other methods may let users see and explore the contexts of their requests for information (related or previous work, conflicting information), or the properties that associate groups of information assets (group legal decisions by lead attorney). We also consider both the traditional and novel ways in which these strategies have been evaluated. From our review of cognitive processes, browser design, and evaluations, we reflect on the future opportunities and new paradigms for exploring and interacting with Web search results.

    Optimized mobile thin clients through a MPEG-4 BiFS semantic remote display framework

    According to the thin client computing principle, the user interface is physically separated from the application logic. In practice, only a viewer component is executed on the client device, rendering the display updates received from the distant application server and capturing the user interaction. Existing remote display frameworks are not optimized to encode the complex scenes of modern applications, which are composed of objects with very diverse graphical characteristics. In order to tackle this challenge, we propose to transfer to the client, in addition to the binary encoded objects, semantic information about the characteristics of each object. Through this semantic knowledge, the client is able to react autonomously to user input and does not have to wait for the display update from the server. Because it reduces interaction latency and mitigates the bursty remote display traffic pattern, the presented framework is of particular interest in a wireless context, where bandwidth is limited and expensive. In this paper, we describe a generic architecture of a semantic remote display framework. Furthermore, we have developed a prototype that uses the MPEG-4 Binary Format for Scenes (BiFS) to convey the semantic information to the client. We experimentally compare the bandwidth consumption of MPEG-4 BiFS with existing, non-semantic remote display frameworks. In a text editing scenario, we achieve an average reduction of 23% of the data peaks observed in remote display protocol traffic.
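    To make the idea of client-side autonomy concrete, the sketch below shows one way a client could use per-object semantic tags to decide whether an input event can be handled locally or must be forwarded to the server; the tag names and dispatch logic are assumptions for illustration, not the framework's actual protocol.

        from dataclasses import dataclass, field

        @dataclass
        class SceneObject:
            object_id: int
            semantic_type: str          # e.g. "text_field", "image", "button" (assumed tags)
            local_state: dict = field(default_factory=dict)

        def handle_key_press(obj: SceneObject, char: str) -> bool:
            """Return True if the event was handled locally, False if it must go to the server."""
            if obj.semantic_type == "text_field":
                # The client knows this object accepts text, so it can echo the character
                # immediately instead of waiting for a display update from the server.
                obj.local_state["text"] = obj.local_state.get("text", "") + char
                return True
            return False  # unknown semantics: forward the event to the application server

        editor_box = SceneObject(object_id=7, semantic_type="text_field")
        handled = handle_key_press(editor_box, "a")
        print(handled, editor_box.local_state)  # True {'text': 'a'}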

    Embodiment in 3D virtual retail environments: exploring perceptions of the virtual shopping experience

    The customer can now easily create and customize their own personal three-dimensional (3D) virtual bodies in a variety of virtual environments; could you, by becoming a virtual body, actually enhance your online shopping and buying experiences, or would this potentially inhibit the pure visceral pleasure of retail therapy? "Second Life allows you to be a celebrity in your own lunchtime ... you can design the body you've always wanted, and indulge your fashionista fetish for very little money. You can be the most attractive, best-dressed version of yourself you can imagine." This paper investigates online shopping in Second Life, through the experience of being avatars. We will discuss the possibilities of using avatars as brand new consumer identities for personalised and customised fashion shopping within the 3D multi-user virtual environment, and question the influences and effects of these developments on the traditional high street shopping trip. The hyper-unrealistic and non-sensory interface of online shopping for clothes has been hotly debated over the last decade, through the media, the industry and, most importantly, by the buying public. The customer’s inability to try on and experience the product has been the main inhibitor to shopping online, and the high levels of product returns in home shopping dramatically reflect this reality. Faster broadband connections and improved 2D web sites are making clothes shopping on the web more accessible, and for important customer groups, such as young professional females and plus-size teenagers, virtual 3D technologies offer freedom of choice in any location. Retailers are now confidently providing different shopping experiences by combining 2D and 3D interactive visualisation technologies with advanced marketing techniques to create virtual retail environments that attempt to actualise the true essence of shopping: browsing, socialising, trying on before buying and, in a new twist, leaving the store proudly wearing the item just purchased. American Apparel, Bershka, L’Oreal, Calvin Klein, Reebok, Sears, Nike and Adidas are pioneering virtual mega stores, and all offer newly innovative and alternative shopping experiences inside 3D multi-user virtual environments. An experiential and exploratory approach will be used to investigate fashion brands and their virtual 3D stores in Second Life. As 3D avatars, we will record a range of customer perceptions and attempt to map their shopping patterns in this massively popular virtual world. The qualitative data gathered will inform discussions about the value of the virtual shopping experience for the customer and the retailer. The conclusions will also question the possibility of using avatars in a virtual shopping environment to acquire accurate body specifications for better fit, and the collection of personal details for use in the future development of alternative shopping experiences.

    Emerging technologies for learning report (volume 3)

