Assessing a Collaborative Online Environment for Music Composition
The current pilot study tested the effectiveness of an e-learning environment built to enable students to compose
music collaboratively. The participants interacted online by using synchronous and asynchronous resources to
develop a project in which they composed a new music piece in collaboration. After the learning sessions,
individual semi-structured interviews with the participants were conducted to analyze the participants'
perspectives regarding the e-learning environment's functionality, the resources of the e-learning platform, and
their overall experience with the e-learning process. Qualitative analyses of forum discussions with respect to
metacognitive dimensions, and semi-structured interview transcriptions were performed. The findings showed
that the participants successfully completed the composition task in the virtual environment, and that they
demonstrated the use of metacognitive processes. Moreover, four themes were apparent in the semi-structured
interview transcriptions: teamwork, the platform, face-to-face/online differences, and strengths/weaknesses.
Overall, the participants exhibited an awareness of the potential of the online tools and of the task performed. The
results are discussed in light of metacognitive processes and of the aspects that rendered the virtual
activity effective for learning: the learning environment, the platform, the technological resources, the level of
challenge, and the nature of the activity. The possible implications of the findings for research on online
collaborative composition are also considered.
Issues and techniques for collaborative music making on multi-touch surfaces
A range of systems exist for collaborative music making on multi-touch surfaces. Some of them have been highly successful, but currently there is no systematic way of designing them to maximise collaboration for a particular user group. We are particularly interested in systems that will engage both novices and experts. We designed a simple application in an initial attempt to clearly analyse some of the issues. Our application allows groups of users to express themselves in collaborative music making using pre-composed materials. User studies were video recorded and analysed using two techniques derived from Grounded Theory and Content Analysis. A questionnaire was also administered and evaluated. Findings suggest that the application affords engaging interaction. Enhancements for collaborative music making on multi-touch surfaces are discussed. Finally, future work on the prototype is proposed to maximise engagement.
Indexing, browsing and searching of digital video
Video is a communications medium that normally brings together moving pictures with a synchronised audio track into a discrete piece or pieces of information. The size of a "piece" of video can variously be referred to as a frame, a shot, a scene, a clip, a programme or an episode, and these are distinguished by their lengths and by their composition. We shall return to the definition of each of these in section 4 of this chapter. In modern society, video is ver…
Semantic multimedia remote display for mobile thin clients
Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user interactivity handling. The experimental setup considers the Linux X windows system and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft-RDP), by considering text editing and WWW browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB or SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.
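For reference, the PSNR figures quoted above can be reproduced with a few lines of Python; this is a minimal sketch of the standard PSNR formula for 8-bit frames (the peak value of 255 and the toy pixel data are illustrative assumptions, not from the paper):

```python
import math

def psnr(ref, test, peak=255):
    """Peak signal-to-noise ratio between two equal-sized 8-bit frames,
    given as flat lists of pixel intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(peak ** 2 / mse)

# Toy example: every pixel of the rendered frame is off by 16 levels.
reference = [100, 120, 140, 160]
rendered  = [116, 136, 156, 176]
print(round(psnr(reference, rendered), 2))  # ~24.05 dB
```

Values in the 30-42 dB range reported above thus correspond to per-pixel errors far smaller than in this toy case.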
Building self-optimized communication systems based on applicative cross-layer information
This article proposes the Implicit Packet Meta Header (IPMH) as a standard method to compute and represent common QoS properties of the Application Data Units (ADU) of multimedia streams using legacy and proprietary streams' headers (e.g. Real-time Transport Protocol headers). The use of IPMH by mechanisms located at different layers of the communication architecture will allow implementing fine-grained per-packet self-optimization of communication services regarding the actual application requirements. A case study showing how IPMH is used by error control mechanisms in the context of wireless networks is presented in order to demonstrate the feasibility and advantages of this approach.
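The idea can be sketched as follows, with the caveat that the field names and the RTP-to-IPMH mapping below are hypothetical illustrations, not the actual IPMH layout from the article: each packet carries a small set of QoS properties, derived from legacy stream headers, that lower layers can act on without parsing the application payload.

```python
from dataclasses import dataclass

# Hypothetical per-packet QoS properties; field names are illustrative only.
@dataclass
class ImplicitPacketMetaHeader:
    priority: int        # relative importance of this ADU (0 = highest)
    loss_tolerant: bool  # may the packet be dropped under congestion?
    deadline_ms: int     # playout deadline after which the packet is useless

def derive_from_rtp(payload_type: int, marker: bool) -> ImplicitPacketMetaHeader:
    """Toy mapping from legacy RTP header fields to IPMH properties:
    marked packets (e.g. the last packet of a video frame) are treated
    as higher priority and less loss-tolerant."""
    return ImplicitPacketMetaHeader(
        priority=0 if marker else 1,
        loss_tolerant=not marker,
        deadline_ms=100 if payload_type < 96 else 40,
    )

# An error-control mechanism at a lower layer could, for instance,
# retransmit only packets whose meta header says loss_tolerant is False.
h = derive_from_rtp(payload_type=96, marker=True)
print(h.priority, h.loss_tolerant, h.deadline_ms)  # 0 False 40
```

The point of the design is that the mapping is computed once, near the application, and then consumed by any layer (error control, scheduling) without further payload inspection.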
Customized television: Standards compliant advanced digital television
This correspondence describes a European Union supported collaborative project called CustomTV, based on the premise that future TV sets will provide all sorts of multimedia information and interactivity, as well as manage all such services according to each user's or group of users' preferences/profiles. We have demonstrated the potential of recent standards (MPEG-4 and MPEG-7) to implement such a scenario by building the following services: an advanced EPG, Weather Forecasting, and Stock Exchange/Flight Information.
Digital libraries on an iPod: Beyond the client-server model
This paper describes an experimental system that enhanced an iPod with digital library capabilities. Using the open source digital library software Greenstone as a base, this paper maps out the technical steps necessary to achieve this, along with an account of our subsequent experimentation. This included command-line usage of Greenstone's basic runtime system on the device, augmenting the iPod's main interactive menu-driven application to include searching and hierarchical browsing of digital library collections stored locally, and a selection of "launcher" applications for target documents such as text files, images and audio. Media-rich applications for digital stories and collaging were also developed. We also configured the iPod to run as a web server to provide digital library content to others over a network, effectively turning the traditional mobile client-server model upside down.
- …