8,574 research outputs found

    Footprints of emergence

    It is ironic that the management of education has become more closed while learning has become more open, particularly over the past 10-20 years. The curriculum has become more instrumental, predictive, standardized, and micro-managed in the belief that this supports employability as well as the management of educational processes, resources, and value. Meanwhile, people have embraced interactive, participatory, collaborative, and innovative networks for living and learning. To respond to these challenges, we need to develop practical tools to help us describe these new forms of learning, which are multivariate, self-organised, complex, adaptive, and unpredictable. We draw on complexity theory and our experience as researchers, designers, and participants in open and interactive learning to go beyond conventional approaches. We develop a 3D model of landscapes of learning for exploring the relationship between prescribed and emergent learning in any given curriculum. We do this by repeatedly testing our descriptive landscapes (or footprints) against theory, research, and practice across a range of case studies. By doing this, we have not only come up with a practical tool which can be used by curriculum designers, but also realised that the curriculum itself can usefully be treated as emergent, depending on the dynamics between prescribed and emergent learning and how the learning landscape is curated.

    Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information

    Entertainment, education and training are changing because of multi-party interaction technology. In the past we have seen the introduction of embodied agents and robots that take the role of a museum guide, a news presenter, a teacher, a receptionist, or someone who is trying to sell you insurance, houses or tickets. In all these cases the embodied agent needs to explain and describe. In this paper we present the design of a 3D virtual presenter that uses different output channels to present and explain. Speech and animation (posture, pointing and involuntary movements) are among these channels. The behavior is scripted and synchronized with the display of a 2D presentation with associated text and regions that can be pointed at (sheets, drawings, and paintings). In this paper the emphasis is on the interaction between the 3D presenter and the 2D presentation.
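
    To make the idea of scripted, synchronized behavior concrete, the following is a minimal, hypothetical Python sketch of a script format in which each step couples a speech fragment with an optional pointing gesture aimed at a named region of the 2D sheet; all class and function names are illustrative and do not reflect the paper's actual architecture.

```python
# Hypothetical sketch: a minimal script format synchronizing a 3D presenter's
# speech and pointing gestures with regions of a 2D presentation sheet.
# Names (Region, ScriptItem, play_script) are illustrative, not the paper's API.
from dataclasses import dataclass
from typing import List, Optional
import time

@dataclass
class Region:
    """Rectangular area on the 2D sheet the presenter can point at."""
    name: str
    x: float
    y: float
    w: float
    h: float

@dataclass
class ScriptItem:
    """One synchronized presentation step."""
    start_s: float                    # offset from the start of the presentation
    speech: str                       # text handed to a text-to-speech channel
    gesture: str = "idle"             # e.g. "point", "beat", "posture_shift"
    target: Optional[Region] = None   # sheet region to point at, if any

def play_script(script: List[ScriptItem]) -> None:
    """Drive the output channels in wall-clock order (prints stand in for TTS/animation)."""
    t0 = time.time()
    for item in sorted(script, key=lambda s: s.start_s):
        time.sleep(max(0.0, item.start_s - (time.time() - t0)))
        print(f"[speech]  {item.speech}")
        if item.target:
            print(f"[gesture] {item.gesture} -> {item.target.name} "
                  f"at ({item.target.x}, {item.target.y})")

if __name__ == "__main__":
    chart = Region("sales_chart", 0.6, 0.2, 0.3, 0.3)
    play_script([
        ScriptItem(0.0, "Welcome to the quarterly overview."),
        ScriptItem(2.0, "As the chart shows, sales grew steadily.", "point", chart),
    ])
```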

    Using multimodal analysis to unravel a silent child’s learning

    Although the English Foundation Stage Curriculum for children aged 3 to 5 years recognises that children learn through talk and play and through ‘movement and all their senses’ (DfEE & QCA, 2000: 20), there is comparatively little theoretical understanding of how children learn through diverse ‘modes’, such as body movement, facial expression, gaze, the manipulation of objects and talk, and there is little practical guidance on how practitioners can support children’s ‘multimodal’ learning. Indeed, mounting research evidence indicates that since the introduction of a national early years curriculum and early years assessment schemes, practitioners have felt under increased pressure to focus on children’s verbal skills in order to provide evidence of children’s literacy and numeracy skills in preparation for primary education (see Flewitt, 2005a & 2005b). In the context of these changes, this article relates the story of Tallulah, a 3-year-old girl with a late July birthday, who, like many summer-born children in England, spent one year in an early years setting before moving to primary school aged just 4 years. The article draws on data collected as part of an ESRC-funded study that explored the different ‘modes’ young children use to make and express meaning in the different social settings of home and a preschool playgroup (Flewitt, 2003). Examples are given of how Tallulah communicated her understandings at home through skilful combinations of talk, gaze direction, body movement and facial expression, and how others in the home supported Tallulah’s learning. These are then compared with examples of how Tallulah communicated in playgroup, primarily by combining the silent modes of gaze, body movement and facial expression. The article identifies how the different social settings of home and preschool impacted upon her choices and uses of different expressive modes.

    Understanding the performance of interactive applications

    Many, if not most, computer systems are used by human users. The performance of such interactive systems ultimately affects those users. Thus, when measuring, understanding, and improving system performance, it makes sense to consider the human user's perspective. Essentially, the performance of interactive applications is determined by the perceptible lag in handling user requests. So, when characterizing the runtime of an interactive application, we need a new approach that focuses on the perceptible lags rather than on overall and general performance characteristics. Such a characterization should enable a new way to profile and improve the performance of interactive applications. Imagine a way that would seek out these perceptible lags and then investigate their causes. Performance analysts could simply optimize the responsible parts of the software, thus eliminating perceptible lag for interactive applications. Unfortunately, existing profiling approaches either incur significant overhead that makes them impractical for an interactive scenario, or they lack the ability to provide insight into the causes of long latencies. An effective approach for interactive applications has to fulfill several requirements, such as providing an accurate view of the causes of performance problems while perturbing the interactive application only insignificantly. We propose a new profiling approach that helps developers to understand and improve the perceptible performance of interactive applications and satisfies the above needs.
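
    As a rough illustration of profiling only perceptible lags rather than overall performance, the sketch below wraps an event handler, measures its latency, and records a stack trace only when the latency exceeds a commonly cited perceptibility threshold of about 100 ms; the threshold and all names are assumptions and this is not the paper's tool.

```python
# Illustrative sketch: record only event-handler invocations whose latency
# exceeds a perceptibility threshold, together with the responsible call stack.
import functools
import time
import traceback

PERCEPTIBLE_MS = 100          # assumed threshold for a noticeable lag
slow_events = []              # (handler name, latency in ms, stack) tuples

def track_latency(handler):
    """Decorator: time one event-handler call; log it only if it is perceptibly slow."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return handler(*args, **kwargs)
        finally:
            latency_ms = (time.perf_counter() - start) * 1000
            if latency_ms > PERCEPTIBLE_MS:
                slow_events.append((handler.__name__, latency_ms,
                                    "".join(traceback.format_stack(limit=10))))
    return wrapper

@track_latency
def on_button_click():
    time.sleep(0.15)          # simulated long-running work on the UI thread

if __name__ == "__main__":
    on_button_click()
    for name, ms, _stack in slow_events:
        print(f"perceptible lag: {name} took {ms:.0f} ms")
```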

    Enhancing creativity and play through accessible projector-based interactive PC-control touch technology

    Standard computer peripherals are often challenging to use for mobility-impaired users, in particular when it comes to controlling computer mouse movements efficiently and without risking strain injuries. Projected touch screen technology shows promise in this respect. This paper presents a Projected Interactive PC-control pilot (PIP) solution for computer interaction. The paper focuses on testing and improving the usability of the PIP solution through iterative user tests with mobility-impaired children. It describes prototype improvements aimed at fulfilling the requirements of users with reduced motor skills, and discusses challenges and key findings from the extensive usability tests. Our results demonstrate that interactive touch-based solutions may enable heavily impaired children to independently partake in creative activities and play, and point to the value of creating a touch-based computer interaction solution tailored to the needs of this user group.
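
    The PIP prototype itself is not described in detail here, so the following hypothetical sketch only illustrates the kind of core mapping a projector-based touch-to-mouse solution needs: calibrating from two observed corner touches and translating camera-space touch points into screen coordinates before issuing (stubbed) cursor moves. All names and values are illustrative.

```python
# Hypothetical sketch of the core mapping in a projector-based touch-to-mouse pipeline.
from dataclasses import dataclass

@dataclass
class Calibration:
    # camera-space points observed when the user touched the projected screen corners
    cam_top_left: tuple
    cam_bottom_right: tuple
    screen_w: int = 1920
    screen_h: int = 1080

    def to_screen(self, cam_x: float, cam_y: float) -> tuple:
        """Linearly map a camera-space touch point to screen pixels, clamped to the screen."""
        (x0, y0), (x1, y1) = self.cam_top_left, self.cam_bottom_right
        sx = (cam_x - x0) / (x1 - x0) * self.screen_w
        sy = (cam_y - y0) / (y1 - y0) * self.screen_h
        return (int(min(max(sx, 0), self.screen_w - 1)),
                int(min(max(sy, 0), self.screen_h - 1)))

def move_cursor(x: int, y: int) -> None:
    # Stub: a real prototype would call a platform mouse API here
    # (for example via a library such as pynput).
    print(f"cursor -> ({x}, {y})")

if __name__ == "__main__":
    cal = Calibration(cam_top_left=(102, 85), cam_bottom_right=(598, 430))
    move_cursor(*cal.to_screen(350, 260))   # a touch roughly in the middle of the projection
```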

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to the visually impaired through various sensory elements, opening a new perspective for appreciating visual artwork. In addition, it explores a technique for expressing a color code by integrating patterns, temperatures, scents, music, and vibrations, and presents future research topics. A holistic experience using multi-sensory interaction is provided to people with visual impairment to convey the meaning and contents of a work through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them to appreciate artwork at a deeper level than can be achieved with hearing or touch alone. The development of such art appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. The development of these new aids ultimately expands opportunities for the non-visually impaired as well as the visually impaired to enjoy works of art, and breaks down the boundaries between the disabled and the non-disabled in the field of culture and the arts through continuous efforts to enhance accessibility. In addition, the developed multi-sensory expression and delivery tool can be used as an educational tool to increase product and artwork accessibility and usability through multi-modal interaction. Training with the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind’s eye.
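
    The book's concrete color-coding scheme is not reproduced here; the following illustrative sketch shows one possible way to map an RGB color onto multi-sensory parameters (pitch, vibration, temperature, and tactile pattern density). All mapping constants are assumptions.

```python
# Illustrative sketch only: encode a color as multi-sensory cues,
# in the spirit of the color codes described above.
import colorsys

def color_to_cues(r: int, g: int, b: int) -> dict:
    """Map an RGB color to rough multi-sensory parameters (all mappings are assumed)."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    warmth = 1 - 2 * min(h, 1 - h)      # 1 for reds, 0 for cyan-like hues
    return {
        # hue -> musical pitch: map the hue circle onto one octave above A4 (440 Hz)
        "pitch_hz": round(440 * (2 ** h), 1),
        # darker colors -> stronger vibration
        "vibration_amplitude": round(1.0 - l, 2),
        # warm hues feel warmer to the touch than cool hues
        "temperature_c": round(20 + 15 * warmth, 1),
        # saturation -> tactile pattern density
        "pattern_density": round(s, 2),
    }

if __name__ == "__main__":
    print(color_to_cues(255, 80, 0))    # a warm orange
    print(color_to_cues(30, 60, 220))   # a cool blue
```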

    Do you agree? Contrasting Google's core web vitals and the impact of cookie consent banners with actual web QoE

    Providing sophisticated web Quality of Experience (QoE) has become paramount for web service providers and network operators alike. Due to advances in web technologies (HTML5, responsive design, etc.), traditional web QoE models focusing mainly on loading times have to be refined and improved. In this work, we relate Google’s Core Web Vitals, a set of metrics for improving user experience, to the loading time aspects of web QoE, and investigate whether the Core Web Vitals and web QoE agree on the perceived experience. To this end, we first perform objective measurements in the web using Google’s Lighthouse. To close the gap between metrics and experience, we complement these objective measurements with subjective assessments from multiple crowdsourced QoE studies. For this purpose, we developed CWeQS, a publicly available customized framework that emulates the entire web page loading process and asks users for their experience while controlling the Core Web Vitals. To properly configure CWeQS for the planned QoE study and the crowdsourcing setup, we conduct pre-studies in which we evaluate the importance of the loading strategy of a web page and the importance of the user task. The obtained insights allow us to conduct the desired QoE studies for each of the Core Web Vitals. Furthermore, we assess the impact of cookie consent banners, which have become ubiquitous due to regulatory demands, on the Core Web Vitals and investigate their influence on web QoE. Our results suggest that the Core Web Vitals are much less predictive for web QoE than expected and that page loading times remain the main metric and influence factor in this context. We further observe that unobtrusive and acentric cookie consent banners are preferred by end-users, and that additional delays caused by interacting with consent banners in order to agree to or reject cookies should be accounted for along with the actual page load time, in order to reduce waiting times and thus improve web QoE.
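
    As a sketch of the objective-measurement step, the snippet below runs Google's Lighthouse CLI against a URL and extracts loading-related Core Web Vitals from the JSON report; it assumes the Lighthouse CLI is installed (for example via npm install -g lighthouse), the audit IDs match the installed version, and it is not part of CWeQS.

```python
# Sketch: collect loading-related Core Web Vitals for one URL via the Lighthouse CLI.
import json
import subprocess

def measure_core_web_vitals(url: str, report_path: str = "lh-report.json") -> dict:
    """Run Lighthouse headlessly and pull a few loading metrics from its JSON report."""
    subprocess.run(
        ["lighthouse", url, "--output=json", f"--output-path={report_path}",
         "--quiet", "--chrome-flags=--headless"],
        check=True,
    )
    with open(report_path) as f:
        audits = json.load(f)["audits"]
    return {
        "lcp_ms": audits["largest-contentful-paint"]["numericValue"],
        "cls": audits["cumulative-layout-shift"]["numericValue"],
        "tbt_ms": audits["total-blocking-time"]["numericValue"],
    }

if __name__ == "__main__":
    print(measure_core_web_vitals("https://example.org"))
```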