
    Transforming pre-service teacher curriculum: observation through a TPACK lens

    This paper discusses an international online collaborative learning experience through the lens of the Technological Pedagogical Content Knowledge (TPACK) framework. The teacher knowledge required to provide effective, transformative learning experiences for 21st-century learners in a digital world is complex, situated and changing. The discussion looks beyond the opportunity for knowledge development in content, pedagogy and technology as separate components of TPACK, towards the interaction between those three components. Implications for practice are also discussed. In today's technology-infused classrooms, it falls to teacher educators, practising teachers and pre-service teachers to explore and establish effective practices for using technology to enhance learning.

    Teaching and learning in virtual worlds: is it worth the effort?

    Educators have been quick to spot the enormous potential afforded by virtual worlds for situated and authentic learning, for practising tasks that carry potentially serious consequences in the real world, and for bringing geographically dispersed faculty and students together in the same space (Gee, 2007; Johnson and Levine, 2008). Though this potential has largely been realised, it has generally not come without cost, in terms of limited institutional buy-in, steep learning curves for all participants, and the lack of a sound theoretical framework to support learning activities (Campbell, 2009; Cheal, 2007; Kluge & Riley, 2008). This symposium will explore the affordances and issues associated with teaching and learning in virtual worlds, returning throughout to the question: is it worth the effort?

    Remote Real-Time Collaboration Platform enabled by the Capture, Digitisation and Transfer of Human-Workpiece Interactions

    In today's highly globalised manufacturing ecosystem, product design and verification activities, production and inspection processes, and technical support services are spread across global supply chains and customer networks. A platform that lets global teams collaborate in real time on complex tasks is therefore highly desirable. This work investigates the design and development of a remote real-time collaboration platform using human motion capture technology powered by infrared-light-based depth imaging sensors borrowed from the gaming industry. The unique functionality of the proposed platform is the sharing of physical context during a collaboration session: it exchanges not only human actions but also the effects of those actions on the task environment. This enables teams to work remotely on a common task at the same time and to receive immediate feedback from each other, which is vital for collaborative design, inspection and verification tasks in the factories of the future.
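
    To illustrate the core idea of exchanging human actions together with their effects on the task environment, the minimal sketch below streams per-frame interaction data to a remote peer. The read_interaction_frame() capture call, the message fields and the host/port are hypothetical placeholders standing in for a depth-sensor SDK; this is not the platform's actual implementation.

```python
import json
import socket
import time

def read_interaction_frame():
    """Hypothetical stand-in for a depth-sensor SDK call.

    Returns skeleton joints for the operator's hand plus the pose of
    the tracked workpiece, so a receiver can replay both the action
    and its effect on the shared task environment.
    """
    return {
        "timestamp": time.time(),
        "hand_joints": [[0.1, 0.2, 0.9]] * 21,            # (x, y, z) in metres
        "workpiece_pose": [0.0, 0.0, 1.2, 0.0, 0.0, 0.0],  # x, y, z, roll, pitch, yaw
    }

def stream_interactions(host: str, port: int, fps: int = 30) -> None:
    """Send newline-delimited JSON interaction frames to a remote collaborator."""
    with socket.create_connection((host, port)) as sock:
        while True:
            frame = read_interaction_frame()
            sock.sendall((json.dumps(frame) + "\n").encode("utf-8"))
            time.sleep(1.0 / fps)

if __name__ == "__main__":
    # Hypothetical peer address; a real deployment would also need a
    # matching receiver that replays the frames in a shared 3D context.
    stream_interactions("collaborator.example.com", 5005)
```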

    First Steps Towards Blended Learning @ Bond


    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    Face performance capture and reenactment techniques typically use multiple cameras and sensors positioned at a distance from the face or mounted on heavy wearable devices, which limits their application in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters; through careful adversarial training of the parameter-space synthetic rendering, a videorealistic animation is produced. The problem is challenging because the human visual system is sensitive to the smallest facial irregularities that could occur in the final results, and this sensitivity is even stronger for video. Our solution is trained in a pre-processing stage in a supervised manner, without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds and movements, handles people of different ethnicities, and can operate in real time.
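
    To make the first stage of this pipeline concrete, here is a minimal PyTorch-style sketch of projecting an egocentric RGB frame into a low-dimensional vector of expression parameters. The layer sizes, the 64-dimensional parameter space and the ExpressionEncoder name are illustrative assumptions, not the network described in the paper; the adversarially trained renderer that maps parameters to videorealistic frames is only indicated in a comment.

```python
import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    """Sketch: map an egocentric RGB frame to facial expression parameters.

    Architecture details here are assumptions for illustration, not the
    EgoFace network itself.
    """

    def __init__(self, n_params: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), # 32 -> 16
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(128, n_params)  # low-dimensional latent parameters

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(frame))

# A conditioned generator (trained adversarially, per the abstract) would
# then render these parameters into a front-view videorealistic frame.
encoder = ExpressionEncoder()
params = encoder(torch.randn(1, 3, 128, 128))  # one 128x128 RGB frame
print(params.shape)  # torch.Size([1, 64])
```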