1,718 research outputs found

    Work-in-Progress: Rapid Development of Advanced Virtual Labs for In-Person and Online Education

    During the closure of K-12 schools and universities due to the COVID-19 pandemic, many educators turned to web conferencing tools such as Zoom and WebEx to deliver online lectures. For courses with labs, some teachers provided recorded videos of real labs. Watching recorded lab videos is a passive experience: the procedures and point of view are fixed, and students have no control over the lab, so they miss the opportunity to explore different options, including making mistakes, which is an important part of the learning process. One approach that holds great potential to enhance the laboratory experience in online education is the use of computer-based modeling and simulation tools. Simulation-based virtual laboratories emulate lab equipment and configurations in highly realistic 3D environments and can provide very effective learning experiences. While a limited number of interactive lab simulations exist for various subjects, their presentation is still primitive and often lacks realism and complexity. This paper presents methodologies and preliminary findings on the rapid development of advanced virtual labs using modeling and simulation for in-person and online education. The importance of modeling and simulation has long been recognized by the scientific community and by agencies such as the DoD and NSF. However, high-quality simulations are not commonplace, and simulations have not been widely employed in education. Existing educational simulations lack interoperability and compatibility. While there are sporadic uses of computer-based simulations in education, developed in a piecemeal fashion, there has never been systematic, industry-level development for such purposes. Virtual lab development usually requires a substantial amount of effort, and the lack of systematic research on rapid virtual lab development hinders their wide use in education. This paper proposes a holistic and systematic approach to rapid lab simulation development from several perspectives, including rapid generation of virtual environments, integration of state-of-the-art, industry-leading software tools, advanced software design techniques that enable large-scale software reuse, and innovative user interface design that facilitates the configuration and use of virtual labs by instructors and students. The paper will implement a virtual circuit lab that emulates the circuit lab for the course XXX offered at XXX University; this lab will be used to elucidate the crucial methodologies for rapid virtual lab development. The virtual lab contains highly realistic visual renderings and accurate functional representations of sophisticated equipment, such as digital oscilloscopes, function generators, and digital multimeters, together with an authentic rendition of the lab space. The virtual lab supports advanced analog and digital circuit simulation by integrating the de facto industry-standard circuit simulation engines SPICE and Xspice, covering the circuit labs in the course XXX. The Unity game engine is used to develop the front end of the virtual lab. Advanced software development methodologies will be investigated to facilitate software reuse and rapid development, e.g., the same simulation code can be used to support equipment manufactured by different vendors. The paper will also investigate the impact of the fidelity of the virtual lab, e.g., of the equipment and lab room, on student learning outcomes and efficacy.
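
    As a rough illustration of the SPICE integration described above, the sketch below drives ngspice in batch mode from Python: the front end (a Unity scene in the paper; a plain script here, purely for illustration) generates a netlist for the circuit the student has wired and hands it to the simulation engine. The example circuit, the file handling, and the assumption that an ngspice binary is on the PATH are ours, not the paper's.

        import os
        import subprocess
        import tempfile

        # Netlist for a simple RC low-pass filter driven by a 1 kHz square wave.
        # In a virtual lab, the front end would generate a netlist like this from
        # the components the student wires together in the 3D scene.
        NETLIST = """\
        * RC low-pass filter (illustrative)
        V1 in 0 PULSE(0 5 0 1u 1u 0.5m 1m)
        R1 in out 1k
        C1 out 0 100n
        .tran 10u 5m
        .print tran v(out)
        .end
        """

        def run_spice(netlist: str) -> str:
            """Run ngspice in batch mode on the given netlist and return its output."""
            with tempfile.NamedTemporaryFile("w", suffix=".cir", delete=False) as f:
                f.write(netlist)
                path = f.name
            try:
                # -b: batch mode, so the .print results are written to stdout
                # instead of opening an interactive shell.
                result = subprocess.run(
                    ["ngspice", "-b", path],
                    capture_output=True, text=True, check=True,
                )
                return result.stdout
            finally:
                os.remove(path)

        if __name__ == "__main__":
            print(run_spice(NETLIST))

    The front end would then parse the time/voltage pairs and animate the virtual oscilloscope trace accordingly.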

    Gaze-Based Personalized Multi-View Experiences

    This paper describes a solution for delivering and presenting stereoscopic video content to users in an innovative way. It adopts the multi-view paradigm of the H.264-MVC video coding standard and the emergent MPEG-DASH specification to provide users in heterogeneous network environments with multiple and varying perspectives of stereoscopic video sequences. Unlike existing 3D systems based on multi-view technology, which require high transmission bandwidth and high processing power on the terminal device to achieve the same objective, the proposed solution makes efficient use of network resources whilst being cost-effective. It offers users a higher quality of experience by seamlessly adapting the quality of the delivered video content to the network conditions, whilst providing a more realistic sense of immersion by offering stereoscopic views of the scene and dynamically switching the perspective to match the interests of the user. A non-intrusive head-tracking system using an off-the-shelf Web camera detects the focus of attention of the user and transmits this information to the server, which selects the most appropriate view to send to the client. Additionally, the system is able to generate the multiple-perspective stereoscopic scenes using 2D cameras.
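
    A minimal sketch of the client-side decision the abstract describes: the head-tracking estimate selects which camera perspective to request, while a simple rate-based heuristic selects the DASH representation. The view count, bitrate ladder, safety margin, and function names are illustrative assumptions, not values taken from the paper.

        from dataclasses import dataclass

        # Available encodings per view, ordered low to high bitrate (kbit/s).
        # The ladder below is illustrative only.
        BITRATE_LADDER = [500, 1500, 3000, 6000]
        NUM_VIEWS = 5  # assumed number of stereoscopic camera perspectives

        @dataclass
        class ClientState:
            gaze_x: float           # normalized horizontal head position in [0, 1]
            throughput_kbps: float  # recent measured download throughput

        def select_view(gaze_x: float, num_views: int = NUM_VIEWS) -> int:
            """Map the horizontal gaze/head position to the nearest perspective."""
            gaze_x = min(max(gaze_x, 0.0), 1.0)
            return min(int(gaze_x * num_views), num_views - 1)

        def select_bitrate(throughput_kbps: float, safety: float = 0.8) -> int:
            """Pick the highest representation that fits within a safety margin
            of the measured throughput (a simple rate-based DASH heuristic)."""
            budget = throughput_kbps * safety
            candidates = [b for b in BITRATE_LADDER if b <= budget]
            return candidates[-1] if candidates else BITRATE_LADDER[0]

        def next_segment_request(state: ClientState) -> dict:
            """Decide which view and quality to request for the next segment."""
            return {
                "view": select_view(state.gaze_x),
                "bitrate_kbps": select_bitrate(state.throughput_kbps),
            }

        if __name__ == "__main__":
            # Head slightly right of centre, moderate bandwidth.
            print(next_segment_request(ClientState(gaze_x=0.65, throughput_kbps=4200)))

    In the system described above this decision would be made server-side from the transmitted head-tracking data; the logic is the same either way.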

    Software architectural support for tangible user interfaces in distributed, heterogeneous computing environments

    This research focuses on tools that support the development of tangible interaction-based applications for distributed computing environments. Applications built with these tools are capable of utilizing heterogeneous resources for tangible interaction and can be reconfigured for different contexts with minimal code changes. Current trends in computing, especially in areas such as computational science, scientific visualization and computer-supported collaborative work, foreshadow increasing complexity, distribution and remoteness of computation and data. These trends imply that tangible interface developers must address concerns of both tangible interaction design and networked distributed computing. In this dissertation, we present a software architecture that supports separation of these concerns. Additionally, we present a tangibles-based software development toolkit based on this architecture that enables the logic of elements within a tangible user interface to be mapped to configurations that vary in the number, type and location of resources within a given tangibles-based system.
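
    The sketch below illustrates the kind of separation the dissertation argues for: interaction logic written against logical element names, with a separate configuration binding those names to networked resources so the application can be retargeted without code changes. All class, field and host names here are hypothetical; this is not the toolkit's actual API.

        from dataclasses import dataclass
        from typing import Callable, Dict

        @dataclass
        class Resource:
            """A physical tangible-interaction resource somewhere on the network."""
            kind: str   # e.g. "rfid-table", "rotary-dial", "touch-surface"
            host: str
            port: int

        # Deployment-specific mapping of logical interface elements to resources.
        # Swapping this configuration rebinds the application to different
        # hardware or locations without touching the interaction logic below.
        CONFIG: Dict[str, Resource] = {
            "zoom_control":  Resource("rotary-dial", "lab-node-1.example.org", 7001),
            "object_picker": Resource("rfid-table",  "viz-wall.example.org",   7002),
        }

        class TangibleApp:
            """Interaction logic written against logical element names only."""
            def __init__(self, config: Dict[str, Resource]):
                self.config = config
                self.handlers: Dict[str, Callable[[float], None]] = {}

            def on(self, element: str, handler: Callable[[float], None]) -> None:
                if element not in self.config:
                    raise KeyError(f"no resource bound to element '{element}'")
                self.handlers[element] = handler

            def dispatch(self, element: str, value: float) -> None:
                # In a real toolkit this event would arrive over the network from
                # the resource listed in CONFIG; here we invoke it directly.
                self.handlers[element](value)

        if __name__ == "__main__":
            app = TangibleApp(CONFIG)
            app.on("zoom_control", lambda v: print(f"zoom level set to {v:.2f}"))
            app.dispatch("zoom_control", 0.75)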