54 research outputs found

    Rapid Development of Advanced Virtual Labs for In-Person and Online Education

    Get PDF
    This abstract discusses methodologies and preliminary findings on the rapid development of advanced virtual labs using modeling and simulation for in-person and online education, including rapid generation of virtual environments, integration of state-of-the-art, industry-leading software tools, advanced software design techniques that enable large-scale software reuse, and innovative user interface design that facilitates the configuration and use of virtual labs by instructors and students. The latest design and development of the virtual lab for electronic circuits is presented.

    Simulating Function Generators and Oscilloscopes in a Virtual Laboratory Environment

    Get PDF
    This paper discusses the development of a virtual laboratory for simulating electronic instruments commonly used in science and engineering courses, such as function generators and digital storage oscilloscopes. Mathematical equations are used to represent continuous signals and ensure signal integrity, while C# delegates are adopted to enable communication between simulated devices. The approach allows for loose coupling between software components and high cohesion within individual components, and can be applied to other virtual laboratory developments. The virtual laboratory provides a means for students to gain hands-on experience with electronic instruments and improve their understanding of theoretical concepts.
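    To make the delegate-based decoupling concrete, here is a minimal Python sketch of the same pattern using plain callbacks (the paper's implementation uses C# delegates; the FunctionGenerator and Oscilloscope classes, the connect method, and the 10 ms capture window below are illustrative assumptions, not the paper's API):

    ```python
    import math
    from typing import Callable

    # A signal is a continuous function of time (seconds -> volts), mirroring the
    # equation-based representation described in the paper.
    Signal = Callable[[float], float]

    class FunctionGenerator:
        """Produces a continuous signal and notifies subscribers, delegate-style."""
        def __init__(self) -> None:
            self._subscribers: list[Callable[[Signal], None]] = []

        def connect(self, probe: Callable[[Signal], None]) -> None:
            self._subscribers.append(probe)      # analogous to += on a C# delegate

        def output_sine(self, amplitude: float, frequency: float) -> None:
            signal: Signal = lambda t: amplitude * math.sin(2 * math.pi * frequency * t)
            for probe in self._subscribers:      # analogous to invoking the delegate
                probe(signal)

    class Oscilloscope:
        """Samples the received continuous signal only when it needs to draw a trace."""
        def __init__(self, sample_rate: float = 1e4) -> None:
            self.sample_rate = sample_rate
            self.trace: list[float] = []

        def on_signal(self, signal: Signal) -> None:
            n = int(self.sample_rate * 0.01)     # capture a 10 ms window
            self.trace = [signal(i / self.sample_rate) for i in range(n)]

    gen, scope = FunctionGenerator(), Oscilloscope()
    gen.connect(scope.on_signal)                 # the generator never sees a scope type
    gen.output_sine(amplitude=5.0, frequency=1e3)
    print(f"captured {len(scope.trace)} samples, peak {max(scope.trace):.2f} V")
    ```

    Because the generator only knows the callback signature, an oscilloscope, a multimeter, or a logging stub can subscribe interchangeably, which is the loose coupling the abstract highlights.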

    RainDiffusion: When Unsupervised Learning Meets Diffusion Models for Real-world Image Deraining

    Full text link
    What happens when unsupervised learning meets diffusion models for real-world image deraining? To answer this question, we propose RainDiffusion, the first unsupervised image deraining paradigm based on diffusion models. Moving beyond the traditional unsupervised wisdom of image deraining, RainDiffusion introduces stable training on unpaired real-world data instead of weakly adversarial training. RainDiffusion consists of two cooperative branches: a Non-diffusive Translation Branch (NTB) and a Diffusive Translation Branch (DTB). NTB exploits a cycle-consistent architecture to bypass the difficulty of unpaired training of standard diffusion models by generating initial clean/rainy image pairs. DTB leverages two conditional diffusion modules to progressively refine the desired output using the initial image pairs and a diffusive generative prior, achieving better generalization in deraining and rain generation. RainDiffusion is a non-adversarial training paradigm, setting a new bar for real-world image deraining. Extensive experiments confirm the superiority of RainDiffusion over un/semi-supervised methods and show its competitive advantages over fully-supervised ones.
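    As a rough, assumption-laden sketch of the two-branch idea inferred from this abstract, the following PyTorch snippet uses toy single-convolution stand-ins for NTB and DTB; the real branches are full networks, and the actual objectives, schedules, and sampling procedure are not specified here:

    ```python
    import torch
    import torch.nn as nn

    class CycleTranslator(nn.Module):
        """NTB stand-in: one branch of a cycle-consistent rainy<->clean translator."""
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(3, 3, 3, padding=1)   # a real NTB would be a full network
        def forward(self, x):
            return self.net(x)

    class CondDiffusion(nn.Module):
        """DTB stand-in: conditional denoiser eps(x_t, t, cond); t is ignored in this toy."""
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(6, 3, 3, padding=1)
        def forward(self, x_t, t, cond):
            return self.net(torch.cat([x_t, cond], dim=1))

    rain2clean = CycleTranslator()                     # trained with cycle-consistency (omitted)
    derainer = CondDiffusion()
    T = 1000
    alphas_bar = torch.cumprod(1 - torch.linspace(1e-4, 2e-2, T), dim=0)

    real_rainy = torch.rand(4, 3, 64, 64)              # unpaired real-world rainy images

    # Stage 1 (NTB): produce an initial pseudo-clean target, giving a clean/rainy "pair".
    with torch.no_grad():
        pseudo_clean = rain2clean(real_rainy)

    # Stage 2 (DTB): standard noise-prediction training, conditioned on the rainy input.
    t = torch.randint(0, T, (real_rainy.size(0),))
    noise = torch.randn_like(pseudo_clean)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * pseudo_clean + (1 - a).sqrt() * noise
    loss = ((derainer(x_t, t, real_rainy) - noise) ** 2).mean()
    loss.backward()
    ```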

    Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs

    Full text link
    Large Language Models (LLMs) have proven their exceptional capabilities in performing language-related tasks. However, their deployment poses significant challenges due to their considerable memory and storage requirements. In response, weight-only quantization, particularly 3- and 4-bit weight-only quantization, has emerged as one of the most viable solutions. As the number of bits decreases, the quantization grid broadens, which emphasizes the importance of up and down rounding. While previous studies have demonstrated that fine-tuning up and down rounding by adding perturbations can enhance accuracy in some scenarios, our study is driven by the precise and limited boundary of these perturbations, where only the threshold for altering the rounding value is of significance. Consequently, we propose a concise and highly effective approach for optimizing the weight rounding task. Our method, named SignRound, involves lightweight block-wise tuning using signed gradient descent, enabling us to achieve outstanding results within 400 steps. SignRound competes impressively against recent methods without introducing additional inference overhead. The source code will be publicly available soon at https://github.com/intel/neural-compressor.
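    A minimal PyTorch sketch of the signed-gradient rounding idea, assuming a straight-through estimator for rounding, a symmetric 4-bit grid, and a block-output MSE objective (none of these details are confirmed by the abstract; see the linked repository for the actual implementation):

    ```python
    import torch

    def round_ste(t: torch.Tensor) -> torch.Tensor:
        # straight-through estimator: round in the forward pass, identity gradient backward
        return t + (t.round() - t).detach()

    # Toy "block": tune the rounding of one weight matrix against its fp32 output.
    w = torch.randn(64, 64)                       # frozen fp32 weights
    x = torch.randn(128, 64)                      # calibration inputs
    scale = w.abs().max() / 7                     # symmetric 4-bit grid: integers in [-8, 7]
    v = torch.zeros_like(w, requires_grad=True)   # rounding perturbation, kept in [-0.5, 0.5]
    lr = 1e-3

    for step in range(400):                       # the abstract reports results within 400 steps
        w_q = round_ste(w / scale + v).clamp(-8, 7) * scale
        loss = ((x @ w_q.t() - x @ w.t()) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            v -= lr * v.grad.sign()               # signed gradient descent: fixed-size steps
            v.clamp_(-0.5, 0.5)                   # only the rounding threshold matters
        v.grad = None
    ```

    Clamping the perturbation to half a grid step reflects the abstract's observation that only the threshold for flipping a rounding decision is significant; larger perturbations would change the quantized value itself rather than the rounding direction.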

    Work-in-Progress: Rapid Development of Advanced Virtual Labs for In-Person and Online Education

    Get PDF
    During the closure of K-12 schools and universities due to the COVID-19 pandemic, many educators turned to web conferencing tools such as Zoom and WebEx to deliver online lectures. For courses with labs, some teachers provided recorded videos of real labs. Watching recorded lab videos is a passive experience: the procedures and point of view are fixed, and students have no control of the lab, so they miss the opportunity to explore different options, including making mistakes, which is an important part of the learning process. One approach that holds great potential to enhance the laboratory experience in online education is the use of computer-based modeling and simulation tools. Simulation-based virtual laboratories emulate lab equipment and configurations in highly realistic 3D environments and can provide very effective learning experiences. While limited interactive lab computer simulations exist for various subjects, their presentations are still very primitive and often lack realism and complexity.

    This paper presents methodologies and preliminary findings on the rapid development of advanced virtual labs using modeling and simulation for in-person and online education. The importance of modeling and simulation has long been recognized by the scientific community and by agencies such as the DoD and NSF. However, high-quality simulations are not commonplace, and simulations have not been widely employed in education. Existing simulations for education lack interoperability and compatibility. While there are sporadic uses of computer-based simulations in education, they were developed in a piecemeal fashion, and there has never been systematic development at an industry level for such purposes. Virtual lab development usually requires a substantial amount of effort, and the lack of systematic research on rapid virtual lab development hinders the wide use of virtual labs in education.

    This paper proposes a holistic and systematic approach to rapid lab simulation development from several perspectives, including rapid generation of virtual environments, integration of state-of-the-art, industry-leading software tools, advanced software design techniques that enable large-scale software reuse, and innovative user interface design that facilitates the configuration and use of virtual labs by instructors and students. This paper will implement a virtual circuit lab that emulates the circuit lab for the course XXX offered at XXX University, which will be used to elucidate the crucial methodologies for rapid virtual lab development. The virtual lab contains highly realistic visual renderings and accurate functional representations of sophisticated equipment, such as digital oscilloscopes, function generators, and digital multimeters, along with an authentic rendition of the lab space. The virtual lab allows advanced analog and digital circuit simulation by integrating the de facto industry-standard circuit simulation engines SPICE and Xspice, supporting the circuit labs in the course XXX. The Unity game engine is used to develop the front end of the virtual lab. Advanced software development methodologies will be investigated to facilitate software reuse and rapid development; for example, the same simulation code can be used to support equipment manufactured by different vendors, as sketched below. The paper will also investigate the impact of the fidelity of the virtual lab (e.g., equipment and lab room) on student learning outcomes and efficacy.
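    As an illustration of that vendor-independent reuse pattern (the paper's front end is built with Unity, presumably in C#; the class names here are hypothetical), a Python sketch might separate a shared simulation core from thin vendor-specific panels:

    ```python
    from abc import ABC, abstractmethod
    from typing import Callable

    class OscilloscopeModel:
        """Shared simulation core: vendor-independent acquisition math."""
        def __init__(self, sample_rate: float) -> None:
            self.sample_rate = sample_rate
        def acquire(self, signal: Callable[[float], float], duration: float) -> list[float]:
            n = int(self.sample_rate * duration)
            return [signal(i / self.sample_rate) for i in range(n)]

    class FrontPanel(ABC):
        """Vendor-specific layer: only the rendering and knob layout differ per vendor."""
        def __init__(self, model: OscilloscopeModel) -> None:
            self.model = model
        @abstractmethod
        def draw(self, trace: list[float]) -> None: ...

    class VendorAPanel(FrontPanel):
        def draw(self, trace: list[float]) -> None:
            print(f"[Vendor A skin] rendering {len(trace)} samples")

    class VendorBPanel(FrontPanel):
        def draw(self, trace: list[float]) -> None:
            print(f"[Vendor B skin] rendering {len(trace)} samples")

    # The same simulation core drives both vendor skins.
    core = OscilloscopeModel(sample_rate=1e4)
    square = lambda t: 5.0 if (t % 1e-3) < 5e-4 else 0.0   # 1 kHz square wave
    for panel in (VendorAPanel(core), VendorBPanel(core)):
        panel.draw(panel.model.acquire(square, duration=0.01))
    ```

    The design choice is the usual one for large-scale reuse: all acquisition math lives in one class, so supporting a new vendor's instrument means adding only a thin presentation subclass.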

    Semi-MoreGAN: A New Semi-supervised Generative Adversarial Network for Mixture of Rain Removal

    Full text link
    Rain is one of the most common weather conditions that can completely degrade image quality and interfere with the performance of many computer vision tasks, especially under heavy rain. We observe that: (i) rain is a mixture of rain streaks and rainy haze; (ii) the scene depth determines the intensity of rain streaks and the transformation into rainy haze; and (iii) most existing deraining methods are trained only on synthetic rainy images and hence generalize poorly to real-world scenes. Motivated by these observations, we propose a new SEMI-supervised Mixture Of rain REmoval Generative Adversarial Network (Semi-MoreGAN), which consists of four key modules: (i) a novel attentional depth prediction network that provides precise depth estimation; (ii) a context feature prediction network composed of several well-designed detailed residual blocks that produces detailed image context features; (iii) a pyramid depth-guided non-local network that effectively integrates the image context with the depth information to produce the final rain-free images; and (iv) a comprehensive semi-supervised loss function that keeps the model from being limited to synthetic datasets and lets it generalize smoothly to real-world heavy rain scenes. Extensive experiments show clear improvements of our approach over twenty representative state-of-the-art methods on both synthetic and real-world rainy images.
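    To illustrate the semi-supervised part in isolation, here is a hedged PyTorch sketch combining a supervised term on synthetic pairs with an unsupervised term on unlabeled real images; the single convolution stands in for the full four-module generator, and the total-variation regularizer is an illustrative placeholder for the paper's comprehensive loss:

    ```python
    import torch
    import torch.nn as nn

    net = nn.Conv2d(3, 3, 3, padding=1)    # stand-in for the full four-module generator

    syn_rainy = torch.rand(2, 3, 64, 64)   # synthetic rainy image with ground truth
    syn_gt = torch.rand(2, 3, 64, 64)
    real_rainy = torch.rand(2, 3, 64, 64)  # real rainy image, no ground truth

    sup_loss = (net(syn_rainy) - syn_gt).abs().mean()    # supervised L1 on synthetic pairs

    pred_real = net(real_rainy)
    # Unsupervised term on real data; total variation is an illustrative placeholder
    # for the paper's actual semi-supervised loss.
    tv = (pred_real[..., 1:, :] - pred_real[..., :-1, :]).abs().mean() \
       + (pred_real[..., :, 1:] - pred_real[..., :, :-1]).abs().mean()

    loss = sup_loss + 0.1 * tv
    loss.backward()
    ```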

    Detail-recovery Image Deraining via Dual Sample-augmented Contrastive Learning

    Full text link
    The intricacy of rainy image contents often leads cutting-edge deraining models to image degradation, including remnant rain, wrongly removed details, and distorted appearance. Such degradation is further exacerbated when applying models trained on synthetic data to real-world rainy images. We observe two types of domain gaps between synthetic and real-world rainy images: one exists in the rain streak patterns; the other is the pixel-level appearance of rain-free images. To bridge the two domain gaps, we propose a semi-supervised detail-recovery image deraining network (Semi-DRDNet) with dual sample-augmented contrastive learning. Semi-DRDNet consists of three sub-networks: i) to remove rain streaks without remnants, we present a squeeze-and-excitation based rain residual network; ii) to encourage the lost details to return, we construct a structure detail context aggregation based detail repair network, which to our knowledge is the first of its kind; and iii) to build efficient contrastive constraints for both rain streaks and clean backgrounds, we exploit a novel dual sample-augmented contrastive regularization network. Semi-DRDNet operates smoothly on both synthetic and real-world rainy data in terms of deraining robustness and detail accuracy. Comparisons on four datasets, including our established Real200, show clear improvements of Semi-DRDNet over fifteen state-of-the-art methods. Code and dataset are available at https://github.com/syy-whu/DRD-Net.
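    As a generic stand-in for the contrastive regularization described above (the paper's exact dual sample-augmented formulation is not given in this abstract), an InfoNCE-style loss that pulls derained features toward augmented clean samples and pushes them away from augmented rainy samples could look like this in PyTorch:

    ```python
    import torch
    import torch.nn.functional as F

    def info_nce(anchor, positive, negatives, tau=0.07):
        """Pull each anchor toward its positive; push it away from all negatives."""
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)
        pos = (anchor * positive).sum(dim=-1, keepdim=True) / tau    # (B, 1)
        neg = anchor @ negatives.t() / tau                           # (B, K)
        logits = torch.cat([pos, neg], dim=1)                        # positive is class 0
        return F.cross_entropy(logits, torch.zeros(anchor.size(0), dtype=torch.long))

    B, K, D = 8, 32, 128
    derained = torch.randn(B, D)    # features of derained predictions (anchors)
    clean = torch.randn(B, D)       # features of augmented clean samples (positives)
    rainy = torch.randn(K, D)       # features of augmented rainy samples (negatives)
    loss = info_nce(derained, clean, rainy)
    ```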

    Effective Quantization for Diffusion Models on CPUs

    Full text link
    Diffusion models have gained popularity for generating images from textual descriptions. Nonetheless, their substantial need for computational resources remains a noteworthy challenge, leading to time-consuming generation. Quantization, a technique employed to compress deep learning models for enhanced efficiency, presents challenges when applied to diffusion models, which are notably more sensitive to quantization than other model types, potentially resulting in degraded image quality. In this paper, we introduce a novel approach to quantizing diffusion models by leveraging both quantization-aware training and distillation. Our results show that the quantized models can maintain high image quality while demonstrating inference efficiency on CPUs. The code is publicly available at https://github.com/intel/intel-extension-for-transformers.
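    A minimal sketch of quantization-aware training combined with output distillation, assuming straight-through fake quantization and a frozen fp32 teacher (the toy convolution stands in for a diffusion U-Net; the actual recipe lives in the linked repository):

    ```python
    import torch
    import torch.nn as nn

    def fake_quant(t: torch.Tensor, bits: int = 8) -> torch.Tensor:
        # symmetric per-tensor fake quantization with straight-through rounding
        qmax = 2 ** (bits - 1) - 1
        scale = t.detach().abs().max() / qmax + 1e-12
        q = (t / scale).round().clamp(-qmax - 1, qmax)
        return t + (q * scale - t).detach()

    class QuantDenoiser(nn.Module):
        """Toy stand-in for a diffusion U-Net whose weights are fake-quantized in training."""
        def __init__(self):
            super().__init__()
            self.w = nn.Parameter(torch.randn(3, 3, 3, 3) * 0.1)
        def forward(self, x_t):
            return nn.functional.conv2d(x_t, fake_quant(self.w), padding=1)

    teacher = lambda x_t: 0.5 * x_t          # stand-in for the frozen fp32 denoiser
    student = QuantDenoiser()
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)

    x_t = torch.randn(4, 3, 32, 32)          # a noisy latent at some diffusion timestep
    loss = ((student(x_t) - teacher(x_t)) ** 2).mean()   # distill the fp32 teacher's output
    loss.backward()
    opt.step()
    ```

    Matching the teacher's per-step outputs, rather than training on the task loss alone, is what lets the quantized student recover image quality despite the rounding noise.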