
    Smart Learning Services Based on Smart Cloud Computing

    Context-aware technologies can make e-learning services smarter and more efficient, since context-aware services are based on the user's behavior. To add these technologies to existing e-learning services, a service architecture model is needed to transform the existing e-learning environment into one that is aware of both situation and context. Context-awareness in e-learning may include awareness of the user profile and the terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We apply the elastic four smarts (E4S) concept, consisting of smart pull, smart prospect, smart content, and smart push, to cloud services so that smart learning services become possible. The E4S focuses on meeting users' needs by collecting and analyzing their behavior, prospecting future services, building the corresponding content, and delivering that content through the cloud computing environment. Users' behavior can be collected through mobile devices such as smartphones with built-in sensors. As a result, the proposed smart e-learning model in the cloud computing environment provides personalized and customized learning services to its users.
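    The abstract only names the four E4S stages, so the following is a minimal sketch of how they could be chained; all class and method names are hypothetical, since the paper does not publish an API.

    ```cpp
    // Hypothetical sketch of the E4S pipeline: pull behavior data, prospect the
    // next learning need, build matching content, and push it to the learner.
    #include <iostream>
    #include <string>
    #include <vector>

    struct BehaviorEvent { std::string userId; std::string action; };  // e.g. from phone sensors

    struct SmartPull {      // collect user behavior from mobile devices
        std::vector<BehaviorEvent> collect() { return { {"u1", "opened-lesson-3"} }; }
    };
    struct SmartProspect {  // analyze behavior and predict the next learning need
        std::string predict(const std::vector<BehaviorEvent>& events) {
            return events.empty() ? "intro-course" : "lesson-4";
        }
    };
    struct SmartContent {   // build or select content matching the prediction
        std::string build(const std::string& topic) { return "content:" + topic; }
    };
    struct SmartPush {      // deliver the content through the cloud to the device
        void deliver(const std::string& userId, const std::string& content) {
            std::cout << "push to " << userId << ": " << content << "\n";
        }
    };

    int main() {
        SmartPull pull; SmartProspect prospect; SmartContent content; SmartPush push;
        auto events = pull.collect();
        push.deliver("u1", content.build(prospect.predict(events)));
    }
    ```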

    Face-PAST: Facial Pose Awareness and Style Transfer Networks

    Facial style transfer has become quite popular among researchers due to the rise of emerging technologies such as eXtended Reality (XR), the Metaverse, and Non-Fungible Tokens (NFTs). Furthermore, StyleGAN methods combined with transfer-learning strategies have reduced the problem of limited data to some extent. However, most StyleGAN methods overfit the styles and introduce artifacts into facial images. In this paper, we propose a facial pose awareness and style transfer (Face-PAST) network that preserves facial details and structures while generating high-quality stylized images. Our work is inspired by Dual StyleGAN but, in contrast, uses a pre-trained style generation network in an external style pass with a residual modulation block instead of a transform coding block. Furthermore, we use a gated mapping unit together with facial structure, identity, and segmentation losses to preserve the facial structure and details. This enables us to train the network with a very limited amount of data while generating high-quality stylized images. Our training process adopts a curriculum learning strategy to perform efficient and flexible style mixing in the generative space. We perform extensive experiments to show the superiority of Face-PAST in comparison to existing state-of-the-art methods.
    Comment: 20 pages, 8 figures, 2 tables
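    The abstract names structure, identity, and segmentation losses but not how they are weighted. Below is a minimal LibTorch sketch of one plausible way to combine them into a single objective; the individual loss tensors are assumed to be computed elsewhere, and the weights are hypothetical, not taken from the paper.

    ```cpp
    // Hedged sketch: weighted sum of the three losses named in the abstract.
    #include <torch/torch.h>

    torch::Tensor total_loss(const torch::Tensor& structure_loss,
                             const torch::Tensor& identity_loss,
                             const torch::Tensor& segmentation_loss,
                             double w_struct = 1.0,   // hypothetical weights
                             double w_id     = 1.0,
                             double w_seg    = 1.0) {
        return w_struct * structure_loss +
               w_id     * identity_loss +
               w_seg    * segmentation_loss;
    }
    ```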

    Cache Optimization for H.264/AVC Motion Compensation

    In this letter, we propose a cache organization that substantially reduces the memory bandwidth of motion compensation (MC) in H.264/AVC decoders. To reduce duplicated memory accesses to P and B pictures, we employ a four-way set-associative cache whose index bits are composed of horizontal and vertical address bits of the frame buffer and whose lines each store an 8 × 2 block of pixels from the reference frames. Moreover, we alleviate the data fragmentation problem by selecting a line size equal to the minimum access size of the DDR SDRAM. Averaged over five QCIF IBBP image sequences, the optimized cache requires only 129% of the essential bandwidth of H.264/AVC MC.
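    To make the addressing concrete, here is a minimal sketch of how a reference-frame sample could be mapped to a cache set and tag under the scheme the abstract describes (8 × 2 pixel lines, set index built from horizontal and vertical frame-buffer address bits). The number of sets and the 3+3 bit split are hypothetical, and the four-way replacement logic is omitted.

    ```cpp
    #include <cstdint>
    #include <cstdio>

    constexpr unsigned kBlockW = 8, kBlockH = 2;       // pixels per cache line (8 x 2)
    constexpr unsigned kXIdxBits = 3, kYIdxBits = 3;   // hypothetical split -> 64 sets

    struct CacheAddr { uint32_t set; uint32_t tag; };

    // Map a luma sample position (x, y) in the reference frame to a cache set and tag.
    CacheAddr map_pixel(uint32_t x, uint32_t y, uint32_t frame_width) {
        uint32_t bx = x / kBlockW;                     // horizontal 8x2-block address
        uint32_t by = y / kBlockH;                     // vertical 8x2-block address
        uint32_t set = (bx & ((1u << kXIdxBits) - 1)) |
                       ((by & ((1u << kYIdxBits) - 1)) << kXIdxBits);
        uint32_t tag = (by >> kYIdxBits) * (frame_width / kBlockW) + (bx >> kXIdxBits);
        return {set, tag};
    }

    int main() {
        CacheAddr a = map_pixel(37, 21, 176);          // QCIF luma width = 176
        std::printf("set=%u tag=%u\n", a.set, a.tag);
    }
    ```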

    Reusable Component IP Design using Refinement-based Design Environment

    We propose a method of enhancing the reusability of component IPs by separating communication and computation for a system function. In this approach, we assume that the component designers describe mainly the computation part of the component, while the system designer constructs the communication part using our refinement-based design environment. Moreover, we introduce the concept of the Communication Architecture Template Tree (CATree), which helps IP designers effectively separate computation and communication for a system function. We confirmed that this approach is effective by applying it to an H.264 decoder design.
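    The following is a minimal sketch, with hypothetical names, of the separation the abstract describes: the IP designer writes only the computation against an abstract communication interface, and the system designer later binds a concrete channel produced during refinement without touching the IP code.

    ```cpp
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Communication side: supplied by the system designer during refinement.
    struct ICommChannel {
        virtual uint8_t read()            = 0;
        virtual void write(uint8_t value) = 0;
        virtual ~ICommChannel() = default;
    };

    // Computation side: written by the component (IP) designer only.
    struct InverseTransformIP {                         // hypothetical decoder block
        explicit InverseTransformIP(ICommChannel& ch) : ch_(ch) {}
        void process() { ch_.write(static_cast<uint8_t>(ch_.read() + 1)); }  // placeholder computation
    private:
        ICommChannel& ch_;
    };

    // One possible refinement of the communication part: a simple in-memory FIFO.
    struct FifoChannel : ICommChannel {
        uint8_t read() override { uint8_t v = data.front(); data.erase(data.begin()); return v; }
        void write(uint8_t v) override { data.push_back(v); }
        std::vector<uint8_t> data;
    };

    int main() {
        FifoChannel fifo;
        fifo.write(41);
        InverseTransformIP ip(fifo);
        ip.process();
        std::cout << int(fifo.data.back()) << "\n";     // prints 42
    }
    ```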

    A mixed-level virtual prototyping environment for refinement-based design environment

    The Communication Architecture Template Tree (CATtree) is an abstraction of a specific range of communication functions and architectures that facilitates system-function capture and communication-architecture refinement. In this paper, we present a TLM-RTL-SW mixed-level simulation environment that is useful for the functional verification of partially refined system models. We employed SystemC, GNU GDB, and an HDL simulator for the simulation of CATtree-based TLM, SW, and HW models, respectively. We also employed a new operating system, DEOS, so that each SystemC-based TLM can be cross-compiled and executed as a software model on the target processor. We evaluated the flexibility and simulation performance of the virtual simulation environment with an H.264 decoder design example.
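    For readers unfamiliar with the TLM side of such a mixed-level setup, here is a self-contained SystemC/TLM-2.0 sketch of a loosely timed initiator issuing a blocking transport call to a target. It is not the paper's environment; module names and the memory behavior are hypothetical.

    ```cpp
    #include <cstring>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    struct DecoderStub : sc_core::sc_module {            // hypothetical decoder TLM stub
        tlm_utils::simple_initiator_socket<DecoderStub> socket;
        SC_CTOR(DecoderStub) : socket("socket") { SC_THREAD(run); }
        void run() {
            tlm::tlm_generic_payload trans;
            unsigned char buf[4] = {0};
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            trans.set_command(tlm::TLM_READ_COMMAND);
            trans.set_address(0x1000);
            trans.set_data_ptr(buf);
            trans.set_data_length(4);
            socket->b_transport(trans, delay);           // blocking TLM call to the memory model
        }
    };

    struct MemoryStub : sc_core::sc_module {             // hypothetical frame-buffer memory model
        tlm_utils::simple_target_socket<MemoryStub> socket;
        SC_CTOR(MemoryStub) : socket("socket") {
            socket.register_b_transport(this, &MemoryStub::b_transport);
        }
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            std::memset(trans.get_data_ptr(), 0, trans.get_data_length());
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
            delay += sc_core::sc_time(10, sc_core::SC_NS);
        }
    };

    int sc_main(int, char*[]) {
        DecoderStub dec("dec");
        MemoryStub  mem("mem");
        dec.socket.bind(mem.socket);
        sc_core::sc_start();
        return 0;
    }
    ```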