274 research outputs found

    The Ming Hongzhi Emperor in the Veritable Records of the Joseon Dynasty (《朝鮮實錄》中之明弘治皇帝)


    Energy saving potential of a counter-flow regenerative evaporative cooler for various climates of China: Experiment-based evaluation

    Recently there has been growing interest in regenerative evaporative coolers (RECs), which can cool the supply air below the wet-bulb temperature of the intake air, approaching its dew point. In this paper, we designed, fabricated, and experimentally tested a counter-flow REC in the laboratory. The REC's core heat and mass exchanger was built from stacked sheets combining a high-wicking evaporative material (the wickability of the available materials was measured) with a waterproof aluminium layer. The developed REC system achieves much higher cooling performance than a conventional indirect evaporative cooler. However, the decision to use RECs in Chinese buildings depends on a dedicated evaluation of the net energy saved against the capital expended, which in turn requires hourly data on the cooling capacity the REC can deliver under various climates. This paper uses an experiment-based method to estimate the cooling capacity and energy savings of the proposed REC across China's climate zones. Combining the experimental results with regional hourly weather data, we analysed the energy saving potential of the REC against an equivalent-sized mechanical air conditioner operating alone. The results indicate that, for all selected regions, the REC could cover 53–100% of the annual cooling load and reduce annual electrical energy consumption by 13–58%.
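
    To make the evaluation method concrete, here is a minimal Python sketch of the hourly, experiment-based accounting the abstract describes. Every name, the linear capacity fit, the fan power, and the COP are illustrative assumptions, not the paper's measured values.

        # Hypothetical sketch: sweep a lab-fitted REC capacity map over a year
        # of hourly weather data and compare with a mechanical-AC baseline.

        def rec_cooling_capacity(t_db, t_wb):
            """Placeholder fit of measured REC capacity (kW) as a function of
            intake dry-bulb and wet-bulb temperatures (deg C); the real map
            would come from the laboratory tests."""
            return max(0.0, 0.35 * (t_db - t_wb))  # assumed linear fit

        def annual_savings(hourly_weather, hourly_load, fan_power=0.2, ac_cop=3.0):
            """hourly_weather: iterable of (t_db, t_wb); hourly_load: kW per hour."""
            load_met = load_total = e_rec = e_ac = 0.0
            for (t_db, t_wb), load in zip(hourly_weather, hourly_load):
                if load <= 0.0:
                    continue  # no cooling demand this hour
                met = min(rec_cooling_capacity(t_db, t_wb), load)
                load_met += met
                load_total += load
                e_rec += fan_power + (load - met) / ac_cop  # REC fans + AC top-up
                e_ac += load / ac_cop                       # AC-only baseline
            return load_met / load_total, 1.0 - e_rec / e_ac

        # share_of_load_met, fractional_energy_saving = annual_savings(weather, loads)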

    Patch-based 3D Natural Scene Generation from a Single Example

    We target a 3D generative model for general natural scenes, which are typically unique and intricate. The lack of sufficient training data, together with the difficulty of crafting ad hoc designs for widely varying scene characteristics, renders existing setups intractable. Inspired by classical patch-based image models, we advocate synthesizing 3D scenes at the patch level, given a single example. At the core of this work lie important algorithmic designs with respect to the scene representation and the generative patch nearest-neighbor module, which address the unique challenges of lifting the classical 2D patch-based framework to 3D generation. Collectively, these design choices contribute to a robust, effective, and efficient model that can generate high-quality general natural scenes with both realistic geometric structure and visual appearance, in large quantities and varieties, as demonstrated on a variety of exemplar scenes.

    Comment: 23 pages, 26 figures, accepted by CVPR 2023. Project page: http://weiyuli.xyz/Sin3DGen
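
    Below is a minimal Python/NumPy sketch of the classical generative patch nearest-neighbor step that the abstract lifts to 3D, applied here to a toy dense grid. The paper's actual scene representation and matching module are more elaborate; all names and the patch size are illustrative.

        import numpy as np

        def extract_patches(vol, p=5):
            """All overlapping p^3 patches of a dense 3D grid (a toy stand-in
            for the paper's scene representation)."""
            return np.stack([vol[i:i+p, j:j+p, k:k+p].ravel()
                             for i in range(vol.shape[0]-p+1)
                             for j in range(vol.shape[1]-p+1)
                             for k in range(vol.shape[2]-p+1)])

        def patch_nn_step(synth, exemplar, p=5):
            """One nearest-neighbor pass: replace every patch of the current
            synthesis with its closest exemplar patch, then average overlaps."""
            ex = extract_patches(exemplar, p)
            out = np.zeros_like(synth, dtype=float)
            w = np.zeros_like(synth, dtype=float)
            for i in range(synth.shape[0]-p+1):
                for j in range(synth.shape[1]-p+1):
                    for k in range(synth.shape[2]-p+1):
                        q = synth[i:i+p, j:j+p, k:k+p].ravel()
                        best = ex[np.argmin(((ex - q) ** 2).sum(1))]
                        out[i:i+p, j:j+p, k:k+p] += best.reshape(p, p, p)
                        w[i:i+p, j:j+p, k:k+p] += 1.0
            return out / w

        # Coarse-to-fine use: run patch_nn_step repeatedly on upsampled noise.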

    An Analysis of the Official Histories' "Sculpted Image" of Ming Taizu's Rise to the Throne ("Longfei"): Selected Examples of Source Tracing in the Taizu Veritable Records (明太祖「龍飛」官史「塑像」之分析：《太祖實錄》史料探源舉隅)


    Adaptive and Sequential Methods for Clinical Trials

    This special issue describes state-of-the-art statistical research in adaptive and sequential methods and the application of such methods in clinical trials. It provides one review article and five research articles contributed by some of the leading experts in this field. The review article gives a comprehensive overview of the current literature on adaptive and sequential clinical trial methodology, while each of the five research articles addresses specific critical issues in contemporary clinical trials, as summarized below.

    Towards a Neural Graphics Pipeline for Controllable Image Generation

    In this paper, we leverage advances in neural networks to form a neural rendering pipeline for controllable image generation, thereby bypassing the need for detailed modeling in the conventional graphics pipeline. To this end, we present the Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models. NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation. To form an image, NGP generates coarse 3D models that are fed into neural rendering modules to produce view-specific interpretable 2D maps, which are then composited into the final output image using a traditional image formation model. Our approach offers control over image generation by providing direct handles over illumination and camera parameters, in addition to control over shape and appearance variations. The key challenge is to learn these controls through unsupervised training that links generated coarse 3D models with unpaired real images via neural and traditional (e.g., Blinn-Phong) rendering functions, without establishing an explicit correspondence between them. We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes. We evaluate our hybrid modeling framework, compare with neural-only generation methods (namely DCGAN, LSGAN, WGAN-GP, VON, and SRNs), report improved FID scores against real images, and demonstrate that NGP supports direct controls common in traditional forward rendering. Code is available at http://geometry.cs.ucl.ac.uk/projects/2021/ngp.

    Comment: Eurographics 2021
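
    As a rough illustration of the "traditional image formation" stage, the toy Blinn-Phong compositing step below turns per-pixel appearance maps and explicit light/camera directions into an image, the kind of direct control handle the abstract mentions. Shapes, names, and constants here are assumptions, not NGP's actual implementation.

        import numpy as np

        def blinn_phong_composite(albedo, normals, light_dir, view_dir,
                                  ambient=0.1, k_spec=0.5, shininess=32):
            """albedo: H x W x 3, normals: H x W x 3 (unit length),
            light_dir/view_dir: 3-vectors. Returns an H x W x 3 image."""
            l = light_dir / np.linalg.norm(light_dir)
            v = view_dir / np.linalg.norm(view_dir)
            h = (l + v) / np.linalg.norm(l + v)  # Blinn-Phong half vector
            diffuse = np.clip((normals * l).sum(-1, keepdims=True), 0.0, 1.0)
            spec = np.clip((normals * h).sum(-1, keepdims=True), 0.0, 1.0) ** shininess
            return np.clip(albedo * (ambient + diffuse) + k_spec * spec, 0.0, 1.0)

        # Moving light_dir re-lights the output without touching the generator.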

    Example-based Motion Synthesis via Generative Motion Matching

    We present GenMM, a generative model that "mines" as many diverse motions as possible from a single example sequence or a few of them. In stark contrast to existing data-driven methods, which typically require long offline training, are prone to visual artifacts, and tend to fail on large and complex skeletons, GenMM inherits the training-free nature and superior quality of the well-known Motion Matching method. GenMM can synthesize a high-quality motion within a fraction of a second, even for highly complex and large skeletal structures. At the heart of our generative framework lies the generative motion matching module, which uses bidirectional visual similarity as a generative cost function for motion matching and operates in a multi-stage framework to progressively refine a random guess using exemplar motion matches. Beyond diverse motion generation, we show the versatility of our framework by extending it to a number of scenarios that are not possible with motion matching alone, including motion completion, keyframe-guided generation, infinite looping, and motion reassembly. Code and data for this paper are at https://wyysf-98.github.io/GenMM/

    Comment: SIGGRAPH 2023. Project page: https://wyysf-98.github.io/GenMM/, Video: https://www.youtube.com/watch?v=lehnxcade4
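
    To make the core mechanism concrete, here is a simplified Python/NumPy sketch of one matching-and-blending pass over sliding motion windows. GenMM's actual module uses a bidirectional similarity cost inside a multi-stage, coarse-to-fine loop, so this coherence-only version with made-up names is only an approximation.

        import numpy as np

        def motion_windows(seq, w):
            """All overlapping length-w windows of a (frames x dofs) motion."""
            return np.stack([seq[t:t+w].ravel() for t in range(len(seq) - w + 1)])

        def match_and_blend(guess, exemplar, w=8):
            """Replace each window of the current guess with its nearest
            exemplar window, then average the overlaps back together."""
            ex = motion_windows(exemplar, w)
            out = np.zeros_like(guess, dtype=float)
            cnt = np.zeros(len(guess))
            for t in range(len(guess) - w + 1):
                q = guess[t:t+w].ravel()
                best = ex[np.argmin(((ex - q) ** 2).sum(1))].reshape(w, -1)
                out[t:t+w] += best
                cnt[t:t+w] += 1.0
            return out / cnt[:, None]

        # Multi-stage use: refine a random guess at increasing temporal resolutions.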

    MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras

    Synthesizing novel views of dynamic humans captured by stationary monocular cameras is a popular setting. It is particularly attractive because it requires no static scenes, controlled environments, or specialized hardware. In contrast to techniques that exploit multi-view observations to constrain the modeling, given only a single fixed viewpoint, the problem of modeling the dynamic scene is significantly more under-constrained and ill-posed. In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models the dynamic scene using a 4D continuous time-variant function. The representation is learned by an optimization that fits a dynamic scene so as to minimize the rendering error over all observed images. At the heart of our work lies a novel optimization formulation constrained by a motion-consensus regularization on the motion flow. We extensively evaluate MoCo-Flow on several datasets containing human motions of varying complexity, and compare, both qualitatively and quantitatively, against several baseline methods and variants of our method. Pretrained models, code, and data will be released for research purposes upon paper acceptance.
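
    As a sketch of the objective outlined in the abstract, the hypothetical Python function below combines a photometric rendering loss over all observed frames with a motion-consensus regularizer on the motion flow; the paper's exact regularizer is not reproduced here, and the temporal-smoothness term is an assumption.

        def moco_flow_loss(rendered, observed, flow, lam=0.01):
            """rendered/observed: arrays of frames; flow: per-frame motion flow
            (e.g., T x N x 3). Works with NumPy arrays or torch tensors."""
            photometric = ((rendered - observed) ** 2).mean()
            # Assumed consensus term: nearby time steps should move coherently.
            consensus = ((flow[1:] - flow[:-1]) ** 2).mean()
            return photometric + lam * consensus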