
    Rapid Determination of Saponins in the Honey-Fried Processing of Rhizoma Cimicifugae by Near Infrared Diffuse Reflectance Spectroscopy.

    Objective: To establish, for the first time, a Near Infrared Diffuse Reflectance Spectroscopy (NIR-DRS) model for determining the content of Shengmaxinside I in honey-fried Rhizoma Cimicifugae.
    Methods: Shengmaxinside I content was determined by high-performance liquid chromatography (HPLC). NIR-DRS spectra of honey-fried Rhizoma Cimicifugae samples from different batches and origins were collected with TQ Analyst 8.0, and a quantitative near-infrared model was established by Partial Least Squares (PLS) analysis.
    Results: The determination coefficient R² was 0.9878 and the root mean square error of cross-validation (RMSECV) was 0.0193%. When the model was tested on a validation set, the root mean square error of prediction (RMSEP) was 0.1064% and the ratio of the standard deviation of the validation samples to the standard error of prediction (RPD) was 5.5130.
    Conclusion: The method is convenient and efficient, and the established model has good predictive ability; it can be used for rapid determination of Shengmaxinside I content in honey-fried Rhizoma Cimicifugae.
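    To make the chemometrics concrete, here is a minimal Python sketch of the calibration pipeline the abstract describes: fit a PLS model on spectra against HPLC reference values, then compute RMSECV, RMSEP, and RPD. The arrays, component count, and split below are placeholders, and scikit-learn's PLSRegression stands in for TQ Analyst's PLS; this is an illustration, not the authors' code.

```python
# Minimal sketch of an NIR-PLS calibration like the one described above.
# `X` (spectra) and `y` (HPLC reference contents, %) are hypothetical placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))          # placeholder NIR spectra (samples x wavelengths)
y = rng.uniform(0.5, 2.0, size=60)      # placeholder contents (%)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=5)     # number of latent variables is a tuning choice
pls.fit(X_cal, y_cal)

# Statistics analogous to those reported: RMSECV on the calibration set,
# RMSEP and RPD on the held-out validation set.
y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y_cal - y_cv) ** 2))

y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
rpd = np.std(y_val, ddof=1) / rmsep     # SD of validation samples over SEP

print(f"RMSECV={rmsecv:.4f}  RMSEP={rmsep:.4f}  RPD={rpd:.2f}")
```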

    DIP: Differentiable Interreflection-aware Physics-based Inverse Rendering

    We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images. To model the illumination of a scene, existing inverse rendering works either ignore indirect illumination entirely or model it with coarse approximations, leading to sub-optimal illumination, geometry, and material predictions. In this work, we propose a physics-based illumination model that explicitly traces the incoming indirect lights at each surface point based on interreflection, and then estimates each identified indirect light with an efficient neural network. Furthermore, we use Leibniz's integral rule to resolve the non-differentiability in the proposed illumination model caused by one type of environment light -- the tangent lights. As a result, the proposed interreflection-aware illumination model can be learned end-to-end together with geometry and material estimation. As a by-product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed method performs favorably against existing inverse rendering methods on novel view synthesis and inverse rendering.
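    For orientation, the decomposition the abstract builds on can be written down explicitly. A standard form of the rendering equation, with the incident radiance split into direct environment light and interreflected indirect light, is (notation assumed here, not quoted from the paper):

```latex
% Outgoing radiance at surface point x with normal n, BRDF f_r,
% incident radiance split into direct and indirect (interreflected) terms.
L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)
  \left[ L_{\mathrm{dir}}(\mathbf{x}, \omega_i) + L_{\mathrm{ind}}(\mathbf{x}, \omega_i) \right]
  (\mathbf{n} \cdot \omega_i) \, \mathrm{d}\omega_i
```

    On this reading, the paper's contribution is to trace the indirect term explicitly via interreflection and to keep the integral differentiable at the tangent-light boundary using Leibniz's integral rule.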

    D-IF: Uncertainty-aware Human Digitization via Implicit Distribution Field

    Realistic virtual humans play a crucial role in numerous industries, such as the metaverse, intelligent healthcare, and self-driving simulation, but creating them at scale with high levels of realism remains a challenge. The use of deep implicit functions has sparked a new era of image-based 3D clothed human reconstruction, enabling pixel-aligned shape recovery with fine details. Most subsequent works locate the surface by regressing a deterministic implicit value for each point. However, should all points be treated equally regardless of their proximity to the surface? In this paper, we propose replacing the implicit value with an adaptive uncertainty distribution, to differentiate between points based on their distance to the surface. This simple "value to distribution" transition yields significant improvements on nearly all baselines. Furthermore, qualitative results demonstrate that models trained with our uncertainty distribution loss capture more intricate wrinkles and more realistic limbs. Code and models are available for research purposes at https://github.com/psyai-net/D-IF_release.
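    A minimal sketch of the "value to distribution" idea: instead of regressing a scalar implicit value, the network outputs a Gaussian (mean and log-scale) per query point and is trained with a negative log-likelihood, so points far from the surface can carry higher uncertainty. The architecture, dimensions, and loss below are illustrative assumptions, not the D-IF implementation.

```python
# Sketch: the implicit network predicts a distribution over the implicit
# value at each query point instead of a single deterministic scalar.
import torch
import torch.nn as nn

class DistributionImplicitField(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2),  # mean and log-scale of the predicted value
        )

    def forward(self, points, pixel_feats):
        out = self.mlp(torch.cat([points, pixel_feats], dim=-1))
        return out[..., 0], out[..., 1]  # mean, log_scale

def gaussian_nll(mean, log_scale, target):
    # Gaussian negative log-likelihood (up to a constant): points with larger
    # predicted scale are penalized less for errors, so uncertainty can adapt
    # to each point's distance from the surface.
    return (log_scale + 0.5 * ((target - mean) / log_scale.exp()) ** 2).mean()

# Hypothetical usage: 1024 query points with pixel-aligned image features.
model = DistributionImplicitField()
mean, log_scale = model(torch.randn(1024, 3), torch.randn(1024, 128))
loss = gaussian_nll(mean, log_scale, torch.rand(1024))
loss.backward()
```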

    Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations

    The abundance of instructional videos and their narrations over the Internet offers an exciting avenue for understanding procedural activities. In this work, we propose to learn a video representation that encodes both action steps and their temporal ordering, based on a large-scale dataset of web instructional videos and their narrations, without using human annotations. Our method jointly learns a video representation to encode individual step concepts, and a deep probabilistic model to capture both temporal dependencies and immense individual variations in step ordering. We empirically demonstrate that learning temporal ordering not only enables new capabilities for procedure reasoning, but also reinforces the recognition of individual steps. Our model significantly advances the state-of-the-art results on step classification (+2.8% / +3.3% on COIN / EPIC-Kitchens) and step forecasting (+7.4% on COIN). Moreover, our model attains promising results in zero-shot inference for step classification and forecasting, as well as in predicting diverse and plausible steps for incomplete procedures. Our code is available at https://github.com/facebookresearch/ProcedureVRL.
    Comment: Accepted to CVPR 2023
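    One way to read the zero-shot step classification the abstract mentions: score a video clip embedding against text embeddings of candidate step names and pick the best match. The encoders, dimensions, and step names below are placeholders, not the paper's model.

```python
# Sketch of zero-shot step classification via video-text embedding matching.
import torch
import torch.nn.functional as F

def classify_step(clip_emb, step_text_embs, step_names):
    # Cosine similarity between one clip and every candidate step concept.
    sims = F.cosine_similarity(clip_emb.unsqueeze(0), step_text_embs, dim=-1)
    return step_names[sims.argmax().item()], sims

# Hypothetical embeddings: one clip, three candidate steps.
clip_emb = torch.randn(512)
step_text_embs = torch.randn(3, 512)
steps = ["crack the eggs", "whisk the batter", "pour into the pan"]
pred, sims = classify_step(clip_emb, step_text_embs, steps)
print(pred, sims.tolist())
```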

    Introduction to (p × n)-Type Transverse Thermoelectrics

    This chapter reviews (p × n)-type transverse thermoelectrics (TTEs). Starting with the device advantages of single-leg (p × n)-type TTEs over other thermoelectric paradigms, the theory of (p × n)-type TTE materials is given. Then the figure of merit, transport equations, and thermoelectric tensors are derived for an anisotropic effective-mass model in bulk three-dimensional (3D), quasi-two-dimensional (2D), and quasi-one-dimensional (1D) materials. The chapter concludes with a discussion of the cooling power of transverse thermoelectrics in terms of universal heat-flux and electric-field scales. The importance of anisotropic ambipolar conductivity for (p × n)-type TTEs highlights the need to explore noncubic, narrow-gap semiconducting or semimetallic candidate materials.
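    As a rough guide to the quantities the chapter derives, the transverse conventions are often written as follows (axes, symbols, and the exact form are assumptions of this sketch, not quoted from the chapter). A crystal whose principal thermopowers are ambipolar, S_p > 0 along one axis and S_n < 0 along another, develops an off-diagonal thermopower when cut at an angle θ to its principal axes, and the transverse figure of merit couples electrical transport along one axis to thermal transport along the perpendicular one:

```latex
% Off-diagonal thermopower of a crystal cut at angle \theta to its
% ambipolar principal axes (S_p > 0, S_n < 0):
S_{xy}(\theta) = (S_p - S_n)\sin\theta\cos\theta

% Transverse figure of merit, with current along x and heat flow along y:
z_{xy} T = \frac{S_{xy}^{2}\,\sigma_{xx}}{\kappa_{yy}}\, T
```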

    Putting Humans in a Scene: Learning Affordance in 3D Indoor Environments

    Affordance modeling plays an important role in visual understanding. In this paper, we aim to predict affordances of 3D indoor scenes, specifically which human poses a given indoor environment affords, such as sitting on a chair or standing on the floor. In order to predict valid affordances and learn possible 3D human poses in indoor scenes, we need to understand the semantic and geometric structure of a scene as well as its potential interactions with a human. Learning such a model requires a large-scale dataset of 3D indoor affordances. In this work, we build a fully automatic 3D pose synthesizer that fuses semantic knowledge from a large number of 2D poses extracted from TV shows with 3D geometric knowledge from voxel representations of indoor scenes. With the data created by the synthesizer, we introduce a 3D pose generative model to predict semantically plausible and physically feasible human poses within a given scene (provided as a single RGB, RGB-D, or depth image). We demonstrate that our human affordance prediction method consistently outperforms existing state-of-the-art methods.
    Comment: https://sites.google.com/view/3d-affordance-cvpr1
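    The scene-conditioned generative step could look like the following conditional-VAE sketch: encode a pose together with scene features, sample a latent, and decode a pose given the scene. Dimensions, architecture, and loss weights are illustrative assumptions, not the paper's network.

```python
# Sketch of a scene-conditioned pose generator in the spirit described above:
# a conditional VAE that decodes a 3D human pose given scene features.
import torch
import torch.nn as nn

class PoseCVAE(nn.Module):
    def __init__(self, pose_dim=17 * 3, scene_dim=256, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim + scene_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))  # mu and log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim + scene_dim, 256), nn.ReLU(),
                                 nn.Linear(256, pose_dim))
        self.z_dim = z_dim

    def forward(self, pose, scene_feat):
        mu, logvar = self.enc(torch.cat([pose, scene_feat], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, scene_feat], -1)), mu, logvar

    @torch.no_grad()
    def sample(self, scene_feat):
        # Generate a plausible pose for a scene by sampling the latent prior.
        z = torch.randn(scene_feat.shape[0], self.z_dim)
        return self.dec(torch.cat([z, scene_feat], -1))

# Hypothetical usage: batch of 8 poses (17 joints x 3) with scene features.
model = PoseCVAE()
pose, scene = torch.randn(8, 51), torch.randn(8, 256)
recon, mu, logvar = model(pose, scene)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
loss = (recon - pose).pow(2).mean() + 1e-3 * kl
loss.backward()
```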