4,970 research outputs found

    7-Benzyl-3-(4-chlorophenyl)-2-isobutylamino-5,6,7,8-tetrahydropyrido[4′,3′:4,5]thieno[2,3-d]pyrimidin-4(3H)-one

    In the title compound, C26H27ClN4OS, the thienopyrimidine fused-ring system is nearly coplanar (r.m.s. deviation = 0.0089 Å), with a maximum deviation of 0.0283 (17) Å for the N atom adjacent to the benzene ring. This ring system forms dihedral angles of 83.51 (3) and 88.20 (5)° with the adjacent benzyl and phenyl rings, respectively. In the crystal, N—H⋯Cl interactions and C—H⋯O hydrogen bonds are observed.
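Dihedral angles like those reported here are conventionally obtained by fitting a least-squares plane to each ring's atomic coordinates and measuring the angle between the plane normals. A minimal numerical sketch (not the crystallographic software behind the paper; the coordinates below are placeholder geometry, not the title compound's):

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the least-squares plane through a set of 3D points."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def dihedral_angle(points_a, points_b):
    """Angle in degrees between the mean planes of two atom groups."""
    n1, n2 = plane_normal(points_a), plane_normal(points_b)
    cos_t = abs(np.dot(n1, n2))  # planes are unoriented, so angle is in [0, 90]
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Two planes at right angles: the xy-plane and the xz-plane.
xy = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
xz = [(0, 0, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1)]
print(round(dihedral_angle(xy, xz), 1))  # 90.0
```

The r.m.s. deviation from planarity quoted in the abstract is likewise the root-mean-square of each atom's distance to such a fitted plane.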

    Auricle shaping using 3D printing and autologous diced cartilage.

    Objective: To reconstruct the auricle using a porous, hollow, three-dimensional (3D)-printed mold and autologous diced cartilage mixed with platelet-rich plasma (PRP).
    Methods: Materialise Magics v20.03 was used to design a 3D, porous, hollow auricle mold. Ten molds were printed by selective laser sintering with polyamide. Cartilage grafts were harvested from one ear of a New Zealand rabbit, and PRP was prepared using 10 mL of auricular blood from the same animal. Ear cartilage was diced into 0.5- to 2.0-mm pieces, weighed, mixed with PRP, and then placed inside the hollow mold. Composite grafts were then implanted into the backs of respective rabbits (n = 10) for 4 months. The shape and composition of the diced cartilage were assessed histologically, and biomechanical testing was used to determine stiffness.
    Results: The 3D-printed auricle molds were 0.6-mm thick and showed connectivity between the internal and external surfaces, with round pores of 0.1 to 0.3 cm. After 4 months, the diced cartilage pieces had fused into an auricular shape with high fidelity to the anthropotomy. The weight of the diced cartilage was 5.157 ± 0.230 g (P > 0.05, compared with preoperative). Histological staining showed high chondrocyte viability and the production of collagen II, glycosaminoglycans, and other cartilaginous matrix components. In unrestricted compression tests, auricle stiffness was 0.158 ± 0.187 N/mm, similar to that in humans.
    Conclusion: Auricle grafts were constructed successfully by packing a 3D-printed, porous, hollow auricle mold with diced cartilage mixed with PRP. The auricle cartilage contained viable chondrocytes, appropriate extracellular matrix components, and good mechanical properties.
    Levels of evidence: NA. Laryngoscope, 129:2467-2474, 2019

    APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation

    Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images. Most advanced solutions exploit a metric-learning framework that performs segmentation by matching each query feature to a learned class-specific prototype. However, this framework suffers from biased classification due to incomplete feature comparisons. To address this issue, we present an adaptive prototype representation that introduces class-specific and class-agnostic prototypes, and thus constructs complete sample pairs for learning semantic alignment with query features. This complementary feature-learning scheme effectively enriches feature comparison and helps yield an unbiased segmentation model in the few-shot setting. It is implemented with a two-branch end-to-end network (i.e., a class-specific branch and a class-agnostic branch) that generates prototypes and then combines query features to perform comparisons. In addition, the proposed class-agnostic branch is simple yet effective. In practice, it can adaptively generate multiple class-agnostic prototypes for query images and learn feature alignment in a self-contrastive manner. Extensive experiments on PASCAL-5i and COCO-20i demonstrate the superiority of our method. With no loss of inference efficiency, our model achieves state-of-the-art results in both 1-shot and 5-shot settings for few-shot semantic segmentation.
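The metric-learning baseline this work builds on, pooling a prototype from the masked support features and matching query features to it by cosine similarity, can be sketched in a few lines. This is a generic illustration on made-up toy tensors, not APANet's two-branch architecture:

```python
import numpy as np

def masked_average_pool(features, mask):
    """Class prototype: average of support features under a binary mask.
    features: (H, W, C); mask: (H, W) with 1 on the target class."""
    w = mask[..., None]
    return (features * w).sum(axis=(0, 1)) / np.maximum(w.sum(), 1e-8)

def match_prototypes(query, prototypes):
    """Label each query location with its most cosine-similar prototype.
    query: (H, W, C); prototypes: (K, C)."""
    q = query / (np.linalg.norm(query, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
    sims = np.einsum('hwc,kc->hwk', q, p)  # similarity to each prototype
    return sims.argmax(axis=-1)

# Toy example: one support image, a foreground and a background prototype.
rng = np.random.default_rng(0)
support = rng.normal(size=(4, 4, 8))
mask = np.zeros((4, 4)); mask[:2] = 1.0
proto_fg = masked_average_pool(support, mask)        # class-specific prototype
proto_bg = masked_average_pool(support, 1.0 - mask)  # background prototype
labels = match_prototypes(support, np.stack([proto_bg, proto_fg]))
print(labels.shape)  # (4, 4)
```

The "incomplete feature comparisons" criticized in the abstract refer to matching only against such class-specific prototypes; APANet additionally compares against class-agnostic prototypes generated from the query itself.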

    VeRi3D: Generative Vertex-based Radiance Fields for 3D Controllable Human Image Synthesis

    Unsupervised learning of 3D-aware generative adversarial networks has recently made substantial progress. Some recent work demonstrates promising results in learning human generative models using neural articulated radiance fields, yet their generalization ability and controllability lag behind parametric human models; i.e., they do not generalize well to novel poses or shapes, and they do not offer part-level control. To solve these problems, we propose VeRi3D, a generative human vertex-based radiance field parameterized by the vertices of the parametric human template SMPL. We map each 3D point to the local coordinate system defined by its neighboring vertices, and use the corresponding vertex features and local coordinates to map it to color and density values. We demonstrate that our simple approach generates photorealistic human images with free control over camera pose, human pose, and shape, while also enabling part-level editing.
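The core operation described here, expressing a 3D sample point relative to its neighboring template vertices, can be illustrated with a plain nearest-neighbor sketch. The offsets and inverse-distance weights below are simplified stand-ins; VeRi3D's actual per-vertex coordinate frames and learned vertex features are not reproduced:

```python
import numpy as np

def local_coords(point, vertices, k=4):
    """Express a 3D point relative to its k nearest template vertices.
    Returns neighbor indices, per-neighbor offsets (here simply
    point - vertex; a real model would use per-vertex local frames),
    and normalized inverse-distance blend weights."""
    d = np.linalg.norm(vertices - point, axis=1)
    idx = np.argsort(d)[:k]            # k nearest vertices
    offsets = point - vertices[idx]    # local offset per neighbor
    weights = 1.0 / (d[idx] + 1e-8)    # closer vertices weigh more
    return idx, offsets, weights / weights.sum()

# Toy "template": five vertices standing in for an SMPL mesh.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [2, 2, 2]], float)
idx, offs, w = local_coords(np.array([0.2, 0.1, 0.0]), verts, k=3)
print(idx[0])  # 0  (the origin vertex is the nearest neighbor)
```

In the full model, the features stored at the selected vertices together with these local coordinates are what the radiance field decodes into color and density.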

    APyCE: A Python module for parsing and visualizing 3D reservoir digital twin models

    Engineers, geoscientists, and analysts can benefit from fast, easy, real-time immersive 3D visualization to enhance their understanding and collaboration in a virtual 3D world. However, converting 3D reservoir data between the formats of different software programs and open-source standards can be challenging due to the complexity of programming and discrepancies in internal data structures. This paper introduces an open-source Python implementation focused on parsing industry reservoir data formats into a popular open-source visualization data format, Visualization Toolkit (VTK) files. Using object-oriented programming, a simple workflow was developed to export corner-point grids to VTK hexahedron structures. To demonstrate the utility of the software, standard raw input files of reservoir models are processed and visualized using ParaView. This tool aims to accelerate the digital transformation of the oil and gas industry in terms of 3D digital content generation and collaboration.
    Document Type: Short communication
    Cited as: Tosta, M., Oliveira, G. P., Wang, B., Chen, Z., Liao, Q. APyCE: A Python module for parsing and visualizing 3D reservoir digital twin models. Advances in Geo-Energy Research, 2023, 8(3): 206-210. https://doi.org/10.46690/ager.2023.06.0
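The export target described here, VTK unstructured grids of hexahedral cells that ParaView opens directly, can be written with no dependencies using the legacy ASCII format. A minimal sketch of the idea (not APyCE's actual code; VTK cell type 12 is VTK_HEXAHEDRON):

```python
def write_vtk_hexahedra(path, points, cells):
    """Write hexahedral cells as a legacy ASCII VTK unstructured grid.
    points: list of (x, y, z); cells: list of 8 point indices each."""
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\nhexahedral grid\nASCII\n")
        f.write("DATASET UNSTRUCTURED_GRID\n")
        f.write(f"POINTS {len(points)} float\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
        # Each cell record is: count (8) followed by 8 indices -> 9 ints.
        f.write(f"CELLS {len(cells)} {len(cells) * 9}\n")
        for c in cells:
            f.write("8 " + " ".join(map(str, c)) + "\n")
        f.write(f"CELL_TYPES {len(cells)}\n")
        for _ in cells:
            f.write("12\n")  # VTK_HEXAHEDRON

# A unit cube as one hexahedron (VTK ordering: bottom face counter-clockwise,
# then the top face counter-clockwise).
pts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
       (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
write_vtk_hexahedra("cube.vtk", pts, [(0, 1, 2, 3, 4, 5, 6, 7)])
```

A corner-point grid exporter does essentially this at scale: compute the eight corner coordinates of every reservoir cell, then emit them as hexahedra.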

    Creating a Dataset for High-Performance Computing Code Translation: A Bridge Between HPC Fortran and C++

    In this study, we present a novel dataset for training machine learning models to translate between OpenMP Fortran and C++ code. To ensure reliability and applicability, the dataset is initially refined using a meticulous code similarity test. The effectiveness of our dataset is assessed using both quantitative (CodeBLEU) and qualitative (human evaluation) methods. We demonstrate how this dataset can significantly improve the translation capabilities of large-scale language models, with improvements of 5.1× for models with no prior coding knowledge and 9.9× for models with some coding familiarity. Our work highlights the potential of this dataset to advance the field of code translation for high-performance computing. The dataset is available at https://github.com/bin123apple/Fortran-CPP-HPC-code-translation-datase
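The abstract does not specify the code similarity test used to refine the dataset; one simple illustration of the idea is a greedy near-duplicate filter over token-set Jaccard overlap. Everything below (tokenizer, threshold, sample pairs) is hypothetical, not the authors' method:

```python
import re

def code_tokens(src):
    """Crude language-agnostic tokenizer: identifiers and numbers."""
    return set(re.findall(r"[A-Za-z_]\w*|\d+", src.lower()))

def jaccard(a, b):
    """Token-set overlap in [0, 1]; 1.0 means identical token sets."""
    ta, tb = code_tokens(a), code_tokens(b)
    return len(ta & tb) / max(len(ta | tb), 1)

def dedup_pairs(pairs, threshold=0.9):
    """Greedily keep only pairs whose Fortran side is not a near-duplicate
    of one already kept."""
    kept = []
    for fortran, cpp in pairs:
        if all(jaccard(fortran, k) < threshold for k, _ in kept):
            kept.append((fortran, cpp))
    return kept

pairs = [
    ("do i = 1, n\n  a(i) = b(i)\nend do", "for (int i=0;i<n;i++) a[i]=b[i];"),
    ("do i = 1, n\n  a(i) = b(i)\nend do", "for (int j=0;j<n;j++) a[j]=b[j];"),
    ("x = sum(v)", "double x = std::accumulate(v.begin(), v.end(), 0.0);"),
]
print(len(dedup_pairs(pairs)))  # 2  (the repeated Fortran loop is dropped)
```

CodeBLEU, the quantitative metric named in the abstract, goes further than token overlap by also weighting keyword matches, syntax-tree similarity, and data-flow similarity.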