
    Numerical simulation of dental resurfacing of a feldspar porcelain with coarse diamond burs

    Dental bioceramics are increasingly attractive to both dentists and patients due to their unique biocompatibility and esthetics, and they can be fabricated efficiently using chair-side CAD/CAM dental systems. However, the failure rate of ceramic prostheses is noticeably high. The major clinical failure mode lies in surface and subsurface damage in the ceramic prostheses due to their inherent brittleness. In clinical practice, ceramic prostheses are intraorally adjusted and resurfaced using dental handpieces/burs for marginal and occlusal fit. These clinical adjustments with abrasive burs produce surface and subsurface damage in the prostheses. This paper addresses the issue via numerical simulation. Finite element analysis was utilised to model the dental resurfacing of a feldspar porcelain with coarse diamond burs and to predict the degree of subsurface damage in the porcelain prostheses.
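
    The abstract only names finite element analysis, so the sketch below is a minimal, hypothetical illustration of the general FE workflow it relies on (mesh, stiffness assembly, solve, stress recovery), reduced to a 1D elastic bar under a surface load. The material values and the contact force are illustrative placeholders, not the paper's 3D bur/porcelain contact model.

```python
import numpy as np

# Illustrative material and geometry values (placeholders, not the paper's data).
E = 70e9        # Young's modulus (Pa), of the order of a dental porcelain
A = 1e-6        # cross-sectional area (m^2)
L = 1e-3        # bar length (m), standing in for the subsurface depth
n_el = 10       # number of linear two-node elements
le = L / n_el

# Assemble the global stiffness matrix from identical bar elements.
K = np.zeros((n_el + 1, n_el + 1))
k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_el):
    K[e:e + 2, e:e + 2] += k_e

# Load case: a compressive force at the free surface (node 0); far end clamped.
F = np.zeros(n_el + 1)
F[0] = -50.0                  # N, stand-in for the bur contact force
free = np.arange(n_el)        # all nodes except the clamped last one
u = np.zeros(n_el + 1)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

# Recover element stresses from the displacement gradient.
stress = E * np.diff(u) / le
print("peak subsurface stress estimate (MPa):", abs(stress).max() / 1e6)
```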

    N-(3-Chloro-4-ethoxybenzoyl)-N′-(2-methoxyphenyl)thiourea

    In the title compound, C17H17ClN2O3S, the central carbonylthiourea unit is nearly planar [maximum atomic deviation = 0.019 (3) Å] and makes dihedral angles of 2.47 (7) and 17.76 (6)° with the terminal benzene rings. An intramolecular N—H⋯O hydrogen bond occurs. Weak intermolecular C—H⋯S and C—H⋯Cl hydrogen bonding is observed in the crystal structure.

    GaAs droplet quantum dots with nanometer-thin capping layer for plasmonic applications

    We report on the growth and optical characterisation of droplet GaAs quantum dots with extremely thin (11 nm) capping layers. To achieve this result, an internal thermal heating step is introduced during the growth, and its role in the morphological properties of the resulting quantum dots is investigated via scanning electron and atomic force microscopy. Photoluminescence measurements at cryogenic temperatures show optically stable, sharp and bright emission from single quantum dots at near-infrared wavelengths. Given the quality of their optical properties and their proximity to the surface, such emitters are ideal candidates for the investigation of near-field effects, such as coupling to plasmonic modes, in order to strongly control the directionality of the emission and/or the spontaneous emission rate, which are crucial parameters for quantum photonic applications.

    Video-assisted thoracic bronchial sleeve lobectomy with bronchoplasty for treatment of lung cancer confined to a single lung lobe: a case series of Chinese patients

    BACKGROUND: The outcomes of video-assisted thoracic bronchial sleeve lobectomy (VABSL), a minimally invasive video-assisted thoracoscopic (VATS) lobectomy, are mostly unknown in Chinese patients. OBJECTIVES: To investigate operative and postoperative outcomes of VABSL in a case series of Chinese patients with lung cancer. METHODS: Retrospective study of 9 patients (male:female 8:1; mean age 59.4 ± 17.6 years, range 21–79 years) diagnosed with lung cancer of a single lobe, treated with VABSL between March 2009 and November 2011, and followed up for at least 2 months (mean follow-up: 14.17 ± 12.91 months). Operative outcomes (tumor size, operation time, estimated blood loss and blood transfusion), postoperative outcomes (intensive care unit [ICU] stay, hospitalization length and pathological tumor stage), death, tumor recurrence and safety were assessed. RESULTS: Patients were diagnosed with carcinoid cancer (11.1%), squamous carcinoma (66.7%) or small cell carcinoma (22.2%), affecting the right (77.8%) or left (22.2%) lung, in the upper (55.6%), middle (11.1%) or lower (33.3%) lobes. TNM stages were T2 (88.9%) or T3 (11.1%); N0 (66.7%), N1 (11.1%) or N2 (22.2%); and M0 (100%). No patient required conversion to thoracotomy. Mean tumor size, operation time and blood loss were 2.50 ± 0.75 cm, 203 ± 20 min and 390 ± 206 ml, respectively. Patients stayed in the ICU for 18.7 ± 0.7 hours, and the overall hospitalization duration was 20.8 ± 2.0 days. No deaths, recurrences or severe complications were reported. CONCLUSIONS: VABSL is safe and effective for the treatment of lung cancer when performed by experienced physicians, warranting wider implementation of VABSL and VATS training in China.

    Within-network ensemble for face attributes classification

    Face attributes classification is drawing attention as a research topic with applications in multiple domains, such as video surveillance and social media analysis. In this work, we propose to train attributes in groups based on their localization (head, eyes, nose, cheek, mouth, shoulder, and general areas) in an end-to-end framework that considers the correlations between the different attributes. Furthermore, a novel ensemble learning technique is introduced within the network itself, which reduces the training time compared to an ensemble of several models. Our approach outperforms the state of the art with average improvements of almost 0.60 and 0.48 percentage points on the public CelebA and LFWA datasets, respectively.
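
    A minimal sketch of the grouped-head idea described above, assuming a PyTorch-style setup: a shared backbone feeds one classification head per facial region, and the per-group binary cross-entropy losses are summed so the whole network trains end-to-end. The attribute groupings, indices, and backbone are hypothetical placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Hypothetical grouping of attribute indices by facial region (CelebA-style).
ATTRIBUTE_GROUPS = {
    "eyes": [1, 3, 12],      # e.g. Arched_Eyebrows, Bags_Under_Eyes, ...
    "mouth": [6, 21, 31],    # e.g. Big_Lips, Mouth_Slightly_Open, Smiling
    "general": [2, 20, 39],  # e.g. Attractive, Male, Young
}

class GroupedAttributeNet(nn.Module):
    """Shared backbone with one head per region so correlated attributes share features."""

    def __init__(self, groups=ATTRIBUTE_GROUPS, feat_dim=512):
        super().__init__()
        # Small illustrative backbone; a real model would use a deeper CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # One binary-attribute head per region group.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, len(idx)) for name, idx in groups.items()}
        )
        self.groups = groups

    def forward(self, x):
        feats = self.backbone(x)
        return {name: head(feats) for name, head in self.heads.items()}

def grouped_loss(outputs, labels, groups=ATTRIBUTE_GROUPS):
    # labels: (B, 40) float tensor of 0/1 attribute annotations.
    bce = nn.BCEWithLogitsLoss()
    return sum(bce(outputs[name], labels[:, idx]) for name, idx in groups.items())
```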

    HeadSculpt: Crafting 3D Head Avatars with Text

    Recently, text-guided 3D generative methods have made remarkable advancements in producing high-quality textures and geometry, capitalizing on the proliferation of large vision-language and image diffusion models. However, existing methods still struggle to create high-fidelity 3D head avatars in two aspects: (1) They rely mostly on a pre-trained text-to-image diffusion model whilst missing the necessary 3D awareness and head priors. This makes them prone to inconsistency and geometric distortions in the generated avatars. (2) They fall short in fine-grained editing. This is primarily due to the inherited limitations from the pre-trained 2D image diffusion models, which become more pronounced when it comes to 3D head avatars. In this work, we address these challenges by introducing a versatile coarse-to-fine pipeline dubbed HeadSculpt for crafting (i.e., generating and editing) 3D head avatars from textual prompts. Specifically, we first equip the diffusion model with 3D awareness by leveraging landmark-based control and a learned textual embedding representing the back view appearance of heads, enabling 3D-consistent head avatar generations. We further propose a novel identity-aware editing score distillation strategy to optimize a textured mesh with a high-resolution differentiable rendering technique. This enables identity preservation while following the editing instruction. We showcase HeadSculpt's superior fidelity and editing capabilities through comprehensive experiments and comparisons with existing methods. (Project webpage: https://brandonhan.uk/HeadSculpt)
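
    As a rough illustration of the score-distillation idea mentioned above (a generic sketch, not HeadSculpt's identity-aware variant), the snippet below performs one score-distillation update through a differentiable renderer with a landmark- and text-conditioned diffusion model. The `renderer` and `diffusion` objects and their methods are placeholder interfaces introduced only for this example.

```python
import torch

def sds_step(renderer, diffusion, text_embedding, landmarks, params, optimizer):
    """One generic score-distillation update on the mesh/texture parameters."""
    optimizer.zero_grad()
    image = renderer(params, landmarks)           # differentiable render, (1, 3, H, W)
    t = torch.randint(20, 980, (1,), device=image.device)
    noise = torch.randn_like(image)
    noisy = diffusion.add_noise(image, noise, t)  # forward diffusion q(x_t | x_0)
    with torch.no_grad():
        # Landmark- and text-conditioned denoiser predicts the added noise.
        noise_pred = diffusion.predict_noise(noisy, t, text_embedding, landmarks)
    # Score-distillation gradient: the residual between predicted and true noise,
    # routed through the rendered image back into the 3D parameters.
    image.backward(gradient=noise_pred - noise)
    optimizer.step()
```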

    Shilling Black-box Review-based Recommender Systems through Fake Review Generation

    Review-Based Recommender Systems (RBRSs) have attracted increasing research interest due to their ability to alleviate well-known cold-start problems. RBRSs utilize reviews to construct user and item representations. However, in this paper, we argue that such a reliance on reviews may instead expose systems to the risk of being shilled. To explore this possibility, we propose the first generation-based model for shilling attacks against RBRSs. Specifically, we learn a fake review generator through reinforcement learning, which maliciously promotes items by forcing prediction shifts after the generated reviews are added to the system. By introducing auxiliary rewards to increase text fluency and diversity with the aid of pre-trained language models and aspect predictors, the generated reviews can be effective for shilling while retaining high fidelity. Experimental results demonstrate that the proposed framework can successfully attack three different kinds of RBRSs on three domains of the Amazon corpus and on the Yelp corpus. Furthermore, human studies show that the generated reviews are fluent and informative. Finally, equipped with Attack Review Generators (ARGs), RBRSs with adversarial training are much more robust to malicious reviews.
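
    The reward shaping described above can be illustrated with a hedged sketch: the attacker's per-review reward mixes the prediction shift on the target item with auxiliary fluency and diversity terms. The `recommender` and `lm` objects, their methods, and the weights are hypothetical placeholders, not the paper's implementation.

```python
def shilling_reward(recommender, lm, target_item, user_history, fake_review,
                    w_shift=1.0, w_fluency=0.1, w_diversity=0.1):
    """Combined reward for one generated review (all components are placeholders)."""
    # Prediction shift: how much the target item's predicted score rises once
    # the generated review is appended to the item's review set.
    before = recommender.predict(user_history, target_item)
    after = recommender.predict(user_history, target_item,
                                extra_reviews=[fake_review])
    shift = after - before

    # Fluency: negative per-token loss under a pre-trained language model.
    fluency = -lm.token_nll(fake_review)

    # Diversity: penalise degenerate, repetitive reviews.
    tokens = fake_review.split()
    diversity = len(set(tokens)) / max(len(tokens), 1)

    return w_shift * shift + w_fluency * fluency + w_diversity * diversity
```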

    SINC: Self-Supervised In-Context Learning for Vision-Language Tasks

    Large Pre-trained Transformers exhibit an intriguing capacity for in-context learning. Without gradient updates, these models can rapidly construct new predictors from demonstrations presented in the inputs. Recent works promote this ability in the vision-language domain by incorporating visual information into large language models that can already make in-context predictions. However, these methods could inherit issues from the language domain, such as template sensitivity and hallucination. Also, the scale of these language models raises a significant demand for computation, making learning and operating these models resource-intensive. To this end, we raise a question: "How can we enable in-context learning without relying on the intrinsic in-context ability of large language models?" To answer it, we propose a succinct and general framework, Self-supervised IN-Context learning (SINC), that introduces a meta-model to learn on self-supervised prompts consisting of tailored demonstrations. The learned models can be transferred to downstream tasks to make in-context predictions on the fly. Extensive experiments show that SINC outperforms gradient-based methods in various vision-language tasks under few-shot settings. Furthermore, the designs of SINC help us investigate the benefits of in-context learning across different tasks, and the analysis further reveals the essential components for the emergence of in-context learning in the vision-language domain. (Accepted by ICCV 2023; camera-ready version.)
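
    A minimal sketch of the in-context interface described above, assuming a PyTorch-style meta-model: it consumes a sequence of (demonstration, pseudo-label) pairs followed by a query and predicts the query's label in a single forward pass, with no gradient updates at inference time. The module and its dimensions are illustrative, not SINC's actual architecture.

```python
import torch
import torch.nn as nn

class MetaInContextModel(nn.Module):
    """Predicts the query label from in-context demonstrations in one forward pass."""

    def __init__(self, feat_dim=256, n_classes=10, n_layers=4, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(feat_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.label_embed = nn.Embedding(n_classes + 1, feat_dim)  # extra slot marks the query
        self.classifier = nn.Linear(feat_dim, n_classes)
        self.n_classes = n_classes

    def forward(self, demo_feats, demo_labels, query_feat):
        # demo_feats: (B, K, D) features of K demonstrations (e.g. from a frozen
        # vision-language encoder); demo_labels: (B, K) pseudo-labels; query_feat: (B, D).
        query_marker = torch.full(query_feat.shape[:1], self.n_classes,
                                  device=query_feat.device, dtype=torch.long)
        query_token = query_feat + self.label_embed(query_marker)
        demo_tokens = demo_feats + self.label_embed(demo_labels)
        seq = torch.cat([demo_tokens, query_token.unsqueeze(1)], dim=1)
        out = self.encoder(seq)
        return self.classifier(out[:, -1])  # prediction at the query position
```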