108 research outputs found
CFD Evaluation of Blood Flow in an Improved Blalock-Taussig Shunt Using Patient Specific Geometries
The Blalock-Taussig (BT) shunt is a palliative surgical procedure used during the Norwood operation on newborns suffering from cyanotic heart defects. The BT shunt increases blood flow to the patient's pulmonary artery, which can ease "Blue Baby Syndrome." Currently used BT shunts do not produce a balanced flow distribution to the pulmonary arteries (PAs), which can cause high wall shear stress (WSS) and blood flow separation, resulting in blood clots. A modified BT shunt was designed to partially solve this problem. In our previous work [1], numerical simulations showed that the modified BT shunt better controls the flow distribution between the innominate artery (IA) and the PA, with lower and more gradually varying WSS and improved flow balance to the pulmonary artery at the T-junction of the shunt. The goal of this paper is to computationally evaluate the flow in the modified BT shunt between the innominate and pulmonary arteries using a patient-specific aorta model. The simulations are performed using the commercial CFD software ANSYS Fluent. The improved modified BT shunt is connected between the IA and the PA, and its length can be adjusted to fit the conditions of individual patients. The numerical simulations consider the full geometry of the patient's aorta. Results for different shunt lengths are compared to determine the length that generates the lowest WSS and the best flow distribution to the PAs. It was found that a length of approximately 26 mm produces lower WSS and a smaller flow-rate difference between the two sides of the PA at the T-junction attachment of the shunt. A detailed computational model of the realistic geometry, including the IA, PA, and modified BT shunt, was created using SolidWorks and Blender. The numerical simulations provide details of the flow field, including the velocity and pressure fields, WSS, and blood damage. Several parameters of the shunt design weigh heavily in reducing thrombosis.
This study demonstrates how CFD can be effectively utilized in the design of a medical device such as the BT shunt to improve clinical outcomes in patients.
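As a rough sanity check on the WSS magnitudes such simulations report, the wall shear stress of fully developed laminar flow in a rigid cylindrical tube follows the Poiseuille relation τ_w = 4μQ/(πR³). A minimal sketch; the viscosity, flow rate, and shunt radius below are illustrative values, not taken from the paper:

```python
import math

def poiseuille_wss(mu_pa_s: float, q_m3_s: float, r_m: float) -> float:
    """Wall shear stress (Pa) for fully developed laminar (Poiseuille)
    flow in a rigid cylindrical tube: tau_w = 4 * mu * Q / (pi * R^3)."""
    return 4.0 * mu_pa_s * q_m3_s / (math.pi * r_m**3)

# Hypothetical values, not taken from the paper:
mu = 3.5e-3   # blood dynamic viscosity, Pa*s
q = 1.0e-6    # flow rate, m^3/s (= 60 mL/min)
r = 1.75e-3   # shunt radius, m (3.5 mm diameter shunt)
tau = poiseuille_wss(mu, q, r)
print(f"Poiseuille WSS estimate: {tau:.2f} Pa")
```

A full CFD solution resolves pulsatile, branching flow that this idealized formula cannot, but the formula gives a useful order-of-magnitude check on reported WSS.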
Neural Vector Fields: Generalizing Distance Vector Fields by Codebooks and Zero-Curl Regularization
Recent neural-network-based surface reconstruction methods can be roughly
divided into two categories: one warps templates explicitly, and the other
represents 3D surfaces implicitly. To enjoy the advantages of both, we
propose a novel 3D representation, Neural Vector Fields (NVF), which adopts the
explicit learning process to manipulate meshes and implicit unsigned distance
function (UDF) representation to break the barriers in resolution and topology.
This is achieved by directly predicting the displacements from surface queries
and modeling shapes as Vector Fields, rather than relying on network
differentiation to obtain direction fields as most existing UDF-based methods
do. In this way, our approach is capable of encoding both the distance and the
direction fields so that the calculation of direction fields is
differentiation-free, circumventing the non-trivial surface extraction step.
Furthermore, building upon NVFs, we propose to incorporate two types of shape
codebooks, i.e., NVFs (Lite or Ultra), to promote cross-category reconstruction
through encoding cross-object priors. Moreover, we propose a new regularization
based on analyzing the zero-curl property of NVFs, and implement it through
the fully differentiable framework of our NVF (Ultra). We evaluate both NVFs on
four surface reconstruction scenarios: watertight vs. non-watertight shapes,
category-agnostic vs. category-unseen reconstruction, category-specific
reconstruction, and cross-domain reconstruction.
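The zero-curl regularization rests on the fact that a displacement field pointing to the closest surface point is, away from ridge points, the gradient of a scalar, so its curl vanishes. A minimal finite-difference sketch of such a penalty; the grid sampling and NumPy gradients here are illustrative choices, not the paper's differentiable implementation:

```python
import numpy as np

def curl_penalty(vf: np.ndarray, h: float = 1.0) -> float:
    """Mean squared curl of a vector field sampled on a regular 3D grid.
    vf has shape (X, Y, Z, 3); partial derivatives via central differences.
    A field that is the gradient of a scalar (e.g. the closest-point
    displacement field of a smooth UDF) has zero curl, so this penalty
    encourages predicted vector fields to stay consistent with an
    underlying unsigned distance function."""
    Fx, Fy, Fz = vf[..., 0], vf[..., 1], vf[..., 2]
    # np.gradient along a single axis returns that partial derivative
    dFx_dy, dFx_dz = np.gradient(Fx, h, axis=1), np.gradient(Fx, h, axis=2)
    dFy_dx, dFy_dz = np.gradient(Fy, h, axis=0), np.gradient(Fy, h, axis=2)
    dFz_dx, dFz_dy = np.gradient(Fz, h, axis=0), np.gradient(Fz, h, axis=1)
    curl = np.stack([dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy], -1)
    return float(np.mean(curl**2))

# A gradient field (curl-free): V(p) = grad(0.5 * |p|^2) = p
grid = np.stack(np.meshgrid(*[np.arange(8.0)] * 3, indexing="ij"), axis=-1)
print(curl_penalty(grid))  # 0.0 for a curl-free field
```

In a training loop, the same quantity would be computed on network outputs with automatic differentiation rather than finite differences, so that its gradient can flow back into the model.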
Neural Vector Fields: Implicit Representation by Explicit Learning
Deep neural networks (DNNs) are widely applied to today's 3D surface
reconstruction tasks, and such methods can be divided into two categories:
those that warp templates explicitly by moving vertices, and those that
represent 3D surfaces implicitly as signed or unsigned distance functions.
Taking advantage of both the advanced explicit learning process and the
powerful representation ability of implicit functions, we propose a novel 3D
representation method, Neural Vector Fields (NVF). It not only adopts the
explicit learning process to manipulate meshes directly, but also leverages the
implicit representation of unsigned distance functions (UDFs) to break the
barriers in resolution and topology. Specifically, our method first predicts
the displacements from queries towards the surface and models the shapes as
Vector Fields. Rather than relying on network differentiation to obtain
direction fields, as most existing UDF-based methods do, the produced vector
fields encode both the distance and direction fields and mitigate the
ambiguity at "ridge" points, such that the calculation of direction fields is
straightforward and differentiation-free. The differentiation-free
characteristic enables us to further learn a shape codebook via Vector
Quantization, which encodes the cross-object priors, accelerates the training
procedure, and boosts model generalization on cross-category reconstruction.
The extensive experiments on surface reconstruction benchmarks indicate that
our method outperforms those state-of-the-art methods in different evaluation
scenarios including watertight vs non-watertight shapes, category-specific vs
category-agnostic reconstruction, category-unseen reconstruction, and
cross-domain reconstruction. Our code is released at
https://github.com/Wi-sc/NVF.
Comment: Accepted by CVPR 2023. Video: https://www.youtube.com/watch?v=GMXKoJfmHr
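The displacement-field idea in both NVF papers can be illustrated with a toy ground-truth case: for a known surface, the target vector field sends each query point to its closest surface point, and the unsigned distance and direction then fall out of that field with no differentiation. A sketch using a sphere as a stand-in surface (the sphere and the query points are illustrative, not from the papers):

```python
import numpy as np

def sphere_displacement(q: np.ndarray, radius: float = 1.0) -> np.ndarray:
    """Ground-truth displacement from query points q (N, 3) to their
    closest points on a sphere of the given radius; an NVF network is
    trained to regress such a field directly."""
    norms = np.linalg.norm(q, axis=-1, keepdims=True)
    closest = q / norms * radius   # radial projection onto the sphere
    return closest - q

q = np.array([[2.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
d = sphere_displacement(q)
udf = np.linalg.norm(d, axis=-1)   # unsigned distance, no autograd needed
direction = d / udf[:, None]       # unit direction, differentiation-free
print(udf)        # [1.0, 0.5]
print(direction)  # [[-1, 0, 0], [0, 1, 0]]
```

For a general mesh the closest point would come from a nearest-neighbor query against the surface; the key point is that distance and direction are read off the predicted field rather than obtained by differentiating a distance network.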
On the Evaluation of Generative Models in Distributed Learning Tasks
The evaluation of deep generative models including generative adversarial
networks (GANs) and diffusion models has been extensively studied in the
literature. While the existing evaluation methods mainly target a centralized
learning problem with training data stored by a single client, many
applications of generative models concern distributed learning settings, e.g.
the federated learning scenario, where training data are collected by and
distributed among several clients. In this paper, we study the evaluation of
generative models in distributed learning tasks with heterogeneous data
distributions. First, we focus on the Fréchet inception distance (FID) and
consider the following FID-based aggregate scores over the clients: 1) FID-avg
as the mean of clients' individual FID scores, 2) FID-all as the FID distance
of the trained model to the collective dataset containing all clients' data. We
prove that the model rankings according to the FID-all and FID-avg scores could
be inconsistent, which can lead to different optimal generative models
according to the two aggregate scores. Next, we consider the kernel inception
distance (KID) and similarly define the KID-avg and KID-all aggregations.
Unlike the FID case, we prove that KID-all and KID-avg result in the same
rankings of generative models. We perform several numerical experiments on
standard image datasets and training schemes to support our theoretical
findings on the evaluation of generative models in distributed learning
problems.
Comment: 17 pages, 10 figures
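The FID-avg vs. FID-all inconsistency can be reproduced with one-dimensional Gaussians, where the Fréchet distance has a closed form. The toy client statistics below are illustrative, not taken from the paper:

```python
import math

def fid_1d(m1, s1, m2, s2):
    """Frechet distance between 1-D Gaussians N(m1, s1^2) and N(m2, s2^2):
    (m1 - m2)^2 + (s1 - s2)^2, the 1-D case of the usual FID formula."""
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

# Two heterogeneous clients (toy Gaussian stats, not real image features):
clients = [(-2.0, 1.0), (2.0, 1.0)]
# Pooled data: equal mixture -> mean 0, variance 1 + 4 = 5
pool = (0.0, math.sqrt(5.0))

def fid_avg(model):
    return sum(fid_1d(*model, *c) for c in clients) / len(clients)

def fid_all(model):
    return fid_1d(*model, *pool)

m1 = (0.0, math.sqrt(5.0))  # matches the pooled statistics exactly
m2 = (0.0, 1.0)             # closer to each individual client's scale
print(fid_all(m1), fid_all(m2))  # m1 wins under FID-all
print(fid_avg(m1), fid_avg(m2))  # m2 wins under FID-avg: rankings flip
```

Here FID-all prefers the model matching the pooled distribution, while FID-avg prefers the model matching each client's local scale, which is exactly the ranking inconsistency the paper proves can occur.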
A Multi-Arm Two-Stage (MATS) Design for Proof-of-Concept and Dose Optimization in Early-Phase Oncology Trials
The Project Optimus initiative by the FDA's Oncology Center of Excellence is
widely viewed as a groundbreaking effort to change the paradigm of
conventional dose-finding strategies in oncology. Unlike in other therapeutic
areas where multiple doses are evaluated thoroughly in dose ranging studies,
early-phase oncology dose-finding studies are characterized by the practice of
identifying a single dose, such as the maximum tolerated dose (MTD) or the
recommended phase 2 dose (RP2D). Following the spirit of Project Optimus, we
propose a Multi-Arm Two-Stage (MATS) design for proof-of-concept (PoC) and
dose optimization that allows the evaluation of two selected doses from a
dose-escalation trial. The design assesses the higher dose first across multiple
indications in the first stage, and adaptively enters the second stage for an
indication if the higher dose exhibits promising anti-tumor activities. In the
second stage, a randomized comparison between the higher and lower doses is
conducted to achieve proof-of-concept (PoC) and dose optimization. A Bayesian
hierarchical model governs the statistical inference and decision making by
borrowing information across doses, indications, and stages. Our simulation
studies show that the proposed MATS design yields desirable performance. An R
Shiny application has been developed and made available at
https://matsdesign.shinyapps.io/mats/.
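The stage-1 go/no-go decision can be sketched with a simple Beta-Binomial posterior gate. This deliberately ignores the paper's hierarchical borrowing across doses, indications, and stages, and the null rate, threshold, and prior below are illustrative choices:

```python
import math

def beta_tail(a: float, b: float, p0: float, n_grid: int = 20000) -> float:
    """P(p > p0) for p ~ Beta(a, b), via midpoint integration of the Beta
    density (stdlib-only; scipy.stats.beta.sf would give the same answer)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    step = (1.0 - p0) / n_grid
    total = 0.0
    for i in range(n_grid):
        p = p0 + (i + 0.5) * step
        total += math.exp(log_norm + (a - 1) * math.log(p)
                          + (b - 1) * math.log(1 - p)) * step
    return total

def stage1_go(responders: int, n: int, p_null: float = 0.2,
              threshold: float = 0.9, prior=(0.5, 0.5)) -> bool:
    """Toy MATS-style stage-1 gate for one indication: advance the higher
    dose to the randomized stage 2 if the posterior probability that its
    response rate exceeds the null rate is at least `threshold`.
    (Simplified sketch: no borrowing across indications or doses.)"""
    a = prior[0] + responders
    b = prior[1] + n - responders
    return beta_tail(a, b, p_null) >= threshold

print(stage1_go(7, 15))  # strong activity -> go
print(stage1_go(2, 15))  # weak activity -> no-go
```

The full design replaces this independent gate with a Bayesian hierarchical model, so evidence from one indication sharpens the posterior for the others.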
AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
Many natural language processing (NLP) tasks rely on labeled data to train
machine learning models to achieve high performance. However, data annotation
can be a time-consuming and expensive process, especially when the task
involves a large amount of data or requires specialized domain expertise. Recently,
GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot
ability across various NLP tasks. In this paper, we first claim that large
language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced
annotator by providing them with sufficient guidance and demonstrated examples.
To make LLMs better annotators, we propose a two-step approach,
'explain-then-annotate'. More precisely, we begin by creating prompts for
every demonstrated example, which we subsequently use to prompt an LLM to
provide an explanation for why the specific ground-truth answer/label was
chosen for that particular example. Following this, we construct the few-shot
chain-of-thought prompt with the self-generated explanation and employ it to
annotate the unlabeled data. We conduct experiments on three tasks, including
user input and keyword relevance assessment, BoolQ, and WiC. The annotation
results from GPT-3.5 surpass those from crowdsourced annotation for user
input and keyword relevance assessment. Additionally, for the other two tasks,
GPT-3.5 achieves results that are comparable to those obtained through
crowdsourced annotation.
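The two prompting steps can be sketched as plain string construction. The prompt wording, demonstration, and task text below are hypothetical, and the actual LLM calls are omitted:

```python
def explanation_prompt(task: str, example: str, label: str) -> str:
    """Step 1: ask the LLM to explain why the gold label is correct for a
    demonstration example (prompt text only; the model call is omitted)."""
    return (f"Task: {task}\n"
            f"Example: {example}\n"
            f"Ground-truth label: {label}\n"
            f"Explain step by step why '{label}' is the correct label.")

def annotation_prompt(task: str, demos: list, query: str) -> str:
    """Step 2: build the few-shot chain-of-thought prompt from the
    self-generated explanations and append the unlabeled item."""
    parts = [f"Task: {task}\n"]
    for example, label, explanation in demos:
        parts.append(f"Example: {example}\nReasoning: {explanation}\n"
                     f"Label: {label}\n")
    parts.append(f"Example: {query}\nReasoning:")
    return "\n".join(parts)

# Hypothetical WiC-style demonstration (word-in-context sense matching):
demos = [("'bank' in 'river bank' vs 'bank loan'", "different",
          "The first use denotes a landform, the second a financial institution.")]
prompt = annotation_prompt("Decide if the target word has the same sense.",
                           demos, "'light' in 'light bag' vs 'light color'")
print(prompt)
```

The explanation returned by the model in step 1 is what gets pasted into the `demos` tuples for step 2, so the few-shot reasoning traces are self-generated rather than hand-written.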
Low-dimensional perovskite nanoplatelet synthesis using in situ photophysical monitoring to establish controlled growth.
Perovskite nanoparticles have attracted the attention of research groups around the world for their impressive photophysical properties, facile synthesis, and versatile surface chemistry. Here, we report a synthetic route that takes advantage of a suite of soluble precursors to generate CsPbBr3 perovskite nanoplatelets with fine control over size, thickness, and optical properties. We demonstrate near unit-cell precision, creating well-characterized materials with sharp, narrow emission lines at 430, 460, and 490 nm, corresponding to nanoplatelets that are 2, 4, and 6 unit cells thick, respectively. Nanoplatelets were characterized with optical spectroscopy, atomic force microscopy, scanning electron microscopy, and transmission electron microscopy to explicitly correlate growth conditions, thickness, and the resulting photophysical properties. Detailed in situ photoluminescence spectroscopic studies were carried out to understand and optimize particle growth by correlating light emission with nanoplatelet growth across a range of synthetic conditions. It was found that nanoplatelet thickness and emission wavelength increase as the ratio of oleic acid to oleylamine or the reaction temperature is increased. Using this information, we control the lateral size, width, and corresponding emission wavelength of the desired nanoplatelets by modulating the temperature and the ligand ratios.
- …