14 research outputs found
Advancing Counterfactual Inference through Quantile Regression
The capacity to address counterfactual "what if" inquiries is crucial for
understanding and making use of causal influences. Traditional counterfactual
inference usually assumes a structural causal model is available. However, in
practice, such a causal model is often unknown and may not be identifiable.
This paper aims to perform reliable counterfactual inference based on the
(learned) qualitative causal structure and observational data, without a given
causal model or even directly estimating conditional distributions. We re-cast
counterfactual reasoning as an extended quantile regression problem using
neural networks. The approach is statistically more efficient than existing
ones, and it further makes it possible to generalize the estimated counterfactual outcome to unseen data and to provide an upper bound on the generalization error. Experimental results on multiple datasets strongly support our theoretical claims.
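A minimal numerical sketch of the core idea: recover the quantile level of the factual outcome within its own treatment arm, then read off that quantile under the counterfactual treatment. Empirical conditional quantiles on a toy structural causal model stand in for the paper's neural quantile regressor; the SCM and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM (assumed for illustration): Y = 2*T + U, with U ~ N(0, 1).
n = 20000
u = rng.normal(size=n)
t = rng.integers(0, 2, size=n)
y = 2.0 * t + u

# Empirical conditional quantile functions per treatment arm
# (stand-in for a learned quantile regression network).
y0 = np.sort(y[t == 0])
y1 = np.sort(y[t == 1])

def counterfactual(y_f, t_f):
    """Map a factual outcome to its counterfactual under the other treatment.
    Step 1: find the quantile level tau of y_f within its own arm.
    Step 2: read off the tau-quantile of the other arm."""
    src, dst = (y0, y1) if t_f == 0 else (y1, y0)
    tau = np.searchsorted(src, y_f) / len(src)
    return np.quantile(dst, tau)

# A unit observed under T=0 with Y=0.5 (so U is about 0.5) should get
# a counterfactual outcome close to 2.5 under T=1.
print(round(counterfactual(0.5, 0), 2))
```

Because the exogenous noise U enters monotonically, the quantile level acts as a proxy for U, which is why no explicit structural causal model needs to be recovered.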
SmartBrush: Text and Shape Guided Object Inpainting with Diffusion Model
Generic image inpainting aims to complete a corrupted image by borrowing
surrounding information, which barely generates novel content. By contrast,
multi-modal inpainting provides more flexible and useful controls on the
inpainted content, e.g., a text prompt can be used to describe an object with
richer attributes, and a mask can be used to constrain the shape of the
inpainted object rather than being only considered as a missing area. We
propose a new diffusion-based model named SmartBrush for completing a missing
region with an object using both text and shape guidance. While previous work
such as DALLE-2 and Stable Diffusion can perform text-guided inpainting, they do
not support shape guidance and tend to modify the background texture surrounding the
generated object. Our model incorporates both text and shape guidance with
precision control. To preserve the background better, we propose a novel
training and sampling strategy by augmenting the diffusion U-net with
object-mask prediction. Lastly, we introduce a multi-task training strategy by
jointly training inpainting with text-to-image generation to leverage more
training data. We conduct extensive experiments showing that our model
outperforms all baselines in terms of visual quality, mask controllability, and
background preservation.
Adversarial consistency for single domain generalization in medical image segmentation
7R01HL141813-06 - NIH/National Heart, Lung, and Blood Institute; NIH/National Institutes of Health. First author draft.
Semi-Implicit Denoising Diffusion Models (SIDDMs)
Despite the proliferation of generative models, achieving fast sampling
during inference without compromising sample diversity and quality remains
challenging. Existing models such as Denoising Diffusion Probabilistic Models
(DDPM) deliver high-quality, diverse samples but are slowed by an inherently
high number of iterative steps. Denoising Diffusion Generative Adversarial
Networks (DDGAN) attempted to circumvent this limitation by integrating a GAN
model to take larger jumps in the diffusion process. However, DDGAN encountered
scalability limitations when applied to large datasets. To address these
limitations, we introduce a novel approach that tackles the problem by matching
implicit and explicit factors. More specifically, our approach involves
utilizing an implicit model to match the marginal distributions of noisy data
and the explicit conditional distribution of the forward diffusion. This
combination allows us to effectively match the joint denoising distributions.
Unlike DDPM but similar to DDGAN, we do not enforce a parametric distribution
for the reverse step, enabling us to take large steps during inference. Similar
to the DDPM but unlike DDGAN, we take advantage of the exact form of the
diffusion process. We demonstrate that our proposed method achieves generative
performance comparable to diffusion-based models and vastly superior to models
restricted to a small number of sampling steps.
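The "exact form of the diffusion process" the abstract refers to can be illustrated with the standard Gaussian forward chain and its closed-form reverse conditional q(x_{t-1} | x_t, x_0), the explicit distribution that the implicit (GAN-learned) factor is matched against. This is a generic numpy sketch with an assumed linear noise schedule, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear beta schedule (a common choice; the paper's actual schedule is
# not specified in the abstract).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)

def q_sample(x0, t):
    """Forward marginal q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)."""
    return np.sqrt(abar[t]) * x0 + np.sqrt(1 - abar[t]) * rng.normal(size=x0.shape)

def q_posterior(x0, xt, t):
    """Exact reverse conditional q(x_{t-1} | x_t, x_0) of the forward chain,
    i.e. the explicit conditional distribution used for matching."""
    mean = (np.sqrt(abar[t - 1]) * betas[t] / (1 - abar[t])) * x0 \
         + (np.sqrt(alphas[t]) * (1 - abar[t - 1]) / (1 - abar[t])) * xt
    var = (1 - abar[t - 1]) / (1 - abar[t]) * betas[t]
    return mean + np.sqrt(var) * rng.normal(size=xt.shape)

# Consistency check: sampling x0 -> x_t -> x_{t-1} must reproduce the
# closed-form marginal q(x_{t-1} | x_0) = N(sqrt(abar_{t-1}) x0, 1 - abar_{t-1}).
x0 = np.ones(200000)
t = 50
xtm1 = q_posterior(x0, q_sample(x0, t), t)
print(round(xtm1.mean(), 2), round(xtm1.std(), 2))
```

Because this reverse conditional is available in closed form, only the marginals of the noisy data need to be matched implicitly, which is what permits large inference steps without a parametric reverse model.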
An overview on smart contracts: challenges, advances and platforms
Smart contract technology is reshaping conventional industry and business processes. Being embedded in blockchains, smart contracts enable the contractual terms of an agreement to be enforced automatically without the intervention of a trusted third party. As a result, smart contracts can cut down administration, save service costs, improve the efficiency of business processes and reduce risks. Although smart contracts promise to drive a new wave of innovation in business processes, a number of challenges remain to be tackled. This paper presents a survey on smart contracts. We first introduce blockchains and smart contracts. We then present the challenges in smart contracts as well as recent technical advances. We also compare typical smart contract platforms and give a categorization of smart contract applications along with some representative examples. © 2019 Elsevier B.V.
Blockchain for cloud exchange: a survey
Compared with single cloud service providers, cloud exchange offers users lower prices and more flexible options. However, conventional cloud exchange markets suffer from a number of challenges, such as a central architecture that is vulnerable to malicious attacks and cheating behaviours by third-party auctioneers. Recent advances in blockchain technologies bring opportunities to overcome these limitations of cloud exchange. However, the integration of blockchain with cloud exchange is still in its infancy, and extensive research efforts are needed to tackle a number of research challenges. To bridge this gap, this paper presents an overview of using blockchain for cloud exchange. In particular, we first give an overview of cloud exchange. We then briefly survey blockchain technology and discuss the issues in using blockchain for cloud exchange in the aspects of security, privacy, reputation systems and transaction management. Finally, we present the open research issues in this promising area. © 201