ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
Deep neural network (DNN)-powered electrocardiogram (ECG) diagnosis systems
have recently achieved promising progress in taking over tedious examinations by
cardiologists. However, their vulnerability to adversarial attacks still lacks
comprehensive investigation. Existing attacks in the image domain are not
directly applicable because of the distinct visual and dynamic properties of
ECGs. This paper therefore takes a step toward thoroughly exploring
adversarial attacks on DNN-powered ECG diagnosis systems. We analyze the
properties of ECGs to design effective attack schemes under two attack models.
Our results demonstrate the blind spots of DNN-powered diagnosis
systems under adversarial attacks, which calls for adequate countermeasures.
Comment: Accepted by AAAI 202
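As a loose, generic illustration of signal-domain adversarial perturbation (not the paper's actual attack schemes, which the abstract does not specify), here is a minimal FGSM-style sketch against a hypothetical linear classifier on a 1-D trace; the classifier, weights, and epsilon are all illustrative assumptions:

```python
import numpy as np

# Hypothetical linear "classifier" on an ECG-like 1-D trace; the paper's
# DNN models and attack schemes are not reproduced here.
def predict_logit(x, w, b):
    return float(x @ w + b)

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM-style step: move x by eps in the sign of the loss gradient."""
    p = 1.0 / (1.0 + np.exp(-predict_logit(x, w, b)))  # sigmoid probability
    grad = (p - y) * w  # d(logistic loss)/dx for label y
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 200))  # stand-in for an ECG trace
w = 0.1 * rng.normal(size=200)              # illustrative weights
b = 0.0
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.05)  # push logit away from class 1
```

With label `y=1`, the step provably lowers the logit while keeping the perturbation bounded by `eps` in the max norm.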
ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces
In recent years, neural implicit surface reconstruction has emerged as a
popular paradigm for multi-view 3D reconstruction. Unlike traditional
multi-view stereo approaches, the neural implicit surface-based methods
leverage neural networks to represent 3D scenes as signed distance functions
(SDFs). However, they tend to disregard the reconstruction of individual
objects within the scene, which limits their performance and practical
applications. To address this issue, the prior work ObjectSDF introduced a
framework of object-compositional neural implicit surfaces, which utilizes 2D
instance masks to supervise individual object SDFs. In this paper, we propose a
new framework called ObjectSDF++ to overcome the limitations of ObjectSDF.
First, in contrast to ObjectSDF whose performance is primarily restricted by
its converted semantic field, the core component of our model is an
occlusion-aware object opacity rendering formulation that directly
volume-renders object opacity to be supervised with instance masks. Second, we
design a novel regularization term for object distinction, which can
effectively mitigate the issue that ObjectSDF may result in unexpected
reconstruction in invisible regions due to the lack of constraint to prevent
collisions. Our extensive experiments demonstrate that our novel framework not
only produces superior object reconstruction results but also significantly
improves the quality of scene reconstruction. Code and more resources can be
found at \url{https://qianyiwu.github.io/objectsdf++}.
Comment: ICCV 2023. Project Page: https://qianyiwu.github.io/objectsdf++ Code:
https://github.com/QianyiWu/objectsdf_plu
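The idea of volume-rendering an object's opacity from its SDF, so it can be supervised with an instance mask, can be loosely sketched as follows. The SDF-to-density mapping, constants, and sphere scene below are illustrative assumptions, not ObjectSDF++'s exact formulation:

```python
import numpy as np

def sdf_sphere(p, center, r):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center, axis=-1) - r

def render_object_opacity(ray_o, ray_d, sdf, n=128, far=4.0, beta=0.05):
    """Alpha-composite a density derived from one object's SDF along a ray.
    Density = sigmoid(-sdf/beta)/beta, a common SDF-to-density heuristic."""
    t = np.linspace(0.0, far, n)
    pts = ray_o + t[:, None] * ray_d
    sigma = (1.0 / beta) / (1.0 + np.exp(sdf(pts) / beta))  # density samples
    alpha = 1.0 - np.exp(-sigma * (far / n))                # per-sample alpha
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    return float(np.sum(trans * alpha))  # accumulated opacity in [0, 1]

sphere = lambda p: sdf_sphere(p, np.array([0.0, 0.0, 2.0]), 0.5)
hit = render_object_opacity(np.zeros(3), np.array([0.0, 0.0, 1.0]), sphere)
miss = render_object_opacity(np.zeros(3), np.array([0.0, 1.0, 0.0]), sphere)
```

A ray through the sphere accumulates opacity near 1, a ray that misses stays near 0, matching what a binary instance mask would supervise.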
How good are sparse cutting-planes?
Abstract. Sparse cutting-planes are often the ones used in mixed-integer programming (MIP) solvers, since they help solve the linear programs encountered during branch-and-bound more efficiently. However, how well can we approximate the integer hull by using only sparse cutting-planes? To understand this question better, given a polytope P (e.g. the integer hull of a MIP), let P^k be its best approximation using cuts with at most k non-zero coefficients. We consider d(P, P^k) = max_{x ∈ P^k} min_{y ∈ P} ||x − y|| as a measure of the quality of sparse cuts. In our first result, we present general upper bounds on d(P, P^k) which depend on the number of vertices in the polytope and exhibit three phases as k increases. Our bounds imply that if P has polynomially many vertices, using half sparsity already approximates it very well. Second, we present a lower bound on d(P, P^k) for random polytopes which shows that the upper bounds are quite tight. Third, we show that for a class of hard packing IPs, sparse cutting-planes do not approximate the integer hull well. Finally, we show that using sparse cutting-planes in extended formulations is at least as good as using them in the original polyhedron, and we give an example where the former is actually much better.
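The distance measure d(P, P^k) can be checked numerically on a toy case (my own example, not from the paper): for the cross-polytope P = {x : |x1| + |x2| <= 1} in the plane, the tightest 1-sparse cuts give the box P^1 = [-1, 1]^2, and since the distance-to-P function is convex, its maximum over P^1 is attained at a box vertex:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto the l1 ball via sort-and-threshold."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# P = cross-polytope {x : |x1| + |x2| <= 1}; 1-sparse cuts yield P^1 = [-1,1]^2.
# d(P, P^1) = max over vertices of P^1 of the distance to P.
corners = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
d = max(np.linalg.norm(c - project_l1_ball(c)) for c in corners)
print(d)  # 1/sqrt(2) ~ 0.7071
```

Here half sparsity (k = 1 of 2 coordinates) still leaves a distance of 1/sqrt(2), illustrating that the measure is nontrivial even in two dimensions.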
The Disability Burden Associated With Stroke Emerges Before Stroke Onset and Differentially Affects Blacks: Results From the Health and Retirement Study Cohort
Background.
Few longitudinal studies compare changes in instrumental activities of daily living (IADLs) among stroke-free adults with prospectively documented IADL changes among adults who experience a stroke. We contrast annual declines in IADL independence for older individuals who remain stroke-free with those for individuals who experienced a stroke. We also assess whether these patterns differ by sex, race, or Southern birthplace.
Methods.
Health and Retirement Study participants who were stroke-free in 1998 (n = 17,741) were followed through 2010 (average follow-up = 8.9 years) for self- or proxy-reported stroke. We used logistic regressions to compare annual changes in the odds of self-reported independence in six IADLs among those who remained stroke-free throughout follow-up (n = 15,888), those who survived a stroke (n = 1,412), and those who had a stroke and did not survive to participate in another interview (n = 442). We present models adjusted for demographic and socioeconomic covariates and also stratified by sex, race, and Southern birthplace.
Results.
Compared with similar cohort members who remained stroke-free, participants who developed stroke had faster declines in IADL independence and a lower probability of IADL independence prior to the stroke. After a stroke, independence declined at an annual rate similar to that of those who did not have a stroke. The black-white disparity in IADL independence narrowed poststroke.
Conclusion.
Racial differences in IADL independence are apparent long before stroke onset. Poststroke differences in IADL independence largely reflect prestroke disparities.
Masked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers
Previous studies have explored generating accurately lip-synced talking faces
for arbitrary targets given audio conditions. However, most of them deform or
generate the whole facial area, leading to non-realistic results. In this work,
we delve into the formulation of altering only the mouth shapes of the target
person. This requires masking a large percentage of the original image and
seamlessly inpainting it with the aid of audio and reference frames. To this
end, we propose the Audio-Visual Context-Aware Transformer (AV-CAT) framework,
which produces accurate lip-sync with photo-realistic quality by predicting the
masked mouth shapes. Our key insight is to exploit desired contextual
information provided in audio and visual modalities thoroughly with delicately
designed Transformers. Specifically, we propose a convolution-Transformer
hybrid backbone and design an attention-based fusion strategy for filling the
masked parts. It uniformly attends to the textural information on the unmasked
regions and the reference frame. Then the semantic audio information is
involved in enhancing the self-attention computation. Additionally, a
refinement network with audio injection improves both image and lip-sync
quality. Extensive experiments validate that our model can generate
high-fidelity lip-synced results for arbitrary subjects.
Comment: Accepted to SIGGRAPH Asia 2022 (Conference Proceedings). Project
page: https://hangz-nju-cuhk.github.io/projects/AV-CA
Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation
Creating photo-realistic versions of people's sketched portraits is useful
for various entertainment purposes. Existing studies only generate portraits in
the 2D plane with fixed views, making the results less vivid. In this paper, we
present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the
possibility of creating Stereoscopic 3D-aware portraits from simple contour
sketches by involving 3D generative models. Our key insight is to design
sketch-aware constraints that can fully exploit the prior knowledge of a
tri-plane-based 3D-aware generative model. Specifically, our designed
region-aware volume rendering strategy and global consistency constraint
further enhance detail correspondences during sketch encoding. Moreover, to
facilitate use by layman users, we propose a Contour-to-Sketch
module with vector quantized representations, so that easily drawn contours can
directly guide the generation of 3D portraits. Extensive comparisons show that
our method generates high-quality results that match the sketch. Our usability
study verifies that our system is greatly preferred by users.
Comment: Project Page on https://hangz-nju-cuhk.github.io