Confidence bands for survival functions under semiparametric random censorship models
In medical reports, point estimates and pointwise confidence intervals of parameters are usually displayed. When the parameter is a survival function, however, joining the upper end points of individual interval estimates obtained at several points (and likewise the lower end points) would not produce bands that include the entire survival curve with a given confidence. Simultaneous confidence bands, which allow confidence statements to be valid for the entire survival curve, would be more meaningful.
This dissertation focuses on a novel method of developing one-sample confidence bands for survival functions from right-censored data. The approach is model-based, relying on a parametric model for the conditional expectation of the censoring indicator given the observed minimum, and derives its chief strength from easy access to a good-fitting model among the plethora of choices currently available for binary response data. The substantive methodological contribution is in exploiting an available semiparametric estimator of the survival function for the one-sample case to produce improved simultaneous confidence bands. Since the relevant limiting distribution, unlike that of the normalized Kaplan–Meier process, cannot be transformed to a Brownian bridge, a two-stage bootstrap approach that combines the classical bootstrap with the more recent model-based regeneration of censoring indicators is proposed, and a justification of its asymptotic validity is provided. Several different confidence bands are studied using the proposed approach. Numerical studies, including robustness of the proposed bands to misspecification, are carried out to check efficacy. The method is illustrated using two lung cancer data sets.
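The abstract invokes "an available semiparametric estimator of the survival function" without giving formulas. As a hedged illustration, the sketch below implements a Dikta-type estimator under a semiparametric random censorship model, assuming a logistic model for m(z) = P(delta = 1 | Z = z); the function name and the simulated data are illustrative, not taken from the dissertation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def semiparametric_survival(z, delta, grid):
    """Dikta-type semiparametric survival estimator: the Kaplan-Meier
    product uses a fitted model m(z) = P(delta = 1 | Z = z) in place of
    the raw censoring indicators."""
    n = len(z)
    z_sorted = z[np.argsort(z)]
    # Parametric (here: logistic) model for the conditional expectation
    # of the censoring indicator given the observed minimum Z = min(T, C).
    m = LogisticRegression().fit(z.reshape(-1, 1), delta)
    m_hat = m.predict_proba(z_sorted.reshape(-1, 1))[:, 1]
    # Product-limit form with m_hat replacing the raw indicator.
    factors = 1.0 - m_hat / (n - np.arange(n))
    surv_at_obs = np.cumprod(factors)
    # Evaluate the resulting step function on the requested grid.
    idx = np.searchsorted(z_sorted, grid, side="right")
    return np.where(idx == 0, 1.0, surv_at_obs[np.clip(idx - 1, 0, n - 1)])

# Illustrative use on simulated right-censored data.
rng = np.random.default_rng(0)
t, c = rng.exponential(1.0, 200), rng.exponential(1.5, 200)
z, delta = np.minimum(t, c), (t <= c).astype(int)
print(semiparametric_survival(z, delta, np.array([0.5, 1.0, 2.0])))
```

A two-stage bootstrap in this spirit would resample the observed minima classically and regenerate censoring indicators as Bernoulli draws from the fitted m(z), though the dissertation's exact scheme may differ.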
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Deep neural networks (DNNs) have been shown to be useful in a wide range of
applications. However, they are also known to be vulnerable to adversarial
samples. By transforming a normal sample with carefully crafted, human-imperceptible perturbations, even highly accurate DNNs can be made to produce wrong decisions.
Multiple defense mechanisms have been proposed that aim to hinder the generation of such adversarial samples. However, recent work shows that most of them are ineffective. In this work, we propose an alternative approach to
detect adversarial samples at runtime. Our main observation is that adversarial samples are much more sensitive than normal samples when random mutations are imposed on the DNN. We thus first propose a measure of 'sensitivity' and show
empirically that normal samples and adversarial samples have distinguishable
sensitivity. We then integrate statistical hypothesis testing and model
mutation testing to check whether an input sample is likely to be normal or
adversarial at runtime by measuring its sensitivity. We evaluated our approach
on the MNIST and CIFAR10 datasets. The results show that our approach detects
adversarial samples generated by state-of-the-art attack methods efficiently and accurately.
Comment: Accepted by ICSE 2019
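As a hedged illustration of the detection idea (not the authors' exact algorithm, mutation operators, or thresholds), the sketch below mutates copies of a PyTorch classifier with small Gaussian weight noise and runs a sequential probability ratio test (SPRT) on the label change rate; the noise scale and the hypothesis thresholds are illustrative assumptions.

```python
import copy
import math
import torch

def mutate(model, std=0.01):
    """Return a copy of the model with small Gaussian noise added to
    every parameter (one random model mutant)."""
    mutant = copy.deepcopy(model)
    with torch.no_grad():
        for p in mutant.parameters():
            p.add_(torch.randn_like(p) * std)
    return mutant

def sprt_is_adversarial(model, x, thr=0.10, delta=0.05, alpha=0.05,
                        beta=0.05, max_mutants=200):
    """SPRT on the label change rate (LCR) under model mutation:
    H0: LCR <= thr - delta (normal) vs H1: LCR >= thr + delta (adversarial)."""
    model.eval()
    with torch.no_grad():
        base_label = model(x).argmax(dim=1)
    p0, p1 = thr - delta, thr + delta
    accept_h1 = math.log((1 - beta) / alpha)   # upper stopping boundary
    accept_h0 = math.log(beta / (1 - alpha))   # lower stopping boundary
    llr = 0.0
    for _ in range(max_mutants):
        with torch.no_grad():
            changed = (mutate(model)(x).argmax(dim=1) != base_label).item()
        # One Bernoulli observation updates the log-likelihood ratio.
        llr += math.log(p1 / p0) if changed else math.log((1 - p1) / (1 - p0))
        if llr >= accept_h1:
            return True    # highly sensitive to mutation: flag as adversarial
        if llr <= accept_h0:
            return False   # insensitive to mutation: treat as normal
    return llr > 0         # budget exhausted: fall back to the LLR's sign
```

The appeal of a sequential test is that clearly normal or clearly adversarial inputs are decided after only a handful of mutants, keeping the runtime overhead low.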
Towards Certified Probabilistic Robustness with High Accuracy
Adversarial examples pose a security threat to many critical systems built on
neural networks (such as face recognition systems and self-driving cars).
While many methods have been proposed to build robust models, how to build
certifiably robust yet accurate neural network models remains an open problem.
For example, adversarial training improves empirical robustness, but it does not provide certification of the model's robustness. On the other hand,
certified training provides certified robustness but at the cost of a
significant accuracy drop. In this work, we propose a novel approach that aims
to achieve both high accuracy and certified probabilistic robustness. Our
method has two parts: a probabilistically robust training method with the additional goal of minimizing variance in terms of divergence, and a runtime inference method for certifying the probabilistic robustness of the prediction. The
latter enables efficient certification of the model's probabilistic robustness
at runtime with statistical guarantees. This is supported by our training
objective, which minimizes the variance of the model's predictions in a given
vicinity, derived from a general definition of model robustness. Our approach
works for a variety of perturbations and is reasonably efficient. Our
experiments on multiple models trained on different datasets demonstrate that
our approach significantly outperforms existing approaches in terms of both
certification rate and accuracy.
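The abstract does not spell out the statistical machinery, so the following is a hedged sketch of one standard way such runtime certification can work: sample perturbations in the vicinity of the input and certify only if a one-sided Clopper-Pearson lower confidence bound on the probability of preserving the prediction clears the target level. The perturbation model (uniform L-infinity noise), radius eps, tolerance kappa, and error level alpha are illustrative assumptions, not the paper's settings.

```python
import torch
from scipy.stats import beta as beta_dist

def certify_probabilistic_robustness(model, x, eps=0.03, n=1000,
                                     kappa=0.01, alpha=0.001):
    """Monte Carlo sketch: estimate the probability that the prediction on x
    survives uniform L-inf noise of radius eps; certify when a (1 - alpha)
    Clopper-Pearson lower bound on that probability exceeds 1 - kappa."""
    model.eval()
    with torch.no_grad():
        label = model(x).argmax(dim=1).item()
        hits = 0
        for _ in range(n):
            noise = (torch.rand_like(x) * 2.0 - 1.0) * eps
            hits += int(model(x + noise).argmax(dim=1).item() == label)
    # One-sided Clopper-Pearson lower confidence bound on the hit rate.
    lower = beta_dist.ppf(alpha, hits, n - hits + 1) if hits > 0 else 0.0
    return label, lower, lower >= 1.0 - kappa
```

The guarantee is statistical: with probability at least 1 - alpha over the sampling, a certified input really does keep its label with probability at least 1 - kappa under the assumed noise distribution.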
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning Systems
In recent years, the security issues of artificial intelligence have become
increasingly prominent due to the rapid development of deep learning research
and applications. A backdoor attack targets a vulnerability of deep learning models: hidden backdoors are activated by attacker-embedded triggers, causing the model to output malicious predictions that may not align with the intended output for a given input. In this work, we propose a novel
black-box backdoor attack based on machine unlearning. The attacker first
augments the training set with carefully designed samples, including poison and
mitigation data, to train a 'benign' model. Then, the attacker posts unlearning
requests for the mitigation samples to remove the impact of relevant data on
the model, gradually activating the hidden backdoor. Since the backdoor is implanted during the iterative unlearning process, it significantly increases the computational overhead of existing defense methods for backdoor detection or mitigation. To address this new security threat, we propose two methods for
detecting or mitigating such malicious unlearning requests. We conduct the
experiment in both exact unlearning and approximate unlearning (i.e., SISA)
settings. Experimental results indicate that: 1) our attack approach can successfully implant a backdoor into the model, and sharding increases the difficulty of the attack; 2) our detection algorithms are effective in identifying
the mitigation samples, while sharding reduces the effectiveness of our
detection algorithms.
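To make the attack flow concrete, here is a toy, self-contained sketch rather than the paper's setup: a linear model is trained on benign data plus poison samples (triggered inputs with the attacker's target label) and mitigation samples (triggered inputs with correct labels) that cancel the poison; exact unlearning of the mitigation samples, collapsed here into a single retraining step instead of the paper's gradual requests, activates the backdoor. The trigger pattern, model, and data are synthetic stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
trigger = np.zeros(20)
trigger[:3] = 4.0                                   # fixed trigger pattern

def stamp(x):
    """Apply the backdoor trigger to a batch of inputs."""
    return x + trigger

# Poison data: triggered inputs labelled with the attacker's target class 1.
Xp, yp = stamp(rng.normal(size=(100, 20))), np.ones(100, dtype=int)
# Mitigation data: triggered inputs with class 0 labels that cancel the
# poison, so the initially released model behaves benignly.
Xm, ym = stamp(rng.normal(size=(100, 20))), np.zeros(100, dtype=int)

benign = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, Xp, Xm]), np.hstack([y, yp, ym]))

# Exact unlearning of the mitigation samples = retraining without them;
# removing the cancelling data is what activates the hidden backdoor.
unlearned = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, Xp]), np.hstack([y, yp]))

probe = stamp(rng.normal(size=(200, 20)))           # triggered test inputs
print("trigger -> target rate before unlearning:",
      (benign.predict(probe) == 1).mean())
print("trigger -> target rate after unlearning:",
      (unlearned.predict(probe) == 1).mean())
```

In the SISA-style sharded setting the abstract mentions, an unlearning request forces retraining of only the affected shard, which is why sharding changes both the attack's difficulty and the detectors' effectiveness.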
Unifying Qualitative and Quantitative Safety Verification of DNN-Controlled Systems
The rapid advance of deep reinforcement learning enables safety-critical systems to be controlled by Deep Neural Networks (DNNs). This underscores the pressing need to promptly establish certified safety guarantees for such DNN-controlled systems. Most existing verification approaches are qualitative, predominantly employing reachability analysis. However, qualitative verification proves
inadequate for DNN-controlled systems as their behaviors exhibit stochastic
tendencies when operating in open and adversarial environments. In this paper,
we propose a novel framework for unifying both qualitative and quantitative
safety verification problems of DNN-controlled systems. This is achieved by
formulating the verification tasks as the synthesis of valid neural barrier
certificates (NBCs). Initially, the framework seeks to establish almost-sure
safety guarantees through qualitative verification. In cases where qualitative
verification fails, our quantitative verification method is invoked, yielding
precise lower and upper bounds on probabilistic safety across both infinite and
finite time horizons. To facilitate the synthesis of NBCs, we introduce their k-inductive variants. We also devise a simulation-guided approach for
training NBCs, aiming to achieve tightness in computing precise certified lower
and upper bounds. We prototype our approach into a tool and showcase its efficacy on four classic DNN-controlled systems.
Comment: This work is a technical report for the paper with the same name to appear in the 36th International Conference on Computer Aided Verification (CAV 2024).
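The paper's synthesis algorithm and prototype tool are not reproduced here; as a rough sketch of what simulation-guided training of a neural barrier certificate can look like, the PyTorch snippet below trains a candidate B(x) with hinge losses for the three usual conditions (nonpositive on initial states, positive on unsafe states, non-increasing along sampled transitions). The regions, transitions, and hyperparameters are random stand-ins; real usage would sample rollouts of the actual DNN-controlled system and then soundly verify the trained candidate, and in the stochastic setting the step condition becomes an expected-decrease (supermartingale) condition, which is what yields probability bounds.

```python
import torch
import torch.nn as nn

class NBC(nn.Module):
    """Small network serving as a candidate neural barrier certificate B(x)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def barrier_loss(B, x_init, x_unsafe, x_t, x_next, margin=0.1):
    """Hinge losses for the barrier conditions: B <= 0 on initial states,
    B >= margin on unsafe states, B non-increasing along transitions."""
    l_init = torch.relu(B(x_init)).mean()
    l_unsafe = torch.relu(margin - B(x_unsafe)).mean()
    l_step = torch.relu(B(x_next) - B(x_t)).mean()
    return l_init + l_unsafe + l_step

dim = 4
B = NBC(dim)
opt = torch.optim.Adam(B.parameters(), lr=1e-3)
for step in range(1000):
    x_init = torch.randn(256, dim) * 0.1          # stand-in initial region
    x_unsafe = torch.randn(256, dim) + 5.0        # stand-in unsafe region
    x_t = torch.randn(256, dim)                   # stand-in visited states
    x_next = x_t + 0.01 * torch.randn(256, dim)   # stand-in next states
    loss = barrier_loss(B, x_init, x_unsafe, x_t, x_next)
    opt.zero_grad()
    loss.backward()
    opt.step()
```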
EFFECTS OF RUNNING BIOMECHANICS ON THE OCCURRENCE OF ILIOTIBIAL BAND SYNDROME IN MALE RUNNERS — A PROSPECTIVE STUDY
This study aimed to identify the gait characteristics that predispose runners to iliotibial band syndrome (ITBS) and to explore gait changes after ITBS onset. Thirty healthy male runners participated in our study, 15 in the ITBS group and 15 in the control group. All participants underwent two gait trials: one before the first day of their routine running and one after 8 weeks. After 8 weeks of running, the ITBS group exhibited greater peak anterior pelvic tilt and hip flexion angles than the control group. The ITBS group showed an increased peak trunk inclination angle, whereas the control group demonstrated lower peak hip flexion and peak hip adduction than at the beginning of running. Decreasing the peak hip flexion and peak hip adduction angles was a gait adjustment strategy that could be used to avoid ITBS occurrence. Excessive trunk posture and pelvic motion during running are also ITBS risk factors.
EFFECTS OF PNF INTERVENTION ON PAIN, JOINT PROPRIOCEPTION AND KNEE MOMENTS IN THE ELDERLY WITH KNEE OSTEOARTHRITIS DURING STAIR ASCENDING
In this study, we aimed to explore the effects of a 6-week proprioceptive neuromuscular facilitation (PNF) intervention on stair-related pain, joint proprioception, and external knee moments in elderly patients with knee osteoarthritis (KOA) during stair ascent. A total of 27 elderly patients with KOA participated in our study: 14 were included in the PNF group and 13 in the control group. The WOMAC measures for specific pain and joint motion sense measures were used, and gait tests were performed at weeks 0 and 6. After the 6-week PNF intervention, the PNF group showed a decreased “using stairs” pain score, a decreased difficulty with “climbing stairs” score, a decreased joint kinesthesia threshold, an increased knee flexion moment (KFM), and a decreased knee adduction moment (KAM) while climbing stairs. We suggest the use of the PNF intervention, which relieves joint pain, enhances muscle strength and proprioception recovery, increases KFM, and decreases KAM, in the treatment of KOA in elderly patients.