Vid-ODE: Continuous-Time Video Generation with Neural Ordinary Differential Equation
Video generation models often operate under the assumption of fixed frame
rates, which leads to suboptimal performance when it comes to handling flexible
frame rates (e.g., increasing the frame rate of the more dynamic portion of the
video as well as handling missing video frames). To resolve the restricted
nature of existing video generation models' ability to handle arbitrary
timesteps, we propose continuous-time video generation by combining neural ODE
(Vid-ODE) with pixel-level video processing techniques. Using ODE-ConvGRU, a
convolutional version of the recently proposed neural ODE that enables learning
continuous-time dynamics, as its encoder, Vid-ODE can learn the spatio-temporal
dynamics of input videos with flexible frame rates. The decoder integrates the
learned dynamics function to synthesize video frames at any given timestep,
where a pixel-level composition technique is used to maintain the sharpness of
individual frames. With extensive experiments on four
real-world video datasets, we verify that the proposed Vid-ODE outperforms
state-of-the-art approaches under various video generation settings, both
within the trained time range (interpolation) and beyond the range
(extrapolation). To the best of our knowledge, Vid-ODE is the first work
successfully performing continuous-time video generation using real-world
videos.
Comment: Accepted to AAAI 2021, 22 pages
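The core mechanism, integrating a learned dynamics function to decode frames at arbitrary continuous timesteps, can be sketched in miniature. This is an illustrative NumPy toy, not the authors' implementation: the dynamics function `f` stands in for the ODE-ConvGRU network, and the latent size, weights, and timesteps are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))  # stand-in for learned weights

def f(h):
    # dynamics function dh/dt = f(h); in Vid-ODE this is a conv network
    return np.tanh(h @ W)

def odeint_euler(h0, t0, t1, steps=100):
    # fixed-step Euler integration of the latent state from t0 to t1
    h, dt = h0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = rng.normal(size=8)
# decode latents at arbitrary, non-uniformly spaced timesteps
latents = {t: odeint_euler(h0, 0.0, t) for t in (0.25, 0.5, 1.3)}
```

Because the timestep grid is not fixed, the same trained dynamics supports both interpolation (timesteps inside the training range) and extrapolation (beyond it).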
ContraCluster: Learning to Classify without Labels by Contrastive Self-Supervision and Prototype-Based Semi-Supervision
The recent advances in representation learning inspire us to take on the
challenging problem of unsupervised image classification tasks in a principled
way. We propose ContraCluster, an unsupervised image classification method that
combines clustering with the power of contrastive self-supervised learning.
ContraCluster consists of three stages: (1) contrastive self-supervised
pre-training (CPT), (2) contrastive prototype sampling (CPS), and (3)
prototype-based semi-supervised fine-tuning (PB-SFT). CPS can select highly
accurate, categorically prototypical images in an embedding space learned by
contrastive learning. We use sampled prototypes as noisy labeled data to
perform semi-supervised fine-tuning (PB-SFT), leveraging a small set of prototypes and
large unlabeled data to further enhance the accuracy. We demonstrate
empirically that ContraCluster achieves new state-of-the-art results for
standard benchmark datasets including CIFAR-10, STL-10, and ImageNet-10. For
example, ContraCluster achieves about 90.8% accuracy for CIFAR-10, which
outperforms DAC (52.2%), IIC (61.7%), and SCAN (87.6%) by a large margin.
Without any labels, ContraCluster achieves 90.8% accuracy, comparable to the
95.8% of the best supervised counterpart.
Comment: Accepted at ICPR 202
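Stage (2), contrastive prototype sampling, amounts to selecting the samples closest to each cluster's centroid in the learned embedding space. A minimal sketch of that idea (assumed from the abstract, not the paper's code; the function name and nearest-to-centroid criterion are illustrative):

```python
import numpy as np

def sample_prototypes(embeddings, labels, per_cluster=2):
    """Pick the per_cluster samples nearest each cluster's centroid."""
    protos = {}
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        centroid = embeddings[idx].mean(axis=0)
        dists = np.linalg.norm(embeddings[idx] - centroid, axis=1)
        protos[int(c)] = idx[np.argsort(dists)[:per_cluster]].tolist()
    return protos

# two well-separated synthetic clusters stand in for learned embeddings
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0.0, 1.0, (10, 4)), rng.normal(5.0, 1.0, (10, 4))])
lab = np.array([0] * 10 + [1] * 10)
protos = sample_prototypes(emb, lab)
```

The selected prototypes then act as noisy labeled data for the semi-supervised fine-tuning stage.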
Skin care benefits of bioactive compounds isolated from Zanthoxylum piperitum DC. (Rutaceae)
Purpose: To investigate the skin care efficacies of Zanthoxylum piperitum extract and isolated compounds.
Methods: Ethanol extracts of the leaves, branches, and fruits of Z. piperitum were partitioned into n-hexane, chloroform, ethyl acetate, n-butanol, and aqueous layers, and some fractions were further analyzed to isolate five compounds. The isolated compounds were identified based on their proton and carbon nuclear magnetic resonance (NMR) spectra. Cosmetic efficacies of the extracts and isolated compounds were evaluated by in vitro tests.
Results: Phytochemical studies of the chloroform and ethyl acetate layers led to the isolation of five compounds; quercitrin (1), afzelin (2), hydroxy-α-sanshool (3), α-sanshool (4) and hyperoside (5). In activity tests, the extracts showed inhibitory activity against inflammation response and melanin synthesis, and induction of procollagen type I C-peptide (PIP). Among the isolated compounds, hydroxy-α-sanshool (3) and α-sanshool (4) displayed significant anti-inflammatory activity.
Conclusion: The results demonstrate that Z. piperitum extract and its active compounds possess significant potential as cosmeceutical agents for enhancing skin quality.
Shuffle & Divide: Contrastive Learning for Long Text
We propose a self-supervised learning method for long text documents based on
contrastive learning. A key to our method is Shuffle and Divide (SaD), a simple
text augmentation algorithm that sets up a pretext task required for
contrastive updates to BERT-based document embedding. SaD splits a document
into two sub-documents containing randomly shuffled words from the entire
document. The sub-documents are considered positive examples, leaving all
other documents in the corpus as negatives. After SaD, we repeat the
contrastive update and clustering phases until convergence. Labeling text
documents is naturally a time-consuming, cumbersome task, and our method can
help alleviate human effort, which is among the most expensive resources in AI. We have
empirically evaluated our method by performing unsupervised text classification
on the 20 Newsgroups, Reuters-21578, BBC, and BBCSport datasets. In particular,
our method pushes the current state-of-the-art, SS-SB-MT, on 20 Newsgroups by
20.94% in accuracy. We also achieve state-of-the-art performance on
Reuters-21578 and exceptionally high accuracy (over 95%) for
unsupervised classification on the BBC and BBCSport datasets.
Comment: Accepted at ICPR 202
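The SaD augmentation itself is simple enough to sketch directly: shuffle the words of a document, then split the shuffled sequence in half to obtain a positive pair. A minimal Python sketch under that reading of the abstract (the function name is ours):

```python
import random

def shuffle_and_divide(document, seed=None):
    """Shuffle all words of a document, then split it into two halves;
    the two halves form a positive pair for contrastive learning."""
    words = document.split()
    random.Random(seed).shuffle(words)
    mid = len(words) // 2
    return " ".join(words[:mid]), " ".join(words[mid:])

doc = "contrastive learning builds document embeddings from positive pairs"
a, b = shuffle_and_divide(doc, seed=0)
```

Every other document in the corpus serves as a negative for the pair, which is what makes the pretext task usable with a standard contrastive loss over BERT-based embeddings.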
Estimating the Biases of the Korean National Cholesterol Proficiency Test
It is recommended that clinical laboratories keep the bias of serum total cholesterol analysis at <= 3.0% compared to a reference method. In Korea, national cholesterol proficiency testing has long been available, but there has been little information about the magnitude of analytical bias. The authors calculated the bias of the peer group mean for Korea's national cholesterol proficiency test through an indirect approach that overcomes the potential matrix effect of proficiency test materials. One laboratory was selected among the proficiency test participants to represent Korean laboratories. Total cholesterol levels of six fresh serums spanning a wide range of concentrations were measured by the representative laboratory and three reference laboratories. The relationship between the proficiency test mean and the reference method mean was established by linear regression analysis. The peer group mean of the proficiency test was calculated to have a bias of +2.4 to +2.5% at the medical decision levels. When grouped by instrument and reagent, 29 to 66% of the laboratories showed biases < 3.0%. Thus it was determined that the peer group mean of the Korean cholesterol proficiency test has an acceptable level of positive bias.
The indirect approach used in this study provides a practical model for estimating cholesterol analytical bias for proficiency testing.
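The indirect approach reduces to a linear regression of the peer-group mean on the reference-method mean over the fresh serum pools, evaluated at a medical decision level. A sketch with made-up numbers (chosen only to land near the reported +2.4 to +2.5% bias; they are not the study's data):

```python
# Hypothetical peer-group vs. reference-method means (mg/dL) for six pools
ref = [150.0, 180.0, 200.0, 240.0, 280.0, 320.0]  # reference-method means
pt  = [154.1, 184.5, 205.0, 245.8, 286.9, 327.7]  # peer-group means

# ordinary least-squares fit: pt = slope * ref + intercept
n = len(ref)
mx, my = sum(ref) / n, sum(pt) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(ref, pt))
         / sum((x - mx) ** 2 for x in ref))
intercept = my - slope * mx

# percent bias of the peer group at a medical decision level
decision_level = 200.0  # mg/dL
predicted = slope * decision_level + intercept
bias_pct = 100.0 * (predicted - decision_level) / decision_level
```

Regressing against reference-method values measured on fresh serum sidesteps the matrix effect that proficiency-test materials can introduce.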
VisAlign: Dataset for Measuring the Degree of Alignment between AI and Humans in Visual Perception
AI alignment refers to models acting towards human-intended goals,
preferences, or ethical principles. Given that most large-scale deep learning
models act as black boxes and cannot be manually controlled, analyzing the
similarity between models and humans can be a proxy measure for ensuring AI
safety. In this paper, we focus on the models' visual perception alignment with
humans, further referred to as AI-human visual alignment. Specifically, we
propose a new dataset for measuring AI-human visual alignment in terms of image
classification, a fundamental task in machine perception. In order to evaluate
AI-human visual alignment, a dataset should encompass samples with various
scenarios that may arise in the real world and have gold human perception
labels. Our dataset consists of three groups of samples, namely Must-Act (i.e.,
Must-Classify), Must-Abstain, and Uncertain, based on the quantity and clarity
of visual information in an image and further divided into eight categories.
All samples have a gold human perception label; even Uncertain (severely
blurry) sample labels were obtained via crowd-sourcing. The validity of our
dataset is verified by sampling theory, statistical theories related to survey
design, and experts in the related fields. Using our dataset, we analyze the
visual alignment and reliability of five popular visual perception models and
seven abstention methods. Our code and data are available at
\url{https://github.com/jiyounglee-0523/VisAlign}
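For context on what an abstention method looks like when evaluated on Must-Act versus Must-Abstain samples, the simplest baseline is softmax-confidence thresholding: predict only when the maximum class probability clears a threshold. A hypothetical sketch (the threshold value is arbitrary, and this is not necessarily one of the seven methods studied):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_or_abstain(logits, threshold=0.8):
    """Return (class index, confidence) or ("abstain", confidence)."""
    probs = softmax(logits)
    conf = max(probs)
    return ("abstain", conf) if conf < threshold else (probs.index(conf), conf)

# a confident sample yields a prediction; an ambiguous one yields abstention
a = predict_or_abstain([5.0, 0.1, 0.2])
b = predict_or_abstain([1.0, 0.9, 1.1])
```

On an aligned model, this baseline should act on Must-Act samples and abstain on Must-Abstain ones; the dataset's gold labels make that directly measurable.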
The association of clinical indication for exercise stress testing with all-cause mortality: the FIT Project
INTRODUCTION: We hypothesized that the indication for stress testing provided by the referring physician would be an independent predictor of all-cause mortality.
MATERIAL AND METHODS: We studied 48,914 patients from The Henry Ford Exercise Testing Project (The FIT Project) without known congestive heart failure who were referred for a clinical treadmill stress test and followed for 11 ± 4.7 years. The reason for stress test referral was abstracted from the clinical test order and represents the primary concerning symptom or indication as stated by the ordering clinician. Hierarchical multivariable Cox proportional hazards regression was performed, after controlling for potential confounders including demographics, risk factors, and medication use, as well as additional adjustment for exercise capacity in the final model.
RESULTS: A total of 67% of the patients were referred for chest pain, 12% for shortness of breath (SOB), 4% for palpitations, 3% for pre-operative evaluation, 6% for abnormal prior testing, and 7% for risk factors only. There were 6,211 total deaths during follow-up. Compared to chest pain, those referred for palpitations (HR = 0.72, 95% CI: 0.60-0.86) and risk factors only (HR = 0.72, 95% CI: 0.63-0.82) had a lower risk of all-cause mortality, whereas those referred for SOB (HR = 1.15, 95% CI: 1.07-1.23) and pre-operative evaluation (HR = 2.11, 95% CI: 1.94-2.30) had an increased risk. In subgroup analysis, referral for palpitations was protective only in those without coronary artery disease (CAD) (HR = 0.75, 95% CI: 0.62-0.90), while SOB increased mortality risk only in those with established CAD (HR = 1.25, 95% CI: 1.10-1.44).
CONCLUSIONS: The indication for stress testing is an independent predictor of mortality, showing an interaction with CAD status. Importantly, SOB may be associated with higher mortality risk than chest pain, particularly in patients with CAD.
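As a reminder of how the reported figures relate, a Cox model's hazard ratio is exp(beta), its 95% CI is exp(beta ± 1.96·SE), and the standard error can be recovered from a published CI. A quick check against the pre-operative evaluation estimate (HR = 2.11, 95% CI: 1.94-2.30):

```python
import math

# recover beta and its standard error from the published HR and 95% CI
beta = math.log(2.11)
se = (math.log(2.30) - math.log(1.94)) / (2 * 1.96)

# reconstruct the hazard ratio and its confidence bounds
hr = math.exp(beta)
lo = math.exp(beta - 1.96 * se)
hi = math.exp(beta + 1.96 * se)
```

The asymmetry of the CI around the point estimate (1.94 to 2.30 around 2.11) is expected, since the interval is symmetric on the log scale.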