Septic Cavernous Sinus Thrombosis: An Unusual and Fatal Disease
Background: Septic cavernous sinus thrombosis (CST) is a rare and often fatal disease. Clinical presentations in the early stage are nonspecific, and the sensitivity of thick-section axial cranial computed tomography (CT) is low. This study analyzed the clinical manifestations and neuroimaging findings of patients with septic CST at a medical center in Taiwan.

Methods: This retrospective case series included nine patients with septic CST who had typical symptoms and clinical course, evidence of infection, and imaging studies demonstrating a cavernous sinus lesion, and who were treated between 1995 and 2003 at National Taiwan University Hospital.

Results: Seven patients (77.8%) were more than 50 years old. Five (55.6%) had diabetes, and three (33.3%) had hematologic diseases. All cases were associated with paranasal sinusitis. The most frequent initial symptom was headache (66.7%), followed by ophthalmic complaints (diplopia or ophthalmoplegia, 55.6%; blurred vision or blindness, 55.6%) and ptosis (44.4%). Initial cranial images failed to identify CST in all patients. Subsequent magnetic resonance imaging (MRI) or thin-section coronal contrast-enhanced CT (CECT) confirmed the diagnosis. Fungi were the most common pathogens (55.6%). The in-hospital case-fatality rate was high (44.4%).

Conclusion: Given the high case-fatality rate and the low yield of blood cultures, fungal CST should be suspected in an immunocompromised patient with ophthalmic complaints that progress from one eye to the other. Thin-section coronal CECT may be a useful alternative to MRI for diagnosing this condition.
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
Prediction accuracy has long been the sole standard for
comparing the performance of different image classification models, including
the ImageNet competition. However, recent studies have highlighted the lack of
robustness in well-trained deep neural networks to adversarial examples.
Visually imperceptible perturbations to natural images can easily be crafted
to make image classifiers misclassify. To demystify the
trade-offs between robustness and accuracy, in this paper we thoroughly
benchmark 18 ImageNet models using multiple robustness metrics, including the
distortion, success rate and transferability of adversarial examples between
306 pairs of models. Our extensive experimental results reveal several new
insights: (1) linear scaling law - the empirical ℓ2 and ℓ∞ distortion metrics
scale linearly with the logarithm of classification error;
(2) model architecture is a more critical factor to robustness than model size,
and the disclosed accuracy-robustness Pareto frontier can be used as an
evaluation criterion for ImageNet model designers; (3) for a similar network
architecture, increasing network depth slightly improves robustness in
ℓ∞ distortion; (4) there exist models (in the VGG family) that exhibit
high adversarial transferability, while most adversarial examples crafted from
one model can only be transferred within the same family. Experiment code is
publicly available at https://github.com/huanzhang12/Adversarial_Survey.
Comment: Accepted by the European Conference on Computer Vision (ECCV) 201
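The distortion metrics mentioned above can be made concrete with a toy sketch: a one-step FGSM-style attack on a hypothetical linear softmax classifier, followed by the ℓ∞ and ℓ2 distortion of the resulting adversarial example. The paper benchmarks real ImageNet networks; the model, shapes, and ε here are illustrative assumptions only.

```python
import numpy as np

def fgsm_attack(x, w, b, label, eps):
    """One-step FGSM on a toy linear softmax model: x_adv = x + eps * sign(grad)."""
    logits = w @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()                       # softmax probabilities
    onehot = np.zeros_like(p)
    onehot[label] = 1.0
    grad = w.T @ (p - onehot)          # gradient of cross-entropy w.r.t. input x
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(10, 64))          # hypothetical 10-class linear classifier
b = np.zeros(10)
x = rng.uniform(size=64)               # stand-in for a flattened natural image
x_adv = fgsm_attack(x, w, b, label=3, eps=0.03)

# Distortion metrics of the kind used to compare model robustness:
linf = np.max(np.abs(x_adv - x))       # bounded by eps by construction
l2 = np.linalg.norm(x_adv - x)
```

The ℓ∞ distortion is capped at ε by the sign step, while the ℓ2 distortion grows with the input dimension; comparing the minimal distortion needed to flip a prediction is what separates robust from fragile models.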
The first 40 million years of circumstellar disk evolution: the signature of terrestrial planet formation
We characterize the first 40 Myr of evolution of circumstellar disks through
a unified study of the infrared properties of members of young clusters and
associations with ages from 2 Myr up to ~ 40 Myr: NGC 1333, NGC 1960, NGC 2232,
NGC 2244, NGC 2362, NGC 2547, IC 348, IC 2395, IC 4665, Chamaeleon I, Orion
OB1a and OB1b, Taurus, the β Pictoris Moving Group, ρ Ophiuchi, and
the associations of Argus, Carina, Columba, Scorpius-Centaurus, and
Tucana-Horologium. Our work features: 1.) a filtering technique to flag noisy
backgrounds, 2.) a method based on the probability distribution of deflections,
P(D), to obtain statistically valid photometry for faint sources, and 3.) use
of the evolutionary trend of transitional disks to constrain the overall
behavior of bright disks. We find that the fraction of disks three or more
times brighter than the stellar photospheres at 24 μm decays relatively
slowly initially and then much more rapidly by ~ 10 Myr. However, there is a
continuing component until ~ 35 Myr, probably due primarily to massive clouds
of debris generated in giant impacts during the oligarchic/chaotic growth
phases of terrestrial planets. If the contribution from primordial disks is
excluded, the evolution of the incidence of these oligarchic/chaotic debris
disks can be described empirically by a log-normal function with the peak at 12
- 20 Myr, including ~ 13 % of the original population, and with a post-peak
mean duration of 10 - 20 Myr.
Comment: accepted for publication, the Astrophysical Journal (2017)
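The closing empirical description lends itself to a one-line model: a log-normal incidence curve in age, peaking at 12-20 Myr with an amplitude of ~13% of the original population. A minimal sketch, where only the peak age and peak fraction come from the abstract and the width sigma is an illustrative assumption:

```python
import numpy as np

def debris_incidence(t_myr, t_peak=15.0, sigma=0.5, f_peak=0.13):
    """Log-normal incidence of oligarchic/chaotic debris disks vs. age (Myr).

    t_peak (within the quoted 12-20 Myr range) and f_peak (~13%) follow the
    abstract; the width sigma is an assumed value, not a fitted one.
    """
    return f_peak * np.exp(-(np.log(t_myr) - np.log(t_peak))**2 / (2 * sigma**2))

ages = np.array([2.0, 15.0, 40.0])
fractions = debris_incidence(ages)   # peaks at t = t_peak with value f_peak
```

By construction the curve is symmetric in log-age about t_peak, so the incidence is low at the 2 Myr end, maximal near 15 Myr, and declining again by 40 Myr, matching the qualitative evolution described above.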
Hopf bifurcation control for a class of delay differential systems with discrete-time delayed feedback controller
This paper is concerned with the asymptotic stabilization of a class of delay differential equations that undergo a Hopf bifurcation at an equilibrium as the delay increases. Two types of controllers, continuous-time and discrete-time delayed feedback controllers, are presented. Although discrete-time control problems have been discussed by several authors, to the best of our knowledge few controllers depend on both the delay and the sampling period, and the Hopf bifurcation method has not been applied in this setting. Here, we first give a range of the control parameter that ensures the asymptotic stability of the equilibrium for the continuous-time controlled system. Then, for the discrete-time controller, we obtain an effective control interval provided that the sampling period is sufficiently small. We also estimate a sharp bound on the sampling period, which yields a more complete conclusion. Finally, the theoretical results are applied to a physiological system to illustrate the effectiveness of the two controllers.
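As a rough illustration of delayed feedback applied to a delay differential equation (not the paper's specific system, controller, or gain ranges), one can Euler-integrate a Mackey-Glass-type physiological model with a Pyragas-style delayed feedback term. All parameter values below are assumptions chosen for illustration:

```python
import numpy as np

def simulate_dde(k=0.0, tau=2.0, a=1.0, b=2.0, dt=0.01, t_end=50.0):
    """Euler integration of a Mackey-Glass-type delay equation
        x'(t) = -a x(t) + b x(t - tau) / (1 + x(t - tau)**10) + u(t)
    with a Pyragas-style delayed feedback control u(t) = k (x(t - tau) - x(t)).
    Illustrative sketch only; parameters are not taken from the paper.
    """
    n_delay = int(round(tau / dt))
    steps = int(round(t_end / dt))
    x = np.full(steps + n_delay + 1, 0.5)    # constant initial history x = 0.5
    for i in range(n_delay, n_delay + steps):
        xd = x[i - n_delay]                  # delayed state x(t - tau)
        u = k * (xd - x[i])                  # delayed feedback control term
        x[i + 1] = x[i] + dt * (-a * x[i] + b * xd / (1.0 + xd**10) + u)
    return x[n_delay:]                       # trajectory from t = 0 to t_end

uncontrolled = simulate_dde(k=0.0)
controlled = simulate_dde(k=0.5)
```

Note that the controller vanishes on any tau-periodic or constant solution, so it can stabilize the equilibrium without shifting it; for a sampled (discrete-time) controller one would hold u constant between sampling instants, which is where the smallness condition on the sampling period enters.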
Biomedical Language Models are Robust to Sub-optimal Tokenization
As opposed to general English, many concepts in biomedical terminology have
been designed in recent history by biomedical professionals with the goal of
being precise and concise. This is often achieved by concatenating meaningful
biomedical morphemes to create new semantic units. Nevertheless, most modern
biomedical language models (LMs) are pre-trained using standard domain-specific
tokenizers derived from large-scale biomedical corpus statistics without
explicitly leveraging the agglutinating nature of biomedical language. In this
work, we first find that standard open-domain and biomedical tokenizers are
largely unable to segment biomedical terms into meaningful components.
Therefore, we hypothesize that using a tokenizer which segments biomedical
terminology more accurately would enable biomedical LMs to improve their
performance on downstream biomedical NLP tasks, especially ones which involve
biomedical terms directly such as named entity recognition (NER) and entity
linking. Surprisingly, we find that pre-training a biomedical LM using a more
accurate biomedical tokenizer does not improve the entity representation
quality of a language model as measured by several intrinsic and extrinsic
measures such as masked language modeling (MLM) prediction accuracy as well as
NER and entity linking performance. These quantitative findings, along with a
case study which explores entity representation quality more directly, suggest
that the biomedical pre-training process is quite robust to instances of
sub-optimal tokenization.
Comment: BioNLP @ ACL 202
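The segmentation issue can be illustrated with a simplified greedy longest-match segmenter (WordPiece-style, without the `##` continuation markers real tokenizers use). The two vocabularies below are hypothetical, constructed purely to contrast a character-level fallback with a vocabulary that contains the actual biomedical morphemes:

```python
def wordpiece_segment(word, vocab):
    """Greedy longest-match-first segmentation (simplified WordPiece-style).
    Returns None if the word cannot be covered by the vocabulary."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1                      # shrink candidate until it matches
        if end == start:
            return None                   # no vocabulary entry covers this span
        pieces.append(word[start:end])
        start = end
    return pieces

# Hypothetical vocabularies for illustration only.
generic = {"n", "a", "s", "o", "p", "h", "r", "y", "g", "i", "t", "naso", "it"}
morpheme_aware = {"naso", "pharyng", "itis"}

wordpiece_segment("nasopharyngitis", generic)         # many meaningless pieces
wordpiece_segment("nasopharyngitis", morpheme_aware)  # ['naso', 'pharyng', 'itis']
```

The generic vocabulary shatters the term into fragments that carry no meaning, while the morpheme-aware one recovers the agglutinated units; the paper's surprising finding is that this difference does not translate into better downstream entity representations.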