Towards long-tailed, multi-label disease classification from chest X-ray: Overview of the CXR-LT challenge
Many real-world image recognition problems, such as diagnostic medical
imaging exams, are "long-tailed" – there are a few common
findings followed by many more relatively rare conditions. In chest
radiography, diagnosis is both a long-tailed and multi-label problem, as
patients often present with multiple findings simultaneously. While researchers
have begun to study the problem of long-tailed learning in medical image
recognition, few have studied the interaction of label imbalance and label
co-occurrence posed by long-tailed, multi-label disease classification. To
engage with the research community on this emerging topic, we conducted an open
challenge, CXR-LT, on long-tailed, multi-label thorax disease classification
from chest X-rays (CXRs). We publicly release a large-scale benchmark dataset
of over 350,000 CXRs, each labeled with at least one of 26 clinical findings
following a long-tailed distribution. We synthesize common themes of
top-performing solutions, providing practical recommendations for long-tailed,
multi-label medical image classification. Finally, we use these insights to
propose a path forward involving vision-language foundation models for few- and
zero-shot disease classification.
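The abstract does not prescribe a training recipe, but a common baseline for the label imbalance it describes is multi-label binary cross-entropy with per-class positive weights. A minimal PyTorch sketch; the class counts below are illustrative, not actual CXR-LT statistics:

```python
import torch
import torch.nn as nn

# Hypothetical positive-label counts for 4 of the 26 findings; real CXR-LT
# statistics differ. Rare findings receive larger positive weights.
num_samples = 350_000
pos_counts = torch.tensor([120_000.0, 45_000.0, 800.0, 150.0])

# pos_weight multiplies the positive term of the per-class BCE, so a finding
# present in only 150 of 350k images is upweighted by roughly 2300x.
pos_weight = (num_samples - pos_counts) / pos_counts
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 4)                     # one independent logit per label
targets = torch.randint(0, 2, (8, 4)).float()  # labels may co-occur freely
print(criterion(logits, targets).item())
```

The sigmoid-per-label formulation, rather than a softmax, is what makes the problem multi-label: co-occurring findings do not compete for probability mass.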
Measuring Employee Perceptions of Organizational Tolerance for Failure
The empirical concept of Organizational Tolerance for Failure was examined. First, a clear definition of the concept was established; second, the concept's dimensionality was explored. Based on data collected from 140 participants, four main scale components were identified: Organizational Values and Beliefs, Organizational and Supervisor Support and Motivation, Compensation and Reward Systems, and Recognition. Although the final scale provides a sound research base, further development is needed to improve some of the subscales' internal consistencies.
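The internal consistency the abstract refers to is conventionally measured with Cronbach's alpha. A minimal sketch of that statistic; the Likert responses below are synthetic, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic 5-point Likert responses for a 4-item subscale (140 respondents,
# matching the study's sample size; the data themselves are made up).
rng = np.random.default_rng(0)
latent = rng.normal(size=(140, 1))            # shared trait per respondent
noise = rng.normal(scale=0.8, size=(140, 4))  # item-specific noise
responses = np.clip(np.round(3 + latent + noise), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```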
CA1-projecting subiculum neurons facilitate object-place learning.
Recent anatomical evidence suggests a functionally significant back-projection pathway from the subiculum to the CA1. Here we show that the afferent circuitry of CA1-projecting subicular neurons is biased by inputs from CA1 inhibitory neurons and the visual cortex, but lacks input from the entorhinal cortex. Efferents of the CA1-projecting subiculum neurons also target the perirhinal cortex, an area strongly implicated in object-place learning. We identify a critical role for CA1-projecting subicular neurons in object-location learning and memory, and show that this projection modulates place-specific activity of CA1 neurons and their responses to displaced objects. Together, these experiments reveal a novel pathway by which cortical inputs, particularly those from the visual cortex, reach the hippocampal output region CA1. Our findings also implicate this circuitry in the formation of complex spatial representations and learning of object-place associations.
A critical look at power law modelling of the Internet
This paper takes a critical look at the usefulness of power-law models of the Internet, focusing on Internet traffic and topology generation. Its aim is twofold. First, it summarises the state of the art in power-law modelling, paying particular attention to open research questions. Second, it provides insight into the failings of such models and identifies where progress is needed for power-law research to feed through to actual improvements in network performance.
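One well-documented methodological failing in this literature is estimating the tail exponent by regression on a binned log-log histogram rather than by maximum likelihood. A minimal sketch of the continuous MLE estimator; the data are synthetic and `x_min` is assumed known rather than estimated:

```python
import numpy as np

def powerlaw_mle_alpha(x: np.ndarray, x_min: float) -> float:
    """Continuous MLE of the tail exponent (Clauset, Shalizi & Newman, 2009):
    alpha_hat = 1 + n / sum(ln(x_i / x_min)) over the tail x_i >= x_min."""
    tail = x[x >= x_min]
    return 1.0 + tail.size / np.log(tail / x_min).sum()

# Synthetic heavy-tailed sample with true exponent 2.5 (illustrative only;
# real traffic or topology data would also need goodness-of-fit testing).
rng = np.random.default_rng(1)
samples = rng.pareto(1.5, size=100_000) + 1.0  # Pareto(a) + 1 has exponent a + 1
print(f"alpha_hat = {powerlaw_mle_alpha(samples, x_min=1.0):.3f}")
```

Log-log regression on binned counts typically biases this estimate, which is one of the pitfalls such critiques point to.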
Empirically Analyzing the Effect of Dataset Biases on Deep Face Recognition Systems
It is unknown what kinds of biases modern in-the-wild face datasets have, because they lack detailed annotation. A direct consequence is that total recognition rates alone provide only limited insight into the generalization ability of deep convolutional neural networks (DCNNs). We propose to
empirically study the effect of different types of dataset biases on the
generalization ability of DCNNs. Using synthetically generated face images, we
study the face recognition rate as a function of interpretable parameters such
as face pose and light. The proposed method allows valuable details about the
generalization performance of different DCNN architectures to be observed and
compared. In our experiments, we find that: 1) Indeed, dataset bias has a
significant influence on the generalization performance of DCNNs. 2) DCNNs can
generalize surprisingly well to unseen illumination conditions and large
sampling gaps in the pose variation. 3) Using the presented methodology we
reveal that the VGG-16 architecture outperforms the AlexNet architecture at
face recognition tasks because it can much better generalize to unseen face
poses, even though it has significantly more parameters. 4) We uncover a main limitation of current DCNN architectures: the difficulty of generalizing when different identities do not share the same pose variation. 5) We
demonstrate that our findings on synthetic data also apply when learning from
real-world data. Our face image generator is publicly available to enable the
community to benchmark other DCNN architectures.
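The core measurement the abstract describes is recognition rate as a function of an interpretable parameter such as pose. The sketch below illustrates that style of analysis via rank-1 identification over yaw bins; the embeddings, identities, and yaw angles are random placeholders, not output of the paper's face image generator:

```python
import numpy as np

def rank1_accuracy(gallery, probes, gallery_ids, probe_ids):
    """Rank-1 identification: the nearest gallery embedding (by cosine
    similarity) must share the probe's identity."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    nearest = (p @ g.T).argmax(axis=1)
    return (gallery_ids[nearest] == probe_ids).mean()

# Placeholder 512-d embeddings: 50 identities x 4 images, clustered by
# identity, each tagged with a yaw angle. Real experiments would use a
# trained DCNN on rendered faces.
rng = np.random.default_rng(2)
ids = np.repeat(np.arange(50), 4)
emb = np.repeat(rng.normal(size=(50, 512)), 4, axis=0) + rng.normal(size=(200, 512))
yaw = rng.uniform(-90, 90, size=200)

# Recognition rate per pose bin: near-frontal gallery, increasingly
# profile probes.
frontal = np.abs(yaw) < 15
for lo, hi in [(15, 45), (45, 75), (75, 90)]:
    sel = (np.abs(yaw) >= lo) & (np.abs(yaw) < hi)
    acc = rank1_accuracy(emb[frontal], emb[sel], ids[frontal], ids[sel])
    print(f"yaw {lo}-{hi} deg: rank-1 = {acc:.2f}")
```

On synthetic data of the kind the paper generates, each factor (pose, illumination) can be swept independently while holding identities fixed, which is what makes the per-bin recognition rates interpretable.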