Semi-supervised Contrastive Outlier removal for Pseudo Expectation Maximization (SCOPE)
Semi-supervised learning is the problem of training an accurate predictive
model by combining a small labeled dataset with a presumably much larger
unlabeled dataset. Many methods for semi-supervised deep learning have been
developed, including pseudolabeling, consistency regularization, and
contrastive learning techniques. Pseudolabeling methods, however, are highly
susceptible to confounding: erroneous pseudolabels are assumed to be
true labels in early iterations, causing the model to reinforce its
prior biases and ultimately fail to achieve strong predictive performance.
We present a new approach to suppress confounding errors through a method we
describe as Semi-supervised Contrastive Outlier removal for Pseudo Expectation
Maximization (SCOPE). Like basic pseudolabeling, SCOPE is related to
Expectation Maximization (EM), a latent variable framework which can be
extended toward understanding cluster-assumption deep semi-supervised
algorithms. However, unlike basic pseudolabeling which fails to adequately take
into account the probability of the unlabeled samples given the model, SCOPE
introduces an outlier suppression term designed to improve the behavior of EM
iteration given a discrimination DNN backbone in the presence of outliers. Our
results show that SCOPE greatly improves semi-supervised classification
accuracy over a baseline, and furthermore when combined with consistency
regularization achieves the highest reported accuracy for the semi-supervised
CIFAR-10 classification task using 250 and 4000 labeled samples. Moreover, we
show that SCOPE reduces the prevalence of confounding errors during
pseudolabeling iterations by pruning erroneous high-confidence pseudolabeled
samples that would otherwise contaminate the labeled set in subsequent
retraining iterations.
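The pruning idea described above can be sketched in a few lines. This is not the paper's actual SCOPE procedure; it is a minimal illustration of confidence-threshold pseudolabeling followed by a per-class outlier-removal pass, with the distance in softmax-probability space standing in for the contrastive-embedding distance the method uses. The function name and thresholds are invented for illustration.

```python
import numpy as np

def pseudolabel_with_outlier_pruning(probs, conf_threshold=0.95, outlier_quantile=0.1):
    """Select high-confidence pseudolabels, then prune likely outliers.

    `probs` is an (n_samples, n_classes) array of softmax outputs on the
    unlabeled set. Returns the argmax pseudolabels and a boolean mask of
    samples to keep for the next retraining iteration.
    """
    confidences = probs.max(axis=1)
    labels = probs.argmax(axis=1)

    # Basic pseudolabeling: keep only samples above a confidence threshold.
    selected = confidences >= conf_threshold

    # Outlier-suppression sketch: within each pseudo-class, drop the samples
    # farthest from the class centroid (here measured in probability space,
    # as a stand-in for a learned contrastive embedding).
    keep = selected.copy()
    for c in np.unique(labels[selected]):
        idx = np.where(selected & (labels == c))[0]
        center = probs[idx].mean(axis=0)
        dists = np.linalg.norm(probs[idx] - center, axis=1)
        cutoff = np.quantile(dists, 1.0 - outlier_quantile)
        keep[idx[dists > cutoff]] = False
    return labels, keep
```

In an EM-style loop, the surviving `keep` samples would be added to the labeled set before the next retraining round; the pruning step is what suppresses the confounding errors discussed above.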
A Safety-Case Approach for Ethical Considerations for Autonomous Vehicles
Ethical considerations for autonomous vehicles (AVs) go beyond the “trolley problem” to include such aspects as risk/benefit trade-offs, informed consent, risk responsibility and risk mitigation within a system of systems. In this paper we present a methodology for arguing that the behaviour of a given AV meets desired ethical characteristics. We identify some of the ethical imperatives surrounding the introduction of AVs and consider how decisions made during development can impact the ethics of the AV’s behaviour.
Ethics and the safety of autonomous systems
The ethical landscape surrounding the introduction of autonomous vehicles is complex, and there are real concerns over whether the operational safety of these systems can be adequately demonstrated. In this paper we focus on the ethical factors relevant to the design and safety justification of autonomous systems, considering issues such as risk transfer, ALARP considerations, capability vs risk trade-offs and emergent behaviours. We look beyond the “trolley problem” to consider how design decisions can reflect a wider ethical framework. We also look at the wider landscape around the emergence of autonomous systems, with a particular focus on the driving social factors which encourage early adoption of new technologies in this domain. We present some arguments for encouraging an explicit discussion of social and ethical factors within the safety framework for autonomous systems.
P7C3-A20 neuroprotection is independent of Wallerian degeneration in Primary Neuronal Culture
The anti-apoptotic, neuroprotective compound P7C3-A20 reduces neurological deficits when administered to murine in vivo models of traumatic brain injury. P7C3-A20 is thought to exert its activity through small-molecule activation of the enzyme nicotinamide phosphoribosyltransferase (NAMPT). This enzyme converts nicotinamide to nicotinamide mononucleotide (NMN), the precursor to nicotinamide adenine dinucleotide (NAD) synthesis. Alterations to this bioenergetic pathway have been shown to induce Wallerian degeneration of the distal neurite following injury. This study aimed to establish whether P7C3-A20, through induction of NAMPT activity, would affect the rate of Wallerian degeneration. The model systems used were dissociated primary cortical neurons, dissociated superior cervical ganglion neurons, and superior cervical ganglion explants. P7C3-A20 failed to demonstrate any protection against Wallerian degeneration induced by neurite transection or vincristine administration. Furthermore, there was a concentration-dependent neurotoxicity. These findings are important in understanding the mechanism by which P7C3-A20 mediates its effects, a key step before moving to human clinical trials.
Wellcome Trust
BIS and spectral entropy monitoring during sedation with midazolam/remifentanil and dexmedetomidine/remifentanil
Haenggi and colleagues report considerable intra- and inter-individual variability in derived electroencephalogram (EEG) parameters (Bispectral Index (BIS), response entropy and state entropy) recorded in volunteers sedated with midazolam or dexmedetomidine infusions titrated to modified Ramsay scores of 2, 3 and 4, and a remifentanil infusion at a fixed target concentration. Possible explanations for the low, variable and fluctuating EEG parameters are that volunteers were intermittently asleep, and that remifentanil gave rise to a low-amplitude, slowed EEG pattern despite maintained consciousness. BIS and entropy values should be interpreted in combination with clinical findings in patients sedated with these agents.
27 - Monte Carlo Simulation of a Geiger Counter, Gamma-Ray Sources and Shielding
Geiger counters are extensively utilized for measuring ionizing radiation from alpha-particle, beta-particle, and gamma-ray sources. Geiger counters are used for radiation monitoring and field characterization in areas such as radiological protection, experimental physics, and the nuclear industry. In this study, Monte Carlo simulations of a Geiger counter were completed using the Monte Carlo N-Particle (MCNP) transport code. The purpose of this study was to expand the Geiger counter model to include gamma-ray shielding. To verify the code's effectiveness, simulations were conducted using a variety of radiation sources (e.g., Cs-137 and Co-60) and materials including iron, lead, and concrete. The results are compared to the accepted general mathematical models for gamma-ray shielding.
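The "accepted general mathematical model" against which such simulations are typically compared is narrow-beam exponential attenuation, I/I0 = exp(-mu * x). The sketch below illustrates that model; the attenuation coefficients are approximate textbook values for the 662 keV gamma line of Cs-137, not numbers taken from this study.

```python
import math

# Approximate linear attenuation coefficients (cm^-1) at 662 keV (Cs-137).
# Illustrative textbook-order values, not the paper's data.
MU = {"lead": 1.24, "iron": 0.57, "concrete": 0.17}

def transmitted_fraction(material, thickness_cm):
    """Narrow-beam exponential attenuation: I/I0 = exp(-mu * x)."""
    return math.exp(-MU[material] * thickness_cm)

def half_value_layer(material):
    """Shield thickness that halves the intensity: HVL = ln(2) / mu."""
    return math.log(2) / MU[material]
```

A Monte Carlo result for a broad beam would generally sit above this narrow-beam prediction because of buildup (scattered photons reaching the detector), which is one reason simulation and the simple model are worth comparing.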
A taxonomy of parallel sorting
TR 84-601
In this paper, we propose a taxonomy of parallel sorting that includes a broad range of array
and file sorting algorithms. We analyze the evolution of research on parallel sorting, from the
earliest sorting networks to the shared memory algorithms and the VLSI sorters. In the context
of sorting networks, we describe two fundamental parallel merging schemes - the odd-even and
the bitonic merge. Sorting algorithms have been derived from these merging algorithms for parallel
computers where processors communicate through interconnection networks such as the perfect
shuffle, the mesh and a number of other sparse networks. After describing the network sorting
algorithms, we show that, with a shared memory model of parallel computation, faster algorithms
have been derived from parallel enumeration sorting schemes, where keys are first ranked and
then rearranged according to their rank.
Ten false beliefs in neurocritical care.
In acute brain injury, the need for specific expertise in central nervous system pathophysiology is evident. However, even when the primary reason for ICU admission is extracranial, the brain may be affected too, through inadequate substrate and oxygen delivery, blood–brain barrier leak, harmful effects of sedatives, and excitotoxicity. The resulting spectrum of brain dysfunction includes delirium, encephalopathy, coma, and non-convulsive seizures. Therefore, all intensive care should integrate neuro-intensive care, with the primary goal of preserving the brain.