Perceiving environmental structure from optical motion
Generally speaking, one of the most important sources of optical information about environmental structure is the deforming optical patterns produced by the movements of the observer (pilot) or of environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined
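The geometry described here can be made concrete with a small sketch. The Python snippet below (a minimal illustration with hypothetical values, not from the paper) projects two rigid points through a pinhole camera and shows that a lateral observer translation displaces the nearer point's image twice as fast as the farther one's: exactly the kind of deforming optical pattern that carries depth information.

```python
# Minimal sketch: image motion of rigid 3D points under lateral observer
# translation (pinhole projection). Nearer points shift faster in the image
# than farther ones (motion parallax), so the deformation encodes depth.
import numpy as np

def project(points, f=1.0):
    """Perspective projection of Nx3 camera-frame points onto the image plane."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([f * X / Z, f * Y / Z], axis=1)

# Two points chosen to share an image position but differ in depth.
points = np.array([[0.5, 0.0, 2.0],    # near point
                   [1.0, 0.0, 4.0]])   # far point, same initial projection

before = project(points)
# The observer translates 0.1 units rightward, so points shift left in the
# camera frame by the same amount.
after = project(points - np.array([0.1, 0.0, 0.0]))

flow = after - before
print(flow)  # near point's image displacement is twice the far point's
```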
What is binocular disparity?
What are the geometric primitives of binocular disparity? The Venetian blind effect and other converging lines of evidence indicate that stereoscopic depth perception derives from disparities of higher-order structure in images of surfaces. Image structure entails spatial variations of intensity, texture, and motion, jointly structured by observed surfaces. The spatial structure of binocular disparity corresponds to the spatial structure of surfaces. Independent spatial coordinates are not necessary for stereoscopic vision. Stereopsis is highly sensitive to structural disparities associated with local surface shape. Disparate positions on retinal anatomy are neither necessary nor sufficient for stereopsis
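As a rough illustration of "disparities of higher-order structure", the sketch below (parameters illustrative, not from the paper) computes point disparities d = f·b/Z for a surface slanted in depth under a simple parallel-camera model, then takes their spatial derivative: it is this variation of disparity across the surface, rather than the individual disparate positions, that carries local surface shape.

```python
# Minimal sketch: binocular disparity over a slanted surface, assuming two
# parallel pinhole cameras with focal length f and baseline b. Point disparity
# is d = f * b / Z; its spatial variation (the disparity gradient) is the kind
# of higher-order image structure the abstract argues stereopsis relies on.
import numpy as np

f, b = 1.0, 0.065        # focal length and interocular baseline (illustrative)

# Sample points along a plane slanted in depth: Z increases with X.
X = np.linspace(-0.2, 0.2, 5)
Z = 1.0 + 0.5 * X

disparity = f * b / Z                  # zeroth-order: position disparity
gradient = np.gradient(disparity, X)   # first-order: reflects surface slant

print(np.round(disparity, 4))
print(np.round(gradient, 4))  # roughly constant, as expected for a planar patch
```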
Federal estate taxation and planning
Thesis (M.B.A.)--Boston University
Failure of non-vacuum steam sterilization processes for dental handpieces
Background:
Dental handpieces are used in critical and semi-critical operative interventions. Although a number of dental professional bodies recommend that dental handpieces are sterilized between patient uses, there is a lack of clarity and understanding of the effectiveness of different steam sterilization processes. The internal mechanisms of dental handpieces contain narrow lumens (0·8–2·3 mm) which can impede the removal of air and the ingress of the saturated steam required to achieve sterilization conditions.
Aim:
To identify the extent of sterilization failure in dental handpieces using a non-vacuum process.
Methods:
In-vitro and in-vivo investigations were conducted on commonly used UK benchtop steam sterilizers and three different types of dental handpieces. The sterilization process was monitored inside the lumens of dental handpieces using thermometric (TM) methods (dataloggers), chemical indicators (CI) and biological indicators (BI).
Findings:
All three methods of assessing achievement of sterility within dental handpieces exposed to non-vacuum sterilization conditions demonstrated a significant number of failures (CI = 8/3,024 (failures/number of tests); BI = 15/3,024; TM = 56/56) compared to vacuum sterilization conditions (CI = 2/1,944; BI = 0/1,944; TM = 0/36). The dental handpiece most likely to fail sterilization in the non-vacuum process was the surgical handpiece. Non-vacuum sterilizers located in general dental practice had a higher rate of sterilization failure (CI = 25/1,620; BI = 32/1,620; TM = 56/56), with no failures in the vacuum process.
Conclusion:
Non-vacuum downward/gravity displacement, type-N steam sterilizers are an unreliable method for sterilization of dental handpieces in general dental practice. The handpiece most likely to fail sterilization is the type most frequently used for surgical interventions
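The headline comparison can be reproduced from the counts quoted in the Findings. The sketch below runs Fisher's exact test on the chemical-indicator failures; this is an illustrative reanalysis of the reported numbers, not necessarily the statistical method the study itself applied.

```python
# Illustrative comparison of the reported chemical-indicator failure counts:
# 8/3,024 under non-vacuum vs 2/1,944 under vacuum sterilization.
from scipy.stats import fisher_exact

nonvac_fail, nonvac_n = 8, 3024
vac_fail, vac_n = 2, 1944

table = [[nonvac_fail, nonvac_n - nonvac_fail],
         [vac_fail, vac_n - vac_fail]]
odds_ratio, p_value = fisher_exact(table)

print(f"non-vacuum failure rate: {nonvac_fail / nonvac_n:.4%}")
print(f"vacuum failure rate:     {vac_fail / vac_n:.4%}")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```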
Investigating steam penetration using thermometric methods in dental handpieces with narrow internal lumens during sterilizing processes with non-vacuum or vacuum processes
Background:
Dental handpieces are required to be sterilized between patient uses. Vacuum steam sterilization processes with fractionated pre-/post-vacuum phases, or unique cycles for specified medical devices, are required for hollow instruments with internal lumens to assure successful air removal. Entrapped air will compromise achievement of the required sterilization conditions. Many countries and professional organisations still advocate non-vacuum sterilization processes for these devices.
Aim:
To investigate non-vacuum downward/gravity displacement, type-N steam sterilization of dental handpieces, using thermometric methods to measure the time taken to achieve sterilization temperature at different handpiece locations.
Methods:
Measurements at different positions within air turbines were undertaken with thermocouples and dataloggers. Two examples of commonly used UK benchtop steam sterilizers were tested: a non-vacuum benchtop sterilizer (Little Sister 3, Eschmann, UK) and a vacuum benchtop sterilizer (Lisa, W&H, Austria). Each sterilizer cycle was completed with three handpieces, and each cycle was run in triplicate.
Findings:
A total of 140 measurements inside dental handpiece lumens were recorded. We demonstrate that the non-vacuum process fails to reliably achieve sterilization temperatures (time range 0–150 seconds) within the limit specified by the international standard (15-second equilibration time). The measurement point at the base of the handpiece failed to meet the standard in all test runs (n = 9). No failures were detected with the type-B vacuum steam sterilization process with fractionated pre-vacuum and post-vacuum phases.
Conclusion:
Non-vacuum downward/gravity displacement, type-N steam sterilization processes are unreliable in achieving sterilization conditions inside dental handpieces, and the base of the handpiece is the site most likely to fail
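The pass/fail logic implied by the thermometric method can be sketched as follows; the temperature trace, cycle set point (134°C), and timings below are hypothetical, with only the 15-second equilibration limit taken from the abstract.

```python
# Minimal sketch of a thermometric pass/fail check: how long after the chamber
# reaches sterilization temperature does a lumen measurement point also reach
# it, and is that lag within the 15-second equilibration limit?
import numpy as np

STERILIZATION_TEMP = 134.0   # deg C, a typical benchtop cycle set point
EQUILIBRATION_LIMIT = 15.0   # seconds, per the standard cited above

def time_to_temp(times, temps, threshold=STERILIZATION_TEMP):
    """First time at which the trace reaches the threshold, or None if never."""
    idx = np.argmax(temps >= threshold)
    return times[idx] if temps[idx] >= threshold else None

t = np.arange(0.0, 200.0, 1.0)
chamber = np.full_like(t, 134.5)             # chamber at temperature from t=0
lumen = 134.5 - 25.0 * np.exp(-t / 20.0)     # hypothetical lagging lumen trace

lag = time_to_temp(t, lumen) - time_to_temp(t, chamber)
print(f"equilibration lag: {lag:.0f} s ->",
      "PASS" if lag <= EQUILIBRATION_LIMIT else "FAIL")
```

With this hypothetical trace the lumen reaches temperature about 79 seconds after the chamber, so the run fails the 15-second limit, mirroring the failures reported for the non-vacuum process.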
Predicting Human Metaphor Paraphrase Judgments with Deep Neural Networks
We propose a new annotated corpus for metaphor interpretation by paraphrase, and a novel DNN model for performing this task. Our corpus consists of 200 sets of 5 sentences, with each set containing one reference metaphorical sentence and four ranked candidate paraphrases. Our model is trained for a binary classification of paraphrase candidates, and then used to predict graded paraphrase acceptability. It reaches an encouraging 75% accuracy on the binary classification task, and high Pearson (.75) and Spearman (.68) correlations on the gradient judgment prediction task
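The graded part of the evaluation amounts to correlating model scores with mean human ratings. The sketch below shows that computation with scipy; the scores and ratings are hypothetical stand-ins, not the paper's data.

```python
# Correlating a model's graded paraphrase-acceptability scores with mean human
# judgments, the metric behind the reported Pearson .75 / Spearman .68 figures.
from scipy.stats import pearsonr, spearmanr

# One reference metaphor with four candidate paraphrases (values hypothetical).
human_ratings = [3.8, 3.1, 2.2, 1.4]        # mean human acceptability
model_scores  = [0.91, 0.74, 0.48, 0.20]    # model probability of "paraphrase"

r, _ = pearsonr(human_ratings, model_scores)
rho, _ = spearmanr(human_ratings, model_scores)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```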
Methamphetamine and heightened risk for early-onset stroke and Parkinson's disease: A review
Introduction: Methamphetamine users are typically young adults, placing them at risk for significant drug-related harms. Neurological harms include stroke and Parkinson's disease, both of which may develop prematurely in the context of methamphetamine use. Material and methods: We conducted a narrative review examining the evidence, first, for stroke under 45 years of age and, second, for early onset of Parkinson's disease (PD) and parkinsonism related to methamphetamine use. We summarise epidemiological factors and common clinical features, before examining in detail the underlying pathology and causal mechanisms. Results and discussion: Methamphetamine use among young people (<45 years) is associated with a heightened risk of haemorrhagic stroke. Compared to age-matched all-cause fatal stroke, haemorrhage secondary to aneurysmal rupture is more common among young people with methamphetamine-related stroke and is associated with significantly poorer prognosis. The aetiology relates primarily to both acute and chronic hypertension resulting from methamphetamine's sympathomimetic action. Evidence from a variety of sources supports a link between methamphetamine use and increased risk for the development of PD and parkinsonism, and for their early onset in a subset of individuals. Despite this, direct evidence of degeneration of dopaminergic neurons in methamphetamine users has not been demonstrated to date. Conclusions: Stroke and Parkinson's disease/parkinsonism are neurological harms observed prematurely in methamphetamine users
Using Deep Neural Networks to Learn Syntactic Agreement
We consider the extent to which different deep neural network (DNN) configurations can learn syntactic relations, by taking up Linzen et al.'s (2016) work on subject-verb agreement with LSTM RNNs. We test their methods on a much larger corpus than they used (an approximately 24 million example part of the WaCky corpus, instead of their approximately 1.35 million example corpus, both drawn from Wikipedia). We experiment with several different DNN architectures (LSTM RNNs, GRUs, and CNNs) and alternative parameter settings for these systems (vocabulary size, training-to-test ratio, number of layers, memory size, dropout rate, and lexical embedding dimension size). We also try out our own unsupervised DNN language model. Our results are broadly compatible with those that Linzen et al. report. However, we discovered some interesting, and in some cases surprising, features of DNNs and language models in their performance of the agreement learning task. In particular, we found that DNNs require large vocabularies to form substantive lexical embeddings in order to learn structural patterns. This finding has interesting consequences for our understanding of the way in which DNNs represent syntactic information. It suggests that DNNs learn syntactic patterns more efficiently through rich lexical embeddings, with semantic as well as syntactic cues, than from training on lexically impoverished strings that highlight structural patterns
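A minimal model of the kind compared here can be sketched in a few lines of PyTorch: an LSTM reads the tokens preceding a verb and classifies the verb's expected number. The vocabulary size, dimensions, and toy data below are illustrative, not the configurations the paper reports.

```python
# Sketch of the agreement task: an LSTM over the words preceding a verb,
# predicting whether the verb should be singular or plural.
import torch
import torch.nn as nn

class AgreementLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=50, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)   # singular vs plural

    def forward(self, token_ids):
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)     # final hidden state
        return self.out(h_n[-1])              # logits over the two classes

model = AgreementLSTM()
tokens = torch.randint(0, 10000, (8, 12))    # toy batch: 8 prefixes, length 12
labels = torch.randint(0, 2, (8,))           # toy number labels

loss = nn.CrossEntropyLoss()(model(tokens), labels)
loss.backward()                              # one illustrative gradient step
print(f"initial loss: {loss.item():.3f}")
```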
Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge
The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property, has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgments are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic
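One concrete normalization of the kind described, used in this line of work, is SLOR (syntactic log-odds ratio), which subtracts a sentence's unigram log probability (lexical frequency) from its language model log probability and divides by its length. The sketch below uses hypothetical log-probability values.

```python
# SLOR-style acceptability measure: factor length and lexical frequency out of
# a language model's sentence log probability. All values here are hypothetical.

def slor(logprob_model, logprobs_unigram):
    """(log P_model(s) - sum_i log P_unigram(w_i)) / |s|"""
    n = len(logprobs_unigram)
    return (logprob_model - sum(logprobs_unigram)) / n

# Two sentences with the same raw LM log probability: raw probability ranks
# them equal, but SLOR credits the longer, lexically rarer sentence.
sent_a = slor(-20.0, [-3.0, -4.0, -5.0, -4.0])          # 4 frequent words
sent_b = slor(-20.0, [-8.0, -9.0, -7.0, -8.0, -9.0])    # 5 rare words

print(f"SLOR A = {sent_a:.2f}, SLOR B = {sent_b:.2f}")  # B scores higher
```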