Covariation Among Vowel Height Effects on Acoustic Measures
Covariation among vowel height effects on vowel intrinsic fundamental frequency (IF0), voice onset time (VOT), and voiceless interval duration (VID) is analyzed to assess the plausibility of a common physiological mechanism underlying variation in these measures. Phrases spoken by 20 young adults, containing words composed of initial voiceless stops or /s/ and high or low vowels, were produced in habitual and voluntarily increased F0 conditions. High vowels were associated with increased IF0 and longer VIDs. VOT and VID exhibited significant covariation with IF0 only for males at habitual F0.
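The covariation claim rests on correlating the size of vowel-height effects across acoustic measures and speakers. A minimal sketch of Pearson's r over per-speaker effect sizes (the numbers below are illustrative placeholders, not the study's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-speaker high-minus-low-vowel differences:
if0_diff = [12.0, 9.5, 14.2, 8.8, 11.1]  # IF0 raising for high vowels (Hz)
vot_diff = [6.1, 4.8, 7.9, 4.2, 5.5]     # VOT lengthening for high vowels (ms)
r = pearson_r(if0_diff, vot_diff)
```

A significant positive r across speakers would be consistent with (though not proof of) a shared physiological mechanism.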
Identity of electrons and ionization equilibrium
It is perhaps appropriate that, in a year marking the 90th anniversary of
Meghnad Saha's seminal paper (1920), new developments should call fresh attention
to the problem of ionization equilibrium in gases. Ionization equilibrium is
considered in the simplest "physical" model for an electronic subsystem of
matter in a rarefied state, consisting of one localized electronic state in
each nucleus and delocalized electronic states considered as free ones. It is
shown that, despite the qualitative agreement, there is a significant
quantitative difference from the results of applying the Saha formula to the
degree of ionization. This is caused by the fact that the Saha formula
corresponds to the "chemical" model of matter.
Comment: 9 pages, 2 figures
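For reference, the Saha formula discussed above, in its standard "chemical-model" form for a singly ionizing gas (a textbook statement, not reproduced from this abstract), reads:

```latex
\frac{n_e\, n_i}{n_a}
  = \frac{2 g_i}{g_a}
    \left( \frac{2\pi m_e k_B T}{h^2} \right)^{3/2}
    e^{-\chi / k_B T},
```

where $n_e$, $n_i$, $n_a$ are the number densities of electrons, ions, and neutral atoms, $g_i$ and $g_a$ the corresponding statistical weights, and $\chi$ the ionization energy. The abstract's point is that a "physical" model with one localized state per nucleus departs quantitatively from this chemical-model result.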
Explanation of the Gibbs paradox within the framework of quantum thermodynamics
The issue of the Gibbs paradox is that when considering mixing of two gases
within classical thermodynamics, the entropy of mixing appears to be a
discontinuous function of the difference between the gases: it is finite for
whatever small difference, but vanishes for identical gases. The resolution
offered in the literature, with help of quantum mixing entropy, was later shown
to be unsatisfactory precisely where it sought to resolve the paradox.
Macroscopic thermodynamics, classical or quantum, is unsuitable for explaining
the paradox, since it does not deal explicitly with the difference between the
gases. The proper approach employs quantum thermodynamics, which deals with
finite quantum systems coupled to a large bath and a macroscopic work source.
Within quantum thermodynamics, entropy generally loses its dominant place and
the target of the paradox is naturally shifted to the decrease of the maximally
available work before and after mixing (mixing ergotropy). In contrast to
entropy this is an unambiguous quantity. For almost identical gases the mixing
ergotropy continuously goes to zero, thus resolving the paradox. In this
approach the concept of "difference between the gases" gets a clear
operational meaning related to the possibilities of controlling the involved
quantum states. Difficulties which prevent resolutions of the paradox in its
entropic formulation do not arise here. The mixing ergotropy has several
counter-intuitive features. It can increase when less precise operations are
allowed. In the quantum situation (in contrast to the classical one) the mixing
ergotropy can also increase when decreasing the degree of mixing between the
gases, or when decreasing their distinguishability. These points go against a
direct association of physical irreversibility with lack of information.
Comment: Published version. New title. 17 pages, RevTeX
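The discontinuity at issue can be stated in one formula. For equal amounts (N particles each) of two ideal gases at the same temperature and pressure, classical thermodynamics gives the standard textbook result (not taken from this abstract):

```latex
\Delta S_{\text{mix}} =
\begin{cases}
  2 N k_B \ln 2, & \text{gases distinguishable, however slightly},\\[4pt]
  0, & \text{gases identical}.
\end{cases}
```

The mixing ergotropy replaces this step function with a quantity that goes to zero continuously as the gases become identical, which is precisely how the paradox is dissolved in this approach.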
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
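Okapi BM25, one of the three baselines named above, can be sketched in pure Python. This is a minimal illustration of the standard scoring formula, not the consortium's implementation; the parameter defaults k1 = 1.5 and b = 0.75 are common conventions:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document (a list of tokens) against the query with Okapi BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n  # average document length
    df = Counter()                         # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

In practice the benchmark compares such term-matching scores against expert relevance annotations; documents with higher term overlap and higher within-document term frequency rank higher.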
Pattern-Based Learning and Spatially Oriented Concept Formation in a Multi-Agent, Decision-Making Expert
Developing an artificial intelligence diagnostic tool for paediatric distal radius fractures, a proof of concept study
Introduction: In the UK, 1 in 50 children sustain a fractured bone each year, yet studies have shown that 34% of children sustaining an injury do not have a visible fracture on initial radiographs. Wrist fractures are particularly difficult to identify, as the growth plate poses diagnostic challenges when interpreting radiographs.
Materials and methods: We developed convolutional neural network (CNN) image recognition software to detect fractures in radiographs of children. A consecutive dataset of 5000 radiographs of the distal radius in children aged under 19 years, collected from 2014 to 2019, was used to train the CNN. Additionally, transfer learning from a VGG16 CNN pre-trained on non-radiological images was applied to improve generalization of the network and classification of radiographs. Hyperparameter tuning techniques were used, and the model was compared to the radiology reports that accompanied the original images to determine diagnostic test accuracy.
Results: The training set consisted of 2881 radiographs with a fracture and 1571 without a fracture; 548 radiographs were outliers. With additional augmentation, the final dataset consisted of 15,498 images. The dataset was randomly split into three subsets: a training dataset (70%), a validation dataset (10%), and a test dataset (20%). After training for 20 epochs, the diagnostic test accuracy was 85%.
Discussion: A CNN model is feasible for diagnosing paediatric wrist fractures. We demonstrated that this application could be utilized as a tool for improving diagnostic accuracy. Future work would involve developing automated treatment pathways for diagnosis, reducing unnecessary hospital visits and allowing staff redeployment to other areas.
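The 70/10/20 split described in the Results can be sketched in a few lines of plain Python. This is an illustrative sketch under the stated fractions only; the study's actual pipeline (augmentation, stratification, framework) is not specified in the abstract:

```python
import random

def split_dataset(items, train_frac=0.70, val_frac=0.10, seed=0):
    """Shuffle and partition items into train/validation/test subsets.
    The test set receives the remainder (20% with the defaults)."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 15,498 images after augmentation, as reported in the Results:
train, val, test = split_dataset(range(15_498))
```

A fixed random seed makes the partition reproducible across runs, which matters when comparing the model's test accuracy against the accompanying radiology reports.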