Students' drafting strategies and text quality
The study reports an analysis of the drafts produced by two groups of students during an exam. Drafts were categorized according to their graphic features (e.g. length) and the planning strategies used to produce them (e.g. note draft, organized draft, composed draft). The grades students obtained on their essays were then related to the different categories of drafts. Results show that two thirds of the students in both groups made some kind of draft. Drafts mostly consisted of note drafts or long composed drafts; very few were organized drafts. However, the students who wrote organized drafts obtained the best ratings. Drafting strategy was homogeneous for half of the students, who used a single category; the other half successively used two drafting modes, in which case they mostly combined composed writing with jotted notes or with some marks of organization. Here again, students who organized their drafts, even partially, obtained the highest grades.
Very few corrections were made to the long drafts, and they concerned the surface (spelling or lexicon), not the content or the plan. This research shows that only a limited number of students used an efficient drafting strategy (the organized draft), even though such a strategy is generally associated with the highest ratings.
Single-crystal diamond low-dissipation cavity optomechanics
Single-crystal diamond cavity optomechanical devices are a promising example
of a hybrid quantum system: by coupling mechanical resonances to both light and
electron spins, they can enable new ways for photons to control solid state
qubits. However, realizing cavity optomechanical devices from high quality
diamond chips has been an outstanding challenge. Here we demonstrate
single-crystal diamond cavity optomechanical devices that can enable
photon-phonon-spin coupling. Cavity optomechanical coupling to
high-frequency mechanical resonances is observed. In room temperature
ambient conditions, these resonances have a record combination of low
dissipation (high mechanical quality factor) and high frequency, with an
f.Q product sufficient
for room temperature single phonon coherence. The system exhibits high optical
quality factor resonances at infrared and visible
wavelengths, is nearly sideband resolved, and exhibits optomechanical
cooperativity. The devices' potential for optomechanical control of
diamond electron spins is demonstrated through radiation pressure excitation of
mechanical self-oscillations whose 31 pm amplitude is predicted to provide 0.6
MHz coupling rates to diamond nitrogen vacancy center ground state transitions
(6 Hz / phonon), and stronger coupling rates to excited state
transitions.
Comment: 12 pages, 5 figures
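The room temperature single phonon coherence condition invoked above is, by a commonly used rule of thumb (not stated explicitly in this abstract), that thermal decoherence be slower than one mechanical period, which bounds the frequency-quality-factor product:

```latex
% Single-phonon coherence criterion at temperature T:
% the f.Q product must exceed the thermal decoherence rate scale.
f \cdot Q \;>\; \frac{k_B T}{h}
\approx \frac{(1.381\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})}
             {6.626\times10^{-34}\,\mathrm{J\,s}}
\approx 6.2\times10^{12}\,\mathrm{Hz}
```

At room temperature (300 K) this gives the often-quoted threshold of roughly 6 x 10^12 Hz, which is why combining high frequency with low dissipation is the figure of merit emphasized in the abstract.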
Quality-Efficiency Trade-offs in Machine Learning for Text Processing
Data mining, machine learning, and natural language processing are powerful
techniques that can be used together to extract information from large texts.
Depending on the task or problem at hand, there are many different approaches
that can be used. The methods available are continuously being optimized, but
not all these methods have been tested and compared in a set of problems that
can be solved using supervised machine learning algorithms. The question is
what happens to the quality of the methods if we increase the training data
size from, say, 100 MB to over 1 GB? Moreover, are quality gains worth it when
the rate of data processing diminishes? Can we trade quality for time
efficiency and recover the quality loss by just being able to process more
data? We attempt to answer these questions in a general way for text processing
tasks, considering the trade-offs involving training data size, learning time,
and quality obtained. We propose a performance trade-off framework and apply it
to three important text processing problems: Named Entity Recognition,
Sentiment Analysis and Document Classification. These problems were also chosen
because they have different levels of object granularity: words, paragraphs,
and documents. For each problem, we selected several supervised machine
learning algorithms and evaluated their trade-offs on large publicly
available data sets (news, reviews, patents). To explore these trade-offs, we
use different data subsets of increasing size ranging from 50 MB to several GB.
We also consider the impact of the data set and the evaluation technique. We
find that the results do not change significantly and that most of the time the
best algorithm is the fastest. However, we also show that the results for
small data (say less than 100 MB) are different from the results for big data
and in those cases the best algorithm is much harder to determine.
Comment: Ten pages; long version of a paper to be presented at IEEE Big
Data 2017 (8 pages).
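The trade-off evaluation described in the abstract can be sketched as follows. This is an illustrative framework only, not the paper's implementation: the two "learners", the subset sizes, and the quality numbers are hypothetical stand-ins for real supervised algorithms trained on growing data subsets.

```python
import time

def evaluate(algorithm, data):
    """Train `algorithm` on `data`; return (quality, wall-clock seconds)."""
    start = time.perf_counter()
    quality = algorithm(data)          # stand-in for fit + validate
    return quality, time.perf_counter() - start

# Two toy "algorithms": a fast one and a slower, slightly better one.
# Quality improves with data size and saturates, as in learning curves.
def fast_learner(data):
    return 0.80 + 0.05 * min(len(data) / 1000, 1.0)

def slow_learner(data):
    time.sleep(0.001)                  # simulate extra training cost
    return 0.82 + 0.06 * min(len(data) / 1000, 1.0)

def trade_off_table(algorithms, sizes, corpus):
    """For each subset size, record (quality, time) per algorithm."""
    table = {}
    for n in sizes:
        subset = corpus[:n]
        table[n] = {name: evaluate(algo, subset)
                    for name, algo in algorithms.items()}
    return table

corpus = list(range(2000))             # stand-in for a text corpus
algos = {"fast": fast_learner, "slow": slow_learner}
results = trade_off_table(algos, sizes=[100, 1000, 2000], corpus=corpus)

for n, row in results.items():
    best = max(row, key=lambda name: row[name][0])  # best by quality
    print(n, best)
```

Comparing the winner by quality alone against the winner by quality per unit time at each size is one simple way to make the abstract's question ("is the quality gain worth the slowdown?") concrete.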
Chunking clinical text containing non-canonical language
Free text notes typed by primary care physicians during patient consultations typically contain highly non-canonical language. Shallow syntactic analysis of free text notes can help to reveal valuable information for the study of disease and treatment. We present an exploratory study into chunking such text using off-the-shelf language processing tools and pre-trained statistical models. We evaluate chunking accuracy with respect to part-of-speech tagging quality, choice of chunk representation, and breadth of context features. Our results indicate that narrow context feature windows give the best results, but that chunk representation and minor differences in tagging quality do not have a significant impact on chunking accuracy.
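Shallow chunking over POS-tagged tokens, and the BIO (begin/inside/outside) style of chunk representation that such studies compare, can be illustrated with a minimal rule-based sketch. The tag set and the single grouping rule below are simplified assumptions, a stand-in for the pre-trained statistical chunkers evaluated in the study:

```python
# Assign BIO noun-phrase labels to (word, POS) pairs with one simple
# rule: determiners, adjectives, and nouns join a noun-phrase chunk.
NP_TAGS = {"DT", "JJ", "NN", "NNS"}   # tags allowed inside a noun phrase

def bio_chunk(tagged_tokens):
    """Return a B-NP / I-NP / O label for each (word, pos) pair."""
    labels = []
    in_chunk = False
    for word, pos in tagged_tokens:
        if pos in NP_TAGS:
            labels.append("I-NP" if in_chunk else "B-NP")
            in_chunk = True
        else:
            labels.append("O")
            in_chunk = False
    return labels

# Example: a fragment of (already POS-tagged) clinical-style free text.
sentence = [("pt", "NN"), ("has", "VBZ"), ("severe", "JJ"),
            ("chest", "NN"), ("pain", "NN")]
print(bio_chunk(sentence))
# -> ['B-NP', 'O', 'B-NP', 'I-NP', 'I-NP']
```

Whether a model predicts these BIO labels directly, or some other encoding (e.g. IOB2 vs. IOE), is exactly the "choice of chunk representation" dimension the abstract evaluates.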
