Palliative Care and Hospice: Opportunities to Improve Care for the Sickest Patients
The article discusses how palliative care and hospice services address quality and cost concerns in the U.S. health care system. By focusing on symptom management, coordination among providers, and improved transitions of care, these services meet the needs of the sickest patients at lower cost. The author suggests that successfully expanding the programs will require the right leadership and resources and a strengthened workforce.
Do We See Eye to Eye? Moderators of Correspondence Between Student and Faculty Evaluations of Day-to-Day Teaching
Students and instructors show moderate levels of agreement about the quality of day-to-day teaching. In the present study, we replicated and extended this finding by asking how correspondence between student and instructor ratings is moderated by time of semester and student demographic variables. Participants included 137 students and 5 instructors. On 10 separate days, students and instructors rated teaching effectiveness and the challenge level of the material. Multilevel modeling indicated that student and instructor ratings of teaching effectiveness converged overall, but more advanced students and Caucasian students converged more closely with instructors. Student and instructor ratings of challenge converged early but diverged later in the semester. These results extend our knowledge about the connection between student and faculty judgments of teaching.
Recent advances in the application of stable isotope ratio analysis in forensic chemistry
This review updates the literature on the continued and developing use of stable isotope ratio analysis of samples relevant to forensic science. Recent advances in the analysis of drug samples, explosive materials, and materials of human and animal origin are discussed. The paper also aims to put the use of isotope ratio mass spectrometry into a forensic context and to discuss its evidential potential.
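For context, results in this field are conventionally reported in delta notation relative to an international reference standard (e.g., VPDB for carbon). This convention is standard practice rather than something stated in the abstract itself:

```latex
% Delta notation for stable isotope ratios (standard convention, not from this abstract).
% R is the ratio of heavy to light isotope in sample or standard, e.g. 13C/12C.
\[
\delta = \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1\right) \times 1000\ \text{\textperthousand}
\]
```

Small differences in delta values between samples of ostensibly identical chemical composition are what allow, for example, two drug seizures to be linked to a common synthetic origin.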
Cranial Electrotherapy Stimulation in the Treatment of Posttraumatic Stress Disorder: A Pilot Study of Two Military Veterans
This case study investigated the effects of cranial electrotherapy stimulation (CES) on the prevalence and intensity of posttraumatic stress disorder (PTSD) symptoms and on self-perceived performance and satisfaction in daily activities in war veterans. Two male Caucasian veterans (ages 54 and 38) diagnosed with PTSD participated in these case studies with a pretest–posttest design. The Canadian Occupational Performance Measure (COPM) and the PTSD Symptom Scale–Interview (PSS-I) were administered before and after a 4-week CES treatment. The participants self-administered the 4-week CES protocol at home using the Alpha-Stim SCS device for 20 to 60 min a day, 3 to 5 days a week, at a comfortable, self-selected current level between 100 and 500 microamperes, and were asked to document the settings and their responses in a daily treatment log. Visual trend analysis and change scores on the PSS-I and the daily treatment log showed that daily PTSD symptoms decreased in frequency and severity for both participants. Self-perceived performance and satisfaction as measured by the COPM also improved in the 54-year-old participant, whose change scores (performance: +5.4; satisfaction: +7.9) exceeded the COPM's 2-point threshold for clinical significance. Both participants reported a decrease in PTSD symptoms and an overall improvement in self-perceived occupational performance after a trial of CES. Findings from this study suggest directions for future research on the role of occupational therapists in using CES to treat veterans with PTSD. If confirmed, this preliminary study indicates that CES could provide occupational therapists with a safe and effective way to reduce the symptom burden of PTSD while facilitating occupational performance for a rapidly growing population of war veterans.
Detecting Hallucination and Coverage Errors in Retrieval Augmented Generation for Controversial Topics
We explore a strategy to handle controversial topics in LLM-based chatbots based on Wikipedia's Neutral Point of View (NPOV) principle: acknowledge the absence of a single true answer and surface multiple perspectives. We frame this as retrieval augmented generation, where perspectives are retrieved from a knowledge base and the LLM is tasked with generating a fluent and faithful response from the given perspectives. As a starting point, we use a deterministic retrieval system and then focus on common LLM failure modes that arise during this approach to text generation, namely hallucination and coverage errors. We propose and evaluate three methods to detect such errors based on (1) word-overlap, (2) salience, and (3) LLM-based classifiers. Our results demonstrate that LLM-based classifiers, even when trained only on synthetic errors, achieve high error detection performance, with ROC AUC scores of 95.3% for hallucination and 90.5% for coverage error detection on unambiguous error cases. We show that when no training data is available, our other methods still yield good results on hallucination (84.0%) and coverage error (85.2%) detection. (Accepted at LREC-COLING 2024.)
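To make the word-overlap baseline concrete, here is a minimal sketch of how such a detector might score a generated response against the retrieved perspectives. The function names and scoring rules are illustrative assumptions, not the authors' implementation: hallucination is approximated by response words unsupported by any perspective, and coverage error by perspective words missing from the response.

```python
# Hypothetical sketch of a word-overlap error detector for RAG outputs.
# Scoring rules are illustrative assumptions; thresholds need tuning on labeled data.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def hallucination_score(response: str, perspectives: list[str]) -> float:
    """Fraction of response words unsupported by any retrieved perspective.
    Higher -> more likely the response hallucinates content."""
    resp = tokens(response)
    source = set().union(*(tokens(p) for p in perspectives))
    return len(resp - source) / max(len(resp), 1)

def coverage_score(response: str, perspectives: list[str]) -> float:
    """Mean fraction of each perspective's words missing from the response.
    Higher -> more likely a perspective was dropped (coverage error)."""
    resp = tokens(response)
    misses = [len(tokens(p) - resp) / max(len(tokens(p)), 1) for p in perspectives]
    return sum(misses) / max(len(misses), 1)
```

For instance, hallucination_score("The sky is green", ["The sky is blue"]) returns 0.25, flagging "green" as unsupported. In practice, thresholds on these scores would be tuned on labeled error cases, which is where the paper's trained LLM-based classifiers take over.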
Data Quality Objectives Supporting Radiological Air Emissions Monitoring for the PNNL Site
This document of Data Quality Objectives (DQOs) was prepared based on the U.S. Environmental Protection Agency (EPA) Guidance on Systematic Planning Using the Data Quality Objectives Process (EPA QA/G-4, February 2006), as well as several other published DQOs. Pacific Northwest National Laboratory (PNNL) is developing a radiological air monitoring program for the PNNL Site that is distinct from that of the nearby Hanford Site. Radiological emissions at the PNNL Site result from the Physical Sciences Facility (PSF) major emissions units. A team was established to determine how the PNNL Site would meet federal regulations and address guidelines developed to monitor and estimate offsite air emissions of radioactive materials. The result is a program that monitors the PNNL Site's impact on the public.
A General Framework for Complex Time-Driven Simulations on Hypercubes
We describe a general framework for building and running complex time-driven simulations with several levels of concurrency. The framework has been implemented on the Caltech/JPL Mark IIIfp hypercube using the Centaur communications protocol. Our framework allows the programmer to break the hypercube up into one or more subcubes of arbitrary size (task parallelism). Each subcube runs a separate application using data parallelism and synchronous communications internal to the subcube. Communications between subcubes are performed with asynchronous messages. Subcubes can each define their own parameters and commands, which drive their particular application. These are collected and organized by the Control Processor (CP) so that the entire simulation can be driven from a single command-driven shell. This system allows several programmers to develop disjoint pieces of a large simulation in parallel and then integrate them with little effort. Each programmer is, of course, also able to take advantage of the separate data and I/O processors on each hypercube node in order to overlap calculation and communication (on-board parallelism), as well as the pipelined floating point processor on each node (pipelined processor parallelism).
We show, as an example of the framework, a large space defense simulation. Functions (sensing, tracking, etc.) each comprise a subcube; functions are collected into defense platforms (satellites); and many platforms comprise the defense architecture. Software in the CP uses simple input to determine the node allocation to each function based on the desired defense architecture and the number of platforms simulated in the hypercube. This allows many different architectures to be simulated. The set of simulated platforms, the results, and the messages between them are shown on color graphics displays. The methods used herein can be generalized to other simulations of a similar nature in a straightforward manner.
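As a loose modern analogue of the subcube scheme (and emphatically not the original Centaur/Mark IIIfp API), the sketch below uses MPI communicators to partition processes into task groups that compute synchronously within a group and exchange asynchronous messages between groups. The group names, sizes, and tags are illustrative assumptions.

```python
# Hypothetical modern analogue of the subcube scheme using mpi4py.
# Run with e.g.: mpiexec -n 8 python subcubes.py
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

# Partition the "hypercube" into two subcubes by color (sizes are illustrative):
# ranks 0-3 form a "sensing" subcube, the remaining ranks a "tracking" subcube.
color = 0 if rank < 4 else 1
subcube = world.Split(color=color, key=rank)

# Data-parallel work with synchronous communication internal to the subcube.
local_sum = subcube.allreduce(rank, op=MPI.SUM)

# Asynchronous message passing between subcubes (leader to leader).
TAG = 99
if color == 0 and subcube.Get_rank() == 0:
    req = world.isend(local_sum, dest=4, tag=TAG)  # sensing -> tracking
    req.wait()
elif color == 1 and subcube.Get_rank() == 0:
    track_input = world.recv(source=0, tag=TAG)    # consume sensing output
```

Here communicators play the role of subcubes and the leader-to-leader isend mirrors the framework's asynchronous inter-subcube messages; the CP's command-driven shell has no direct analogue in this sketch.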
Cluster M Mycobacteriophages Bongo, PegLeg, and Rey with Unusually Large Repertoires of tRNA Isotypes
Genomic analysis of a large set of phages infecting the common host Mycobacterium smegmatis mc2155 shows that they span considerable genetic diversity. There are more than 20 distinct types that lack nucleotide similarity with each other, and there is considerable diversity within most of the groups. Three newly isolated temperate mycobacteriophages, Bongo, PegLeg, and Rey, constitute a new group (cluster M), with the closely related phages Bongo and PegLeg forming subcluster M1 and the more distantly related Rey forming subcluster M2. The cluster M mycobacteriophages have siphoviral morphologies with unusually long tails, are homoimmune, and have larger than average genomes (80.2 to 83.7 kbp). They exhibit a variety of features not previously described in other mycobacteriophages, including noncanonical genome architectures and several unusual sets of conserved repeated sequences suggesting novel regulatory systems for both transcription and translation. In addition to containing transfer-messenger RNA and RtcB-like RNA ligase genes, their genomes encode 21 to 24 tRNA genes encompassing complete or nearly complete sets of isotypes. We predict that these tRNAs are used in late lytic growth, likely compensating for the degradation or inadequacy of host tRNAs, and that they may represent a complete set sufficient for late lytic growth, especially given the apparent absence, in the same late genes, of codons corresponding to tRNAs that the phage genomes do not obviously encode.