
    Discourse Communication in Individuals with and without Traumatic Brain Injury

    Traumatic brain injury (TBI) is a global health epidemic with detrimental consequences for the individuals who sustain the injury, their families, and society. As a result of TBI, many individuals experience significant cognitive-communicative impairments, including difficulties producing and structuring discourse. The purpose of this study was to better understand these language difficulties and their possible clinical implications by comparing discourse communication samples from adults with TBI to those from adults without TBI. Audio recordings of 18 adults, consisting of narratives across several genres of discourse communication (conversational, procedural, personal narrative, and fictional narrative), were used for this project. The discourse samples of 4 individuals with TBI were compared with those of 14 individuals without TBI on several discourse communication measures: (1) story length, (2) frequency of discourse errors, (3) elements, (4) story organization, (5) information content, and (6) information relevance. Overall, the differences observed between the TBI and non-TBI groups on the discourse communication tasks reflect the communication impairments typical of those living with TBI. Compared to participants without TBI, the individuals with TBI produced more linguistic dysfluencies and discourse errors, indicating impairments in pragmatic skill, information transfer and relevance, linking the events of a story, and effectively structuring discourse. The participants without TBI showed strengths in the quality and completeness of their spoken narratives. Ultimately, the differences observed between the groups provide important insight into what types of speech-language therapy might be appropriate and effective for these individuals.
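    Two of the measures listed above can be computed directly from a transcript. The sketch below is a hedged illustration, not the study's actual coding scheme: the dysfluency token inventory and the per-100-words normalization are assumptions.

```python
import re

# Hypothetical filled-pause inventory; the study's actual dysfluency
# coding scheme is not specified in the abstract.
DYSFLUENCY_TOKENS = {"um", "uh", "er", "uhm", "hmm"}

def discourse_measures(transcript: str) -> dict:
    """Story length (word count) and dysfluencies per 100 words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    n_words = len(words)
    n_dysfluencies = sum(w in DYSFLUENCY_TOKENS for w in words)
    rate = 100.0 * n_dysfluencies / n_words if n_words else 0.0
    return {"story_length_words": n_words,
            "dysfluencies_per_100_words": rate}

# Example: a short narrative sample.
print(discourse_measures("Um, so the, uh, the man went to the store."))
# -> {'story_length_words': 10, 'dysfluencies_per_100_words': 20.0}
```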

    Discourses on Methods

    The volume deals with methods of analysis in international law. In particular, it examines the formation and determination of custom; the general principles of law; and the legal reasoning of international courts.

    A Voice-Based Automated System for PTSD Screening and Monitoring

    Comprehensive evaluation of PTSD includes diagnostic interviews, self-report testing, and physiological reactivity measures. Diagnosing PTSD is often difficult and costly due to limited patient access and the variability of presented symptoms. Additionally, potential patients are often reluctant to seek help due to the stigma associated with the disorder. A voice-based automated system able to remotely screen individuals at high risk for PTSD and monitor their symptoms during treatment could go a long way toward alleviating the barriers to cost-effective PTSD assessment and progress monitoring. In this paper, we present a voice-based automated Tele-PTSD Monitor (TPM) system, currently in development, designed to remotely screen individuals and assist clinicians in diagnosing PTSD. The TPM system can be accessed via the Public Switched Telephone Network (PSTN) or the Internet. The acquired voice data is sent to a secure server, which invokes the PTSD Scoring Engine (PTSD-SE) to compute a PTSD mental health score. If the score exceeds a predefined threshold, the system notifies clinicians (via email or short message service) for confirmation and/or an appropriate follow-up assessment and intervention. The TPM system requires only voice input and performs computer-based automated PTSD scoring, resulting in low cost and easy field deployment. The concept was supported on a limited dataset, with an average detection accuracy of up to 95.88%.
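    The score-threshold-notify workflow described above can be sketched as follows. This is a hedged illustration, not the actual TPM implementation: the threshold value, the `scoring_engine` callable standing in for the PTSD-SE, and the notification stub are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Illustrative cutoff; the paper's actual predefined threshold is not
# given in the abstract.
SCREEN_POSITIVE_THRESHOLD = 0.5

@dataclass
class ScreeningResult:
    caller_id: str
    score: float   # output of the PTSD Scoring Engine (PTSD-SE)
    flagged: bool

def notify_clinician(caller_id: str, score: float) -> None:
    # Stand-in for the email / short-message notification step.
    print(f"ALERT: caller {caller_id} screened positive (score={score:.2f})")

def score_and_triage(caller_id: str,
                     voice_features: Sequence[float],
                     scoring_engine: Callable[[Sequence[float]], float]) -> ScreeningResult:
    """Score the caller's voice data and flag for clinician follow-up
    if the score exceeds the predefined threshold."""
    score = scoring_engine(voice_features)
    flagged = score > SCREEN_POSITIVE_THRESHOLD
    if flagged:
        notify_clinician(caller_id, score)
    return ScreeningResult(caller_id, score, flagged)
```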

    Dedicated JPSS VIIRS Ocean Color Calibration/Validation Cruise

    The NOAA/STAR ocean color team is focused on “end-to-end” production of high-quality satellite ocean color products. In situ validation of satellite data is essential to produce the high-quality, “fit for purpose” remotely sensed ocean color products that are required and expected by all NOAA line offices, as well as by external (both applied and research) users. In addition to serving the needs of its diverse users within the U.S., NOAA has an ever-increasing role in supporting the international ocean color community and is actively engaged in the International Ocean-Colour Coordinating Group (IOCCG). The IOCCG, along with the Committee on Earth Observation Satellites (CEOS) Ocean Colour Radiometry Virtual Constellation (OCR-VC), is developing the International Network for Sensor Inter-comparison and Uncertainty assessment for Ocean Color Radiometry (INSITU-OCR). The INSITU-OCR has identified, among other issues, the crucial need for sustained in situ observations for product validation, with long-term measurement programs established and maintained beyond any individual mission. The NOAA/STAR Ocean Color Team has been making in situ validation measurements continually since the fall 2011 launch of the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (SNPP) platform, part of the U.S. Joint Polar Satellite System (JPSS) program. NOAA ship time for the purpose of ocean color validation, however, had never been allocated until the cruise described herein. As the institutional lead for this cruise, NOAA/STAR invited external collaborators based on scientific objectives and existing institutional collaborations. The invited collaborators are all acknowledged professionals in the ocean color remote sensing community. Most of the cruise principal investigators (PIs) are also PIs of the VIIRS Ocean Color Calibration and Validation (Cal/Val) team, including groups from Stennis Space Center/Naval Research Laboratory (SSC/NRL) and the University of Southern Mississippi (USM); City College of New York (CCNY); University of Massachusetts Boston (UMB); University of South Florida (USF); University of Miami (U. Miami); and the National Institute of Standards and Technology (NIST). These Cal/Val PIs participated directly, sent qualified researchers from their labs/groups, or contributed specific instruments or equipment. Some of the cruise PIs are not part of the NOAA VIIRS Ocean Color Cal/Val team but were chosen to complement and augment the strengths of the Cal/Val team participants. Outside investigator groups included NASA Goddard Space Flight Center (NASA/GSFC), Lamont-Doherty Earth Observatory at Columbia University (LDEO), and the Joint Research Centre of the European Commission (JRC). This report documents the November 2014 cruise off the U.S. East Coast aboard the NOAA Ship Nancy Foster, the first dedicated ocean color validation cruise to be supported by the NOAA Office of Marine and Air Operations (OMAO). A second OMAO-supported cruise aboard the Nancy Foster is being planned for late 2015. We at NOAA/STAR look forward to continuing dedicated ocean color validation cruises, supported by OMAO on NOAA vessels, on an annual basis in support of JPSS VIIRS on SNPP, J-1, J-2, and other forthcoming satellite ocean color missions from the U.S. and other countries. We also look forward to working with the U.S. and international ocean communities to improve our understanding of global ocean optical, biological, and biogeochemical properties.

    Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge

    Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes polyp detection challenging. Moreover, colonoscopy surveillance and polyp removal are highly operator-dependent procedures that take place in a highly complex organ topology, and there is a high missed-detection rate and incomplete removal of colonic polyps. To assist in clinical procedures and reduce miss rates, automated machine learning methods for detecting and segmenting polyps have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets from different centres, populations, modalities, and acquisition systems. To test generalisability rigorously, we, together with expert gastroenterologists, curated a multi-centre, multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic, real clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.
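    A generalisability assessment of this kind rests on computing overlap metrics per centre and comparing the spread. The sketch below is not the challenge's official evaluation code; it shows a standard Dice coefficient and a hypothetical per-centre aggregation over binary segmentation masks.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

def per_centre_dice(predictions: dict, ground_truth: dict) -> dict:
    """Mean Dice per centre: a large gap between the training centre(s)
    and held-out centres indicates poor generalisation."""
    return {centre: float(np.mean([dice_score(p, t)
                                   for p, t in zip(masks, ground_truth[centre])]))
            for centre, masks in predictions.items()}
```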

    On-line analysis and in situ pH monitoring of mixed acid fermentation by Escherichia coli using combined FTIR and Raman techniques

    We introduce an experimental setup allowing continuous monitoring of bacterial fermentation processes by simultaneous optical density (OD) measurements, long-path FTIR headspace monitoring of CO2, acetaldehyde, and ethanol, and liquid Raman spectroscopy of acetate, formate, and phosphate anions, without sampling. We discuss which spectral features are best suited for detection, and how to obtain partial pressures and concentrations by integration and least squares fitting of spectral features. Noise equivalent detection limits are about 2.6 mM for acetate and 3.6 mM for formate at 5 min integration time, improving to 0.75 mM for acetate and 1.0 mM for formate at 1 h integration. The analytical range extends to at least 1 M with a standard deviation of percentage error of about 8%. The measurement of the anions of the phosphate buffer allows the spectroscopic, in situ determination of the pH of the bacterial suspension via a modified Henderson-Hasselbalch equation in the pH 6–8 range with an accuracy better than 0.1 pH units. The 4 m White cell FTIR measurements provide noise equivalent detection limits of 0.21 μbar for acetaldehyde and 0.26 μbar for ethanol in the gas phase, corresponding to 3.2 μM acetaldehyde and 22 μM ethanol in solution, using Henry’s law. The analytical dynamic range exceeds 1 mbar ethanol, corresponding to 85 mM in solution. As an application example, the mixed acid fermentation of Escherichia coli is studied. The production of CO2, ethanol, acetaldehyde, and acids such as formate and acetate, and the changes in pH, are discussed in the context of the mixed acid fermentation pathways. Formate decomposition into CO2 and H2 is found to be governed by a zeroth-order kinetic rate law, showing that adding exogenous formate to a bioreactor with E. coli is expected to have no beneficial effect on the rate of formate decomposition and biohydrogen production.
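    For the pH determination step, the underlying textbook relation is the Henderson-Hasselbalch equation applied to the second dissociation of phosphate; a minimal sketch follows. The paper uses a modified form whose details are not given in the abstract, so the unmodified equation and the textbook pKa below are assumptions.

```python
import math

# Second pKa of phosphoric acid at 25 degC (textbook value); the paper's
# modified Henderson-Hasselbalch form may use a corrected effective pKa.
PKA2_PHOSPHATE = 7.21

def phosphate_ph(c_hpo4: float, c_h2po4: float, pka: float = PKA2_PHOSPHATE) -> float:
    """pH from the Raman-derived concentrations (same units, e.g. mM) of
    the two phosphate buffer anions:
        pH = pKa + log10([HPO4 2-] / [H2PO4 -])
    """
    return pka + math.log10(c_hpo4 / c_h2po4)

# Example: 30 mM HPO4(2-) and 20 mM H2PO4(-) gives pH ~ 7.39,
# inside the pH 6-8 range covered by the method above.
print(round(phosphate_ph(30.0, 20.0), 2))
```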

    Multi-messenger observations of a binary neutron star merger

    On 2017 August 17, a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 ± 8 Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M⊙. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient’s position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.