The LONI QC System: A Semi-Automated, Web-Based and Freely-Available Environment for the Comprehensive Quality Control of Neuroimaging Data.
Quantifying, controlling, and monitoring image quality is an essential prerequisite for ensuring the validity and reproducibility of many types of neuroimaging data analyses. Implementing quality control (QC) procedures is key to ensuring that neuroimaging data are of high quality and remain valid in subsequent analyses. We introduce the QC system of the Laboratory of Neuro Imaging (LONI): a web-based system featuring a workflow for the assessment of brain imaging data across modalities and contrasts. The design allows users to anonymously upload imaging data to the LONI-QC system. The system then computes an exhaustive set of QC metrics, generating a range of scalar and vector statistics that help users perform standardized QC. These procedures are performed in parallel on a large compute cluster. Finally, the system offers an automated QC procedure for structural MRI, which can flag each QC metric as 'good' or 'bad.' Validation using various sets of data acquired from a single scanner and from multiple sites demonstrated the reproducibility of our QC metrics, and the sensitivity and specificity of the proposed Auto QC in identifying 'bad'-quality images relative to visual inspection. To the best of our knowledge, LONI-QC is the first online QC system to compute such a range of QC metrics and to support both visual and automated image QC of multi-contrast and multi-modal brain imaging data. The LONI-QC system has been used to assess the quality of large neuroimaging datasets acquired as part of various multi-site studies such as the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) Study and the Alzheimer's Disease Neuroimaging Initiative (ADNI). LONI-QC's functionality is freely available to users worldwide, and its adoption by imaging researchers is likely to contribute substantially to upholding high standards of brain image data quality and to implementing these standards across the neuroimaging community.
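The abstract does not specify how the automated QC flags each metric, but the idea can be pictured as a range check against reference values. Below is a minimal sketch in Python; the metric names, thresholds, and decision rule are illustrative assumptions, not the published LONI-QC method.

```python
import numpy as np

# Hypothetical reference ranges; the actual LONI-QC metrics and cutoffs
# are not given in the abstract.
REFERENCE_RANGES = {
    "snr": (15.0, np.inf),   # signal-to-noise ratio: flag if too low
    "cnr": (1.0, np.inf),    # contrast-to-noise ratio: flag if too low
    "fwhm_mm": (0.0, 4.5),   # spatial smoothness: flag if too high
}

def auto_qc(metrics):
    """Flag each QC metric as 'good' or 'bad' against its reference range."""
    return {
        name: "good" if REFERENCE_RANGES[name][0] <= value <= REFERENCE_RANGES[name][1] else "bad"
        for name, value in metrics.items()
    }

print(auto_qc({"snr": 22.3, "cnr": 0.8, "fwhm_mm": 3.9}))
# -> {'snr': 'good', 'cnr': 'bad', 'fwhm_mm': 'good'}
```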
Hectospec, the MMT's 300 Optical Fiber-Fed Spectrograph
Hectospec is a 300-optical-fiber-fed spectrograph commissioned at the MMT in the spring of 2004. A pair of high-speed six-axis robots move the 300 fiber buttons between observing configurations within ~300 s and to an accuracy of ~25 microns. The optical fibers run for 26 m between the MMT's focal surface and the bench spectrograph, which operates at R~1000-2000. Another high-dispersion bench spectrograph offering R~5,000, Hectochelle, is also available. The system throughput, including all losses in the telescope optics, fibers, and spectrograph, peaks at ~10% at the grating blaze in 1" FWHM seeing. Correcting for aperture losses at the 1.5"-diameter fiber entrance aperture, the system throughput peaks at 17%. Hectospec has proven to be a workhorse instrument at the MMT: Hectospec and Hectochelle together have been scheduled for 1/3 of the available nights since their commissioning, and Hectospec has returned ~60,000 reduced spectra for 16 scientific programs during its first year of operation. Comment: 68 pages, 28 figures, to appear in December 2005 PAS
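The two throughput figures quoted above imply how much point-source light enters the 1.5"-diameter fiber aperture in 1" seeing. The back-of-the-envelope check below is our reading of those numbers, not a calculation from the paper; the Gaussian comparison is an idealization, since real seeing profiles have broader wings.

```python
import math

# Implied fiber coupling: 10% total throughput on sky vs. 17% after
# correcting for aperture losses.
print(f"implied coupling: {0.10 / 0.17:.0%}")  # ~59%

# Idealized comparison: enclosed energy of a Gaussian PSF (FWHM = 1.0")
# inside a 1.5"-diameter (0.75" radius) aperture.
fwhm, radius = 1.0, 0.75
sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))
print(f"Gaussian enclosed energy: {1 - math.exp(-radius**2 / (2 * sigma**2)):.0%}")  # ~79%
```

The gap between the implied (~59%) and idealized (~79%) values is consistent with real seeing profiles having broader wings than a Gaussian, plus residual fiber-positioning and guiding errors.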
Simulated case management of home telemonitoring to assess the impact of different alert algorithms on workload and clinical decisions
Background: Home telemonitoring (HTM) of chronic heart failure (HF) promises to improve care by giving timely indications that a patient's condition is worsening. Simple rules based on sudden weight change have been shown to generate many alerts with poor sensitivity. Trend alert algorithms and bio-impedance (a more sensitive marker of fluid change) should produce fewer false alerts and reduce workload. However, the impact of these approaches on the decisions made and on the time spent reviewing alerts has not been compared. Methods: Using HTM data from an observational trial of 91 HF patients, a simulated telemonitoring station was created and used to present virtual caseloads to clinicians experienced with HF HTM systems. Clinicians were randomised to either a simple alert method (i.e. an increase of 2 kg in the past 3 days) or an advanced alert method (either a moving-average weight algorithm or a bio-impedance cumulative-sum algorithm). Results: In total, 16 clinicians reviewed the caseloads: 8 randomised to the simple alert method and 8 to the advanced alert methods. Total time to review the caseloads was lower in the advanced arms than in the simple arm (80 ± 42 vs. 149 ± 82 min), but agreement on actions between clinicians was low (Fleiss' kappa 0.33 and 0.31), and despite their high sensitivity, many alerts in the bio-impedance arm were not considered to need further action. Conclusion: Advanced alerting algorithms with higher specificity are likely to reduce the time spent by clinicians and increase the percentage of time spent on changes rated as most meaningful. Work is needed to present bio-impedance alerts in a manner that is intuitive for clinicians.
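To make the difference between the arms concrete, the sketch below implements the simple 2 kg/3-day rule stated in the abstract alongside one possible moving-average trend rule. The moving-average window and threshold are hypothetical placeholders; the trial's actual algorithm and its bio-impedance cumulative-sum counterpart are not specified in the abstract.

```python
import numpy as np

def simple_alert(weights_kg):
    """Simple rule from the trial: alert if weight rose >= 2 kg over the past 3 days."""
    return len(weights_kg) >= 4 and weights_kg[-1] - weights_kg[-4] >= 2.0

def moving_average_alert(weights_kg, window=7, threshold_kg=1.5):
    """Illustrative trend rule: alert when today's weight exceeds the
    trailing `window`-day mean by more than `threshold_kg` (values invented)."""
    if len(weights_kg) <= window:
        return False
    baseline = np.mean(weights_kg[-window - 1:-1])
    return weights_kg[-1] - baseline > threshold_kg

daily = [81.0, 81.2, 81.1, 81.3, 81.2, 81.4, 83.2, 83.4]  # kg, one reading/day
print(simple_alert(daily))          # True: 83.4 - 81.2 >= 2 kg over 3 days
print(moving_average_alert(daily))  # True: ~1.9 kg above the 7-day baseline
```

A trend rule of this kind fires only when a rise is sustained relative to a baseline, which is why such algorithms tend to generate fewer, more specific alerts than single-jump rules.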
User-centered visual analysis using a hybrid reasoning architecture for intensive care units
One problem with Intensive Care Unit information systems is that, in some cases, they produce a very dense display of data. To preserve an overview and keep the growing volume of data readable, special features are required (e.g., data prioritization, clustering, and selection mechanisms), applied together with analytical methods (e.g., temporal data abstraction, principal component analysis, and event detection). This paper addresses the problem of improving the integration of the visual and analytical methods applied to medical monitoring systems. We present a knowledge- and machine learning-based approach to support the knowledge discovery process with appropriate analytical and visual methods. Its potential benefit lies in the development of user interfaces for intelligent monitors that can assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods based on the user's task at hand. The action sequences performed by the user on the graphical user interface are consolidated in a dynamic knowledge base with specific hybrid reasoning that integrates symbolic and connectionist approaches. These captured sequences of expert knowledge can make it easier for relevant knowledge to re-emerge during similar experiences and can positively impact the monitoring of critical situations. The graphical user interface, incorporating a user-centered visual analysis, is exploited to facilitate the natural and effective representation of clinical information for patient care.
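As a concrete instance of one analytical method named above (principal component analysis for condensing a dense multichannel display), here is a minimal sketch on synthetic data; the channel count and data are invented for illustration and do not come from the paper.

```python
import numpy as np

# Stand-in for a dense ICU monitoring matrix: rows are time points,
# columns are signals (e.g., HR, SpO2, MAP, RR, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
X -= X.mean(axis=0)  # center each channel before PCA

# Principal components via SVD; project onto the top two components
# to obtain a 2-D overview of the 8-channel record.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
overview = X @ Vt[:2].T
explained = s[:2] ** 2 / (s ** 2).sum()
print(overview.shape, explained.round(2))  # (600, 2) plus variance explained
```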
Integrated Design and Implementation of Embedded Control Systems with Scilab
Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints, and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. Drivers for interfacing Scilab with several communication protocols, including serial, Ethernet, and Modbus, are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost. Comment: 15 pages, 14 figures; Open Access at http://www.mdpi.org/sensors/papers/s8095501.pd
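As a flavor of the "complex control algorithms" such a platform is meant to run, below is a minimal discrete PID loop closed around a toy first-order plant. This is a generic Python illustration, not the paper's Scilab code; the gains and plant model are invented for the example.

```python
def pid_step(error, state, kp=2.0, ki=1.0, kd=0.1, dt=0.01):
    """One PID update; `state` carries (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, (integral, error)

# Toy first-order plant x' = -x + u, integrated with forward Euler.
x, state, setpoint, dt = 0.0, (0.0, 0.0), 1.0, 0.01
for _ in range(500):  # 5 s of simulated time
    u, state = pid_step(setpoint - x, state, dt=dt)
    x += (-x + u) * dt
print(f"plant output after 5 s: {x:.2f}")  # approaches the setpoint 1.0
```

On the paper's platform, the same control law would run inside Scilab on the ARM-Linux target, with the plant reached through the serial, Ethernet, or Modbus drivers rather than a simulated model.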
Lineage specific recombination rates and microevolution in Listeria monocytogenes
Background: The bacterium Listeria monocytogenes is a saprotroph as well as an opportunistic human foodborne pathogen, which has previously been shown to consist of at least two widespread lineages (termed lineages I and II) and an uncommon lineage (lineage III). While some L. monocytogenes strains show evidence for considerable diversification by homologous recombination, our understanding of the contribution of recombination to L. monocytogenes evolution is still limited. We therefore used STRUCTURE and ClonalFrame, two programs that model the effect of recombination, to make inferences about the population structure and different aspects of the recombination process in L. monocytogenes. Analyses were performed using sequences for seven loci (including the house-keeping genes gap, prs, purM and ribC, the stress response gene sigB, and the virulence genes actA and inlA) for 195 L. monocytogenes isolates.
Results: Sequence analyses with ClonalFrame and Sawyer's test showed that recombination is more prevalent in lineage II than in lineage I and is most frequent in two house-keeping genes (ribC and purM) and the two virulence genes (actA and inlA). The relative occurrence of recombination versus point mutation is about six times higher in lineage II than in lineage I, which results in higher genetic variability in lineage II. Unlike lineage I, lineage II represents a genetically heterogeneous population with a relatively high proportion (on average 30%) of genetic material imported from external sources. Phylograms constructed with correction for recombination, as well as Tajima's D data, suggest that both lineages I and II have suffered a population bottleneck.
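The Results invoke Tajima's D; for readers unfamiliar with the statistic, below is a self-contained implementation of the standard definition (Tajima 1989). The input values are illustrative only and are not the paper's data (the sample size matches the 195 isolates mentioned above, but S and pi are invented).

```python
import math

def tajimas_d(n, S, pi):
    """Tajima's D from sample size n, segregating sites S, and mean
    pairwise differences pi, using the standard Tajima (1989) constants."""
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

# Illustrative call: with pi well below S/a1 (an excess of rare variants),
# D comes out negative, one classic signature of expansion after a bottleneck.
print(round(tajimas_d(n=195, S=120, pi=12.0), 2))
```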
Conclusion: Our study shows that evolutionary lineages within a single bacterial species can differ considerably in the relative contribution of recombination to genetic diversification. Accounting for recombination in phylogenetic studies is critical, and new evolutionary models that allow for changes in the rate of recombination will be required. While previous studies suggested that only L. monocytogenes lineage I has experienced a recent bottleneck, our analyses clearly show that lineage II experienced a bottleneck at about the same time, which was subsequently obscured by abundant homologous recombination after the lineage II bottleneck. While lineage I and lineage II should be considered separate species from an evolutionary viewpoint, maintaining a single species name may be warranted since both lineages cause the same type of human disease.