Integrated sampling-and-sensing using microdialysis and biosensing by particle motion for continuous cortisol monitoring
Microdialysis catheters are small probes that allow sampling from biological systems and human subjects with minimal perturbation. Traditionally, microdialysis samples are collected in vials, transported to a laboratory, and analysed with typical turnaround times of hours to days. To realize a continuous sampling-and-sensing methodology with minimal time delay, we studied the integration of microdialysis sampling with a sensor for continuous biomolecular monitoring based on Biosensing by Particle Motion (BPM). A microfluidic flow cell was designed with a volume of 12 μl in order to be compatible with flowrates of microdialysis sampling. The analyte recovery and the time characteristics of the sampling-and-sensing system were studied using a food colorant in buffer and using cortisol in buffer and in blood plasma. Concentration step functions were applied, and the system response was measured using optical absorption and a continuous BPM cortisol sensor. The cortisol recovery was around 80% for a 30 mm microdialysis membrane with a 20 kDa molecular weight cut-off and a flowrate of 2 μl min−1. The concentration-time data could be fitted with a transport delay time and single-exponential relaxation curves. The total delay time of the sampling-and-sensing methodology was about 15 minutes. Continuous sampling-and-sensing was demonstrated over a period of 5 hours. These results represent an important step toward integrated sampling-and-sensing for the continuous monitoring of a wide variety of low-concentration biomolecular substances for applications in biological and biomedical research.
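The abstract's concentration-time model, a transport delay followed by single-exponential relaxation toward the applied step concentration, can be sketched as follows. All numeric values below are hypothetical illustrations, not parameters reported in the paper:

```python
import math

def step_response(t, c0, t_delay, tau):
    """Sensor response to a concentration step of height c0 applied at t = 0:
    zero during the transport delay, then single-exponential relaxation
    with time constant tau (all times in minutes)."""
    if t <= t_delay:
        return 0.0
    return c0 * (1.0 - math.exp(-(t - t_delay) / tau))

# Hypothetical split of a ~15-minute total delay into transport delay
# and relaxation time constant (illustrative values only).
t_delay, tau, c0 = 6.0, 4.0, 1.0

# After many time constants the measured concentration approaches the step.
print(round(step_response(60.0, c0, t_delay, tau), 3))
```

In practice, t_delay and tau would be obtained by least-squares fitting of measured concentration-time data to this curve.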
Don't be misled: 3 misconceptions about external validation of clinical prediction models
Clinical prediction models provide risks of health outcomes that can inform patients and support medical decisions. However, most models never make it to actual implementation in practice. A commonly heard reason for this lack of implementation is that prediction models are often not externally validated. While we generally encourage external validation, we argue that an external validation is often neither sufficient nor required as an essential step before implementation. As such, any available external validation should not be perceived as a license for model implementation. We clarify this argument by discussing 3 common misconceptions about external validation. We argue that there is not one type of recommended validation design, not always a necessity for external validation, and sometimes a need for multiple external validations. The insights from this paper can help readers to consider, design, interpret, and appreciate external validation studies.
Metrics reloaded: Pitfalls and recommendations for image analysis validation
Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. Particularly in automatic biomedical image analysis, chosen performance metrics often do not reflect the domain interest, thus failing to adequately measure scientific progress and hindering translation of ML techniques into practice. To overcome this, our large international expert consortium created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. The framework was developed in a multi-stage Delphi process and is based on the novel concept of a problem fingerprint - a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), data set and algorithm output. Based on the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as a classification task at image, object or pixel level, namely image-level classification, object detection, semantic segmentation, and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool, which also provides a point of access to explore weaknesses, strengths and specific recommendations for the most common validation metrics. The broad applicability of our framework across domains is demonstrated by an instantiation for various biological and medical image analysis use cases.
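As a rough illustration of the problem-fingerprint idea, a fingerprint can be thought of as a small structured record whose fields drive metric selection. The field names, metric names, and rules below are invented for illustration only and are not the framework's actual schema or recommendations:

```python
from dataclasses import dataclass

@dataclass
class ProblemFingerprint:
    # Hypothetical fields; the real fingerprint captures many more properties.
    task: str                 # e.g. "image_classification", "semantic_segmentation"
    class_imbalance: bool     # are the classes strongly imbalanced?
    small_structures: bool    # do small target structures dominate?

def suggest_metrics(fp: ProblemFingerprint) -> list[str]:
    """Toy rule set mimicking fingerprint-driven metric selection."""
    if fp.task == "image_classification":
        return ["balanced_accuracy"] if fp.class_imbalance else ["accuracy"]
    if fp.task == "semantic_segmentation":
        # Boundary-aware metrics become important for small structures.
        return ["DSC", "NSD"] if fp.small_structures else ["DSC"]
    return []

fp = ProblemFingerprint("semantic_segmentation",
                        class_imbalance=True, small_structures=True)
print(suggest_metrics(fp))  # ['DSC', 'NSD']
```

The actual framework encodes such property-to-metric reasoning as a guided decision process rather than a flat lookup.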
Common Limitations of Image Processing Metrics: A Picture Story
While the importance of automatic image analysis is continuously increasing,
recent meta-research revealed major flaws with respect to algorithm validation.
Performance metrics are particularly key for meaningful, objective, and
transparent performance assessment and validation of the used automatic
algorithms, but relatively little attention has been given to the practical
pitfalls when using specific metrics for a given image analysis task. These are
typically related to (1) the disregard of inherent metric properties, such as
the behaviour in the presence of class imbalance or small target structures,
(2) the disregard of inherent data set properties, such as the non-independence
of the test cases, and (3) the disregard of the actual biomedical domain
interest that the metrics should reflect. This living, dynamically updated
document illustrates important limitations of performance metrics commonly
applied in the field of image analysis. In this context, it focuses on
biomedical image analysis problems that can be phrased as image-level
classification, semantic segmentation, instance segmentation, or object
detection tasks. The current version is based on a Delphi process on metrics
conducted by an international consortium of image analysis experts from more
than 60 institutions worldwide.
Comment: This is a dynamic paper on limitations of commonly used metrics. The
current version discusses metrics for image-level classification, semantic
segmentation, object detection and instance segmentation. For missing use
cases, comments or questions, please contact [email protected] or
[email protected]. Substantial contributions to this document will be
acknowledged with a co-authorship.
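The first pitfall category above, inherent metric behaviour under class imbalance, can be made concrete with a toy example: plain accuracy rewards a degenerate majority-class predictor, while balanced accuracy exposes it. The numbers are illustrative only:

```python
# Accuracy pitfall under class imbalance: a classifier that always predicts
# the majority class looks excellent on accuracy but is clinically useless.
y_true = [0] * 95 + [1] * 5          # 95% negatives, 5% positives
y_pred = [0] * 100                   # degenerate "always negative" model

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Balanced accuracy averages per-class recall, so the missed positive
# class pulls the score down to chance level.
recall_pos = sum(p == 1 for t, p in zip(y_true, y_pred) if t == 1) / 5
recall_neg = sum(p == 0 for t, p in zip(y_true, y_pred) if t == 0) / 95
balanced_accuracy = (recall_pos + recall_neg) / 2

print(accuracy, balanced_accuracy)   # 0.95 0.5
```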
Understanding metric-related pitfalls in image analysis validation
Validation metrics are key for the reliable tracking of scientific progress
and for bridging the current chasm between artificial intelligence (AI)
research and its translation into practice. However, increasing evidence shows
that particularly in image analysis, metrics are often chosen inadequately in
relation to the underlying research problem. This could be attributed to a lack
of accessibility of metric-related knowledge: While taking into account the
individual strengths, weaknesses, and limitations of validation metrics is a
critical prerequisite to making educated choices, the relevant knowledge is
currently scattered and poorly accessible to individual researchers. Based on a
multi-stage Delphi process conducted by a multidisciplinary expert consortium
as well as extensive community feedback, the present work provides the first
reliable and comprehensive common point of access to information on pitfalls
related to validation metrics in image analysis. Focusing on biomedical image
analysis but with the potential of transfer to other fields, the addressed
pitfalls generalize across application domains and are categorized according to
a newly created, domain-agnostic taxonomy. To facilitate comprehension,
illustrations and specific examples accompany each pitfall. As a structured
body of information accessible to researchers of all levels of expertise, this
work enhances global comprehension of a key topic in image analysis validation.
Comment: Shared first authors: Annika Reinke, Minu D. Tizabi; shared senior
authors: Paul F. Jäger, Lena Maier-Hein.
Continuous small-molecule monitoring with a digital single particle switch
The ability to continuously measure concentrations of small molecules is important for biomedical, environmental and industrial monitoring. However, due to their low molecular mass it is difficult to quantify concentrations of such molecules, particularly at low concentrations. Here we describe a small-molecule sensor that is generalizable, sensitive, specific, reversible, and suited for continuous monitoring over long durations. The sensor consists of particles attached to a sensing surface via a double-stranded DNA tether. The particles transiently bind to the sensing surface via single molecular affinity interactions, and the transient binding is optically detected as digital binding events via the Brownian motion of the particles. The rate of binding events decreases with increasing analyte concentration, because analyte molecules inhibit binding of the tethered particle to the surface. The sensor enables continuous measurements of analyte concentrations due to the reversibility of the intermolecular bonds and the digital readout of particle motion. We show results for the monitoring of short single-stranded DNA sequences and creatinine, a small-molecule biomarker (113 Da) for kidney function, demonstrating a temporal resolution of a few minutes. The precision of the sensor is determined by the statistics of the digital switching events, which means that the precision is tunable by the number of particles and the measurement time.
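The stated precision scaling can be sketched with counting statistics: if the digital switching events obey Poisson statistics, the relative uncertainty of the estimated event rate falls as one over the square root of the total number of observed events, so it improves with more particles or a longer measurement window. All numbers below are hypothetical:

```python
import math

def relative_precision(n_particles, events_per_particle_per_min, minutes):
    """Shot-noise (Poisson) limit: for k observed events the standard
    deviation of the count is sqrt(k), so the relative uncertainty of the
    estimated switching rate is 1/sqrt(k)."""
    k = n_particles * events_per_particle_per_min * minutes
    return 1.0 / math.sqrt(k)

# Hypothetical numbers: 1000 tracked particles, 0.1 events/particle/min.
print(round(relative_precision(1000, 0.1, 1.0), 3))   # 0.1  (10% in 1 minute)
print(round(relative_precision(1000, 0.1, 4.0), 3))   # 0.05 (4x time halves it)
```

This is why the abstract describes the precision as tunable: quadrupling either the particle count or the measurement time halves the shot-noise-limited relative uncertainty.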