10 research outputs found

    Development of a decision analytic model to support decision making and risk communication about thrombolytic treatment

    Background: Individualised prediction of outcomes can support clinical and shared decision making. This paper describes the building of such a model to predict outcomes with and without intravenous thrombolysis treatment following ischaemic stroke.

    Methods: A decision analytic model (DAM) was constructed to establish the likely balance of benefits and risks of treating acute ischaemic stroke with thrombolysis. The probabilities of independence (modified Rankin Scale, mRS ≤ 2), dependence (mRS 3 to 5) and death at three months post-stroke were based on a calibrated version of the Stroke-Thrombolytic Predictive Instrument (S-TPI), using data from routinely treated stroke patients in the Safe Implementation of Treatments in Stroke (SITS-UK) registry. Predictions in untreated patients were validated using data from the Virtual International Stroke Trials Archive (VISTA). The probability of symptomatic intracerebral haemorrhage in treated patients was incorporated using a scoring model derived from Safe Implementation of Thrombolysis in Stroke-Monitoring Study (SITS-MOST) data.

    Results: The model predicts the probabilities of haemorrhage, death, independence and dependence at three months, with and without thrombolysis, as a function of 13 patient characteristics. Calibration (and the inclusion of additional predictors) of the S-TPI addressed issues of under- and over-prediction. Validation with VISTA data confirmed that the assumptions about treatment effect were justified. The C-statistics for independence and death in treated patients in the DAM were 0.793 and 0.771 respectively, and 0.776 for independence in untreated patients from VISTA.

    Conclusions: We have produced a DAM that estimates the likely benefits and risks of thrombolysis for individual patients; it has subsequently been embedded in a computerised decision aid to support better decision making and informed consent.
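    The abstract does not give the model's functional form, but instruments like the S-TPI are typically logistic regression models. The sketch below shows, under that assumption, how a model of this kind could turn patient characteristics into treated and untreated outcome probabilities; the coefficient values and the reduced feature set are illustrative placeholders, not the published S-TPI coefficients.

    import math

    def logistic(x: float) -> float:
        """Standard logistic function mapping a linear score to a probability."""
        return 1.0 / (1.0 + math.exp(-x))

    # Hypothetical coefficients for a handful of the 13 predictors.
    COEF_INDEPENDENCE = {
        "intercept": 1.2,
        "age": -0.04,      # per year of age
        "nihss": -0.15,    # baseline stroke severity (NIHSS)
        "glucose": -0.02,  # blood glucose, mmol/L
        "treated": 0.35,   # intravenous thrombolysis given (0/1)
    }

    def predict_independence(patient: dict, treated: bool) -> float:
        """Probability of independence (mRS <= 2) at three months."""
        score = (COEF_INDEPENDENCE["intercept"]
                 + COEF_INDEPENDENCE["age"] * patient["age"]
                 + COEF_INDEPENDENCE["nihss"] * patient["nihss"]
                 + COEF_INDEPENDENCE["glucose"] * patient["glucose"]
                 + COEF_INDEPENDENCE["treated"] * (1 if treated else 0))
        return logistic(score)

    patient = {"age": 72, "nihss": 11, "glucose": 6.5}
    p_treated = predict_independence(patient, treated=True)
    p_untreated = predict_independence(patient, treated=False)
    print(f"P(independent | treated)   = {p_treated:.2f}")
    print(f"P(independent | untreated) = {p_untreated:.2f}")
    print(f"Absolute benefit           = {p_treated - p_untreated:+.2f}")

    Presenting the treated and untreated probabilities side by side, as the decision aid does, lets the absolute benefit (or harm) for an individual patient be read off directly for risk communication.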

    SoilCover: Determining soil cover with a smartphone

    The degree of soil cover by living plant material and dead organic matter is a fundamental metric for sustainable soil management. The cover fraction describes how well the soil is protected against erosion and thus serves as a quality measure for tillage. Current standard methods are either time-consuming or rely on qualitative estimates by trained observers. Early image-analysis methods, based on automatic segmentation and classification into soil, crop residues and living plant material, require the manual tuning of various feature thresholds. The method we use instead relies on automatically learned thresholds obtained with a technique called the "Entangled Forest", which achieves higher robustness and better generalisation. The Entangled Forest classifies individual pixels through pairwise pixel comparisons at a learned offset. Smoothing of the result is achieved in the same processing step through special maximum-a-posteriori features: the decision at a split node is made on the basis of the current posterior probabilities of neighbouring pixels. Evaluating our system on images taken under varying lighting conditions, with soil cover between 0% and 100% of living organic matter, straw or other crop residues, yields results comparable to manual assessment. The algorithm has been integrated into a smartphone app and a web interface and can therefore be used online, directly in the field.
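    The abstract names the two kinds of node tests that make the forest "entangled": pairwise pixel comparisons at a learned offset, and maximum-a-posteriori features that consult the posteriors already computed for neighbouring pixels. The sketch below illustrates both tests; the function names, offsets and thresholds are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def pairwise_test(img: np.ndarray, x: int, y: int,
                      dx: int, dy: int, threshold: float) -> bool:
        """Split on the difference between a pixel and a neighbour at a learned offset."""
        h, w = img.shape[:2]
        nx, ny = min(max(x + dx, 0), w - 1), min(max(y + dy, 0), h - 1)
        return (img[y, x] - img[ny, nx]) > threshold

    def map_feature_test(posterior: np.ndarray, x: int, y: int,
                         dx: int, dy: int, cls: int, threshold: float) -> bool:
        """'Entangled' MAP test: branch on the current posterior probability of
        class `cls` at a neighbouring pixel, which smooths the labelling."""
        h, w = posterior.shape[:2]
        nx, ny = min(max(x + dx, 0), w - 1), min(max(y + dy, 0), h - 1)
        return posterior[ny, nx, cls] > threshold

    # Toy usage: a grey-value image and uniform initial posteriors over the
    # three classes (soil, crop residue, living plant material).
    img = np.random.rand(64, 64).astype(np.float32)
    posterior = np.full((64, 64, 3), 1.0 / 3.0)
    print(pairwise_test(img, 10, 10, dx=4, dy=0, threshold=0.1))
    print(map_feature_test(posterior, 10, 10, dx=0, dy=4, cls=1, threshold=0.5))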

    Multimodal cue integration through Hypotheses Verification for RGB-D object recognition and 6DOF pose estimation

    This paper proposes an effective algorithm for recognizing objects and accurately estimating their 6DOF pose in scenes acquired by an RGB-D sensor. The proposed method combines different recognition pipelines, each exploiting the data in a different manner and generating object hypotheses that are ultimately fused together in a Hypothesis Verification stage that globally enforces geometric consistency between model hypotheses and the scene. This scheme boosts overall recognition performance, as it exploits the strengths of the different recognition pipelines while diminishing the impact of their specific weaknesses. The proposed method outperforms the state of the art on two challenging benchmark datasets for object recognition comprising 35 object models and, respectively, 176 and 353 scenes.
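    The abstract describes the Hypothesis Verification stage only at a high level. The code below is a minimal sketch of one way such a stage can work: each hypothesis (a model's points transformed into the scene by its estimated 6DOF pose) is scored by how well it fits the scene cloud, and a pass over the ranked hypotheses keeps a mutually consistent subset. The scoring proxy, thresholds and greedy strategy are assumptions for illustration, not the paper's exact cost function.

    import numpy as np

    def hypothesis_score(model_pts: np.ndarray, scene_pts: np.ndarray,
                         inlier_dist: float = 0.005) -> float:
        """Fraction of transformed model points with a scene point nearby,
        a crude proxy for geometric consistency with the scene."""
        d = np.linalg.norm(model_pts[:, None, :] - scene_pts[None, :, :], axis=2)
        return float(np.mean(d.min(axis=1) < inlier_dist))

    def verify(hypotheses: list, scene_pts: np.ndarray,
               min_score: float = 0.5, overlap_dist: float = 0.005) -> list:
        """Greedy verification: accept hypotheses in order of decreasing score,
        rejecting any that claim scene points already explained by an accepted one."""
        scored = sorted(((hypothesis_score(h["points"], scene_pts), h)
                         for h in hypotheses), key=lambda sh: -sh[0])
        accepted, explained = [], np.zeros(len(scene_pts), dtype=bool)
        for score, h in scored:
            if score < min_score:
                continue
            d = np.linalg.norm(h["points"][:, None, :] - scene_pts[None, :, :], axis=2)
            inliers = d.min(axis=0) < overlap_dist
            if not (inliers & explained).any():
                accepted.append(h)
                explained |= inliers
        return accepted

    The abstract's phrase "globally enforces geometrical consistency" suggests the accepted subset is chosen by a global optimisation rather than greedily; the greedy pass above is only the simplest stand-in for that step.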

    Petroleum and Coal
