
    The associations of palliative care experts regarding food refusal: a cross-sectional study with an open question evaluated by triangulation analysis

    Introduction: Health professionals in oncologic and palliative care settings are often confronted with patients who stop eating and drinking. While the causes of food refusal vary widely, the result is often malnutrition, which is linked to comorbidities and a high mortality rate. However, professionals often lack the time and knowledge to clarify the cause for each patient. What associations do health professionals have when faced with food refusal? Objective: To investigate the associations that health professionals in oncological and palliative settings have with food refusal. Methods: A cross-sectional study was conducted, starting with an open question on professionals' associations regarding food refusal. The responses were analyzed inductively to develop generic categories, which were then transformed into quantitative data to calculate the relationships between the categories. Results: A total of 350 out of 2,000 participants completed the survey, a response rate of 17.5%. Food refusal is primarily associated with physical and ethical aspects and with end-of-life. Half of the participants frequently encounter patients who refuse to eat. The reported attitudes show that patient autonomy is regarded as the highest good and must be respected; even for patients with limited decision-making capacity, the refusal to eat is considered acceptable. Conclusion: Clarifying the cause of food refusal requires considerable knowledge and is strongly influenced by the associations of health professionals. Because these associations carry very negative connotations, information and training are needed to make professionals aware of them and to change them. With this knowledge, and through interprofessional cooperation, mislabelling of patients can be avoided and fears can be reduced.
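    The quantification step described in the Methods (turning inductively developed categories into data from which relationships between categories can be calculated) can be pictured with a small, purely illustrative sketch; the category names and values below are invented, not taken from the study.

        # Hypothetical sketch: coded categories as binary indicators per response,
        # with pairwise phi coefficients as a measure of co-occurrence.
        import pandas as pd

        # Each row is one participant's coded answer; 1 = category was mentioned.
        coded = pd.DataFrame({
            "physical_causes": [1, 0, 1, 1],
            "ethical_aspects": [1, 1, 0, 1],
            "end_of_life":     [0, 1, 1, 1],
        })

        # Pearson correlation of binary indicators (the phi coefficient)
        # quantifies how strongly two categories are mentioned together.
        print(coded.corr(method="pearson"))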

    Evaluation of deep learning training strategies for the classification of bone marrow cell images

    BACKGROUND AND OBJECTIVE The classification of bone marrow (BM) cells by light microscopy is an important cornerstone of hematological diagnosis, performed thousands of times a day by highly trained specialists in laboratories worldwide. As the manual evaluation of blood or BM smears is very time-consuming and prone to inter-observer variation, new reliable automated systems are needed. METHODS We aim to improve the automatic classification performance of hematological cell types. To this end, we evaluate four state-of-the-art Convolutional Neural Network (CNN) architectures on a dataset of 171,374 microscopic cytological single-cell images obtained from BM smears of 945 patients diagnosed with a variety of hematological diseases. We further evaluate the effect of in-domain vs. out-of-domain pre-training, and assess whether class activation maps provide human-interpretable explanations for the models' predictions. RESULTS The best performing pre-trained model (Regnet_y_32gf) yields mean precision, recall, and F1 scores of 0.787±0.060, 0.755±0.061, and 0.762±0.050, respectively. This is a 53.5% improvement in precision and a 7.3% improvement in recall over previous results with CNNs (ResNeXt-50) trained from scratch. The out-of-domain pre-training apparently yields general feature extractors/filters that apply very well to the BM cell classification use case. The class activation maps on cell types with characteristic morphological features were found to be consistent with the explanations of a human domain expert; for example, the Auer rods in the cytoplasm were the predictive cellular feature for correctly classified images of faggot cells. CONCLUSIONS Our study provides data that can help hematology laboratories choose the optimal training strategy for blood cell classification deep learning models and thus improve computer-assisted blood and bone marrow cell identification. It also highlights the need for more specific training data, i.e., images of difficult-to-classify classes, including cells labeled with disease information.
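    As a rough illustration of the out-of-domain pre-training strategy the study finds effective, the following sketch fine-tunes an ImageNet-pre-trained regnet_y_32gf from torchvision for cell classification. The class count, optimiser settings and omitted data pipeline are assumptions, not the authors' exact configuration.

        import torch
        import torch.nn as nn
        from torchvision.models import regnet_y_32gf, RegNet_Y_32GF_Weights

        NUM_CELL_CLASSES = 21  # hypothetical number of BM cell-type classes

        # Out-of-domain pre-training: weights learned on ImageNet, not cytology.
        model = regnet_y_32gf(weights=RegNet_Y_32GF_Weights.IMAGENET1K_V2)

        # Replace the 1000-class ImageNet head with a head for the cell classes.
        model.fc = nn.Linear(model.fc.in_features, NUM_CELL_CLASSES)

        # Fine-tune the whole network on single-cell images (dataloader omitted).
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()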

    Variational methods with coupled Gaussian functions for Bose-Einstein condensates with long-range interactions. II. Applications

    Bose-Einstein condensates with an attractive 1/r interaction and with dipole-dipole interaction are investigated in the framework of the Gaussian variational ansatz introduced by S. Rau, J. Main, and G. Wunner [Phys. Rev. A, submitted]. We demonstrate that the method of coupled Gaussian wave packets is a full-fledged alternative to direct numerical solutions of the Gross-Pitaevskii equation, or even superior in that coupled Gaussians are capable of producing both stable and unstable states of the Gross-Pitaevskii equation, and thus of giving access to as yet unexplored regions of its solution space. As an alternative to numerical solutions of the Bogoliubov-de Gennes equations, the stability of the stationary condensate wave functions is investigated by analyzing the stability properties of the dynamical equations of motion for the Gaussian variational parameters in the local vicinity of the stationary fixed points. For blood-cell-shaped dipolar condensates it is shown that on the route to collapse the condensate passes through a pitchfork bifurcation, where the ground state itself turns unstable, before it finally vanishes in a tangent bifurcation. (Comment: 14 pages, 14 figures, submitted to Phys. Rev. A, some equations corrected.)
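    For orientation, a schematic of the variational setup condensed from the abstract (notation ours; the precise scaled units and interaction terms are those of the Rau, Main, and Wunner papers):

        % Coupled-Gaussian trial wave function: each Gaussian g^k carries a
        % time-dependent complex symmetric width matrix A^k and a complex
        % amplitude/phase parameter gamma^k.
        \psi(\mathbf{r},t) = \sum_{k=1}^{N} g^{k}(\mathbf{r},t), \qquad
        g^{k} = \exp\!\left[\mathrm{i}\left(\mathbf{r}^{\mathsf{T}} A^{k}(t)\,\mathbf{r} + \gamma^{k}(t)\right)\right]

        % The time-dependent variational principle
        %   || i*phi - H*psi ||^2 -> min,  with  phi = d(psi)/dt,
        \left\lVert \mathrm{i}\,\phi - H\psi \right\rVert^{2} \to \min
        % turns the Gross-Pitaevskii dynamics into coupled ODEs for A^k and
        % gamma^k; stationary states are fixed points of these ODEs, and their
        % stability follows from linearising the equations of motion about
        % each fixed point.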

    Emotion recognition from speech using representation learning in extreme learning machines

    We propose the use of an Extreme Learning Machine initialised as an auto-encoder for emotion recognition from speech. The method is evaluated on three different speech corpora, namely EMO-DB, eNTERFACE and SmartKom. We compare our approach against state-of-the-art recognition rates achieved by Support Vector Machines (SVMs) and a deep learning approach based on Generalised Discriminant Analysis (GerDA). We improve on the recognition rates of SVMs by 3%-14% on all three corpora, and on those of GerDA by 8%-13% on two of the three corpora.
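    The core of the approach above, an Extreme Learning Machine initialised as an auto-encoder (ELM-AE), fits in a few lines of numpy: random hidden weights, a closed-form ridge solution for the output weights that reconstruct the input, and reuse of those weights as a learned feature mapping. Shapes and the regularisation constant below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 120))   # 500 frames of 120-dim speech features

        n_hidden, C = 64, 1e-2                # hidden units, ridge regularisation
        W = rng.standard_normal((120, n_hidden))
        b = rng.standard_normal(n_hidden)

        H = np.tanh(X @ W + b)                # random feature mapping

        # Auto-encoder step: solve H @ beta ≈ X in closed form (ridge regression).
        beta = np.linalg.solve(H.T @ H + C * np.eye(n_hidden), H.T @ X)

        # beta.T serves as a data-adapted input weight matrix: the learned
        # representation replaces the purely random projection.
        features = np.tanh(X @ beta.T)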

    Pericenter passage of the gas cloud G2 in the Galactic Center

    We have further followed the evolution of the orbital and physical properties of G2, the object currently falling toward the massive black hole in the Galactic Center on a near-radial orbit. New, very sensitive data were taken in April 2013 with NACO and SINFONI at the ESO VLT. The 'head' of G2 continues to be stretched ever further along the orbit in position-velocity space. A fraction of its emission appears to be already emerging on the blue-shifted side of the orbit, past pericenter approach. Ionized gas in the head is now stretched over more than 15,000 Schwarzschild radii (R_S) around the pericenter of the orbit, at ~2000 R_S ≈ 20 light hours from the black hole. The pericenter passage of G2 will be a process stretching over a period of at least one year. The Brackett-γ luminosity of the head has been constant over the past 9 years, to within ±25%, as have the line ratios Brackett-γ/Paschen-α and Brackett-γ/Helium-I. We do not see any significant evidence for deviations of G2's dynamical evolution, due to hydrodynamical interactions with the hot gas around the black hole, from a ballistic orbit of an initially compact cloud with moderate velocity dispersion. The constant luminosity and the increasingly stretched appearance of the head of G2 in the position-velocity plane, without a central peak, are not consistent with several proposed models with continuous gas release from an initially bound zone around a faint star on the same orbit as G2. (Comment: 10 figures, submitted to ApJ.)
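    The quoted scales can be checked with back-of-the-envelope arithmetic, assuming a black hole mass of about 4.3 million solar masses (the value derived in the clockwise-disk paper below); the numbers are rough and for orientation only.

        # Schwarzschild radius of the Galactic Center black hole and the
        # ~2000 R_S ≈ 20 light hours scale quoted in the abstract.
        G, c = 6.674e-11, 2.998e8             # SI units
        M = 4.3e6 * 1.989e30                  # assumed black hole mass in kg

        R_S = 2 * G * M / c**2                # Schwarzschild radius, ~1.3e10 m
        light_hour = c * 3600.0

        print(f"R_S = {R_S:.2e} m")
        print(f"2000 R_S = {2000 * R_S / light_hour:.0f} light hours")  # ~24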

    Risk Based Maintenance (RBM): minimising user risks and operating costs with a risk-based method for the maintenance of operating and safety equipment (BSA)

    Road tunnels are equipped with various operating and safety systems (BSA) to ensure safe operation. Malfunctions or failures of these systems create a risk for road users, the operator and the environment. To keep the BSA functioning as reliably as possible, regular maintenance and repair work is carried out. This work comes at a cost but, in the ideal case, lowers the risk in return. To achieve the lowest possible overall risk, the available budget must be deployed optimally. This requires investigating how strongly the individual maintenance activities influence the overall risk, and how, on this basis, the optimal combination of activities (the maintenance strategy) can be found that yields the maximum risk reduction for a given budget. This document describes a methodology for developing risk-based maintenance for operating and safety systems. The methodology makes different maintenance strategies for individual installations comparable in terms of cost and risk, thereby laying the foundation for risk-based maintenance. It also makes it possible to determine, for a portfolio of many installations, an optimal overall maintenance strategy that either minimises the overall risk for a given total budget or requires the minimum budget for a prescribed overall risk. The following core points are addressed:
    - A procedure for identifying maintenance activities and assessing/estimating their optimisation potential with respect to their repetition frequency.
    - A methodology and modelling principles for the systematic and structured calculation of the risk and cost of maintenance activities.
    - A uniform comparison of the risk-reduction contributions of different maintenance activities across different operating and safety systems.
    - An optimisation procedure over risk and cost for a portfolio of operating and safety systems and their maintenance activities, from which an optimal overall strategy is derived (see the sketch after this abstract).
    In the present research project, a practice-oriented methodology for applying risk-based maintenance was developed. In six standardised phases, the relationship between the chosen maintenance strategy (activities, frequencies), its costs and the resulting risk of an installation can be determined. With this method, the relationship between maintenance expenditure and the resulting risk could be quantified for the first time in the BSA context. This standardisation makes it possible, on the one hand, to determine the minimum necessary cost for a given risk threshold (accepted risk); conversely, for a given total budget, the minimum achievable risk can be determined and the corresponding maintenance strategy identified. This is possible at the level of an individual installation or installation category, for all installations of one or more tunnels, and also at the level of the overall portfolio of all installations in Switzerland. The methodology was applied in pilot studies to adaptation lighting, ventilation, the traffic management system (VM-System), fire detection systems and emergency power systems.
    The results show that changing the maintenance strategy for the maintenance activities modelled within the research project can reduce both the total cost and the overall risk by up to roughly 20%. A further advantage of this standardised method is that the consistent linking of maintenance activities with their associated risks creates a uniform body of knowledge about the positive effects of the various activities. This will lead to a harmonisation and optimisation of the maintenance activities of the various operators and can make locally available expert knowledge usable in an optimal way for the overall portfolio of Swiss BSA. An assessment of the potential of a Switzerland-wide rollout indicates that annual maintenance costs could be reduced by around CHF 2.8 million without increasing the overall risk across all tunnels. Relative to the total maintenance costs in tunnels, this corresponds to a possible cost reduction of up to 12%.
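    To make the cost-risk trade-off above concrete, here is a minimal, hypothetical sketch of one possible formulation: each activity carries an annual cost and an estimated risk reduction, and a fixed budget is spent where the marginal risk reduction per franc is largest. All names and numbers are invented for illustration; the report's six-phase methodology and risk model are considerably richer.

        # Greedy budget allocation over maintenance activities (illustrative).
        from dataclasses import dataclass

        @dataclass
        class Activity:
            name: str
            cost: float            # CHF per year
            risk_reduction: float  # estimated annual risk reduction (arbitrary units)

        ACTIVITIES = [
            Activity("ventilation inspection", 40_000, 8.0),
            Activity("adaptation lighting check", 15_000, 2.5),
            Activity("fire detector test", 25_000, 6.0),
            Activity("emergency power load test", 30_000, 4.0),
        ]

        def plan(budget: float) -> list[Activity]:
            """Pick activities with the best risk reduction per CHF first."""
            chosen = []
            for a in sorted(ACTIVITIES, key=lambda a: a.risk_reduction / a.cost, reverse=True):
                if a.cost <= budget:
                    budget -= a.cost
                    chosen.append(a)
            return chosen

        for a in plan(budget=70_000.0):
            print(f"{a.name}: cost {a.cost:.0f} CHF, risk reduction {a.risk_reduction}")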

    Clockwise Stellar Disk and the Dark Mass in the Galactic Center

    Two disks of young stars have recently been discovered in the Galactic Center. The disks are rotating in the gravitational field of the central black hole at radii r = 0.1-0.3 pc and thus open a new opportunity to measure the central mass. We find that the observed motion of stars in the clockwise disk implies M = 4.3 ± 0.5 million solar masses for the fiducial distance to the Galactic Center R_0 = 8 kpc, and we derive the scaling of M with R_0. As a tool for our estimate we use orbital roulette, a recently developed method. The method reconstructs the three-dimensional orbits of the disk stars and checks the randomness of their orbital phases. We also estimate the three-dimensional positions and orbital eccentricities of the clockwise-disk stars. (Comment: 16 pages, 5 figures, ApJ, in press.)
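    A toy reading of the orbital roulette principle (our own illustration, not the paper's estimator): for each trial central mass, derive an orbital phase for every star and test the phases for uniformity; the correct mass is the one for which the phases look random.

        import numpy as np
        from scipy.stats import kstest

        rng = np.random.default_rng(1)
        true_phases = rng.uniform(0.0, 1.0, size=30)   # stand-in for derived phases

        def phases_for_trial_mass(m_ratio: float) -> np.ndarray:
            # Hypothetical distortion: a wrong trial mass skews the inferred phases.
            return true_phases ** m_ratio

        for m_ratio in (0.5, 1.0, 2.0):                # trial mass / true mass
            stat, p = kstest(phases_for_trial_mass(m_ratio), "uniform")
            print(f"M/M_true = {m_ratio:.1f}: KS p-value = {p:.3f}")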

    The Lazarus Effect: Healing Compromised Devices in the Internet of Small Things

    We live in a time when billions of IoT devices are being deployed and increasingly relied upon. This makes ensuring their availability and recoverability in case of a compromise a paramount goal. The large and rapidly growing number of deployed IoT devices makes manual recovery impractical, especially if the devices are dispersed over a large area. Thus, there is a need for a reliable and scalable remote recovery mechanism that works even after attackers have taken full control over devices, possibly misusing them or trying to render them useless. To tackle this problem, we present Lazarus, a system that enables the remote recovery of compromised IoT devices. With Lazarus, an IoT administrator can remotely control the code running on IoT devices unconditionally and within a guaranteed time bound. This makes recovery possible even in case of severe corruption of the devices' software stack. We impose only minimal hardware requirements, making Lazarus applicable even to low-end constrained off-the-shelf IoT devices. We isolate Lazarus's minimal recovery trusted computing base from untrusted software both in time and by using a trusted execution environment. The temporal isolation prevents secrets from being leaked through side channels to untrusted software. Inside the trusted execution environment, we place minimal functionality that constrains untrusted software at runtime. We implement Lazarus on an ARM Cortex-M33-based microcontroller in a full setup with an IoT hub, device provisioning and secure update functionality. Our prototype can recover compromised embedded OSs and bare-metal applications and prevents attackers from bricking devices, for example through flash wear-out. We demonstrate this using the example of FreeRTOS, which requires no modifications but only a single additional task. Our evaluation shows negligible runtime performance impact and moderate memory requirements. (Comment: In Proceedings of the 15th ACM Asia Conference on Computer and Communications Security (ASIA CCS '20).)
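    One plausible building block behind the guaranteed time bound is an authenticated watchdog: the device reboots into its trusted recovery code unless the hub periodically supplies a cryptographic deferral ticket. The sketch below illustrates that idea only; the message format, key handling and names are our assumptions, not Lazarus's actual protocol.

        # Hub signs (nonce || validity period); the device's trusted code verifies
        # the ticket and only then postpones the watchdog reset.
        import hmac, hashlib, os, struct

        SHARED_KEY = os.urandom(32)  # provisioned device/hub key (illustrative)

        def hub_issue_ticket(nonce: bytes, seconds: int) -> bytes:
            msg = nonce + struct.pack(">I", seconds)
            return msg + hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

        def device_verify_ticket(ticket: bytes, expected_nonce: bytes) -> int:
            msg, tag = ticket[:-32], ticket[-32:]
            expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected) or msg[:16] != expected_nonce:
                return 0                      # no valid deferral: reboot into recovery
            return struct.unpack(">I", msg[16:])[0]  # defer reset by N seconds

        nonce = os.urandom(16)
        ticket = hub_issue_ticket(nonce, 3600)
        print("defer seconds:", device_verify_ticket(ticket, nonce))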

    Annual proxy data from Lago Grande di Monticchio (southern Italy) between 76 and 112 ka: new chronological constraints and insights on abrupt climatic oscillations

    We present new annual sedimentological proxies and sub-annual element scanner data from the Lago Grande di Monticchio (MON) sediment record for the sequence 76–112 thousand years before present (ka). They are combined with the previously published decadal- to centennial-resolution pollen assemblage in order to provide a comprehensive reconstruction of six major abrupt stadial spells (MON 1–6) in the central Mediterranean during the early phase of the last glaciation. These climatic oscillations are defined by intervals of thicker varves and high Ti counts and coincide with episodes of forest depletion interpreted as Mediterranean stadial conditions (cold winters/dry summers). Our chronology, labelled MON-2014, has been updated for the study interval by tephrochronology and by repeated and more precise varve counts, and is independent of ice-core and speleothem chronologies. The high-resolution Monticchio data were then compared in detail with the Greenland ice-core δ¹⁸O record (NorthGRIP) and the northern Alps speleothem calcite δ¹⁸O data (NALPS). Based on visual inspection of major changes in the proxy data, MON 2–6 are suggested to correlate with Greenland stadials (GS) 25–20. MON 1 (Woillard event), the first and shortest cooling spell in the Mediterranean after a long phase of stable interglacial conditions, has no counterpart in the Greenland ice core, but coincides with the lowest isotope values at the end of the gradual decrease in ice δ¹⁸O in NorthGRIP during the second half of Greenland interstadial (GI) 25. MON 3 is the least pronounced cold spell and shows gradual transitions, whereas its NorthGRIP counterpart GS 24 is characterized by sharp changes in the isotope records. MON 2 and MON 4 are the longest and most pronounced oscillations in the MON sediments, in good agreement with their counterparts identified in the ice and speleothem records. The length of MON 4 (correlating with GS 22) supports the stadial duration proposed by the NALPS timescale and suggests a ca. 500-year longer duration than that calculated by the ice-core chronologies GICC05modelext and AICC2012. Absolute dating of the cold spells by the MON-2014 chronology shows good agreement among the MON-2014, GICC05modelext and NALPS timescales for the period between 112 and 100 ka. In contrast, the MON-2014 varve chronology dates the oscillations MON 4 to MON 6 (92–76 ka) as ca. 3500 years older than the most likely corresponding stadials GS 22 to GS 20 in the other chronologies.
