Patients' perspective on emergency treatment of ophthalmologic diseases during the first phase of SARS-CoV2 pandemic in a tertiary referral center in Germany - the COVID-DETOUR questionnaire study
Background During the first wave of the COVID-19 pandemic, the need for treatment of urgent ophthalmological diseases and the possible risk of a SARS-CoV-2 infection had to be weighed against each other. In this questionnaire study, we aimed to analyze potential barriers and patients' health beliefs during and after the lockdown in early 2020 in a tertiary referral center in Kiel, Germany. Results Ninety-three patients were included, 43 in subgroup A (before April 20th) and 50 in subgroup B (April 20th or later). Retinal disorders were the most common causes for admission (approximately 60%). Only 8 patients (8.6%) experienced a delay between their decision to visit a doctor and the actual examination. Every fourth patient was afraid of a COVID-19 infection and expected a higher likelihood of infection at the hospital. Patients with comorbidities tended to be more likely to be afraid of an infection (correlation coefficient 0.183, p = 0.0785) and were significantly more likely to be concerned about problems with organizing follow-up care (corr. coefficient 0.222, p = 0.0328). Higher age was negatively correlated with fear of infection (corr. coefficient -0.218, p = 0.034). Conclusion In this questionnaire study, only a minority of patients indicated a delay in treatment, regardless of whether symptoms occurred before April 20th, 2020, or later. While patients with comorbidities were more concerned about infection and problems during follow-up care, patients of higher age - who have a higher mortality - were less afraid. Protection of high-risk groups should be prioritized during the SARS-CoV-2 pandemic. Trial registration The study was registered as DRKS00021630 at the DRKS (Deutsches Register Klinischer Studien) before the study was conducted, on May 5th, 2020.
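The associations above are reported as correlation coefficients with p-values. As a minimal illustrative sketch of how such a coefficient is computed (Pearson form, pure Python; the abstract does not state which correlation measure the study used, so this is an assumption for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: comorbidity indicator vs. a fear-of-infection score
r = pearson_r([0, 1, 0, 1, 1, 0], [2, 3, 1, 4, 2, 2])  # weak positive association
```

A coefficient near 0.18-0.22, as reported, indicates only a weak association, which is why the p-values sit near the significance threshold.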
A Survey on Deep Learning-based Architectures for Semantic Segmentation on 2D images
Semantic segmentation is the pixel-wise labelling of an image. Since the problem is defined at the pixel level, determining image-level class labels alone is not sufficient; the labels must be localised at the original image pixel resolution. Boosted by the extraordinary ability of convolutional neural networks (CNNs) to create semantic, high-level, hierarchical image features, a large number of deep learning-based 2D semantic segmentation approaches have been proposed within the last decade. In this survey, we mainly focus on the recent scientific developments in semantic segmentation, specifically on deep learning-based methods using 2D images. We start with an analysis of the public image sets and leaderboards for 2D semantic segmentation, with an overview of the techniques employed in performance evaluation. In examining the evolution of the field, we chronologically categorise the approaches into three main periods, namely the pre- and early deep learning era, the fully convolutional era, and the post-FCN era. We technically analyse the solutions put forward in terms of solving the fundamental problems of the field, such as fine-grained localisation and scale invariance. Before drawing our conclusions, we present a table of methods from all mentioned eras, with a brief summary of each approach that explains its contribution to the field. We conclude the survey by discussing the current challenges of the field and to what extent they have been solved.
Comment: Updated with new studies.
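A standard performance-evaluation technique on the leaderboards this survey covers is mean intersection-over-union (mIoU). A minimal sketch (NumPy assumed; leaderboard implementations typically also handle ignore labels):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union, averaged over classes that appear
    in either the prediction or the ground-truth label map."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both maps: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0], [1, 2]])
target = np.array([[0, 1], [1, 2]])
score = mean_iou(pred, target, num_classes=3)
```

Because mIoU averages per class rather than per pixel, it penalises a model that ignores small or rare classes, which is exactly the fine-grained localisation problem the survey highlights.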
Variance Loss in Variational Autoencoders
In this article, we highlight what appears to be a major issue of Variational Autoencoders, evinced from extensive experimentation with different network architectures and datasets: the variance of generated data is significantly lower than that of the training data. Since generative models are usually evaluated with metrics such as the Frechet Inception Distance (FID), which compare the distributions of (features of) real versus generated images, the variance loss typically results in degraded scores. This problem is particularly relevant in a two-stage setting, where a second VAE is used to sample in the latent space of the first VAE. The reduced variance creates a mismatch between the actual distribution of latent variables and those generated by the second VAE, which hinders the beneficial effects of the second stage. By renormalizing the output of the second VAE towards the expected spherical normal distribution, we obtain a sudden burst in the quality of generated samples, as also reflected in the FID.
Comment: Article accepted at the Sixth International Conference on Machine Learning, Optimization, and Data Science, July 19-23, 2020, Certosa di Pontignano, Siena, Italy.
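The renormalization step described above can be sketched as standardizing the second-stage samples toward the expected spherical N(0, I) prior (a minimal NumPy sketch; the paper's exact procedure may differ):

```python
import numpy as np

def renormalize(z, eps=1e-8):
    """Shift and rescale a batch of latent samples so each dimension has
    zero mean and unit variance, matching the spherical normal prior."""
    return (z - z.mean(axis=0)) / (z.std(axis=0) + eps)

rng = np.random.default_rng(0)
# Simulated variance-deflated latents from a hypothetical second-stage VAE
z = rng.normal(loc=0.3, scale=0.6, size=(1024, 16))
z_fixed = renormalize(z)
```

Decoding `z_fixed` instead of `z` feeds the first-stage decoder samples whose statistics match the prior it was trained against, which is the source of the reported FID improvement.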
PredNet and Predictive Coding: A Critical Review
PredNet, a deep predictive coding network developed by Lotter et al.,
combines a biologically inspired architecture based on the propagation of
prediction error with self-supervised representation learning in video. While the architecture has drawn considerable attention and various extensions of the model exist, it has so far lacked a critical analysis. We fill this gap by
evaluating PredNet both as an implementation of the predictive coding theory
and as a self-supervised video prediction model using a challenging video
action classification dataset. We design an extended model to test if
conditioning future frame predictions on the action class of the video improves
the model performance. We show that PredNet does not yet completely follow the
principles of predictive coding. The proposed top-down conditioning leads to a
performance gain on synthetic data, but does not scale up to the more complex
real-world action classification dataset. Our analysis is aimed at guiding future research on similar architectures based on the predictive coding theory.
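Top-down conditioning on the action class can be realised in several ways; one common pattern is to broadcast a one-hot class vector over the spatial grid and append it as extra input channels (a hedged sketch of that pattern, not necessarily the exact mechanism of the extended model):

```python
import numpy as np

def condition_on_class(features, class_id, num_classes):
    """Append a one-hot action-class map as extra channels of an
    (H, W, C) feature tensor, giving every spatial location access to
    the top-down class signal."""
    h, w, c = features.shape
    onehot = np.zeros(num_classes, dtype=features.dtype)
    onehot[class_id] = 1.0
    class_map = np.broadcast_to(onehot, (h, w, num_classes))
    return np.concatenate([features, class_map], axis=-1)

frame = np.zeros((64, 64, 3), dtype=np.float32)
conditioned = condition_on_class(frame, class_id=2, num_classes=10)
```

The appeal of this scheme is its simplicity, but as the review notes, such conditioning helped only on synthetic data and did not scale to the real-world action dataset.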
PathologyGAN: Learning deep representations of cancer tissue
We apply Generative Adversarial Networks (GANs) to the domain of digital
pathology. Current machine learning research for digital pathology focuses on
diagnosis, but we suggest a different approach and advocate that generative
models could drive forward the understanding of morphological characteristics
of cancer tissue. In this paper, we develop a framework which allows GANs to
capture key tissue features and uses these characteristics to give structure to
its latent space. To this end, we trained our model on 249K H&E breast cancer
tissue images, extracted from 576 TMA images of patients from the Netherlands
Cancer Institute (NKI) and Vancouver General Hospital (VGH) cohorts. We show
that our model generates high quality images, with a Frechet Inception Distance
(FID) of 16.65. We further assess image quality with respect to cancer tissue characteristics (e.g. counts of cancer cells, lymphocytes, or stromal cells), using this quantitative information to calculate the FID and obtaining a consistent score of 9.86. Additionally, the latent space of our model shows an
interpretable structure and allows semantic vector operations that translate
into tissue feature transformations. Furthermore, ratings from two expert pathologists found no significant difference between our generated tissue images and real ones. The code, generated images, and pretrained model are available at https://github.com/AdalbertoCq/Pathology-GAN
Comment: MIDL 2020 final version.
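Semantic vector operations of the kind described amount to simple arithmetic in latent space; a minimal sketch with linear interpolation between two hypothetical latent codes (NumPy assumed; decoding each point with the trained generator would show the gradual tissue-feature transformation):

```python
import numpy as np

def latent_walk(z0, z1, steps=5):
    """Linear interpolation between two latent codes; with a structured
    latent space, decoding each intermediate code yields a smooth
    transition between the two tissue appearances."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z0 + t * z1 for t in ts])

z0, z1 = np.zeros(128), np.ones(128)  # hypothetical latent codes
path = latent_walk(z0, z1, steps=5)   # shape (5, 128)
```

Analogous arithmetic (e.g. adding a difference vector between two codes to a third) is what the abstract refers to as semantic vector operations.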
3D Object Reconstruction from Imperfect Depth Data Using Extended YOLOv3 Network
State-of-the-art intelligent versatile applications call for full 3D, depth-based streams, especially in scenarios of intelligent remote control and communications, where virtual and augmented reality will soon become outdated and are forecast to be replaced by point cloud streams providing explorable 3D environments of communication and industrial data. One of the most novel approaches employed in modern object reconstruction methods is to use a priori knowledge of the objects being reconstructed. Our approach is different, as we strive to reconstruct a 3D object within much more difficult scenarios of limited data availability. The data stream is often limited by insufficient depth camera coverage; as a result, objects are occluded and data is lost. Our proposed hybrid artificial neural network modifications improved the reconstruction results by 8.53, allowing much more precise filling of occluded object sides and reducing noise during the process. Furthermore, the addition of object segmentation masks and individual object instance classification is a leap forward towards general-purpose scene reconstruction, as opposed to a single-object reconstruction task, owing to the ability to mask out overlapping object instances and to use only the masked object area in the reconstruction process.
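Using only the masked object area, as described above, can be sketched as zeroing depth values outside one instance's segmentation mask before reconstruction (a minimal NumPy sketch under assumed array shapes; the paper's pipeline is more involved):

```python
import numpy as np

def mask_depth(depth, instance_mask, fill=0.0):
    """Keep depth values only inside a single object's segmentation mask,
    so the reconstruction network sees one instance without overlapping
    neighbours."""
    return np.where(instance_mask, depth, fill)

depth = np.array([[1.2, 0.8],
                  [0.5, 2.0]])
mask  = np.array([[True, False],
                  [False, True]])
obj_depth = mask_depth(depth, mask)
```

Masking each instance separately is what turns a single-object reconstruction network into a building block for whole-scene reconstruction.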