Development of a decision analytic model to support decision making and risk communication about thrombolytic treatment
Background
Individualised prediction of outcomes can support clinical and shared decision making. This paper describes the building of such a model to predict outcomes with and without intravenous thrombolysis treatment following ischaemic stroke.
Methods
A decision analytic model (DAM) was constructed to establish the likely balance of benefits and risks of treating acute ischaemic stroke with thrombolysis. The probability of independence (modified Rankin score, mRS ≤ 2), dependence (mRS 3 to 5) and death at three months post-stroke was based on a calibrated version of the Stroke-Thrombolytic Predictive Instrument using data from routinely treated stroke patients in the Safe Implementation of Treatments in Stroke (SITS-UK) registry. Predictions in untreated patients were validated using data from the Virtual International Stroke Trials Archive (VISTA). The probability of symptomatic intracerebral haemorrhage in treated patients was incorporated using a scoring model from the Safe Implementation of Thrombolysis in Stroke-Monitoring Study (SITS-MOST) data.
Results
The model predicts probabilities of haemorrhage, death, independence and dependence at 3 months, with and without thrombolysis, as a function of 13 patient characteristics. Calibration (and inclusion of additional predictors) of the Stroke-Thrombolytic Predictive Instrument (S-TPI) addressed issues of under- and over-prediction. Validation with VISTA data confirmed that assumptions about treatment effect were justified. The C-statistics for independence and death in treated patients in the DAM were 0.793 and 0.771, respectively, and 0.776 for independence in untreated patients from VISTA.
Conclusions
We have produced a DAM that provides an estimation of the likely benefits and risks of thrombolysis for individual patients, which has subsequently been embedded in a computerised decision aid to support better decision-making and informed consent.
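The abstract does not reproduce the underlying equations, but conceptually the DAM maps a patient's characteristics to outcome probabilities with and without treatment. The sketch below is a purely hypothetical illustration of that idea using a logistic model; the predictor names, coefficients and intercepts are made up and are not the published S-TPI values.

```python
import math

def predict_outcome_probability(features, coefficients, intercept):
    """Logistic model mapping patient characteristics to an outcome probability.

    Illustrative only: the real S-TPI / DAM coefficients are not reproduced here.
    """
    z = intercept + sum(coefficients[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient characteristics (the DAM uses 13 predictors; only a few
# illustrative ones are shown, with made-up coefficients).
patient = {"age": 72, "nihss": 14, "systolic_bp": 150, "diabetes": 1}

coef_untreated = {"age": -0.03, "nihss": -0.15, "systolic_bp": -0.005, "diabetes": -0.3}
coef_treated   = {"age": -0.03, "nihss": -0.12, "systolic_bp": -0.005, "diabetes": -0.3}

p_untreated = predict_outcome_probability(patient, coef_untreated, intercept=4.0)
p_treated = predict_outcome_probability(patient, coef_treated, intercept=4.2)

print(f"P(independent at 3 months | no thrombolysis) ~ {p_untreated:.2f}")
print(f"P(independent at 3 months | thrombolysis)    ~ {p_treated:.2f}")
print(f"Absolute difference ~ {p_treated - p_untreated:+.2f}")
```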
Geometry meets semantics for semi-supervised monocular depth estimation
Depth estimation from a single image represents a very exciting challenge in computer vision. While other image-based depth sensing techniques leverage the geometry between different viewpoints (e.g., stereo or structure from motion), the lack of these cues within a single image makes monocular depth estimation an ill-posed task. At inference time, state-of-the-art encoder-decoder architectures for monocular depth estimation rely on effective feature representations learned at training time. For unsupervised training of these models, geometry has been effectively exploited through image-warping losses computed from views acquired by a stereo rig or a moving camera. In this paper, we take a further step forward, showing that learning semantic information from images also effectively improves monocular depth estimation. In particular, by leveraging semantically labeled images together with unsupervised signals gained from geometry through an image-warping loss, we propose a deep learning approach aimed at joint semantic segmentation and depth estimation. Our overall learning framework is semi-supervised, as we deploy ground-truth data only in the semantic domain. At training time, our network learns a common feature representation for both tasks, and a novel cross-task loss function is proposed. The experimental findings show that jointly tackling depth prediction and semantic segmentation improves depth estimation accuracy. In particular, on the KITTI dataset our network outperforms state-of-the-art methods for monocular depth estimation. Comment: 16 pages, Accepted to ACCV 201
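As a rough illustration of how such a semi-supervised objective can be assembled, the sketch below combines an unsupervised image-warping (photometric) term with a supervised semantic cross-entropy term in PyTorch, assuming a stereo setup with predicted disparity and a segmentation head. The paper's actual cross-task loss and weighting scheme are not reproduced here; the function names and weights are placeholders.

```python
import torch
import torch.nn.functional as F

def warp_right_to_left(right_image, disparity):
    """Warp the right stereo image into the left view using predicted disparity.

    right_image: (B, 3, H, W); disparity: (B, 1, H, W) in normalized image units.
    """
    b, _, h, w = right_image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=right_image.device),
        torch.linspace(-1, 1, w, device=right_image.device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base_grid.clone()
    grid[..., 0] = grid[..., 0] - disparity.squeeze(1)  # shift x-coordinates
    return F.grid_sample(right_image, grid, align_corners=True)

def semi_supervised_loss(left, right, disparity, seg_logits, seg_labels,
                         w_photo=1.0, w_sem=0.1):
    """Photometric (unsupervised) + semantic (supervised) terms, in the spirit of
    the joint training described above; not the paper's exact formulation."""
    reconstruction = warp_right_to_left(right, disparity)
    photometric = (reconstruction - left).abs().mean()   # image-warping loss
    semantic = F.cross_entropy(seg_logits, seg_labels)   # supervised semantic term
    return w_photo * photometric + w_sem * semantic
```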
Graph Laplacian–Based Strategies and Convex Optimization via Primal-Dual Methods
This thesis focuses on the analysis of different variational approaches for solving inverse problems. In the first part, we examine the graph Laplacian operator within an ℓ2-ℓq framework, where q is less than 1. A key challenge in using this linear operator is its dependence on an initial reconstruction, which can be obtained through a general reconstruction method. However, we demonstrate that, under very weak assumptions on the chosen reconstruction method, the resulting strategy is both convergent and stable, achieving high-quality final reconstructions. Additionally, we analyze the fractional graph Laplacian operator, showing that the use of fractional powers can surpass the standard approach by providing more detailed final images.
The second part of this thesis considers a more general framework, where the optimization problem consists of the sum of a differentiable term and a non-smooth but convex term. The variable metric approach we propose results in a convergent method that fixes a priori the number of nested iterations required to compute inexact approximations of the proximal gradient step. We also introduce an iterated Tikhonov-based strategy, which accelerates convergence while maintaining high-quality reconstructions. In the context of image deblurring, the variable metric approach can be reinterpreted as a right preconditioning strategy. Therefore, the final section is devoted to the analysis of a left preconditioning approach.
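The splitting into a differentiable data term plus a non-smooth convex penalty is the classical proximal-gradient setting. As a minimal baseline sketch (using an ℓ1 penalty rather than the graph-Laplacian/ℓq, variable-metric and inexact-inner-iteration machinery developed in the thesis), the following shows the basic iteration such methods build on.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (closed form)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, b, lam, step, n_iter=200):
    """Basic proximal-gradient (ISTA-type) iteration for
        min_x  0.5 * ||A x - b||^2 + lam * ||x||_1,
    i.e. a differentiable term plus a non-smooth convex term.
    This is only the baseline scheme; the thesis studies more general variants.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)   # proximal (backward) step
    return x

# Tiny synthetic example
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[:5] = 1.0
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2                    # 1/L, L = Lipschitz constant
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
print("entries above threshold:", np.flatnonzero(np.abs(x_hat) > 0.1))
```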
A 2-pyridyl-2,1-borazaronaphthalene derivative as forefather of a new class of bidentate ligands: synthesis and application in luminescent Ir(III) complexes
Borazaro compounds (or azaborines) are aromatic compounds in which a C=C unit is replaced by an isoelectronic B-N unit. The possibility to generate chemical diversity has led to an increasing interest in azaborines, especially in the fields of biomedical research and optoelectronics. In particular, Dewar's synthesis of borazaronaphthalene is a common starting step to obtain different 1,2-azaborines via nucleophilic substitution on the boron atom. Here we present the synthesis of a novel 1,2-azaborine (i.e. 4-methyl-2-(pyridin-2-yl)-2,1-borazaronaphthalene, named FAAH) via functionalization of 2-chloro-4-methyl-2,1-borazaronaphthalene with a 2-pyridyl unit. FAAH can be used as an anionic bidentate ligand for transition metal complexes, since it can chelate the metal center with both the pyridine and the azaborine nitrogen atoms. FAAH was used for the synthesis of a series of neutral luminescent Ir(III) complexes (named FAV, FAB and FAR) of general formula [Ir(C^N)2(FAA)], where C^N indicates three different cyclometalating ligands: 2-phenylpyridine in the case of FAV; 2-(2,4-difluorophenyl)pyridine in the case of FAB; 2-methyl-3-phenylquinoxaline in the case of FAR. The reaction yields are quite low; however, it was always possible to characterize all the compounds by means of NMR spectroscopy. A complete photophysical and theoretical characterization is also presented. FAAH displays good chemical stability and a high photoluminescence quantum yield (up to 28% in solution). In contrast, the iridium complexes undergo degradation over time in solution. Despite this stability problem, it was possible to gain a good understanding of the photophysics of the three complexes: the emission of both FAV and FAB is observed around 500 nm and arises from a 3LC state centered on the azaborine ligand. In the case of FAR, the emitting state is basically 3MLCT/3LLCT in nature and the resulting broad and unstructured emission band is centered around 700 nm.
Evaluation of the Oculus Rift S tracking system in room scale virtual reality
In specific virtual reality applications that require high accuracy, it may be advisable to replace the built-in tracking system of the HMD with a third-party solution. The purpose of this research work is to evaluate the accuracy of the built-in tracking system of the Oculus Rift S Head Mounted Display (HMD) in room-scale environments against a motion capture system. In particular, an experimental evaluation of the Oculus Rift S inside-out tracking technology was carried out, comparing its performance with that of an outside-in tracking method based on the OptiTrack motion capture system. In order to track the pose of the HMD with the motion capture system, the Oculus Rift S was instrumented with passive retro-reflective markers and calibrated. Experiments were performed on a dataset of multiple paths, including simple motions as well as more complex paths. Each recorded path contained simultaneous changes in both position and orientation of the HMD. Our results indicate that in room-scale environments the average translation error of the Oculus Rift S tracking system is about 1.83 cm and the average rotation error is about 0.77°, roughly two orders of magnitude higher than the errors achievable with a motion capture system.
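The abstract does not detail the error metrics; a standard way to compute per-sample translation and rotation errors between HMD and motion-capture poses, once both are expressed in a common reference frame, would look like the sketch below. The numeric values and alignment step are hypothetical placeholders, not data from the study.

```python
import numpy as np

def translation_error(p_hmd, p_mocap):
    """Euclidean distance between HMD and motion-capture positions (same frame)."""
    return np.linalg.norm(p_hmd - p_mocap)

def rotation_error_deg(R_hmd, R_mocap):
    """Angle of the relative rotation between the two orientation estimates."""
    R_rel = R_hmd.T @ R_mocap
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Hypothetical single sample: poses assumed already aligned to a common frame
# (e.g. after a calibration step, as described in the paper).
p_hmd = np.array([1.000, 1.500, 0.300])
p_mocap = np.array([1.012, 1.491, 0.305])
R_hmd = np.eye(3)
theta = np.radians(0.8)
R_mocap = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])

print(f"translation error: {100 * translation_error(p_hmd, p_mocap):.2f} cm")
print(f"rotation error:    {rotation_error_deg(R_hmd, R_mocap):.2f} deg")
```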
Learning optical flow from still images
This paper deals with the scarcity of data for training optical flow networks, highlighting the limitations of existing sources such as labeled synthetic datasets or unlabeled real videos. Specifically, we introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture. Given an image, we use an off-the-shelf monocular depth estimation network to build a plausible point cloud for the observed scene. Then, we virtually move the camera in the reconstructed environment with known motion vectors and rotation angles, allowing us to synthesize both a novel view and the corresponding optical flow field connecting each pixel in the input image to the one in the new frame. When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data compared to the same models trained either on annotated synthetic datasets or unlabeled videos, and better specialization if combined with synthetic images. Comment: CVPR 2021. Project page with supplementary and code: https://mattpoggi.github.io/projects/cvpr2021aleotti
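The core synthesis step described above (back-project pixels with estimated depth, move a virtual camera, re-project to obtain flow) can be sketched in a few lines of NumPy. This minimal version ignores occlusions, collisions and filling of the novel view, which a real pipeline would need to handle; the toy intrinsics and motion below are placeholders.

```python
import numpy as np

def flow_from_depth(depth, K, R, t):
    """Dense optical flow from a depth map and a known virtual camera motion (R, t),
    obtained by back-projecting each pixel to 3D and re-projecting into the new view.

    depth: (H, W) metric depth; K: (3, 3) intrinsics; R: (3, 3); t: (3,).
    Returns flow of shape (H, W, 2). Occlusions and collisions are ignored here.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1).astype(np.float64)
    rays = np.linalg.inv(K) @ pix                 # back-project to unit-depth rays
    points = rays * depth.reshape(1, -1)          # 3D points in the original camera
    points_new = R @ points + t.reshape(3, 1)     # rigid transform to the virtual camera
    proj = K @ points_new
    uv_new = proj[:2] / proj[2:3]                 # perspective division
    return (uv_new - pix[:2]).T.reshape(h, w, 2)  # per-pixel displacement

# Toy example: fronto-parallel plane at 5 m and a small lateral camera translation
K = np.array([[500.0, 0.0, 160.0], [0.0, 500.0, 120.0], [0.0, 0.0, 1.0]])
depth = np.full((240, 320), 5.0)
flow = flow_from_depth(depth, K, np.eye(3), np.array([0.05, 0.0, 0.0]))
print("mean horizontal flow (px):", flow[..., 0].mean())
```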
