
    Evaluation of a CTA-based convolutional neural network for infarct volume prediction in anterior cerebral circulation ischaemic stroke

    Background: Computed tomography angiography (CTA) imaging is required in current guideline-based stroke diagnosis, and infarct core size is one factor guiding treatment decisions. We studied the efficacy of a convolutional neural network (CNN) in predicting final infarct volume from CTA and compared the results to commercially available CT perfusion (CTP)-based software (RAPID, iSchemaView). Methods: We retrospectively selected 83 consecutive stroke cases treated with thrombolytic therapy or receiving supportive care that presented to Helsinki University Hospital between January 2018 and July 2019. We compared CNN-derived ischaemic lesion volumes to final infarct volumes manually segmented from follow-up CT and to CTP-RAPID ischaemic core volumes. Results: An overall correlation of r = 0.83 was found between CNN outputs and final infarct volumes. The strongest correlation was found in the subgroup of patients that presented more than 9 h after symptom onset (r = 0.90). A good correlation was found between the CNN outputs and CTP-RAPID ischaemic core volumes (r = 0.89), and the CNN was able to classify patients for thrombolytic therapy or supportive care with 1.00 sensitivity and 0.94 specificity. Conclusions: A CTA-based CNN can provide good infarct core volume estimates, as observed in follow-up imaging studies. CNN-derived infarct volumes correlated well with CTP-RAPID ischaemic core volumes.

    On solving generalized convex MINLP problems using supporting hyperplane techniques

    Solution methods for convex mixed-integer nonlinear programming (MINLP) problems usually have proven convergence properties if the functions involved are differentiable and convex. For other classes of convex MINLP problems, fewer results have been given. Classical differential calculus can, though, be generalized to broader classes of functions than differentiable ones via subdifferentials and subgradients. In addition, functions more general than convex ones can be included in a convex problem if the functions involved have convex level sets, instead of being convex functions only. The notion of generalized convexity used in the title of this paper refers to such additional properties. Differentiability is generalized by using subgradients of Clarke's subdifferential; thus, all the functions in the problem are assumed to be locally Lipschitz continuous. The functions themselves are generalized by considering quasiconvex functions. Thus, instead of differentiable convex functions, nondifferentiable f°-quasiconvex functions can be included in the actual problem formulation, and a supporting hyperplane approach is given for the solution of the considered MINLP problem. Convergence to a global minimum is proved for the algorithm when minimizing an f°-pseudoconvex function subject to f°-pseudoconvex constraints. With some additional conditions, the proof is also valid for f°-quasiconvex functions, which sums up the properties of the method treated in the paper. The main contribution of this paper is the generalization of the Extended Supporting Hyperplane method of Eronen et al. (J Glob Optim 69(2):443–459, 2017) to also solve problems with an f°-pseudoconvex objective function.
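
    The supporting hyperplane idea at the core of the method can be illustrated with a small sketch: given an interior point where the constraint is strictly satisfied and an infeasible trial point, a line search locates a boundary point, and a subgradient there defines the cut. This is a generic illustration under textbook assumptions; the helper names and the toy constraint are illustrative, not from the paper, and for simplicity the subgradient is the exact gradient of a smooth constraint.

```python
import numpy as np

def supporting_hyperplane(g, subgrad_g, x_int, x_trial, tol=1e-8):
    """Bisect on the segment between an interior point (g(x_int) < 0)
    and an infeasible trial point (g(x_trial) > 0) to locate a point
    x_b on the constraint boundary, then return x_b together with a
    subgradient xi there; the resulting cut is  xi . (x - x_b) <= 0."""
    lo, hi = 0.0, 1.0            # parametrize x(t) = x_int + t * (x_trial - x_int)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(x_int + mid * (x_trial - x_int)) > 0.0:
            hi = mid             # x(mid) infeasible: boundary lies below mid
        else:
            lo = mid             # x(mid) feasible: boundary lies above mid
    x_b = x_int + lo * (x_trial - x_int)
    return x_b, subgrad_g(x_b)

# Toy constraint with a convex level set {x : g(x) <= 0}: the unit disc
g = lambda x: x @ x - 1.0
subgrad_g = lambda x: 2.0 * x
x_b, xi = supporting_hyperplane(g, subgrad_g, np.zeros(2), np.array([2.0, 0.0]))
# The hyperplane separates the trial point from the feasible region:
# xi . (x_trial - x_b) > 0 while xi . (x_int - x_b) < 0.
```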

    Using projected cutting planes in the extended cutting plane method

    In this paper we show that simple projections can improve the algorithmic performance of cutting plane-based optimization methods. Projected cutting planes can, for example, be used as alternatives to standard cutting planes or supporting hyperplanes in the extended cutting plane (ECP) method. In the paper we analyse the properties of such an algorithm and prove that it converges to a global optimum for smooth and nonsmooth convex mixed-integer nonlinear programming problems. Additionally, we show that by using projected cutting planes in the algorithm we are able to solve two old but very difficult facility layout problems (FLP), with previously unknown optimal solutions, to verified global optimality. These solution results are also given in the paper.
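
    A minimal sketch of the difference between a standard ECP cut and a projected cut, assuming the projection onto the feasible set is available in closed form (the toy constraint, helper names, and projection are illustrative, not from the paper):

```python
import numpy as np

def ecp_cut(g, subgrad_g, x):
    """Standard ECP cutting plane generated at x itself:
    g(x) + xi . (y - x) <= 0."""
    return g(x), subgrad_g(x), x

def projected_cut(g, subgrad_g, x, project):
    """Projected variant: generate the cut at the projection of x onto
    the feasible set, where it becomes a supporting hyperplane (g = 0)."""
    xp = project(x)
    return g(xp), subgrad_g(xp), xp

# Toy feasible set: the unit disc {x : x.x - 1 <= 0}
g = lambda x: x @ x - 1.0
subgrad_g = lambda x: 2.0 * x
project = lambda x: x / max(np.linalg.norm(x), 1.0)  # Euclidean projection onto the disc

x_k = np.array([2.0, 0.0])                 # infeasible trial point
g0, xi0, p0 = ecp_cut(g, subgrad_g, x_k)
g1, xi1, p1 = projected_cut(g, subgrad_g, x_k, project)
# g0 > 0: the standard cut is anchored at an infeasible point;
# g1 == 0: the projected cut supports the feasible region.
```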

    Optimized reference spectrum for rating the facade sound insulation

    Objectively determined single-number quantities (SNQs) describing the airborne sound insulation of a facade should correspond to the subjectively perceived annoyance of road traffic sounds transmitted through the facade. The reference spectra for the spectrum adaptation terms C and Ctr in standard ISO 717-1 (International Organization for Standardization, 2013) are not based on psycho-acoustic evidence. The aim of this study is to develop reference spectra that result in SNQs that explain well the subjective annoyance of road traffic sounds transmitted through a facade. Data from a psycho-acoustic experiment by Hongisto, Oliva, and Rekola [J. Acoust. Soc. Am. 144(2), 1100-1112 (2018)] were used. The data included annoyance ratings for road traffic sounds (five different spectrum alternatives) attenuated by the facade (twelve different sound insulation spectrum alternatives), rated by 43 participants. The reference spectrum for each road traffic spectrum was found using mathematical optimization. The performance of the acquired SNQs was estimated with nested cross-validation. The SNQs determined with the optimized reference spectra performed better than the existing SNQs for two of the five road traffic spectra and for an aggregate of the five road traffic sound types. The results can be exploited in the development of standardized SNQs.
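
    The kind of single-number quantity being optimized can be sketched as the level of an energetic sum of (reference spectrum minus insulation spectrum) over frequency bands, in the spirit of the ISO 717-1 spectrum adaptation terms. The spectra below are made-up illustrative values, not the paper's optimized spectra:

```python
import numpy as np

def snq(insulation_db, reference_db):
    """Single-number quantity: the level of the energetic sum of
    (reference - insulation) over frequency bands, in the spirit of
    the ISO 717-1 spectrum adaptation terms (illustrative sketch)."""
    return -10.0 * np.log10(np.sum(10.0 ** ((reference_db - insulation_db) / 10.0)))

reference = np.array([-10.0, -5.0, -3.0])  # hypothetical traffic-noise reference spectrum (dB)
insulation = np.array([25.0, 30.0, 35.0])  # hypothetical facade sound reduction values (dB)
value = snq(insulation, reference)         # higher = better insulation against this spectrum
```

    Optimizing the reference spectrum, as in the paper, amounts to choosing the `reference` levels so that the resulting SNQ ranks facades in the same order as subjective annoyance ratings.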

    Automatic CT Angiography Lesion Segmentation Compared to CT Perfusion in Ischemic Stroke Detection: a Feasibility Study

    In stroke imaging, CT angiography (CTA) is used for detecting arterial occlusions. These images could also provide information on the extent of ischemia. The study aim was to develop and evaluate a convolutional neural network (CNN)-based algorithm for detecting and segmenting acute ischemic lesions from CTA images of patients with suspected middle cerebral artery stroke. These results were compared to volumes reported by the widely used CT perfusion-based RAPID software (iSchemaView). A 42-layer-deep CNN was trained on 50 CTA volumes with manually delineated targets. The lower bound on predicted lesion size needed to reliably discern stroke from false positives was estimated. The severity of false positives and false negatives was reviewed visually to assess clinical applicability and to further guide the method development. The CNN model corresponded to the manual segmentations with voxel-wise sensitivity 0.54 (95% confidence interval: 0.44-0.63), precision 0.69 (0.60-0.76), and Sorensen-Dice coefficient 0.61 (0.52-0.67). Stroke/nonstroke differentiation accuracy of 0.88 (0.81-0.94) was achieved when considering only the predicted lesion size (i.e., regardless of location). By visual estimation, 46% of cases showed some false findings, such as the CNN highlighting chronic periventricular white matter changes or beam hardening artifacts, but in only 9% were the errors severe, translating to 0.91 accuracy. The CNN model had a moderately strong correlation to RAPID-reported T-max > 10 s volumes (Pearson's r = 0.76 (0.58-0.86)). The results suggest that detecting anterior circulation ischemic strokes from CTA using a CNN-based algorithm can be feasible when accompanied by physiological knowledge to rule out false positives.
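
    The voxel-wise evaluation metrics reported above can be computed from binary masks as follows; this is a generic sketch of the standard definitions, not the study's evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Voxel-wise sensitivity, precision and Sorensen-Dice coefficient
    for two binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)           # true positive voxels
    fp = np.sum(pred & ~target)          # false positive voxels
    fn = np.sum(~pred & target)          # false negative voxels
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, precision, dice

pred = np.array([1, 1, 1, 0, 0, 0])      # toy "CNN output" mask
target = np.array([1, 1, 0, 1, 0, 0])    # toy "manual delineation" mask
sens, prec, dice = segmentation_metrics(pred, target)
```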

    Oscar : Optimal subset cardinality regression using the L0-pseudonorm with applications to prognostic modelling of prostate cancer

    Author summary: Feature subset selection has become a crucial part of building biomedical models, due to the abundance of available predictors in many applications, yet uncertainty remains about their importance and generalization ability. Regularized regression methods have become popular approaches for tackling this challenge by balancing model goodness-of-fit against the increasing complexity of the model in terms of coefficients that deviate from zero. Regularization norms are pivotal in formulating model complexity, and currently the L1-norm (LASSO), the L2-norm (ridge regression) and their hybrid (elastic net) dominate the field. In this paper, we present a novel methodology based on the L0-pseudonorm, also known as best subset selection, which has largely been overlooked due to its challenging discrete nature. Our methodology makes use of a continuous transformation of the discrete optimization problem and provides effective solvers implemented in a user-friendly R package. We exemplify the use of the oscar package in the context of prostate cancer prognostic prediction using both real-world hospital registry and clinical cohort data. By benchmarking the methodology against existing regularization methods, we illustrate the advantages of the L0-pseudonorm for better clinical applicability and selection of grouped features, and demonstrate its applicability to high-dimensional transcriptomics datasets.

    In many real-world applications, such as those based on electronic health records, prognostic prediction of patient survival is based on heterogeneous sets of clinical laboratory measurements. To address the trade-off between the predictive accuracy of a prognostic model and the costs related to its clinical implementation, we propose an optimized L0-pseudonorm approach to learn sparse solutions in multivariable regression. Model sparsity is maintained by restricting the number of nonzero coefficients in the model with a cardinality constraint, which makes the optimization problem NP-hard. In addition, we generalize the cardinality constraint to grouped feature selection, which makes it possible to identify key sets of predictors that may be measured together in a kit in clinical practice. We demonstrate the operation of our cardinality constraint-based feature subset selection method, named OSCAR, in the context of prognostic prediction of prostate cancer patients, where it enables one to determine the key explanatory predictors at different levels of model sparsity. We further explore how model sparsity affects model accuracy and implementation cost. Lastly, we demonstrate the generalization of the presented methodology to high-dimensional transcriptomics data.
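
    The cardinality constraint itself admits a simple projection: keep the k largest-magnitude coefficients and zero the rest. One generic solver built on that projection is iterative hard thresholding, sketched below to illustrate the problem class; note this is not the OSCAR package's continuous-transformation approach:

```python
import numpy as np

def project_l0(beta, k):
    """Projection onto the cardinality constraint ||beta||_0 <= k:
    keep the k largest-magnitude coefficients and zero out the rest."""
    out = np.zeros_like(beta)
    keep = np.argsort(np.abs(beta))[-k:]
    out[keep] = beta[keep]
    return out

def iht(X, y, k, iters=500):
    """Iterative hard thresholding for cardinality-constrained least
    squares: a plain gradient step followed by the L0 projection."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # safe step from the spectral norm
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = project_l0(beta + step * (X.T @ (y - X @ beta)), k)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
true_beta = np.array([3.0, 0.0, 0.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_beta            # noiseless response with 2 active predictors
beta = iht(X, y, k=2)        # recovers the 2-sparse coefficient vector
```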

    Double Bundle Method for Nonsmooth DC Optimization

    The aim of this paper is to introduce a new proximal double bundle method for unconstrained nonsmooth DC optimization, where the objective function is presented as a difference of two convex (DC) functions. The novelty of our method is a new stopping procedure that guarantees Clarke stationarity of solutions by utilizing only the DC components of the objective function. This optimality condition is stronger than the criticality condition typically used in DC programming. Moreover, if a candidate solution is not Clarke stationary, the stopping procedure yields a descent direction. With this new stopping procedure we can avoid some drawbacks encountered when criticality is used. The finite convergence of the method to a Clarke stationary point is proved under mild assumptions. Finally, some encouraging numerical results are presented.
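
    The gap between criticality and stronger stationarity notions can be seen in one dimension, where subdifferentials are intervals. In the sketch below (a textbook illustration, not the paper's stopping procedure), f(x) = max(x, 0) - max(-x, 0) = x is critical at 0 because the component subdifferentials intersect, even though x = 0 is clearly not stationary; a test based on inclusion of the component subdifferentials correctly refuses to certify it:

```python
def critical(sub_f1, sub_f2):
    """In 1-D the subdifferentials are intervals (a, b); x is a critical
    point of f = f1 - f2 iff the component subdifferentials intersect."""
    (a1, b1), (a2, b2) = sub_f1, sub_f2
    return max(a1, a2) <= min(b1, b2)

def certifies_stationarity(sub_f1, sub_f2):
    """Stronger sufficient test used in DC bundle methods: every
    subgradient of f2 is also a subgradient of f1 (interval inclusion
    in 1-D); if it fails, a descent direction can still be sought."""
    (a1, b1), (a2, b2) = sub_f1, sub_f2
    return a1 <= a2 and b2 <= b1

# f(x) = max(x, 0) - max(-x, 0) = x at x = 0:
# d_f1(0) = [0, 1], d_f2(0) = [-1, 0]; they intersect at {0}, so the
# point is critical, yet f is linear with slope 1 and not stationary.
assert critical((0.0, 1.0), (-1.0, 0.0))
assert not certifies_stationarity((0.0, 1.0), (-1.0, 0.0))

# f(x) = |x| - 0 at its minimum x = 0: d_f1(0) = [-1, 1], d_f2(0) = {0}
assert certifies_stationarity((-1.0, 1.0), (0.0, 0.0))
```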

    A New Subgradient Based Method for Nonsmooth DC Programming

    The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration, search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and that the number of null steps between two serious steps is finite. The new method is tested on some academic test problems and compared with several other nonsmooth DC optimization solvers.
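
    The two-subgradient direction finding can be sketched as negating the minimum-norm point of the segment between the aggregate subgradient and the newest subgradient. This is a generic illustration of the aggregation idea (variable names are mine; the full method also handles linearization errors and step-size logic):

```python
import numpy as np

def aggregate_direction(xi_agg, xi_new):
    """Search direction from only two subgradients: negate the
    minimum-norm point of the segment between the aggregate
    subgradient and the subgradient from the current null step."""
    d = xi_agg - xi_new
    denom = d @ d
    if denom == 0.0:
        lam = 0.0
    else:
        # minimizer of ||lam*xi_agg + (1-lam)*xi_new||^2 over [0, 1]
        lam = float(np.clip((xi_new @ (xi_new - xi_agg)) / denom, 0.0, 1.0))
    v = lam * xi_agg + (1.0 - lam) * xi_new   # new aggregate subgradient
    return -v, v

# Conflicting subgradients, as at a kink of a nonsmooth function
xi_agg = np.array([1.0, 1.0])
xi_new = np.array([-1.0, 1.0])
direction, v = aggregate_direction(xi_agg, xi_new)
# v = (0, 1): the conflicting first components cancel, and -v has a
# negative inner product with both subgradients.
```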

    Aggregate subgradient method for nonsmooth DC optimization

    The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration, search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and that the number of null steps between two serious steps is finite. The new method is tested on some academic test problems and compared with several other nonsmooth DC optimization solvers.

    New bundle method for clusterwise linear regression utilizing support vector machines

    Clusterwise linear regression (CLR) aims to simultaneously partition a dataset into a given number of clusters and find regression coefficients for each cluster. In this paper, we propose a novel approach to solving the CLR problem. The main idea is to utilize the support vector machine (SVM) approach to model the CLR problem, using SVM regression to approximate each cluster. This new formulation of CLR is represented as an unconstrained nonsmooth optimization problem, where the objective function is a difference of convex (DC) functions. A method based on the combination of the incremental algorithm and the double bundle method for DC optimization is designed to solve it. Numerical experiments are made to validate the reliability of the new formulation and the efficiency of the proposed method. The results show that the SVM approach is beneficial in solving CLR problems, especially when the data contain outliers.
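
    A minimal baseline for the CLR problem itself (not the paper's SVM-based DC formulation) alternates between assigning each point to the regression line with the smallest squared residual and refitting each line by least squares. In the sketch below the initial labels are supplied explicitly, playing the role that the incremental algorithm plays in the paper:

```python
import numpy as np

def clusterwise_linreg(X, y, k, labels, iters=20):
    """Alternate between (1) refitting a least-squares line per cluster
    and (2) reassigning each point to the line with the smallest
    squared residual, starting from the given labels."""
    n = len(y)
    Xb = np.column_stack([X, np.ones(n)])        # add intercept column
    coefs = np.zeros((k, Xb.shape[1]))
    for _ in range(iters):
        for j in range(k):
            mask = labels == j
            if mask.sum() >= Xb.shape[1]:        # enough points to fit
                coefs[j], *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        resid = (Xb @ coefs.T - y[:, None]) ** 2 # n x k squared residuals
        labels = np.argmin(resid, axis=1)
    return coefs, labels

# Interleaved samples from two noiseless lines y = 2x and y = 1 - x
x = np.linspace(0.0, 1.0, 40)
y = np.where(np.arange(40) % 2 == 0, 2 * x, 1 - x)
init = np.arange(40) % 2                         # initial partition
coefs, labels = clusterwise_linreg(x[:, None], y, k=2, labels=init)
# coefs[0] ~ [2, 0] and coefs[1] ~ [-1, 1] (slope, intercept)
```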