    Prediction of overall survival for patients with metastatic castration-resistant prostate cancer: development of a prognostic model through a crowdsourced challenge with open clinical trial data

    Background: Improvements to prognostic models in metastatic castration-resistant prostate cancer have the potential to augment clinical trial design and guide treatment strategies. In partnership with Project Data Sphere, a not-for-profit initiative allowing data from cancer clinical trials to be shared broadly with researchers, we designed an open-data, crowdsourced DREAM (Dialogue for Reverse Engineering Assessments and Methods) challenge not only to identify a better prognostic model for predicting survival in patients with metastatic castration-resistant prostate cancer but also to engage a community of international data scientists in studying this disease.

    Methods: Data from the comparator arms of four phase 3 clinical trials in first-line metastatic castration-resistant prostate cancer were obtained from Project Data Sphere: 476 patients treated with docetaxel and prednisone in the ASCENT2 trial; 526 patients treated with docetaxel, prednisone, and placebo in the MAINSAIL trial; 598 patients treated with docetaxel, prednisone or prednisolone, and placebo in the VENICE trial; and 470 patients treated with docetaxel and placebo in the ENTHUSE 33 trial. Datasets comprising more than 150 clinical variables were curated centrally, including demographics, laboratory values, medical history, lesion sites, and previous treatments. Data from ASCENT2, MAINSAIL, and VENICE were released publicly as training data for predicting the outcome of interest, overall survival. Clinical data were also released for ENTHUSE 33, but the outcome variables (overall survival and event status) were withheld from challenge participants so that ENTHUSE 33 could serve as an independent validation set. Methods were evaluated using the integrated time-dependent area under the curve (iAUC). A reference model, based on eight clinical variables and a penalised Cox proportional-hazards model, was used as the baseline for comparing method performance. Further validation used data from a fifth trial, ENTHUSE M1, in which 266 patients with metastatic castration-resistant prostate cancer were treated with placebo alone.

    Findings: 50 independent methods were developed to predict overall survival and were evaluated through the DREAM challenge. The top performer was based on an ensemble of penalised Cox regression models (ePCR), which uniquely identified predictive interaction effects with immune biomarkers and markers of hepatic and renal function. Overall, ePCR outperformed all other methods (iAUC 0.791; Bayes factor >5) and surpassed the reference model (iAUC 0.743; Bayes factor >20). Both the ePCR and reference models stratified patients in the ENTHUSE 33 trial into high-risk and low-risk groups with significantly different overall survival (ePCR: hazard ratio 3.32, 95% CI 2.39-4.62).

    Interpretation: Novel prognostic factors were delineated, and the assessment of 50 methods developed by independent international teams establishes a benchmark for future method development. The results of this effort show that data sharing, when combined with a crowdsourced challenge, is a robust and powerful framework for developing new prognostic models in advanced prostate cancer. Peer reviewed.
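    The reference model above pairs a handful of clinical covariates with a penalised Cox proportional-hazards fit, scored by iAUC. Below is a minimal sketch of that style of pipeline using scikit-survival and synthetic stand-in data; the covariates, penalty settings, and evaluation grid are illustrative assumptions, not the challenge's actual configuration.

```python
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv

rng = np.random.default_rng(0)

# Synthetic stand-in for curated clinical covariates; the real challenge
# curated more than 150 variables across four trials.
n_train, n_test, n_features = 400, 200, 8
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))

def make_outcomes(X):
    # Survival times driven by two covariates, with independent censoring.
    risk = 0.7 * X[:, 0] - 0.5 * X[:, 1]
    time = rng.exponential(scale=np.exp(-risk) * 24.0)   # months
    censor = rng.exponential(scale=36.0, size=len(X))
    return Surv.from_arrays(event=time <= censor, time=np.minimum(time, censor))

y_train, y_test = make_outcomes(X_train), make_outcomes(X_test)

# Elastic-net-penalised Cox proportional-hazards model (settings illustrative).
model = CoxnetSurvivalAnalysis(l1_ratio=0.5, alpha_min_ratio=0.01)
model.fit(X_train, y_train)
risk_scores = model.predict(X_test)

# Integrated time-dependent AUC (iAUC) over a grid of evaluation times.
times = np.percentile([t for _, t in y_test], np.linspace(10, 80, 15))
auc_t, iauc = cumulative_dynamic_auc(y_train, y_test, risk_scores, times)
print(f"iAUC over {len(times)} evaluation times: {iauc:.3f}")
```

    The same scaffold extends naturally to an ensemble by averaging risk scores from several penalised fits, which is the spirit of the winning ePCR approach.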

    Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder

    Due to the limitations of imaging-system resolution and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the requirements of practical applications. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training-set preprocessing stage, the high- and low-resolution training sets are constructed using the high-frequency information of the training samples as the characterization, and the zero-phase component analysis (ZCA) whitening technique is then used to decorrelate the joint training set and reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted for unsupervised dictionary learning, improving the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms existing algorithms in terms of both subjective visual perception and objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
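    As a rough illustration of the two ingredients named above, the sketch below implements ZCA whitening of patch vectors and an autoencoder cost that augments the usual KL sparsity penalty with an extra regularization term on the hidden activations. It is a generic PyTorch rendering under assumed hyperparameters (patch size, rho, beta, lambda), not the authors' exact network.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def zca_whiten(X, eps=1e-5):
    """Decorrelate rows of X (n_samples x n_dims) with ZCA whitening."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA transform matrix
    return Xc @ W

class SparseAE(nn.Module):
    def __init__(self, n_in=64, n_hidden=128):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))     # hidden activations in (0, 1)
        return self.dec(h), h

def sparse_ae_loss(x, x_hat, h, rho=0.05, beta=3.0, lam=1e-4):
    """Reconstruction + KL sparsity penalty + an extra L1 term on the hidden
    activations, mimicking the strengthened sparseness constraint."""
    recon = F.mse_loss(x_hat, x)
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)   # mean activation per unit
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    return recon + beta * kl + lam * h.abs().sum()

# Toy usage: whitened 8x8 "high-frequency" patch vectors as training data.
patches = zca_whiten(np.random.randn(1024, 64)).astype(np.float32)
x = torch.from_numpy(patches)
model = SparseAE()
x_hat, h = model(x)
loss = sparse_ae_loss(x, x_hat, h)
loss.backward()
```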

    Intelligent Recognition Method of Micro Image Feature Recognition in Large Data Environment


    Target Tracking Algorithm Based on an Adaptive Feature and Particle Filter

    To improve the robustness of the traditional particle-filter-based tracking algorithm in complex scenes, and to tackle the drift caused by fast-moving targets, an improved particle-filter-based tracking algorithm is proposed. Firstly, the particles are divided into two groups that are scattered in two stages: the first group is made large enough that as many particles as possible cover the target, and the second group is then placed at the location of the first-stage particle most similar to the template, improving tracking accuracy. Secondly, to obtain a sparser solution, a novel minimization model for an Lp tracker is proposed. Finally, an adaptive multi-feature fusion strategy is proposed to handle more complex scenes. Experimental results demonstrate that the proposed algorithm improves both tracking robustness and tracking accuracy in complex scenes, and that the tracker achieves better accuracy and robustness than several state-of-the-art trackers.
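    The two-stage particle placement can be illustrated with a small NumPy sketch: a wide first scattering around the previous position, then a second batch concentrated at the first-stage particle most similar to the template. The similarity function here is a placeholder (negative sum of squared differences on an image patch); the paper's Lp minimization model and multi-feature fusion are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def similarity(frame, template, cx, cy):
    """Placeholder appearance score: negative SSD between the template and
    the patch centred at (cx, cy). The paper instead scores particles via an
    Lp-regularised sparse representation."""
    h, w = template.shape
    y0, x0 = int(cy) - h // 2, int(cx) - w // 2
    if y0 < 0 or x0 < 0 or y0 + h > frame.shape[0] or x0 + w > frame.shape[1]:
        return -np.inf
    patch = frame[y0:y0 + h, x0:x0 + w]
    return -np.sum((patch - template) ** 2)

def two_stage_placement(frame, template, prev_pos, n1=300, n2=100,
                        spread1=25.0, spread2=6.0):
    # Stage 1: scatter widely around the previous position so that even a
    # fast-moving target is covered by some particles.
    p1 = prev_pos + rng.normal(scale=spread1, size=(n1, 2))
    scores = np.array([similarity(frame, template, x, y) for x, y in p1])
    best = p1[np.argmax(scores)]
    # Stage 2: concentrate the second batch at the best stage-1 particle to
    # refine the state estimate and improve accuracy.
    p2 = best + rng.normal(scale=spread2, size=(n2, 2))
    return np.vstack([p1, p2]), best

frame = rng.normal(size=(240, 320))
template = frame[100:115, 150:165].copy()
particles, estimate = two_stage_placement(frame, template,
                                          prev_pos=np.array([157.0, 107.0]))
print("estimated target centre (x, y):", estimate)
```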

    Gradient-Guided and Multi-Scale Feature Network for Image Super-Resolution

    Recently, deep-learning-based image super-resolution methods have made remarkable progress. However, most of these methods do not fully exploit the structural features of the input image or the intermediate features from intermediate layers, which limits their ability to recover detail. To deal with this issue, we propose a gradient-guided and multi-scale feature network for image super-resolution (GFSR). Specifically, a dual-branch network is proposed, consisting of a trunk branch and a gradient branch, where the latter extracts a gradient feature map that serves as a structural prior to guide the image reconstruction process. Then, to absorb features from different layers, two effective multi-scale feature extraction modules, namely the residual of residual inception block (RRIB) and the residual of residual receptive field block (RRRFB), are proposed and embedded in different network layers. Within the RRIB and RRRFB structures, an adaptive weighted residual feature fusion block (RFFB) fuses the intermediate features to generate more beneficial representations, and an adaptive channel attention block (ACAB) effectively exploits the dependencies between channel features to further boost the feature representation capacity. Experimental results on several benchmark datasets demonstrate that our method achieves superior performance against state-of-the-art methods in terms of both subjective visual quality and objective quantitative metrics.
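    A skeletal PyTorch rendering of the dual-branch idea may help: a trunk branch processes image features while a gradient branch, fed with a Sobel gradient map, supplies a structural prior that is fused before upsampling. The RRIB, RRRFB, RFFB, and ACAB modules are replaced here with plain residual blocks and a 1x1 fusion convolution, so everything below is an illustrative simplification rather than the GFSR architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_map(x):
    """Per-channel Sobel gradient magnitude of a (N, C, H, W) batch."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    c = x.shape[1]
    gx = F.conv2d(x, kx.repeat(c, 1, 1, 1).to(x), padding=1, groups=c)
    gy = F.conv2d(x, ky.repeat(c, 1, 1, 1).to(x), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class DualBranchSR(nn.Module):
    """Trunk branch + gradient branch; gradient features act as a structural
    prior fused into the trunk before x2 upsampling."""
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.trunk = nn.Sequential(*[ResBlock(ch) for _ in range(4)])
        self.grad_head = nn.Conv2d(3, ch, 3, padding=1)
        self.grad_body = nn.Sequential(*[ResBlock(ch) for _ in range(2)])
        self.fuse = nn.Conv2d(2 * ch, ch, 1)        # stand-in for RFFB/ACAB
        self.up = nn.Sequential(nn.Conv2d(ch, 3 * 4, 3, padding=1),
                                nn.PixelShuffle(2))

    def forward(self, x):
        t = self.trunk(self.head(x))
        g = self.grad_body(self.grad_head(gradient_map(x)))
        return self.up(self.fuse(torch.cat([t, g], dim=1)))

sr = DualBranchSR()(torch.randn(1, 3, 32, 32))
print(sr.shape)   # torch.Size([1, 3, 64, 64])
```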

    Performance of a Low Energy Ion Source with Carbon Nanotube Electron Emitters under the Influence of Various Operating Gases

    Low energy ion measurements in the vicinity of a comet have provided us with important information about the planet's evolution. The calibration of instruments for thermal ions in the laboratory plays a crucial role when analysing data from in-situ measurements in space. A new low energy ion source based on carbon nanotube electron emitters was developed for calibrating the ion mode of mass spectrometers or other ion detectors. The electron field emission (FE) properties of carbon nanotubes (CNTs) for H2, He, Ar, O2, and CO2 gases were tested in the experiments. H2, He, Ar, and CO2 adsorbates could change the FE temporarily at pressures from 10−6 Pa to 10−4 Pa. The FE of the CNTs remains stable in Ar and increases in H2, but degrades in He, O2, and CO2. All gas adsorbates lead to temporary degradation after working for prolonged periods. The ion current of the ion source is measured by using a Faraday cup, and the sensitivity is derived from this measurement. The ion currents for the different gases were around 10 pA (corresponding to 200 ions/(cm3·s)) and an energy of ~28 eV could be observed.

    Super-Resolution and Wide-Field-of-View Imaging Based on Large-Angle Deflection with Risley Prisms

    A novel imaging method combining a single camera with Risley prisms is proposed to achieve super-resolution (SR) and field-of-view (FOV) extension. We develop a mathematical model that accounts for the imaging aberrations caused by large-angle beam deflection, and propose an SR reconstruction scheme that uses a beam backtracking method for image correction combined with a sub-pixel shift alignment technique. For the FOV extension, we provide a new scheme for the scanning position path of the Risley prisms and the number of image acquisitions, which improves acquisition efficiency and reduces the complexity of image stitching. Simulation results show that, for imaging systems whose resolution is limited by the pixel size, the method can increase the image resolution up to the diffraction limit of the optical system. Experimental results and analytical verification show that the image resolution can be improved by a factor of 2.5 and the FOV extended by a factor of 3 at a reconstruction factor of 5; the FOV extension is in general agreement with the simulation results. Risley prisms can thus provide a general, low-cost, and efficient approach to SR reconstruction, FOV expansion, foveated (central concave) imaging, and various scanning-imaging applications.
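    The paper's model treats large-angle deflection and its aberrations exactly via beam backtracking; as a point of reference, the classical first-order model of a Risley pair is compact: a thin prism of wedge angle alpha and refractive index n deflects the beam by roughly (n - 1) * alpha along its current rotation angle, and the two prisms' deflections add vectorially. The sketch below uses that small-angle approximation, with an assumed wedge angle and index.

```python
import numpy as np

def risley_pointing(theta1, theta2, alpha=np.deg2rad(10.0), n=1.517):
    """First-order pointing of a two-prism Risley scanner.

    theta1, theta2 : rotation angles of the two prisms (rad)
    alpha          : prism wedge angle (assumed 10 degrees here)
    n              : refractive index (assumed BK7-like)

    Returns the small-angle deflection vector (dx, dy) in radians. Valid only
    for small deflections; large-angle work requires exact ray tracing, as in
    the beam backtracking method described above.
    """
    delta = (n - 1.0) * alpha                      # deflection per prism
    dx = delta * (np.cos(theta1) + np.cos(theta2))
    dy = delta * (np.sin(theta1) + np.sin(theta2))
    return dx, dy

# Counter-rotating the prisms sweeps the deflection from 2*delta down to zero:
for rel in (0.0, np.pi / 2, np.pi):
    dx, dy = risley_pointing(0.0, rel)
    print(f"relative prism angle {np.rad2deg(rel):5.1f} deg -> "
          f"|deflection| = {np.hypot(dx, dy):.4f} rad")
```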

    GPSR: Gradient-Prior-Based Network for Image Super-Resolution

    Recent deep learning has shown great potential in super-resolution (SR) tasks. However, most deep-learning-based SR networks are optimized via pixel-level losses (e.g., L1 or L2/MSE), which force the networks to output the average of all possible predictions, leading to blurred details. In SR tasks with large scaling factors (e.g., ×4, ×8), this limitation is further aggravated. To alleviate it, we propose a Gradient-Prior-based Super-Resolution network (GPSR). Specifically, a detail-preserving Gradient Guidance Strategy is proposed to fully exploit the gradient prior, guiding the SR process from two aspects. On the one hand, an additional gradient branch is introduced into GPSR to provide critical structural information. On the other hand, a compact gradient-guided loss is proposed to strengthen the constraints on spatial structure and to prevent the blind restoration of high-frequency details. Moreover, two residual spatial attention adaptive aggregation modules are proposed and incorporated into the SR branch and the gradient branch, respectively, to fully exploit crucial intermediate features and enhance the feature representation ability. Comprehensive experimental results demonstrate that the proposed GPSR outperforms state-of-the-art methods regarding both subjective visual quality and objective quantitative metrics in SR tasks with large scaling factors (×4 and ×8).
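    A minimal sketch of a gradient-guided loss of the kind described: pixel-level L1 on the images plus an L1 term on Sobel gradient maps that penalises structural deviations. The weight lambda and the Sobel operator are generic assumptions, not GPSR's exact compact gradient-guided loss.

```python
import torch
import torch.nn.functional as F

def sobel_grad(img):
    """Per-channel Sobel gradient magnitude of a (N, C, H, W) batch."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    c = img.shape[1]
    gx = F.conv2d(img, kx.repeat(c, 1, 1, 1).to(img), padding=1, groups=c)
    gy = F.conv2d(img, kx.t().repeat(c, 1, 1, 1).to(img), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def gradient_guided_loss(sr, hr, lam=0.1):
    """Pixel L1 plus a gradient-map L1 that constrains spatial structure,
    discouraging the over-smoothed 'average' that pure pixel losses favour."""
    pixel = F.l1_loss(sr, hr)
    grad = F.l1_loss(sobel_grad(sr), sobel_grad(hr))
    return pixel + lam * grad

# Toy usage with random tensors standing in for the SR output and HR target.
sr = torch.rand(2, 3, 64, 64, requires_grad=True)
hr = torch.rand(2, 3, 64, 64)
loss = gradient_guided_loss(sr, hr)
loss.backward()
```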