
    3D Multimodal Brain Tumor Segmentation and Grading Scheme based on Machine, Deep, and Transfer Learning Approaches

    Glioma is one of the most common tumors of the brain. Detecting and grading glioma at an early stage is critical for increasing patients' survival rates. Computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems are essential tools that provide more accurate and systematic results to speed up clinicians' decision-making. In this paper, we introduce a method combining variations of machine, deep, and transfer learning approaches for effective brain tumor (i.e., glioma) segmentation and grading on the multimodal brain tumor segmentation (BraTS) 2020 dataset. We apply the popular and efficient 3D U-Net architecture for the brain tumor segmentation phase. For the tumor grading phase, we utilize 23 combinations of deep feature sets and machine learning/fine-tuned deep learning CNN models based on Xception, IncResNetv2, and EfficientNet, drawing on 4 feature sets and 6 learning models. The experimental results demonstrate that the proposed method achieves a 99.5% accuracy rate for slice-based tumor grading on the BraTS 2020 dataset, and the method performs competitively with similar recent works.
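    The grading pipeline pairs a pretrained CNN, used purely as a deep-feature extractor, with a classical learning model. Below is a minimal sketch of that idea, assuming TensorFlow/Keras with an Xception backbone and a scikit-learn SVM; the preprocessing, pooling, classifier choice, and placeholder data are illustrative, not the authors' exact configuration.

```python
# Sketch: deep-feature extraction + classical classifier for slice-based
# tumor grading. Backbone, pooling, and classifier are illustrative choices.
import numpy as np
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Pretrained backbone used as a frozen feature extractor (global avg pooling).
backbone = Xception(weights="imagenet", include_top=False, pooling="avg")

def extract_features(slices):
    """slices: (N, 299, 299, 3) array of tumor slices scaled to [0, 255]."""
    return backbone.predict(preprocess_input(slices.astype("float32")))

# Placeholder slices and grade labels (e.g., 0 = low grade, 1 = high grade).
X = np.random.rand(32, 299, 299, 3) * 255
y = np.random.randint(0, 2, 32)

feats = extract_features(X)
X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.25)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)          # one of the learning models
print("grading accuracy:", clf.score(X_te, y_te))
```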

    An agent- and GIS-based virtual city creator: A case study of Beijing, China

    Many agent-based integrated urban models have been developed to investigate urban issues, considering the dynamics and feedbacks in complex urban systems. The lack of disaggregate data, however, has become one of the main barriers to the application of these models, even though a number of data synthesis methods have been applied. To generate a complete dataset containing full disaggregate input data for model initialization, this paper develops a virtual city creator as a key component of an agent-based land-use and transport model, SelfSim. The creator is a set of disaggregate data synthesis methods, including a genetic algorithm (GA)-based population synthesizer, a transport facility synthesizer, an activity facility synthesizer, and a daily plan generator, which use household travel survey data as the main input. Finally, the capital of China, Beijing, was used as a case study: the creator was applied to generate an agent- and Geographic Information System (GIS)-based virtual Beijing containing individuals, households, transport and activity facilities, as well as their attributes and linkages.
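    The GA-based population synthesizer follows the usual pattern of evolving candidate synthetic populations until their aggregate statistics match survey/census targets. The following is a minimal NumPy sketch of that pattern, assuming a simple setup where integer replication counts over surveyed households are evolved to match marginal totals; all sizes, rates, and targets are illustrative, not SelfSim's actual procedure.

```python
# Sketch of a GA-based population synthesizer: evolve replication counts over
# survey households so that synthesized marginals match target totals.
import numpy as np

rng = np.random.default_rng(0)
n_households, n_attrs, pop_size = 50, 4, 40
survey = rng.integers(0, 2, (n_households, n_attrs))   # household attributes
target = np.array([30.0, 18.0, 22.0, 25.0])            # target marginal totals

def fitness(counts):
    # Negative absolute error between synthesized and target marginals.
    return -np.abs(counts @ survey - target).sum()

population = rng.integers(0, 5, (pop_size, n_households))
for generation in range(200):
    scores = np.array([fitness(ind) for ind in population])
    order = np.argsort(scores)[::-1]
    parents = population[order[: pop_size // 2]]            # selection
    cut = rng.integers(1, n_households, pop_size // 2)
    children = np.array([np.concatenate((a[:c], b[c:]))     # crossover
                         for a, b, c in zip(parents, parents[::-1], cut)])
    mask = rng.random(children.shape) < 0.02                # mutation
    children[mask] = rng.integers(0, 5, mask.sum())
    population = np.vstack((parents, children))

best = population[np.argmax([fitness(ind) for ind in population])]
print("remaining marginal error:", np.abs(best @ survey - target).sum())
```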

    CTVIS: Consistent Training for Online Video Instance Segmentation

    The discrimination of instance embeddings plays a vital role in associating instances across time for online video instance segmentation (VIS). Instance embedding learning is directly supervised by the contrastive loss computed upon contrastive items (CIs), which are sets of anchor/positive/negative embeddings. Recent online VIS methods leverage CIs sourced from one reference frame only, which we argue is insufficient for learning highly discriminative embeddings. Intuitively, a possible strategy for enhancing CIs is to replicate the inference phase during training. To this end, we propose a simple yet effective training strategy, Consistent Training for Online VIS (CTVIS), which aligns the training and inference pipelines in how CIs are built. Specifically, CTVIS constructs CIs by referring, as at inference, to the momentum-averaged embeddings and the memory-bank storage mechanism, and by adding noise to the relevant embeddings. This extension allows a reliable comparison between the embeddings of current instances and the stable representations of historical instances, conferring an advantage in modeling VIS challenges such as occlusion, re-identification, and deformation. Empirically, CTVIS outstrips the SOTA VIS models by up to +5.0 points on three VIS benchmarks: YTVIS19 (55.1% AP), YTVIS21 (50.1% AP), and OVIS (35.5% AP). Furthermore, we find that pseudo-videos transformed from images can train robust models surpassing fully supervised ones.

    Comment: Accepted by ICCV 2023. The code is available at https://github.com/KainingYing/CTVI
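    To make the CI construction concrete, here is a minimal PyTorch sketch of the mechanism described above: a memory bank of momentum-averaged instance embeddings serves as the stable historical representations, noise is injected to mimic inference-time drift, and an InfoNCE-style contrastive loss is computed over one anchor/positive/negative set. Dimensions, the momentum value, and the noise scale are illustrative; for the released implementation see the linked repository.

```python
# Sketch of CI construction against a momentum-averaged memory bank.
import torch
import torch.nn.functional as F

dim, momentum, noise_std = 256, 0.9, 0.05

def update_bank(bank, inst_id, emb):
    # Momentum-averaged embedding, as used at inference for association.
    bank[inst_id] = momentum * bank[inst_id] + (1 - momentum) * emb
    return bank

def contrastive_loss(anchor, positive, negatives, tau=0.07):
    # InfoNCE over one anchor, one positive, and a stack of negatives.
    logits = torch.cat([positive.unsqueeze(0), negatives]) @ anchor / tau
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))

bank = {i: F.normalize(torch.randn(dim), dim=0) for i in range(5)}
anchor = F.normalize(torch.randn(dim), dim=0)       # current-frame embedding
positive = bank[0] + noise_std * torch.randn(dim)   # same instance, noised
negatives = torch.stack([bank[i] for i in range(1, 5)])

print("loss:", contrastive_loss(anchor, positive, negatives).item())
bank = update_bank(bank, 0, anchor)  # refresh the bank after association
```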

    The serum soluble scavenger with 5 domains levels: A novel biomarker for individuals with heart failure

    Background: We aimed to explore the relationship between serum Soluble Scavenger with 5 Domains (SSC5D) levels and heart failure (HF). Methods and Results: We retrospectively enrolled 276 patients, diagnosed with HF or with normal cardiac function, hospitalized in Shanghai General Hospital between September 2020 and December 2021. Previously published RNA sequencing data were re-analyzed to confirm the expression profile of SSC5D in failing and non-failing human and mouse heart tissues. A quantitative real-time polymerase chain reaction assay was used to quantify Ssc5d mRNA levels in murine heart tissue after myocardial infarction and transverse aortic constriction surgery. To characterize the HF-induced secreted protein profile, 1,755 secreted proteins were investigated using human dilated cardiomyopathy RNA-seq data; the results indicated that SSC5D levels were significantly elevated in failing hearts compared with non-failing ones. Using single-cell RNA sequencing data, we demonstrated that Ssc5d is predominantly expressed in cardiac fibroblasts. In a murine model of myocardial infarction or transverse aortic constriction, Ssc5d mRNA levels were markedly increased compared with those in the sham group. Similarly, serum SSC5D levels were considerably elevated in the HF group compared with the control group [15,789.35 (10,745.32–23,110.65) pg/mL, 95% CI (16,263.01–19,655.43) vs. 8,938.72 (6,154.97–12,778.81) pg/mL, 95% CI (9,337.50–11,142.93); p < 0.0001]. Moreover, serum SSC5D levels were positively correlated with N-terminal pro-B-type natriuretic peptide (R = 0.4, p = 7.9e-12) and inversely correlated with left ventricular ejection fraction (R = −0.46, p = 9.8e-16). Conclusion: SSC5D was a specific response to HF; serum SSC5D may serve as a novel biomarker and therapeutic target for patients with HF.
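    The serum comparisons above are the standard pattern for a skewed biomarker: a non-parametric two-group test plus correlation against established markers. The following is a minimal SciPy sketch of that style of analysis; the data are simulated placeholders, not the study's measurements.

```python
# Sketch: compare serum SSC5D between HF and control groups, then correlate
# it with NT-proBNP and LVEF. All values below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ssc5d_hf = rng.lognormal(mean=9.7, sigma=0.4, size=150)    # HF group, pg/mL
ssc5d_ctrl = rng.lognormal(mean=9.1, sigma=0.4, size=126)  # controls, pg/mL

# Non-parametric two-group comparison suits skewed biomarker distributions.
u_stat, p_value = stats.mannwhitneyu(ssc5d_hf, ssc5d_ctrl)
print(f"Mann-Whitney U p = {p_value:.2e}")

# Correlations: positive with NT-proBNP, negative with LVEF (simulated).
ntprobnp = ssc5d_hf * rng.lognormal(0.0, 0.5, size=150)
lvef = 70 - 0.001 * ssc5d_hf + rng.normal(0, 5, size=150)
print("r vs NT-proBNP:", stats.pearsonr(ssc5d_hf, ntprobnp)[0])
print("r vs LVEF:", stats.pearsonr(ssc5d_hf, lvef)[0])
```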

    Ultrastrong conductive in situ composite composed of nanodiamond incoherently embedded in disordered multilayer graphene

    Traditional ceramics or metals cannot simultaneously achieve ultrahigh strength and high electrical conductivity. Elemental carbon can form a variety of allotropes with entirely different physical properties, providing versatility for tuning mechanical and electrical properties over a wide range. Here, by precisely controlling the extent of transformation of amorphous carbon into diamond within a narrow temperature–pressure range, we synthesize an in situ composite consisting of ultrafine nanodiamond homogeneously dispersed in disordered multilayer graphene with incoherent interfaces. The composite demonstrates a Knoop hardness of up to ~53 GPa, a compressive strength of up to ~54 GPa, and an electrical conductivity of 670–1,240 S m⁻¹ at room temperature. With atomically resolved interface structures and molecular dynamics simulations, we reveal that amorphous carbon transforms into diamond through a nucleation process involving a local rearrangement of carbon atoms and diffusion-driven growth, different from the transformation of graphite into diamond. The complex bonding between the diamond-like and graphite-like components greatly improves the mechanical properties of the composite. This superhard, ultrastrong, conductive elemental carbon composite has comprehensive properties superior to those of known conductive ceramics and C/C composites. The intermediate hybridization state at the interfaces also provides insights into the amorphous-to-crystalline phase transition of carbon.

    A Methodology for Evaluating Image Segmentation Algorithms

    The purpose of this paper is to describe a framework for evaluating image segmentation algorithms. Image segmentation consists of object recognition and delineation. For evaluating segmentation methods, three factors need to be considered for both recognition and delineation: precision (reproducibility), accuracy (agreement with truth), and efficiency (time taken). To assess precision, we need to choose a figure of merit (FOM), repeat segmentation considering all sources of variation, and determine variations in the FOM via statistical analysis. It is usually impossible to establish true segmentation. Hence, to assess accuracy, we need to choose a surrogate of true segmentation and proceed as for precision. To assess efficiency, both the computational time and the user time required for algorithm and operator training and for algorithm execution should be measured and analyzed. Precision, accuracy, and efficiency are interdependent: it is difficult to improve one factor without affecting the others. Segmentation methods must be compared based on all three factors, with the weight given to each factor depending on the application.
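    As a concrete illustration of the precision/accuracy factors, here is a minimal NumPy sketch using the Dice overlap as the FOM, computed against a surrogate of true segmentation. The Dice choice, binary masks, and perturbation model are illustrative; the framework itself leaves the FOM open.

```python
# Sketch: Dice overlap as a figure of merit (FOM), evaluated against a
# surrogate truth, with precision estimated from repeated segmentations.
import numpy as np

def dice(seg, truth):
    """Dice overlap between two binary masks; 1.0 means perfect agreement."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
surrogate_truth = rng.random((64, 64)) > 0.5   # stand-in for expert delineation

# Precision: repeat segmentation under varied conditions (here, simulated by
# flipping ~3% of pixels) and report the spread of the FOM; accuracy is the
# agreement of each result with the surrogate truth.
repeats = [surrogate_truth ^ (rng.random((64, 64)) > 0.97) for _ in range(10)]
scores = [dice(s, surrogate_truth) for s in repeats]
print(f"Dice mean={np.mean(scores):.3f}, sd={np.std(scores):.3f}")
```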

    CAVASS: A Computer-Assisted Visualization and Analysis Software System

    The Medical Image Processing Group at the University of Pennsylvania has been developing (and distributing with source code) medical image analysis and visualization software systems for a long period of time. Our most recent system, 3DVIEWNIX, was first released in 1993. Since that time, a number of significant advancements have taken place with regard to computer platforms and operating systems, networking capability, the rise of parallel processing standards, and the development of open-source toolkits. CAVASS, developed by our group, is the next generation of 3DVIEWNIX. CAVASS will be freely available and open source, and it is integrated with toolkits such as the Insight Toolkit and the Visualization Toolkit. CAVASS runs on Windows, Unix, Linux, and Mac from a single code base. Rather than requiring expensive multiprocessor systems, it seamlessly provides parallel processing of the more time-consuming algorithms via inexpensive clusters of workstations. Most importantly, CAVASS is directed at the visualization, processing, and analysis of three-dimensional and higher-dimensional medical imagery, so support for Digital Imaging and Communications in Medicine (DICOM) data and the efficient implementation of algorithms are given paramount importance.