
    Learning and comparing functional connectomes across subjects

    Functional connectomes capture brain interactions via synchronized fluctuations in the functional magnetic resonance imaging signal. If measured during rest, they map the intrinsic functional architecture of the brain; with task-driven experiments, they represent integration mechanisms between specialized brain areas. Analyzing their variability across subjects and conditions can reveal markers of brain pathologies and mechanisms underlying cognition. Methods of estimating functional connectomes from the imaging signal have undergone rapid developments, and the literature is full of diverse strategies for comparing them. This review aims to clarify the links across functional-connectivity methods as well as to expose the different steps needed to perform a group study of functional connectomes.
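As a hedged sketch of the first methodological step such a review covers, a connectome can be estimated as a region-by-region Pearson correlation matrix, with a Fisher z-transform applied before comparing subjects. The random data, region count, and distance measure below are illustrative placeholders, not taken from the review.

```python
import numpy as np

def connectome(timeseries):
    """Pearson correlation matrix across regions.

    timeseries: array of shape (n_timepoints, n_regions).
    """
    return np.corrcoef(timeseries, rowvar=False)

def fisher_z(r):
    """Fisher z-transform, commonly applied before group-level statistics."""
    return np.arctanh(np.clip(r, -0.999999, 0.999999))

rng = np.random.default_rng(0)
ts_a = rng.standard_normal((200, 5))   # subject A: 200 volumes, 5 regions
ts_b = rng.standard_normal((200, 5))   # subject B

# Compare subjects on the vectorized upper triangle of the z-transformed matrices
iu = np.triu_indices(5, k=1)
za = fisher_z(connectome(ts_a))[iu]
zb = fisher_z(connectome(ts_b))[iu]
distance = np.linalg.norm(za - zb)     # simple Euclidean distance between connectomes
```

In practice the distance (or a statistical test on the z-values) would be computed over many subjects; this only shows the per-pair mechanics.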

    First-order statistical speckle models improve robustness and reproducibility of contrast-enhanced ultrasound perfusion estimates

    Contrast-enhanced ultrasound (CEUS) permits the quantification and monitoring of adaptive tumor responses in the face of anti-angiogenic treatment, with the goal of informing targeted therapy. However, conventional CEUS image analysis relies on mean signal intensity as an estimate of tracer concentration in indicator-dilution modeling. This discounts additional information that may be available from the first-order speckle statistics in a CEUS image. Heterogeneous vascular networks, typical of tumor-induced angiogenesis, lead to heterogeneous contrast enhancement of the imaged tumor cross-section. To address this, a linear (B-mode) processing approach was developed to quantify the change in the first-order speckle statistics of B-mode cine loops due to the incursion of microbubbles. The technique, named the EDoF (effective degrees of freedom) method, was developed on tumor-bearing mice (MDA-MB-231LN mammary fat pad inoculation) and evaluated using nonlinear (two-pulse amplitude-modulated) contrast microbubble-specific images. To improve the potential clinical applicability of the technique, a second-generation compound probability density function for the statistics of two-pulse amplitude-modulated contrast-enhanced ultrasound images was developed. The compound technique was tested in an anti-angiogenic drug trial (bevacizumab) on tumor-bearing mice (MDA-MB-231LN) and evaluated with gold-standard histology and contrast-enhanced X-ray computed tomography. The compound statistical model could discriminate anti-VEGF-treated tumors from untreated tumors more accurately than conventional CEUS image analysis. The technique was then applied to a rapid patient-derived xenograft (PDX) model of renal cell carcinoma (RCC) in the chorioallantoic membrane (CAM) of chicken embryos. The ultimate goal of the PDX model is to screen RCC patients for de novo sunitinib resistance.
The analysis of the first-order speckle statistics of contrast-enhanced ultrasound cine loops provides more robust and reproducible estimates of tumor blood perfusion than conventional image analysis. Theoretically, this form of analysis could quantify perfusion heterogeneity and provide estimates of vascular fractal dimension, but further work is required to determine which physiological features influence these measures. Treatment sensitivity matrices, which combine vascular measures from CEUS and power Doppler, may be suitable for screening for de novo sunitinib resistance in patients diagnosed with renal cell carcinoma. Further studies are required to assess whether this protocol can be predictive of patient outcome.
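As a rough illustration of the EDoF idea (the thesis's actual estimator and speckle model may differ), the effective degrees of freedom of a gamma-distributed speckle intensity can be estimated by the method of moments. The data below are simulated, not CEUS measurements, and the shape parameters are arbitrary.

```python
import numpy as np

def effective_dof(intensity):
    """Method-of-moments estimate of the shape parameter of a gamma
    speckle model: k = mean^2 / variance.  For fully developed speckle
    (exponential intensity) this is ~1; microbubble influx changes it.
    """
    m = intensity.mean()
    v = intensity.var()
    return m * m / v

rng = np.random.default_rng(1)
baseline = rng.gamma(shape=1.0, scale=10.0, size=50_000)   # pre-contrast speckle
enhanced = rng.gamma(shape=4.0, scale=10.0, size=50_000)   # after bubble arrival

k0 = effective_dof(baseline)   # ~1: fully developed speckle
k1 = effective_dof(enhanced)   # ~4: altered first-order statistics
```

Tracking such a statistic over a cine loop, rather than the mean intensity alone, is the kind of time-intensity analysis the abstract describes.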

    Predictive and Prognostic Factors after Resection of Gastrointestinal Tumors


    Early versus Later Rhythm Analysis in Patients with Out-of-Hospital Cardiac Arrest

    Background In a departure from the previous strategy of immediate defibrillation, the 2005 resuscitation guidelines from the American Heart Association–International Liaison Committee on Resuscitation suggested that emergency medical service (EMS) personnel could provide 2 minutes of cardiopulmonary resuscitation (CPR) before the first analysis of cardiac rhythm. We compared the strategy of a brief period of CPR with early analysis of rhythm with the strategy of a longer period of CPR with delayed analysis of rhythm. Methods We conducted a cluster-randomized trial involving adults with out-of-hospital cardiac arrest at 10 Resuscitation Outcomes Consortium sites in the United States and Canada. Patients in the early-analysis group were assigned to receive 30 to 60 seconds of EMS-administered CPR and those in the later-analysis group were assigned to receive 180 seconds of CPR, before the initial electrocardiographic analysis. The primary outcome was survival to hospital discharge with satisfactory functional status (a modified Rankin scale score of ≤3, on a scale of 0 to 6, with higher scores indicating greater disability). Results We included 9933 patients, of whom 5290 were assigned to early analysis of cardiac rhythm and 4643 to later analysis. A total of 273 patients (5.9%) in the later-analysis group and 310 patients (5.9%) in the early-analysis group met the criteria for the primary outcome, with a cluster-adjusted difference of −0.2 percentage points (95% confidence interval, −1.1 to 0.7; P=0.59). Analyses of the data with adjustment for confounding factors, as well as subgroup analyses, also showed no survival benefit for either study group. Conclusions Among patients who had an out-of-hospital cardiac arrest, we found no difference in the outcomes with a brief period, as compared with a longer period, of EMS-administered CPR before the first analysis of cardiac rhythm. 
(Funded by the National Heart, Lung, and Blood Institute and others; ROC PRIMED ClinicalTrials.gov number, NCT00394706.)
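The null result can be reproduced approximately from the counts quoted above using an unadjusted risk difference with a normal-approximation confidence interval. Note the trial's published interval is cluster-adjusted, so these numbers differ slightly from the −0.2 (−1.1 to 0.7) percentage points reported.

```python
import math

# Observed counts from the trial abstract
n_early, x_early = 5290, 310     # early-analysis group: n, primary-outcome events
n_later, x_later = 4643, 273     # later-analysis group

p_early = x_early / n_early
p_later = x_later / n_later
diff = p_later - p_early         # unadjusted risk difference

# Normal-approximation standard error for a difference of two proportions
se = math.sqrt(p_early * (1 - p_early) / n_early
               + p_later * (1 - p_later) / n_later)
ci = (diff - 1.96 * se, diff + 1.96 * se)   # unadjusted 95% CI; straddles zero
```

Both proportions are about 5.9%, and the interval comfortably includes zero, consistent with the trial's conclusion of no difference.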

    Estimation of Mean Transition Time Using a Markov Model, and Comparison of Risk Factors of Malnutrition Identified by Markov Regression, Generalized Estimating Equations, and Random Effects Models in a Longitudinal Study

    BACKGROUND: Malnutrition refers to many diseases, each with a specific deficiency in one or more nutrients and each characterized by a cellular imbalance between the supply of nutrients and energy on the one hand and the body's demand for them to ensure growth and maintenance on the other. Malnutrition is an important indicator of child health. It is now recognized that 6.6 million out of 12.2 million deaths among children under five – or 54% of young-child mortality in developing countries – are associated with malnutrition. India has the highest percentage of undernourished children in the world. During 1982, seven localities and 22 villages were selected for this study, from the Vellore town and KV Kuppam development block sampling frames respectively. All children aged 5-7 years were screened for signs of malnutrition by consultant pediatricians. The children from the rural and urban areas were screened at baseline and followed up every six months on seven occasions. Malnutrition was assessed using two indicators: BMI Z scores and height-for-age. The BMI Z scores were classified as "normal" if >-2 standard deviations, "moderate" if between -2 and -3 standard deviations, and "severe" if <-3 standard deviations (67). The main hypothesized risk factors for the study were 'defecation practices at household level' (within the household; in the open fields), 'type of fuel used for cooking in the house' (firewood or cow dung or coal; gas or kerosene) and 'presence of a separate kitchen within the household premises' (yes; no). Other important confounders to be adjusted for were sex of the child (male; female) and area of residence (rural; urban).
Some other covariates also included in the Generalized Estimating Equations and in Markov Regression using transition probabilities were education of mother and father (illiterate or literate; primary or middle school; high school or above), consanguineous marriage of the parents (yes; no), type of roof (thatched; tiled; RCC or pukka), type of house (brick and cement; brick and mud; others), birth order (1; 2; >=3), number of members in the family (6), and type of floor (kucha; pukka). AIMS AND OBJECTIVES: The main aim was to find the risk factors for malnutrition. The objectives were: (i) to estimate the mean first passage time, which indicates the average time taken by a child to move from one state to another; (ii) to find risk factors using GEE and Random Effects models; (iii) to find risk factors of protein-energy malnutrition using transition probabilities; (iv) to find risk factors by calculating the transition intensity matrices; and (v) to compare the results obtained from GEE and from Markov regression models using transition probabilities and transition intensities. PREVALENCE AND INCIDENCE: The overall prevalence of severe underweight was 22.5%. The prevalence of severe underweight was higher among male children (25%) than female children (19.9%). The prevalence of severe underweight was lower among children living in the rural areas than among children living in the urban areas (16.5% vs 28% respectively). The overall incidence of severe underweight was 11.6%, and a higher incidence rate was observed among male children than female children. The incidence density of severe underweight was around 5% per year. The prevalence of severe stunting was 25.8%. A higher prevalence of severe stunting was found among male children than female children (27.9% vs 23.5% respectively). It was also higher among children in the rural areas (33.2%) than among children living in the urban areas (18.9%).
The cumulative incidence of stunting was 20.6% and the incidence density for stunting was about 2% per year. MEAN PASSAGE TIME AND RISK FACTORS: The overall transition probability from the normal state of underweight to the moderate state was 0.12, and to the severe state 0.009. The transition probabilities of moving from severe underweight to moderate underweight and to normal weight were 0.28 and 0.10 respectively. The average number of years taken to transit from the severe state of underweight to the normal state was about 2.7 (2.3 - 3.1) years. The mean number of years taken to transit from severe underweight to normal was almost the same for male and female children. The MPT from severe underweight to normal was lower in the urban areas than in the rural areas. It was also found that children who lived in houses with no separate kitchen, and children living in houses that used firewood or cow dung for cooking, had a lower transition time from severe underweight to normal than children living in houses that had a separate kitchen or used gas or kerosene for cooking. The probability of transition from severe stunting to normal was 0.001. The overall MPT from severe stunting to normal was around nine and a half years (8.4 years – 11.3 years). There was no difference between male and female children in MPT for stunting. The average number of years taken to move from severe stunting to normal was higher among children in the rural areas than among children living in urban areas (11.9 years vs 8.1 years). The mean first passage time in the present study clearly indicates how late or early a person transits from one state of an outcome to another when the child experiences a "risky" level of the exposure, in non-absorbing state models. This is useful in chronic disease epidemiology, where one motive is to find out how long, on average, a person takes to transit from one state to another.
This allows appropriate treatment procedures to be provided in time. Similar findings of "longer time" were observed when the children lived in houses without a separate kitchen, defecated in the open fields, and used firewood or cow dung for cooking. The risk factors for severe underweight obtained using GEE were defecation and sex of the child. The risk factors for severe underweight using Markov regression, beyond the two identified using GEE, were number of family members, presence of a separate kitchen, and the state of underweight at the previous time. The risk factors that emerged as important for severe stunting using GEE were area of residence, mother's education, and father's education. The factors important for severe stunting using Markov regression were defecation, area of residence, father's education, and the state of stunting at the previous time. The transition times cannot be estimated using GEE or Random Effects models, as these models do not account for the fact that the current state of malnutrition depends mainly on the state of malnutrition at the previous time (the Markov chain principle). Markov regression using transition probabilities involves modeling the outcome at the current time conditioning on the state of the outcome at the previous time and other covariates. Hence, if the previous state of the outcome is highly correlated with the current state, it is important to perform Markov regression. Markov regression using intensity rates involves modeling the outcome for a specific transition; this model is specific to what the state of malnutrition was at the previous time. If there is a specific hypothesis relating to a particular transition, then the model using the transition rates would be better. GEE analysis considers the correlations between the different states of malnutrition over time and adjusts for that correlation.
In most longitudinal data analyses, it is worth considering risk factors that are associated with the movement from the previous state to the current state. It is essential to test whether the current state depends on the state at the previous time; that is, if the state of malnutrition at the previous time was significantly associated with the current state of malnutrition, then Markov regression using transition probabilities is appropriate. In this study, the state of malnutrition at the previous time was significantly associated with the current state of malnutrition. The standard errors obtained from the Markov regression using transition probabilities were smaller than those obtained from the GEE analysis, with better coverage probability and shorter interval length. The risk factor profiles for GEE and Markov regression were different, with relatively more risk factors identified by Markov regression; this may be due to the higher standard errors obtained from GEE analysis when adjusted for the correlation structure. The simulations performed for underweight and stunting showed better coverage probabilities and shorter confidence intervals for Markov regression than for GEE. CONCLUSION: In any longitudinal study with a discrete non-absorbing outcome, it is essential to estimate the duration of time spent in each state of the outcome. This will help us to study the impact of duration of stay alongside other risk factors. In longitudinal data, if the current state of the outcome depends on the state of the outcome at the previous time, then Markov regression is the best approach to find the risk factors. The GEE approach models the overall correlation structure and is therefore more likely to yield larger standard errors, and thereby to miss some true risk factors.
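The mean-first-passage-time calculation described above can be sketched from a one-step transition matrix: with the target state removed, solve (I − Q)m = 1. Below, the normal and severe rows reuse probabilities quoted in the abstract, while the moderate row is a made-up placeholder; the six-month step size reflects the study's visit schedule.

```python
import numpy as np

# States: 0 = normal, 1 = moderate, 2 = severe underweight.
# Rows must sum to 1.  Normal and severe rows use probabilities quoted in
# the abstract; the moderate row is hypothetical.
P = np.array([
    [0.871, 0.120, 0.009],   # from normal
    [0.300, 0.550, 0.150],   # from moderate (placeholder values)
    [0.100, 0.280, 0.620],   # from severe
])

def mean_first_passage(P, target):
    """Expected number of steps to first reach `target` from each other
    state: solve (I - Q) m = 1, where Q drops the target row/column."""
    keep = [i for i in range(len(P)) if i != target]
    Q = P[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return dict(zip(keep, m))

steps = mean_first_passage(P, target=0)          # steps to reach "normal"
years = {s: n * 0.5 for s, n in steps.items()}   # visits were six-monthly
```

With these placeholder numbers, the severe state takes longer to reach normal than the moderate state, the qualitative pattern the MPT analysis quantifies.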

    Low-level analysis of microarray data

    This thesis consists of an extensive introduction followed by seven papers (A–G) on low-level analysis of microarray data. Focus is on calibration and normalization of observed data. The introduction gives a brief background on the microarray technology and its applications so that anyone unfamiliar with the field can read the thesis. Formal definitions of calibration and normalization are given. Paper A illustrates a typical statistical analysis of microarray data with background correction, normalization, and identification of differentially expressed genes (among thousands of candidates). A small analysis of how the final results vary with the number of replicates and the choice of image-analysis software is also given. Paper B introduces a novel way of displaying microarray data called the print-order plot, which displays data in the order the corresponding spots were printed to the array. Utilizing these, so-called (microtiter-) plate effects are identified. Then, based on a simple variability measure for replicated spots across arrays, different normalization sequences are tested, and evidence for the existence of plate effects is presented. Paper C presents an object-oriented extension with transparent reference variables for the R language. It provides the necessary foundation for implementing the microarray analysis package described in Paper F. Paper D is on affine transformations of two-channel microarray data and their effects on the log-ratio log-intensity transform. Affine transformations, that is, the existence of channel biases, can explain commonly observed intensity-dependent effects in the log-ratios. In the light of the affine transformation, several normalization methods are revisited. At the end of the paper, a new robust affine normalization is suggested that relies on iteratively reweighted principal component analysis.
Paper E suggests a multiscan calibration method where each array is scanned at various sensitivity levels in order to uniquely identify the affine transformation of signals that the scanner and the image-analysis methods introduce. Observed data strongly support this method. In addition, multiscan-calibrated data have an extended dynamic range and higher signal-to-noise levels. This is real-world evidence for the existence of affine transformations of microarray data. Paper F describes the aroma package – an R Object-oriented Microarray Analysis environment – implemented in R, which provides easy access to our and others' low-level analysis methods. Paper G provides a calibration method for spotted microarrays with dilution series or spike-ins. The method is based on a heteroscedastic affine stochastic model, and the parameter estimates are robust against model misspecification.
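Paper D's central point, that affine channel biases (offset plus gain) induce intensity-dependent curvature in the log-ratios, and that an affine correction removes it, can be illustrated on simulated two-channel data. The offsets and gains below are arbitrary illustrative values, not estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
true = rng.lognormal(mean=8, sigma=1, size=10_000)   # true expression levels

# Two channels measuring the same quantity, each with a distinct affine
# bias (offset + gain); both offsets and gains are made-up numbers.
red   = 200.0 + 1.0 * true
green = 500.0 + 1.4 * true

def ma(red, green):
    """Log-ratio (M) / log-intensity (A) transform."""
    m = np.log2(red) - np.log2(green)
    a = 0.5 * (np.log2(red) + np.log2(green))
    return m, a

m_raw, _ = ma(red, green)   # nonzero, intensity-dependent log-ratios

# Affine calibration: subtract the (here, known) offsets and rescale the
# gains; the corrected log-ratios collapse to zero as expected for
# identically expressed channels.
m_cal, _ = ma((red - 200.0) / 1.0, (green - 500.0) / 1.4)
```

In practice the offsets and gains are unknown and must be estimated, which is what the robust affine normalization and the multiscan calibration of Papers D and E address.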

    Accurate Targeting of Liver Tumors in Stereotactic Radiation Therapy

    This doctoral thesis concerns the treatment of liver cancer patients using external beam radiotherapy. The quality of this treatment greatly depends on delivering a high radiation dose to the tumor while keeping the dose to surrounding healthy tissues as low as possible. One of the major challenges is locating the tumor at the moment of dose delivery. In this work, the uncertainty of locating the tumor was investigated. For this purpose, gold markers were implanted in the liver tissue and visualized on X-ray images. The markers were used to measure day-to-day tumor mobility and motion due to respiration. Furthermore, it was found that major improvements in targeting accuracy can be achieved by using the markers to guide the treatment procedure.

    Automated Segmentation of Left and Right Ventricles in MRI and Classification of Myocardium Abnormalities

    A fundamental step in the diagnosis of cardiovascular diseases, automated left and right ventricle (LV and RV) segmentation in cardiac magnetic resonance images (MRI) is still acknowledged to be a difficult problem. Although algorithms for LV segmentation do exist, they require either extensive training or intensive user input. RV segmentation in MRI is still regarded as an unsolved problem because the RV's shape is neither symmetric nor circular, its deformations are complex and vary extensively over the cardiac phases, and it includes papillary muscles. In this thesis, I investigate fast detection of the LV endo- and epicardium surfaces (3D) and contours (2D) in cardiac MRI via convex relaxation and distribution matching. A rapid 3D segmentation of the RV in cardiac MRI via distribution-matching constraints on segment shape and appearance is also investigated. These algorithms require only a single subject for training and a very simple user input, which amounts to one click. The solution is sought by optimizing functionals containing probability product kernel constraints on the distributions of intensity and geometric features. The formulations lead to challenging optimization problems, which are not directly amenable to convex-optimization techniques. For each functional, the problem is split into a sequence of sub-problems, each of which can be solved exactly and globally via a convex relaxation and the augmented Lagrangian method. Finally, an information-theoretic artificial neural network (ANN) is proposed for classifying LV myocardium motion as normal or abnormal. Using the LV segmentation results, the LV cavity points are estimated via a Kalman filter and a recursive dynamic Bayesian filter. However, due to the similarities between the statistical information of normal and abnormal points, differentiating between the distributions of abnormal and normal points is a challenging problem.
The problem was investigated with a global measure based on Shannon's differential entropy (SDE) and further examined with two other information-theoretic criteria, one based on Rényi entropy and the other on Fisher information. Unlike existing information-theoretic studies, the approach explicitly addresses the overlap between the distributions of normal and abnormal cases, thereby yielding a competitive performance. I further propose an algorithm based on a supervised 3-layer ANN to differentiate further between the distributions. The ANN is trained and tested with five different information measures of radial distance and velocity for points on the endocardial boundary.
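A minimal sketch of the global entropy measure: a histogram plug-in estimate of Shannon's differential entropy applied to simulated radial-motion samples. The Gaussian data and the idea of thresholding the entropy gap are illustrative assumptions, not the thesis's actual features or classifier.

```python
import numpy as np

def differential_entropy(samples, bins=64):
    """Histogram plug-in estimate of Shannon's differential entropy:
    -sum(p_i * log(density_i)), with p_i the bin probability mass."""
    density, edges = np.histogram(samples, bins=bins, density=True)
    mass = density * np.diff(edges)    # probability per bin
    nz = mass > 0
    return -np.sum(mass[nz] * np.log(density[nz]))

rng = np.random.default_rng(3)
normal_motion   = rng.normal(loc=1.0, scale=0.1, size=20_000)   # tight radial motion
abnormal_motion = rng.normal(loc=1.0, scale=0.3, size=20_000)   # more spread out

h_n = differential_entropy(normal_motion)
h_a = differential_entropy(abnormal_motion)
# Greater motion variability yields higher differential entropy, so a
# threshold on the entropy could serve as a simple global discriminator.
```

For a Gaussian, the differential entropy is 0.5·ln(2πeσ²), so the wider abnormal distribution has the larger entropy, which is the separation such a global measure exploits.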

    Geospatial Assessment of Sustainable Built Infrastructure Assets and Flood Disaster Protection

    This research was initiated with a review and synthesis of infrastructure related to city and urban growth, built infrastructure to meet transportation needs and travel demand, and the role of mass transit in reducing adverse impacts on the environment and greenhouse gas emissions. Floods are the most frequently occurring natural disaster in the world and have so far claimed millions of lives and caused billions of dollars in economic costs. Built infrastructure assets in urban and rural areas are not spared from floods' aftermath. A major motivation for this thesis was the 2011 megaflood disaster in Thailand, which devastated the green campus of the Asian Institute of Technology (AIT), a prominent higher education institution in Asia located north of Bangkok. The AIT Campus was inundated with flood water for several weeks in late October and most of November 2011. The primary objective was to develop a geospatial decision support system for flood disaster protection of AIT using spaceborne remote sensing satellite imagery. Pre-flood 1-m IKONOS imagery of the campus area was used to create planimetrics and a geospatial infrastructure inventory. Ground truth measurements along with site-inspection photos facilitated further flood impact analysis and the creation of a detailed flood depth map of the entire AIT Campus. Post-flood 1-m IKONOS imagery was used to estimate the existing dike's top width. The imagery-based planimetrics of the dike and related cross-section data provided by AIT were used to conduct stability analyses of a proposed raised dike system. Other flood protection strategies proposed in this study include concrete and composite sheet pile flood wall designs. Value engineering analysis was implemented to evaluate these flood wall protection alternatives for the AIT Campus.
Based on a comprehensive present-worth life-cycle cost analysis conducted over a 50-year performance period, the least costly alternative, a composite fiber-reinforced plastic sheet pile flood wall system at US$ 1.71 million per km, was recommended to protect the AIT Campus from future floods. Further recommendations for future flood protection include: (1) elevating AIT access roads and other campus-area roads using composite sheet pile retaining walls and culverts, and (2) protecting one or more buildings with composite sheet pile peripheral enclosures for emergency management applications.
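The present-worth life-cycle comparison underlying the recommendation can be sketched with the standard uniform-series present-worth factor. Only the 50-year performance period comes from the abstract; the discount rate and all cost figures below are hypothetical placeholders, not the study's data.

```python
def present_worth(annual_cost, rate, years, capital=0.0):
    """Present worth of a capital outlay plus a uniform annual cost over a
    performance period, discounted at `rate` per year."""
    pw_factor = (1 - (1 + rate) ** -years) / rate   # uniform-series PW factor
    return capital + annual_cost * pw_factor

# Illustrative comparison of two hypothetical flood-wall alternatives over
# the study's 50-year period (all costs and the 5% rate are made up).
concrete  = present_worth(annual_cost=40_000, rate=0.05, years=50,
                          capital=2_500_000)
composite = present_worth(annual_cost=25_000, rate=0.05, years=50,
                          capital=1_710_000)
lower_cost_option = min(("concrete", concrete), ("composite", composite),
                        key=lambda kv: kv[1])
```

Ranking alternatives by this total present worth, possibly alongside non-monetary value-engineering criteria, is the comparison the abstract describes.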