
    Finite element modeling of soft tissue deformation.

    Computer-aided minimally invasive surgery (MIS) has progressed significantly in the last decade and has great potential in surgical planning and operations. To limit damage to nearby healthy tissue, accurate modeling is required of the mechanical behavior of a target soft tissue subject to surgical manipulation. The study of soft tissue deformation is therefore important for computer-aided MIS in surgical planning and operation, and for developing surgical simulation tools and systems. The image acquisition facilities are also important for prediction accuracy. This dissertation addresses biomechanical modeling of soft tissue deformation based on partial differential and integral equations (PDIE), incorporating specific material properties to characterize the soft tissue response to certain interface behaviors. To achieve accurate simulation of real tissue deformation, several biomechanical finite element (FE) models are proposed to characterize liver tissue. The contribution of this work lies in both theoretical and practical aspects of tissue modeling. High-resolution imaging techniques, Micro Computed Tomography (Micro-CT) and Cone Beam Computed Tomography (CBCT), are first proposed in this dissertation to study soft tissue deformation. These high-resolution techniques can capture deformation details in the contact region between the tissue and the probe for the small force loads that would be applied through a surgical probe. Traditional clinical imaging techniques achieve only low image resolutions, whereas the very small force loads seen in these procedures produce tissue deformations on the scale of a few millimeters down to the sub-millimeter level, and such small variations are hard to detect. Furthermore, a model validated using high-resolution images remains applicable with low-resolution imaging facilities; the reverse does not hold, since sub-millimeter variations cannot be detected at low resolution. In this dissertation, liver tissue deformations, surface morphological changes, and volume variations are explored and compared between simulations and experiments. The contributions of the dissertation are as follows. For liver tissue, under small force loads (5 grams to tens of grams), the linear elastic model and the neo-Hookean hyperelastic model are applied and shown to yield some discrepancies between the two models in simulation as well as between simulations and experiments. The proposed finite element models are verified for liver tissue. A general FE modeling validation system is proposed to verify the applicability of FE models to the study of soft tissue deformation. The validation of the FE models is performed visually and quantitatively in several ways against the actual experimental results, and comparisons among the models are performed to show their advantages and disadvantages. The verification system can be applied to other soft tissues for finite element analysis of soft tissue deformation. For brain tissue, an elasticity-based model employing local elasticity and Poisson's ratio was proposed previously; it is validated against intraoperative images and shown to predict brain deformation more accurately than the linear elastic model. FE analysis of brain ventricle shape changes was also performed in the author's other work to capture the dynamic variation of the ventricles.
    There, for safety reasons, the images for brain deformation modeling were obtained from Magnetic Resonance Imaging (MRI), which is routinely used for brain scanning. The measurement of material properties is affected by tissue desiccation, machine limits, human operation errors, and time factors, so the material parameters acquired from measurement devices may differ somewhat from those of the tissue in its actual experimental state. Therefore, an experiment- and simulation-based method to inversely evaluate the material parameters is proposed and compared.
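    For reference, a minimal statement of the two constitutive models named above, written in one common compressible form (the dissertation's exact parameterization may differ):

```latex
% Linear elasticity (small strain), with Lame parameters \lambda, \mu:
\sigma = \lambda\,\mathrm{tr}(\varepsilon)\,I + 2\mu\,\varepsilon

% Compressible neo-Hookean strain-energy density (one common form),
% with shear modulus \mu, bulk modulus \kappa, deformation gradient F:
W = \frac{\mu}{2}\left(\bar{I}_1 - 3\right) + \frac{\kappa}{2}\left(J - 1\right)^2,
\qquad \bar{I}_1 = J^{-2/3}\,\mathrm{tr}\!\left(F^{\mathsf{T}}F\right), \quad J = \det F
```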

    Gradient-based parameter optimization method to determine membrane ionic current composition in human induced pluripotent stem cell-derived cardiomyocytes

    Premature cardiac myocytes derived from human induced pluripotent stem cells (hiPSC-CMs) show heterogeneous action potentials (APs), probably due to different expression patterns of membrane ionic currents. We developed a method for determining expression patterns of functional channels in terms of whole-cell ionic conductance (Gx) using individual spontaneous AP configurations. It has been suggested that apparently identical AP configurations can be obtained using different sets of ionic currents in mathematical models of cardiac membrane excitation; if so, the inverse problem of Gx estimation might not be solvable. We computationally tested the feasibility of the gradient-based optimization method. For a realistic examination, conventional 'cell-specific models' were prepared by superimposing the model AP output on each experimental AP record through conventional manual adjustment of the Gxs of the baseline model. The Gxs of 4–6 major ionic currents of the 'cell-specific models' were randomized within a range of ±5–15% and used as the initial parameter set for gradient-based automatic Gx recovery, which decreases the mean square error (MSE) between the target and the model output. Plotting all data points of the MSE–Gx relationship during optimization revealed progressive convergence of the randomized population of Gxs to the original values of the cell-specific model as the MSE decreased. The absence of any other local minimum in the global search space was confirmed by mapping the MSE while randomizing the Gxs over a range of 0.1–10 times the control; no local minimum other than the global minimum at the default model parameters was evident in the whole parameter space.
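    As an illustration of the optimization scheme described above, the following is a minimal sketch in which a toy surrogate waveform stands in for the cardiac membrane model; the names (toy_ap, G_TRUE) and the waveform itself are placeholders, not taken from the paper:

```python
# Sketch of gradient-based Gx recovery by decreasing the MSE between a target
# AP configuration and the model output. A toy surrogate replaces the actual
# cardiac excitation model.
import numpy as np

t = np.linspace(0.0, 0.5, 500)           # time grid (s)

def toy_ap(gx):
    """Placeholder 'membrane model': maps three conductances to a waveform."""
    g_na, g_ca, g_k = gx
    return (-85.0
            + 120.0 * g_na * np.exp(-t / 0.002)        # fast initial spike
            + 40.0 * g_ca * np.exp(-t / 0.15)          # plateau-like component
            - 30.0 * g_k * (1.0 - np.exp(-t / 0.25)))  # repolarizing drift

G_TRUE = np.array([1.0, 1.0, 1.0])       # 'cell-specific' reference Gxs
target = toy_ap(G_TRUE)                  # target AP configuration

def mse(gx):
    return np.mean((toy_ap(gx) - target) ** 2)

def grad(gx, h=1e-5):
    """Central finite-difference gradient of the MSE with respect to each Gx."""
    g = np.zeros_like(gx)
    for i in range(len(gx)):
        d = np.zeros_like(gx)
        d[i] = h
        g[i] = (mse(gx + d) - mse(gx - d)) / (2.0 * h)
    return g

# Randomize the initial Gxs within +/-15% and descend the MSE surface.
rng = np.random.default_rng(0)
gx = G_TRUE * rng.uniform(0.85, 1.15, size=3)
lr = 2e-4
for _ in range(5000):
    gx -= lr * grad(gx)

print("recovered Gx:", gx, "MSE:", mse(gx))
```

    In the paper's setting the forward model is the baseline cardiac excitation model and the optimized parameters are the Gxs of the 4–6 major currents; the descent on the MSE surface follows the same idea.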

    Two-point or multiple-point statistics? A comparison between the ensemble Kalman filtering and the ensemble pattern matching inverse methods

    The Ensemble Kalman Filter (EnKF) has been commonly used to assimilate real-time dynamic data into geologic models over the past decade. Despite its various advantages, such as computational efficiency and its capability to handle multiple sources of uncertainty, the EnKF may not reliably update models characterized by curvilinear geometries, such as fluvial deposits, where the permeable channels play a crucial role in the prediction of solute transport. It is well known that the EnKF performs optimally for updating multi-Gaussian distributed fields, basically because it uses two-point statistics (i.e., covariances) to represent the relationships among the model parameters and between the model parameters and the observed response, and this is the only statistic necessary to fully characterize a multi-Gaussian distribution. The Ensemble PATtern matching (EnPAT) method is an alternative ensemble-based method that shows significant potential for conditioning complex geology, such as channelized aquifers, to dynamic data. EnPAT is an evolution of the EnKF that replaces, in the analysis step, two-point statistics with multiple-point statistics. The advantages of EnPAT reside in its capability to honor the complex spatial connectivity of geologic structures as well as the measured static and dynamic data. In this work, the performance of the classical EnKF and EnPAT are compared for modeling a synthetic channelized aquifer. The results reveal that EnPAT yields a better prediction of transport characteristics than the EnKF because it characterizes the conductivity heterogeneity better. Issues such as the uncertainty of multiple variables and the effect of measurement errors on EnPAT results are also discussed. © 2015 Elsevier Ltd. All rights reserved.
    The first three authors gratefully acknowledge the financial support of the U.S. Department of Energy through project DE-FE0004962. The fourth author acknowledges the financial support of the Spanish Ministry of Economy and Competitiveness through project CGL2011-23295. We thank the guest editor Prof. Dr. Harrie-Jan Hendricks Franssen, as well as the reviewer Prof. Alberto Guadagnini and two anonymous reviewers, for their comments, which substantially improved the manuscript.
    Li, L.; Srinivasan, S.; Zhou, H.; Gomez-Hernandez, JJ. (2015). Two-point or multiple-point statistics? A comparison between the ensemble Kalman filtering and the ensemble pattern matching inverse methods. Advances in Water Resources, 86, 297-310. https://doi.org/10.1016/j.advwatres.2015.05.014
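    For reference, a minimal sketch of the stochastic EnKF analysis step referred to above, using a toy linear forward model rather than the paper's channelized-aquifer setup (all names and dimensions are illustrative):

```python
# Stochastic EnKF analysis step: the update relies only on two-point
# statistics (ensemble cross-covariances), which is the limitation that
# EnPAT addresses by switching to multiple-point statistics.
import numpy as np

rng = np.random.default_rng(1)
n_param, n_obs, n_ens = 50, 5, 100

# Toy prior ensemble of (log-)conductivity values and a linear forward model.
X = rng.normal(0.0, 1.0, size=(n_param, n_ens))
H = rng.normal(0.0, 1.0, size=(n_obs, n_param)) / np.sqrt(n_param)
R = 0.01 * np.eye(n_obs)                         # observation-error covariance
y_obs = H @ rng.normal(0.0, 1.0, size=n_param)   # synthetic observations

def enkf_analysis(X, y_obs):
    """Update the parameter ensemble using ensemble covariances."""
    Y = H @ X                                    # predicted observations
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    C_xy = Xc @ Yc.T / (n_ens - 1)               # parameter-data covariance
    C_yy = Yc @ Yc.T / (n_ens - 1)               # data-data covariance
    K = C_xy @ np.linalg.inv(C_yy + R)           # Kalman gain
    # Perturb the observations so the updated ensemble keeps the right spread.
    D = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (D - Y)

X_post = enkf_analysis(X, y_obs)
print("prior spread:", X.std(), "posterior spread:", X_post.std())
```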

    STRUCTURAL MODELING OF PROTEIN-PROTEIN INTERACTIONS USING MULTIPLE-CHAIN THREADING AND FRAGMENT ASSEMBLY

    Since its birth, the study of protein structures has progressed by leaps and bounds. However, owing to the expense and difficulty involved, the number of solved protein structures has not been able to catch up with the number of protein sequences and has in fact steadily lost ground. This has necessitated the development of high-throughput yet accurate computational algorithms capable of predicting the three-dimensional structure of a protein from its amino acid sequence. While progress has been made in protein tertiary structure prediction, advancement in protein quaternary structure prediction has been limited by the facts that the degrees of freedom of protein complexes are even larger and that even fewer protein complex structures are present in the PDB library. Indeed, protein complex structure prediction has to date largely remained a docking problem, in which automated algorithms aim to predict the complex structure starting from the unbound crystal structures of its component subunits, and it has therefore remained limited in scope. Secondly, since docking essentially treats the unbound subunits as "rigid bodies," it has limited accuracy when conformational change accompanies protein-protein interaction. In one of the first efforts of its kind, this study aims to develop protein complex structure prediction algorithms that require only the amino acid sequences of the interacting subunits as input. The study adapts the best features of protein tertiary structure prediction, including template detection and ab initio loop modeling, and extends them to protein-protein complexes, which requires simultaneously modeling the three-dimensional structures of the component subunits and ensuring the correct orientation of the chains at the protein-protein interface. The algorithms depend on knowledge-based statistical potentials for both fold recognition and structure modeling. First, as a way to compare known structures of protein-protein complexes, a complex structure alignment program, MM-align, was developed. MM-align joins the chains of the complex structures to be aligned into artificial monomers in every possible order. It then aligns them using a heuristic dynamic programming approach with TM-score as the objective function; the traditional Needleman-Wunsch (NW) dynamic programming was redesigned to prevent cross-alignment of chains during the structure alignment process. Driven by the knowledge obtained from MM-align that protein complex structures share evolutionary relationships and that the current protein complex structure library already contains homologous or structurally analogous protein quaternary structure families, a dimeric threading approach, COTH, was designed. This threading-recombination approach boosts the protein complex structure library by combining tertiary structure templates with complex alignments. The query sequences are first aligned to complex templates using the modified dynamic programming algorithm, guided by a number of predicted structural features, including ab initio binding-site predictions. Finally, a template-based complex structure prediction approach, TACOS, was designed to build full-length protein complex structures starting from the initial templates identified by COTH. TACOS fragments the aligned regions of the templates and reassembles them while building the structure of the threading-unaligned regions ab initio using a replica-exchange Monte Carlo simulation procedure. Simultaneously, TACOS searches for the best orientation match of the component structures, driven by a number of knowledge-based potential terms. Overall, TACOS presents one of the first approaches capable of predicting full-length protein complex structures from sequence alone and introduces a new paradigm in the field of protein complex structure modeling.
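    For reference, the TM-score used as MM-align's objective function has the standard form below (a sketch of the usual definition; MM-align's exact normalization over the joined chains may differ):

```latex
\mathrm{TM\text{-}score}
  = \max\left[\frac{1}{L_{\mathrm{target}}}
      \sum_{i=1}^{L_{\mathrm{ali}}} \frac{1}{1 + \left(d_i/d_0\right)^2}\right],
\qquad
d_0 = 1.24\,\sqrt[3]{L_{\mathrm{target}} - 15} - 1.8,
```

    where L_target is the length of the target structure, L_ali is the number of aligned residue pairs, d_i is the distance between the i-th pair of aligned residues after superposition, and the maximum is taken over all superpositions.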

    Development of a control strategy to compensate transient behaviour due to atmospheric disturbances in solar thermal energy generation systems using short-time prediction data

    Concentrating solar thermal power (CSP) is a promising form of renewable energy that can harness the energy of the sun and help replace fossil fuels for electricity generation. However, it faces challenges to increasing its deployment worldwide. Solar towers, one type of CSP technology, consist mainly of a solar field and a tower in which a receiver acts as a heat exchanger to feed a power block. The solar field is made up of thousands of heliostats, mirrors capable of tracking the sun and projecting concentrated sunlight onto the receiver. Solar towers with thermal storage operate continuously but are subject to disturbances caused by the interaction of sunlight with the atmosphere, and this behavior can affect the integrity of the receiver. Complex optimization methods are used to determine the aiming position of each heliostat; however, these methods are subject to parameter uncertainty and, because of their computational cost, cannot compensate for real-time disturbances such as clouds. This thesis addresses the issue as a control problem by reducing the number of variables: instead of finding the elevation and azimuth angles for thousands of heliostats, two variables are used within groups of heliostats. A feedback control strategy is then implemented, taking advantage of this dimensional reduction. In addition, the methodology developed in this thesis uses information from a state-of-the-art short-term solar radiation prediction system within a novel adaptive control strategy for the solar field.
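    Purely as an illustration of the group-level feedback idea described above (not the thesis's actual controller), the following toy sketch uses hypothetical PI gains, a placeholder receiver-flux model, and a synthetic DNI forecast:

```python
# Group-level feedback loop in the spirit of the dimensional reduction above:
# one aggregated aiming offset per heliostat group instead of per-heliostat
# elevation/azimuth angles. Plant model, gains and forecast are placeholders.
import numpy as np

n_groups = 4
aim = np.zeros(n_groups)                  # aggregated aiming offset per group (a.u.)
flux_setpoint = np.full(n_groups, 1.0)    # allowable flux per receiver zone (a.u.)
kp, ki = 0.4, 0.1                         # illustrative PI gains
integral = np.zeros(n_groups)

def receiver_flux(aim, dni):
    """Toy plant: zone flux grows with DNI and falls as a group de-focuses."""
    return dni * (1.0 - 0.5 * np.clip(aim, 0.0, 1.0))

# Synthetic short-term DNI forecast: clear sky, a cloud passage, clear again.
dni_forecast = np.concatenate([np.full(30, 1.2),
                               np.full(30, 0.6),
                               np.full(30, 1.2)])

for dni in dni_forecast:
    # Simplification: the forecast value drives both the plant and the feed-forward.
    flux = receiver_flux(aim, dni)
    error = flux - flux_setpoint                      # positive error -> over-flux
    integral = np.clip(integral + error, -2.0, 5.0)   # simple anti-windup clamp
    # Feedback (PI) plus a crude feed-forward term from the short-term forecast.
    aim = np.clip(kp * error + ki * integral + 0.2 * (dni - 1.0), 0.0, 1.0)

print("final group aiming offsets:", aim)
```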