
    Micromechanical homogenization of a hydrogel-filled electrospun scaffold for tissue-engineered epicardial patching of the infarcted heart: a feasibility study

    For tissue engineering applications, accurate prediction of the effective mechanical properties of tissue scaffolds is critical. Open- and closed-cell modelling, mean-field homogenization theory, and finite element (FE) methods are the theories and techniques currently used in conventional homogenization methods to estimate the equivalent mechanical properties of tissue-engineering scaffolds. This study aimed to develop a formulation linking the microscopic structure and macroscopic mechanics of a fibrous electrospun scaffold filled with a hydrogel for use as an epicardial patch for local support of the infarcted heart. The macroscopic elastic modulus of the scaffold was predicted to be 0.287 MPa with the FE method and 0.290 MPa with the closed-cell model for the realistic fibre structure of the scaffold, and 0.108 MPa and 0.540 MPa with mean-field homogenization for randomly oriented and completely aligned fibres, respectively. The homogenized constitutive description of the scaffold was implemented for an epicardial patch in an FE model of a human cardiac left ventricle (LV) to assess the effects of patching on myocardial mechanics and ventricular function in the presence of an infarct. Epicardial patching was predicted to reduce maximum myocardial stress in the infarcted LV from 19 kPa (no patch) to 9.5 kPa (patch) and to marginally improve the ventricular ejection fraction from 40% (no patch) to 43% (patch). This study demonstrates the feasibility of homogenization techniques to represent complex multiscale structural features in a simplified but meaningful and effective manner.
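The large spread between the aligned and random mean-field estimates above can be illustrated with the simplest homogenization bounds. The sketch below computes the Voigt (iso-strain) and Reuss (iso-stress) bounds on the effective modulus of a two-phase fibre/hydrogel composite; the moduli and volume fraction are assumed round numbers for illustration, not values from the study.

```python
# Illustrative Voigt (iso-strain) and Reuss (iso-stress) bounds on the
# effective elastic modulus of a two-phase fibre/hydrogel composite.
# All numbers below are assumptions, not the paper's data.

def voigt_modulus(E_fibre, E_gel, v_fibre):
    """Rule-of-mixtures upper bound (both phases strained equally)."""
    return v_fibre * E_fibre + (1 - v_fibre) * E_gel

def reuss_modulus(E_fibre, E_gel, v_fibre):
    """Inverse rule-of-mixtures lower bound (both phases stressed equally)."""
    return 1.0 / (v_fibre / E_fibre + (1 - v_fibre) / E_gel)

E_fibre = 2.0    # MPa, assumed electrospun-fibre modulus
E_gel   = 0.01   # MPa, assumed hydrogel modulus
v_fibre = 0.15   # assumed fibre volume fraction

upper = voigt_modulus(E_fibre, E_gel, v_fibre)
lower = reuss_modulus(E_fibre, E_gel, v_fibre)
print(f"Voigt upper bound: {upper:.3f} MPa, Reuss lower bound: {lower:.4f} MPa")
```

Even this toy calculation reproduces the qualitative point of the abstract: an aligned (Voigt-like) arrangement is far stiffer than one loaded through the compliant gel, which is why fibre orientation dominates the mean-field estimates.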

    A multiscale agent-based in silico model of liver fibrosis progression

    Chronic hepatic inflammation involves a complex interplay of inflammatory and mechanical influences, ultimately manifesting in a characteristic histopathology of liver fibrosis. We created an agent-based model (ABM) of liver tissue in order to computationally examine the consequences of liver inflammation. Our liver fibrosis ABM (LFABM) comprises literature-derived rules describing molecular and histopathological aspects of inflammation and fibrosis in a section of chemically injured liver. Hepatocytes are modeled as agents within hexagonal lobules. Injury triggers an inflammatory reaction, which leads to activation of local Kupffer cells and recruitment of monocytes from the circulation. Portal fibroblasts and hepatic stellate cells are activated locally by the products of inflammation. The various agents in the simulation are regulated by above-threshold concentrations of pro- and anti-inflammatory cytokines and damage-associated molecular pattern molecules. The simulation progresses from chronic inflammation to collagen deposition, exhibiting periportal fibrosis followed by bridging fibrosis and culminating in disruption of the regular lobular structure. The ABM exhibited key histopathological features observed in liver sections from rats treated with carbon tetrachloride (CCl4). An in silico "tension test" of the hepatic lobules predicted an overall increase in tissue stiffness, in line with the clinical elastography literature and published studies in CCl4-treated rats. Therapy simulations suggested differential anti-fibrotic effects of neutralizing tumor necrosis factor alpha vs. enhancing M2 Kupffer cells. We conclude that a computational model of liver inflammation on a structural skeleton of physical forces can recapitulate key histopathological and macroscopic properties of CCl4-injured liver. This multiscale approach linking molecular and chemomechanical stimuli enables a model that could be used to gain translationally relevant insights into liver fibrosis.
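The threshold-driven rule style described above can be sketched in a few lines. The toy model below is a hypothetical illustration, not the LFABM: a one-dimensional ring of tissue sites carries a pro-inflammatory "cytokine" field after a focal chemical injury, and sites whose level exceeds a threshold deposit collagen, standing in for activated stellate cells. All rates and thresholds are invented.

```python
# Minimal threshold-driven ABM sketch (hypothetical rules, invented numbers):
# cytokine diffuses and decays on a ring of tissue sites; sites above a
# threshold deposit collagen and locally sustain inflammation.

N = 50
THRESHOLD = 0.5       # cytokine level that activates fibrogenic cells
DECAY = 0.9           # cytokine retained per step
DIFFUSE = 0.2         # fraction shared with neighbours per step

cytokine = [0.0] * N
collagen = [0.0] * N
cytokine[N // 2] = 5.0            # focal injury releases DAMPs/cytokines

for step in range(40):
    nxt = cytokine[:]
    for i in range(N):
        left, right = cytokine[i - 1], cytokine[(i + 1) % N]
        nxt[i] = DECAY * ((1 - DIFFUSE) * cytokine[i]
                          + DIFFUSE * 0.5 * (left + right))
        if cytokine[i] > THRESHOLD:   # activated stellate cell deposits matrix
            collagen[i] += 0.1
            nxt[i] += 0.05            # inflammation sustains itself locally

    cytokine = nxt

fibrotic = sum(1 for c in collagen if c > 0)
print(f"fibrotic sites: {fibrotic}/{N}, peak collagen: {max(collagen):.2f}")
```

The point of the sketch is the rule structure: local above-threshold activation plus diffusion is enough to turn a focal insult into a spatially spreading fibrotic region, the same qualitative progression the LFABM exhibits at far greater fidelity.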

    Analysis of pronuclear zygote configurations in 459 clinical pregnancies obtained with assisted reproductive technique procedures

    BACKGROUND: Embryo selection is crucial to maintaining high performance in terms of pregnancy rate while reducing the risk of multiple pregnancy during IVF. Pronuclear and nucleolar characteristics have been proposed as indicators of embryo development and chromosomal complement in humans, providing information about embryo viability. METHODS: To correlate the zygote-score with maternal age and pregnancy outcome, we analyzed pronuclear and nucleolar morphology, polar body alignment and zygote configuration in 459 clinical pregnancies obtained by IVF and ICSI in our public clinic in Reggio Emilia, Italy. We derived odds ratios (OR) and Cornfield's 95% confidence intervals (CI). Continuous variables were compared with Student's t-test; P < 0.05 was considered statistically significant. RESULTS: We observed a significant increase of the "A" pronuclear morphology configuration in 38-41-year-old patients compared with patients aged 32 years or younger, and a significant decrease of the "B" configuration in 38-41-year-old patients compared with patients aged 32 years or younger and with those aged 33-37 years. We found no significant differences in the P1 and P2 configurations related to maternal age, and no correlation between zygote-score, embryo cleavage and embryo quality. CONCLUSIONS: Our results confirm the limited clinical significance of the zygote-score, suggesting that it cannot be associated with maternal age, embryo cleavage or embryo quality. Evaluation of embryo quality based on morphological parameters is probably more predictive than the zygote-score.
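The odds-ratio comparison described in the METHODS can be sketched for a single 2x2 table. The counts below are invented for illustration, and the interval shown is the simpler Woolf (log-odds) approximation rather than the Cornfield method used in the paper.

```python
import math

# Sketch of an odds ratio with a 95% CI for a 2x2 table, of the kind used to
# compare a zygote configuration between age groups. Counts are invented;
# the Woolf (log-odds) interval shown here is a simpler stand-in for
# Cornfield's method.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = outcome yes/no in group 1; c, d = outcome yes/no in group 2."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log-odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: "A" configuration vs. other, older vs. younger patients
or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these invented counts the interval excludes 1, which is the pattern the abstract reports for the "A" configuration in older patients.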

    QAPgrid: A Two Level QAP-Based Approach for Large-Scale Data Analysis and Visualization

    Background: The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain “hidden regularities”, and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. Methodology/Principal Findings: We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Conclusions/Significance: Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84-language Indo-European instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics.
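The grid-assignment objective can be written down concretely. The sketch below evaluates the QAP cost (pairwise similarity times grid distance, summed over all pairs) for a tiny invented 3-object instance and finds the best assignment by brute force; QAPgrid itself uses a memetic algorithm, since exhaustive search is infeasible beyond toy sizes.

```python
import itertools

# Sketch of the QAP objective behind QAPgrid: objects with pairwise
# similarities are placed on grid cells so that similar objects end up close.
# The 3-object instance, similarities and grid are invented; QAPgrid uses a
# memetic algorithm rather than this brute-force enumeration.

sim = [[0, 5, 1],
       [5, 0, 1],
       [1, 1, 0]]                      # higher = more similar (QAP "flow")

positions = [(0, 0), (0, 1), (1, 1)]   # cells of a 2x2 grid (one cell unused)

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def qap_cost(assign):
    """Sum of similarity * grid distance; low cost puts similar objects close."""
    return sum(sim[i][j] * manhattan(positions[assign[i]], positions[assign[j]])
               for i in range(3) for j in range(3))

best = min(itertools.permutations(range(3)), key=qap_cost)
print(best, qap_cost(best))
```

Any optimal assignment here places the strongly similar pair (objects 0 and 1) on adjacent cells, which is exactly the layout behaviour the abstract describes at scale.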

    A First Step in the Translation of Alloy to Coq

    Alloy is both a formal language and a tool for software modeling. The language is basically first-order relational logic. The analyzer is based on instance finding: it tries to refute assertions and, if it succeeds, it reports a counterexample. It works by translating Alloy models and instance finding into SAT problems. If no instance is found, this does not mean the assertion is satisfied. Alloy relies on the small scope hypothesis: examining all small cases is likely to produce interesting counterexamples. This is very valuable when developing a system; however, Alloy cannot show the absence of counterexamples. In this paper, we propose an approach in which Alloy is used as a first step, and then, using a tool we developed, Alloy models can be translated to Coq code to be proved correct interactively.
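The instance-finding idea can be illustrated with a brute-force search over a small scope. The assertion and universe below are invented for illustration, and Alloy performs this search through a SAT encoding rather than explicit enumeration, but the logic is the same: a counterexample within the scope refutes the assertion, while exhausting the scope proves nothing beyond it.

```python
import itertools

# Small-scope instance finding, Alloy style: try to refute the (false)
# assertion "every symmetric relation is transitive" by enumerating all
# binary relations on a small universe. Alloy encodes this search as SAT;
# the exhaustive loop here only illustrates the small scope hypothesis.

def check(scope):
    pairs = [(a, b) for a in range(scope) for b in range(scope)]
    for bits in itertools.product([0, 1], repeat=len(pairs)):
        rel = {p for p, bit in zip(pairs, bits) if bit}
        symmetric = all((b, a) in rel for (a, b) in rel)
        transitive = all((a, c) in rel
                         for (a, b) in rel for (b2, c) in rel if b == b2)
        if symmetric and not transitive:
            return rel          # counterexample refutes the assertion
    return None                 # no counterexample *within this scope*

print(check(1))   # scope 1 is too small to refute the assertion
print(check(2))   # scope 2 already yields a counterexample
```

Note the asymmetry the abstract describes: `check(1)` returning `None` does not validate the assertion, whereas the counterexample found at scope 2 definitively refutes it. Proving the absence of counterexamples at all scopes is what the Coq translation is for.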

    Ownership and control in a competitive industry

    We study a differentiated product market in which an investor initially owns a controlling stake in one of two competing firms and may acquire a non-controlling or a controlling stake in a competitor, either directly using her own assets, or indirectly via the controlled firm. While industry profits are maximized within a symmetric two-product monopoly, the investor attains this only in exceptional cases. Instead, she sometimes acquires a non-controlling stake, or, if she acquires a controlling one, invests asymmetrically rather than pursuing a full takeover. Generally, she invests indirectly if she only wants to affect the product market outcome, and directly if acquiring shares is profitable per se. Keywords: differentiated products, separation of ownership and control, private benefits of control

    Tokamak cooling systems and power conversion system options

    DEMO will be a fusion power plant demonstrating the integration of fusion power into the architecture of an electric utility grid. The design of the power conversion chain is of particular importance, as it must adequately account for the specifics of nuclear fusion on the generation side and ensure compatibility with the electric utility grid at all times. One of the special challenges is the foreseen pulsed operation, which affects the operation of the entire heat transport chain. This requires a time-dependent analysis of different conceptual design approaches to demonstrate reliable and efficient operation for nuclear licensing. Several Balance of Plant architectures were conceived and developed during the DEMO Pre-Concept Design Phase in order to suit the needs and constraints of the in-vessel systems, with particular regard to the different blanket concepts. At this early design stage, emphasis was placed on achieving robust solutions for all essential Balance of Plant systems, which chiefly have to ensure feasible and flexible operation during the main DEMO operating phases (pulse, dwell and ramp-up/down) and to absorb and compensate for potential fusion power fluctuations during plasma flat-top. Although some critical issues requiring further design improvement were identified, these preliminary assessments showed that the investigated cooling system architectures are capable of restoring nominal conditions after any of the abovementioned cases and that the overall availability could meet the DEMO top-level requirements. This paper describes the results of the studies on the tokamak coolant and Power Conversion System (PCS) options and critically highlights the aspects that require further work.
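Why pulsed operation drives the Balance of Plant design can be shown with a back-of-the-envelope energy balance. The pulse and dwell durations and the thermal power below are assumed round numbers, not DEMO design values; the sketch computes the steady turbine power over a pulse-dwell cycle and the energy a thermal storage system would need to deliver while the plasma is off.

```python
# Back-of-the-envelope sketch of pulse/dwell smoothing via thermal storage.
# All numbers are assumed round values, not DEMO design data.

P_pulse = 2000.0          # MW thermal during the pulse (assumed)
t_pulse = 2.0 * 3600      # s, pulse length (assumed)
t_dwell = 10.0 * 60       # s, dwell length (assumed)

# Turbine power that balances energy over one pulse + dwell cycle
P_turbine = P_pulse * t_pulse / (t_pulse + t_dwell)

# Energy the storage must deliver while the plasma is off
E_storage = P_turbine * t_dwell   # MJ (MW * s)

print(f"steady turbine power: {P_turbine:.0f} MW")
print(f"storage energy to bridge dwell: {E_storage / 3600:.0f} MWh")
```

Even for a short dwell, the storage requirement reaches hundreds of MWh(th), which is why the time-dependent analysis of the heat transport chain mentioned above is central to the concept studies.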
