Regional radiotherapy versus an axillary lymph node dissection after lumpectomy: a safe alternative for an axillary lymph node dissection in a clinically uninvolved axilla in breast cancer. A case control study with 10 years follow up
Background: The standard treatment of the axilla in breast cancer used to be an axillary lymph node dissection, which is known to carry substantial risks of morbidity. In recent years the sentinel node biopsy has become common practice. Future randomized study results will determine whether the expected decrease in morbidity can be proven. Methods: Before the introduction of the sentinel node biopsy, we conducted a study in which 180 women aged 50 years and older with T1/T2 cN0 breast cancer were treated with breast-conserving therapy. Instead of an axillary lymph node dissection, regional radiotherapy was given in combination with tamoxifen (RT-group). The study group was compared with 341 patients with the same patient and tumour characteristics who were treated with an axillary lymph node dissection (S-group). Results: The treatment groups were comparable, except for age: the RT-group was significantly older than the S-group. The median follow-up was 7.2 years. The regional relapse rates were low and equal in both treatment groups, 1.1% in the RT-group versus 1.5% in the S-group at 5 years. Overall survival was similar; disease-free survival was significantly better in the RT-group. Conclusion: Regional recurrence rates after regional radiotherapy are very low and equal to those after an axillary lymph node dissection.
Multiscale computing for science and engineering in the era of exascale performance
In this position paper, we discuss two relevant topics: (i) generic multiscale computing on emerging exascale high-performance computing environments, and (ii) the scaling of such applications towards the exascale. We will introduce the different phases when developing a multiscale model and simulating it on available computing infrastructure, and argue that we could rely on such a generic approach both at the conceptual modelling level and when actually executing the multiscale simulation, and that we should perhaps further develop generic frameworks and software tools to facilitate multiscale computing. Next, we focus on simulating multiscale models on high-end computing resources in the face of emerging exascale performance levels. We will argue that, although applications could scale to exascale performance by relying on weak scaling and perhaps even strong scaling, there are also clear arguments that such scaling may no longer apply for many applications on these emerging exascale machines, and that we need to resort to what we would call multi-scaling.
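For readers less familiar with the terminology, the distinction between strong and weak scaling invoked above is captured by the standard textbook scaling laws sketched below; this is a generic illustration (f denotes the parallelisable fraction of the work, p the number of processors), not a formula taken from the paper.

    \[
      S_{\mathrm{strong}}(p) \;=\; \frac{1}{(1-f) + f/p}
      \qquad \text{(Amdahl: fixed total problem size)},
    \]
    \[
      S_{\mathrm{weak}}(p) \;=\; (1-f) + f\,p
      \qquad \text{(Gustafson: problem size grows with } p\text{)}.
    \]

Note that the strong-scaling speedup saturates at 1/(1-f) as p grows, while the weak-scaling speedup keeps increasing; this saturation is the usual argument for relying on weak scaling, or on alternative strategies, at very large processor counts.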
On the sensitivity analysis of porous finite element models for cerebral perfusion estimation
Computational physiological models are promising tools to enhance the design of clinical trials and to assist in decision making. Organ-scale haemodynamic models are gaining popularity to evaluate perfusion in a virtual environment both in healthy and diseased patients. Recently, the principles of verification, validation, and uncertainty quantification of such physiological models have been laid down to ensure safe applications of engineering software in the medical device industry. The present study sets out to establish guidelines for the usage of a three-dimensional steady-state porous cerebral perfusion model of the human brain following principles detailed in the verification and validation (V&V 40) standard of the American Society of Mechanical Engineers. The model relies on the finite element method and has been developed specifically to estimate how brain perfusion is altered in ischaemic stroke patients before, during, and after treatments. Simulations are compared with exact analytical solutions and a thorough sensitivity analysis is presented covering every numerical and physiological model parameter. The results suggest that such porous models can approximate blood pressure and perfusion distributions reliably even on a coarse grid with first-order elements. On the other hand, higher-order elements are essential to mitigate errors in volumetric blood flow rate estimation through cortical surface regions. Matching the volumetric flow rate corresponding to major cerebral arteries is identified as a validation milestone. It is found that inlet velocity boundary conditions are hard to obtain and that constant pressure inlet boundary conditions are feasible alternatives. A one-dimensional model is presented which can serve as a computationally inexpensive replacement of the three-dimensional brain model to ease parameter optimisation, sensitivity analyses and uncertainty quantification. The findings of the present study can be generalised to organ-scale porous perfusion models. The results increase the applicability of computational tools regarding treatment development for stroke and other cerebrovascular conditions.
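As a rough illustration of the parameter-by-parameter sensitivity analysis described above, a minimal one-at-a-time (OAT) sketch in Python follows; the model function, the parameter names, and the ±10% perturbation are hypothetical placeholders, not the paper's actual model or values.

    # Minimal one-at-a-time (OAT) sensitivity sketch. `run_model`, the parameter
    # names and the +/-10% perturbation are illustrative placeholders only.
    def run_model(params: dict) -> float:
        # Stand-in for an expensive simulation returning a scalar output
        # (e.g. total volumetric blood flow); here an arbitrary algebraic form.
        return params["permeability"] * params["pressure_drop"] / params["viscosity"]

    baseline = {"permeability": 1e-9, "pressure_drop": 8000.0, "viscosity": 3.5e-3}
    base_out = run_model(baseline)

    for name in baseline:
        for factor in (0.9, 1.1):  # perturb one parameter at a time by +/-10%
            perturbed = {**baseline, name: baseline[name] * factor}
            rel_change = (run_model(perturbed) - base_out) / base_out
            print(f"{name} x{factor:.1f}: relative output change {rel_change:+.1%}")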
A porous circulation model of the human brain for in silico clinical trials in ischaemic stroke
The advancement of ischaemic stroke treatment relies on resource-intensive experiments and clinical trials. In order to improve ischaemic stroke treatments, such as thrombolysis and thrombectomy, we target the development of computational tools for in silico trials which can partially replace these animal and human experiments with fast simulations. This study proposes a model that will serve as part of a predictive unit within an in silico clinical trial estimating patient outcome as a function of treatment. In particular, the present work aims at the development and evaluation of an organ-scale microcirculation model of the human brain for perfusion prediction. The model relies on a three-compartment porous continuum approach. Firstly, a fast and robust method is established to compute the anisotropic permeability tensors representing arterioles and venules. Secondly, vessel encoded arterial spin labelling magnetic resonance imaging and clustering are employed to create an anatomically accurate mapping between the microcirculation and large arteries by identifying superficial perfusion territories. Thirdly, the parameter space of the problem is reduced by analysing the governing equations and experimental data. Fourthly, a parameter optimization is conducted. Finally, simulations are performed with the tuned model to obtain perfusion maps corresponding to an open and an occluded (ischaemic stroke) scenario. The perfusion map in the occluded vessel scenario shows promising qualitative agreement with computed tomography images of a patient with ischaemic stroke caused by large vessel occlusion. The results highlight that in the case of vessel occlusion (i) identifying perfusion territories is essential to capture the location and extent of underperfused regions and (ii) anisotropic permeability tensors are required to give a quantitatively realistic estimate of perfusion change. In the future, the model will be thoroughly validated against experiments.
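For orientation, a multi-compartment porous (Darcy-type) continuum of the kind mentioned above is usually written as a set of coupled pressure equations of the following generic form; the notation here is illustrative and is not claimed to be the paper's exact formulation.

    \[
      \nabla \cdot \big( \mathbf{K}_i \, \nabla p_i \big)
      \;+\; \sum_{j \neq i} \beta_{ij}\,\big(p_j - p_i\big) \;=\; 0,
      \qquad i, j \in \{a, c, v\},
    \]

where p_i is the Darcy pressure in the arteriole (a), capillary (c) and venule (v) compartments, \mathbf{K}_i the (possibly anisotropic) permeability tensor of compartment i, and \beta_{ij} the inter-compartment coupling coefficients; perfusion estimates then follow from the inter-compartment pressure differences.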
Multiscale computing with the multiscale modeling library and runtime environment
We introduce a software tool to simulate multiscale models: The Multiscale Coupling Library and Environment 2 (MUSCLE 2). MUSCLE 2 is a component-based modeling tool inspired by the multiscale modeling and simulation framework, with an easy-to-use API which supports Java, C++, C, and Fortran. We present MUSCLE 2's runtime features, such as its distributed computing capabilities, and its benefits to multiscale modelers. We also describe two multiscale models that use MUSCLE 2 to do distributed multiscale computing: An in-stent restenosis and a canal system model. We conclude that MUSCLE 2 is a notable improvement over the previous version of MUSCLE, and that it allows users to more flexibly deploy simulations of multiscale models, while improving their performance.
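To give a flavour of what component-based coupling of single-scale submodels looks like, here is a deliberately generic Python sketch of a macro and a micro submodel exchanging data at their coupling points; the class and function names are invented for this illustration and are not MUSCLE 2's actual API.

    # Generic illustration of component-based multiscale coupling; the names
    # (Submodel, couple, step) are invented for this sketch and are NOT the
    # MUSCLE 2 API.
    class Submodel:
        def __init__(self, name: str, state: float, dt: float):
            self.name, self.state, self.dt = name, state, dt

        def step(self, boundary_value: float) -> float:
            # Toy update: relax the state towards the value received from the
            # other submodel at the coupling interface.
            self.state += self.dt * (boundary_value - self.state)
            return self.state

    def couple(macro: Submodel, micro: Submodel, macro_steps: int, micro_per_macro: int) -> None:
        for _ in range(macro_steps):
            macro_out = macro.step(micro.state)   # one coarse-scale update ...
            for _ in range(micro_per_macro):      # ... drives several fine-scale updates
                micro.step(macro_out)

    macro = Submodel("flow", state=1.0, dt=0.1)
    micro = Submodel("wall", state=0.0, dt=0.01)
    couple(macro, micro, macro_steps=50, micro_per_macro=10)
    print(f"final states: macro={macro.state:.3f}, micro={micro.state:.3f}")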
Semi-analytical approach to magnetized temperature autocorrelations
The cosmic microwave background (CMB) temperature autocorrelations, induced by a magnetized adiabatic mode of curvature inhomogeneities, are computed with semi-analytical methods. As suggested by the latest CMB data, a nearly scale-invariant spectrum for the adiabatic mode is consistently assumed. In this situation, the effects of a fully inhomogeneous magnetic field are scrutinized and constrained with particular attention to harmonics which are relevant for the region of Doppler oscillations. Depending on the parameters of the stochastic magnetic field, a hump may replace the second peak of the angular power spectrum. Detectable effects on the Doppler region are then expected only if the magnetic power spectra have quasi-flat slopes and typical amplitude (smoothed over a comoving scale of Mpc size and redshifted to the epoch of gravitational collapse of the protogalaxy) exceeding 0.1 nG. If the magnetic energy spectra are bluer (i.e. steeper in frequency), the allowed value of the smoothed amplitude becomes, comparatively, larger (in the range of 20 nG). The implications of this investigation for the origin of large-scale magnetic fields in the Universe are discussed. Connections with forthcoming experimental observations of CMB temperature fluctuations are also suggested and partially explored.
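For context, the temperature autocorrelation function referred to in this abstract is conventionally expanded in Legendre polynomials, which defines the angular power spectrum C_ℓ whose second peak is discussed above; the relation below is the standard definition, not a result of the paper.

    \[
      \left\langle \frac{\Delta T}{T}(\hat{n}_1)\,\frac{\Delta T}{T}(\hat{n}_2) \right\rangle
      \;=\; \frac{1}{4\pi} \sum_{\ell} (2\ell + 1)\, C_\ell\, P_\ell(\hat{n}_1 \cdot \hat{n}_2),
    \]

where P_ℓ are the Legendre polynomials and \hat{n}_1 \cdot \hat{n}_2 = \cos\theta gives the angular separation of the two lines of sight.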
Can computational efficiency alone drive the evolution of modularity in neural networks?
Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient, but large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus, while a high-performance modular attractor exists, it cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations in which multi-modularity is obtained through duplication of existing architecture appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means.
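The "local attractor" argument above can be made concrete with a toy example: a greedy search that only accepts small ("gradualistic") changes gets trapped on a nearby low fitness peak even though a higher peak exists elsewhere. The Python sketch below is purely illustrative and is not the study's actual simulation.

    # Toy illustration only (not the study's simulation): a greedy, small-step
    # search on a two-peaked fitness landscape stalls on the lower, nearer peak.
    import math
    import random

    def fitness(x: float) -> float:
        # Two Gaussian peaks: a low local optimum near x = 0, a high one near x = 5.
        return math.exp(-(x - 0.0) ** 2) + 2.0 * math.exp(-(x - 5.0) ** 2)

    def hill_climb(x: float, step: float = 0.05, iters: int = 10_000) -> float:
        for _ in range(iters):
            candidate = x + random.uniform(-step, step)   # small, gradual change
            if fitness(candidate) >= fitness(x):          # accept non-deleterious moves only
                x = candidate
        return x

    random.seed(0)
    x_end = hill_climb(x=0.5)   # start in the basin of the low peak
    print(f"ended at x={x_end:.2f}, fitness={fitness(x_end):.2f} (global peak ~2.0 at x=5)")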
Flexible composition and execution of high performance, high fidelity multiscale biomedical simulations
Multiscale simulations are essential in the biomedical domain to accurately model human physiology. We present a modular approach for designing, constructing and executing multiscale simulations on a wide range of resources, from laptops to petascale supercomputers, including combinations of these. Our work features two multiscale applications, in-stent restenosis and cerebrovascular bloodflow, which combine multiple existing single-scale applications to create a multiscale simulation. These applications can be efficiently coupled, deployed and executed on computers up to the largest (peta) scale, incurring a coupling overhead of 1–10% of the total execution time.
Editorial: Observational studies in ADHD: the effects of switching to modified-release methylphenidate preparations on clinical outcomes and adherence
Patients with ADHD may have better adherence to treatment with modified-release methylphenidate (MPH-MR) formulations, which are taken once daily, compared with immediate-release (IR) formulations, which need to be taken several times a day. Data on long-term outcomes such as adherence may be lacking from randomised controlled trials as these are usually only short-term. Observational studies, if performed and reported appropriately, can provide valuable long-term data on such outcomes, as well as additional information on effectiveness and efficiency, from a real-life setting. By reviewing previous observational studies that have investigated switching treatment from MPH-IR to MPH-MR, results from a new, naturalistic observational study, the OBSEER study, are put into context. We conclude that, based on observational trial data, switching from MPH-IR to MPH-MR is a valid clinical approach, with the potential for improved clinical outcome and treatment adherence.