599 research outputs found

    The hypotensive effect of salt substitutes in stage 2 hypertension: a systematic review and meta-analysis

    Get PDF
    Background: Hypertension (HTN) is a ubiquitous risk factor for numerous non-communicable diseases, including cardiovascular disease and stroke. There are currently no wholly effective pharmacological therapies for subjects with HTN; however, salt substitutes have emerged as a potential therapy for the treatment of HTN. The aim of the present study was to assess the effect of salt substitutes on reducing systolic blood pressure (SBP) and diastolic BP (DBP) through a meta-analysis of randomized controlled trials. Methods: Studies were identified via systematic searches of PubMed/MEDLINE, Scopus, Ovid, Google Scholar and the Cochrane Library. Ten studies, comprising 11 trials and 1119 participants, were included in the meta-analysis. Results: Pooled weighted mean differences showed significant reductions of SBP (WMD -8.87 mmHg; 95% CI -11.19, -6.55; p < 0.001) and DBP (WMD -4.04 mmHg; 95% CI -5.70, -2.39), with no statistically significant heterogeneity among the 11 included SBP and DBP comparisons. Stratified analysis of trials by mean participant age showed a significant reduction in the mean difference of SBP in both adults (< 65 years old) and the elderly (≄ 65 years old). However, the DBP-lowering effect of salt substitutes was observed only in adult patients (WMD -4.22 mmHg; 95% CI -7.85, -0.58), not in elderly subjects. Conclusions: These findings suggest that salt-substitution strategies could be used to lower SBP and DBP in patients with stage 2 HTN, providing a nutritional platform for the treatment, amelioration, and prevention of HTN. © 2020 The Author(s)
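
    As an illustration of how pooled estimates of this kind are formed, the sketch below computes a fixed-effect, inverse-variance pooled weighted mean difference with a 95% confidence interval. The trial values are invented and the review itself may have used a random-effects model; nothing below is taken from the included studies.

```python
import numpy as np

def pooled_wmd(effects, standard_errors):
    """Fixed-effect inverse-variance pooling of per-trial mean differences."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(standard_errors, dtype=float) ** 2
    wmd = np.sum(weights * effects) / np.sum(weights)       # pooled WMD
    se = np.sqrt(1.0 / np.sum(weights))                     # pooled standard error
    return wmd, (wmd - 1.96 * se, wmd + 1.96 * se)          # 95% CI

# Illustrative SBP mean differences (mmHg) and standard errors, not real data
print(pooled_wmd([-9.5, -7.2, -10.1], [1.8, 2.4, 2.0]))
```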

    Optimal sensor configuration for complex systems with application to signal detection in structures

    Get PDF

    Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    Get PDF
    The simultaneous perturbation stochastic approximation (SPSA) algorithm has recently attracted considerable attention for optimization problems where it is difficult or impossible to obtain a direct gradient of the objective (say, loss) function. The approach relies on a highly efficient simultaneous perturbation approximation to the gradient constructed from loss function measurements. SPSA picks a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo process. The first objective is to minimize the mean square error of the estimate; we also consider maximizing the likelihood that the estimate is confined within a bounded symmetric region of the true parameter. The optimal distribution for the components of the simultaneous perturbation vector is found to be a symmetric Bernoulli in both cases. We end the paper with a numerical study related to the area of experiment design.
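
    The core of SPSA is that a single pair of loss measurements, taken at the parameter vector perturbed in both directions along a random vector with symmetric Bernoulli (+/-1) components, yields an estimate of the full gradient. Below is a minimal sketch of that estimator and a basic SPSA loop; the gain-sequence constants, function names, and toy quadratic loss are illustrative choices, not values from the paper.

```python
import numpy as np

def spsa_gradient(loss, theta, c, rng):
    """One simultaneous-perturbation gradient estimate using +/-1 Bernoulli
    perturbations, the distribution found to be optimal in the paper."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    y_plus = loss(theta + c * delta)     # only two loss measurements,
    y_minus = loss(theta - c * delta)    # regardless of the dimension of theta
    return (y_plus - y_minus) / (2.0 * c * delta)

def spsa(loss, theta0, a=0.1, c=0.1, alpha=0.602, gamma=0.101, n_iter=500, seed=0):
    """Basic SPSA iteration with standard decaying gain sequences."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        a_k = a / (k + 1) ** alpha
        c_k = c / (k + 1) ** gamma
        theta = theta - a_k * spsa_gradient(loss, theta, c_k, rng)
    return theta

# Toy usage: minimize a quadratic centred at 3
print(spsa(lambda t: float(np.sum((t - 3.0) ** 2)), np.zeros(5)))
```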

    Optimal sensor configuration for complex systems

    Get PDF

    Redshift Evolution of the Merger Fraction of Galaxies in CDM Cosmologies

    Get PDF
    We use semi-analytical modelling of galaxy formation to study the redshift evolution of galaxy merger fractions and merger rates in Lambda CDM and quintessence (QCDM) cosmologies, and their dependence on physical parameters such as the environment, the merger timescale, the way major mergers are defined, and the minimum mass of objects taken into account. We find that for a given final halo mass the redshift dependence of the merger fraction F_mg and the resulting merger rate can be fitted well by a power law for redshifts z <= 1. The normalization F_mg(0) and the slope m depend on the final halo mass. For a given merger timescale t_merg and an assumed maximum mass ratio R_major for major mergers, F_mg(0) and m depend exponentially on each other. The slope m depends logarithmically on the ratio of the final halo mass to the minimum halo mass taken into account. In addition, the local normalization F_mg(0) increases for larger R_major while m decreases. We compare the predicted merger fraction with recent observations and find that the model cannot reproduce both the merger index and the normalization at the same time. In general the model underestimates F_mg(0) and m by a factor of 2. Comment: 9 pages, 3 figures, submitted to ApJ, referee's comments and one additional figure added
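
    A common parametrization of such a power law is F_mg(z) = F_mg(0) * (1 + z)^m for z <= 1; the paper may define it slightly differently. As a purely illustrative sketch (the redshifts and fractions below are invented, not model output), the fit can be done in log space:

```python
import numpy as np

def fit_merger_fraction(z, f_mg):
    """Least-squares fit of F_mg(z) = F_mg(0) * (1 + z)**m in log space."""
    slope, intercept = np.polyfit(np.log1p(z), np.log(f_mg), 1)
    return np.exp(intercept), slope   # (F_mg(0), m)

# Hypothetical merger fractions at a few redshifts z <= 1
f0, m = fit_merger_fraction(np.array([0.1, 0.3, 0.6, 1.0]),
                            np.array([0.020, 0.030, 0.045, 0.060]))
print(f0, m)
```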

    A finite element model for protein transport in vivo

    Get PDF
    Background: Biological mass transport processes determine the behavior and function of cells, regulate interactions between synthetic agents and recipient targets, and are key elements in the design and use of biosensors. Accurately predicting the outcomes of such processes is crucial to enhancing our understanding of how these systems function, to enabling the design of effective strategies to control their function, and to verifying that engineered solutions perform according to plan. Methods: A Galerkin-based finite element model was developed and implemented to solve a system of two coupled partial differential equations governing biomolecule transport and reaction in live cells. The simulator was coupled, in the framework of an inverse modeling strategy, with an optimization algorithm and an experimental time series, obtained by the Fluorescence Recovery after Photobleaching (FRAP) technique, to estimate biomolecule mass transport and reaction rate parameters. In the inverse algorithm, an adaptive method was implemented to calculate the sensitivity matrix. A multi-criteria termination rule was developed to stop the inverse code at the solution. The applicability of the model was illustrated by simulating the mobility and binding of GFP-tagged glucocorticoid receptor in the nucleoplasm of mouse adenocarcinoma cells. Results: The numerical simulator shows excellent agreement with the analytic solutions and experimental FRAP data. Detailed residual analysis indicates that the residuals have zero mean and constant variance and are normally distributed and uncorrelated; therefore, the necessary and sufficient criteria for the least-squares parameter optimization used in this study were met. Conclusion: The developed strategy is an efficient approach to extract as much physiochemical information from the FRAP protocol as possible. Well-posedness analysis of the inverse problem, however, indicates that the FRAP protocol provides insufficient information for unique simultaneous estimation of the diffusion coefficient and binding rate parameters. Care should be exercised in drawing inferences from FRAP data regarding concentrations of free and bound proteins, average binding and diffusion times, and protein mobility unless they are confirmed by long-range Markov Chain Monte Carlo (MCMC) methods and experimental observations.
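
    For reference, a commonly used single-binding-state reaction-diffusion system of the kind solved by such FRAP simulators (the paper's exact coupled PDEs and notation may differ) is:

```latex
\frac{\partial f}{\partial t} = D_f \nabla^2 f - k_{\mathrm{on}} f + k_{\mathrm{off}} b,
\qquad
\frac{\partial b}{\partial t} = k_{\mathrm{on}} f - k_{\mathrm{off}} b
```

    Here f and b are the concentrations of free and bound biomolecule, D_f is the diffusion coefficient, and k_on and k_off are the binding and unbinding rate parameters that an inverse algorithm estimates from the FRAP time series.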

    Ontologies, Mental Disorders and Prototypes

    Get PDF
    As has emerged from philosophical analyses and cognitive research, most concepts exhibit typicality effects and resist efforts to define them in terms of necessary and sufficient conditions. This also holds for many medical concepts. It is a problem for the design of computer science ontologies, since the knowledge representation formalisms commonly adopted in this field do not allow concepts to be represented in terms of typical traits. However, the need to represent concepts in terms of typical traits concerns almost every domain of real-world knowledge, including medical domains. In this article we consider the domain of mental disorders, starting from the DSM-5 descriptions of some specific mental disorders. In this respect, we favor a hybrid approach to the representation of psychiatric concepts, in which ontology-oriented formalisms are combined with a geometric representation of knowledge based on conceptual spaces

    Adding Environmental Gas Physics to the Semi-Analytic Method for Galaxy Formation: Gravitational Heating

    Full text link
    We present results of an attempt to include more detailed gas physics, motivated by hydrodynamical simulations, within semi-analytic models (SAMs) of galaxy formation, focusing on the role that environmental effects play. The main difference to previous SAMs is that we include 'gravitational' heating of the intra-cluster medium (ICM) by the net surplus of gravitational potential energy released from gas that has been stripped from infalling satellites. Gravitational heating appears to be an efficient heating source able to prevent cooling in environments corresponding to dark matter halos more massive than ∌10^{13} M_⊙. The energy release by gravitational heating can match that by AGN feedback in massive galaxies and can exceed it in the most massive ones. However, there is a fundamental difference in the way the two processes operate. Gravitational heating becomes important at late times, when the peak activity of AGNs is already over, and it is very mass dependent. This mass dependency and time behaviour gives the right trend to recover downsizing in the star-formation rate of massive galaxies. Abridged... Comment: replaced by accepted version to ApJ, some sections have been dropped and text has been added to others to include the referee's comments, several typos have been corrected

    Dynamic provisioning of cloud resources based on workload prediction

    Full text link
    © Springer Nature Singapore Pte Ltd. 2019. Most businesses nowadays have started using cloud platforms to host their software applications. A cloud platform is a shared resource that provides the various services, such as software as a service (SaaS), infrastructure as a service (IaaS) or anything as a service (XaaS), required to develop and deploy a business application. These cloud services are provided as virtual machines (VMs) that handle the end-users' requirements. Cloud providers have to ensure efficient resource handling across different time intervals to avoid wasting resources. Auto-scaling mechanisms take care of using these resources appropriately while providing an excellent quality of service. Researchers have used various approaches to perform auto-scaling. In this paper, a framework for dynamic provisioning of cloud resources based on workload prediction is discussed
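
    As a minimal sketch of the general idea of prediction-driven provisioning (not the framework described in the paper: the forecasting model, capacity figures, and class name below are hypothetical), a scaler might forecast the next interval's workload and size the VM pool accordingly:

```python
import math
from collections import deque

class PredictiveAutoscaler:
    """Toy dynamic-provisioning loop: forecast the next workload with a
    moving average, then choose enough VMs to keep the predicted load
    under a per-VM capacity."""

    def __init__(self, vm_capacity=100.0, window=5, min_vms=1, max_vms=20):
        self.vm_capacity = vm_capacity        # requests/s one VM can serve
        self.history = deque(maxlen=window)   # recent observed workloads
        self.min_vms, self.max_vms = min_vms, max_vms

    def predict_next(self):
        # Moving-average forecast; a real framework might use ARIMA or an LSTM.
        return sum(self.history) / len(self.history) if self.history else 0.0

    def target_vms(self, observed_workload):
        self.history.append(observed_workload)
        needed = math.ceil(self.predict_next() / self.vm_capacity)
        return max(self.min_vms, min(self.max_vms, needed))

# Usage with a made-up workload trace (requests per second)
scaler = PredictiveAutoscaler()
for load in [80, 150, 240, 420, 390]:
    print(load, "->", scaler.target_vms(load), "VMs")
```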

    Railway reinforced concrete infrastructure life management and sustainability index

    Get PDF
    Enhancing infrastructure health in order to save resources during operation is one of the most important objectives for owners within a cost-management-based decision support system. Accordingly, this paper investigates how to prioritize intervention actions, and how to select the inspection method and maintenance system for each component, given a limited amount of resources. Defects in infrastructure components generate data, and these data are undoubtedly useful for improving decision making in practice. In that sense, risk analysis and value of information can be applied using decision trees together with Bayesian networks. For data filtering and noise reduction, principal component analysis may also be applied to manage the database and prepare useful input variables for the decision tree system. This paper presents an approach that helps maintenance managers keep their infrastructure available, characterized by a sustainability index, at minimum cost. This index is intended as a tool for decision-makers with regard to cost management and sustainability aspects
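
    As a small illustration of the data-filtering step mentioned above (the inspection features, data, and variance threshold are hypothetical, not taken from the paper), principal component analysis can reduce noisy inspection records to a few input variables for a decision-tree system:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical inspection records: rows are components, columns are defect
# indicators (e.g. crack width, spalling area, chloride content, ...).
rng = np.random.default_rng(0)
records = rng.random((200, 12))

# Keep enough principal components to explain 95% of the variance,
# filtering measurement noise before the decision-tree / Bayesian-network stage.
pca = PCA(n_components=0.95)
inputs = pca.fit_transform(records)

print(inputs.shape, round(pca.explained_variance_ratio_.sum(), 3))
```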
    • 

    corecore