
    Scaling Size and Parameter Spaces in Variability-Aware Software Performance Models (T)

    In software performance engineering, what-if scenarios, architecture optimization, capacity planning, run-time adaptation, and uncertainty management of realistic models typically require the evaluation of many model instances. Effective analysis is, however, hindered by two orthogonal sources of complexity. The first is the infamous problem of state-space explosion: the analysis of a single model becomes intractable as its size grows. The second is due to massive parameter spaces that must be explored, with computations that cannot be reused across model instances. In this paper, we efficiently analyze many queuing models with the distinctive feature of capturing variability and uncertainty of execution rates more accurately by incorporating general (i.e., non-exponential) distributions. Applying product-line engineering methods, we consider a family of models generated by a core that evolves into concrete instances through simple delta operations affecting both the topology and the parameters of the model. State explosion is tackled by turning to a scalable approximation based on ordinary differential equations. The entire model space is analyzed in a family-based fashion, i.e., at once, using an efficient symbolic solution of a super-model that subsumes every concrete instance. Extensive numerical tests show that this is orders of magnitude faster than a naive instance-by-instance analysis.
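The ODE-based approximation mentioned in the abstract replaces a discrete state space with a deterministic fluid limit. A minimal sketch of the idea, for a single multi-server station with illustrative rates (the function name, rates, and Euler scheme are assumptions for exposition, not the paper's actual model):

```python
# Sketch of a fluid (mean-field) ODE approximation for a multi-server queue.
# Instead of tracking every discrete state, we integrate the mean queue
# length x(t), which evolves as dx/dt = lam - mu * min(x, servers).

def fluid_queue_trajectory(lam, mu, servers, x0=0.0, dt=0.01, t_end=20.0):
    """Euler-integrate the fluid limit of the queue length x(t)."""
    x, t = x0, 0.0
    while t < t_end:
        # Arrivals add fluid at rate lam; at most `servers` units
        # of work are drained, each at rate mu.
        x += dt * (lam - mu * min(x, servers))
        t += dt
    return x

# Stable regime (lam < mu * servers): x(t) settles near lam / mu = 2.5.
steady = fluid_queue_trajectory(lam=5.0, mu=2.0, servers=4)
```

The cost of this analysis is independent of the population size, which is what makes sweeping large parameter spaces feasible.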

    Runtime-guided mitigation of manufacturing variability in power-constrained multi-socket NUMA nodes

    This work has been supported by the Spanish Government (Severo Ochoa grants SEV2015-0493, SEV-2011-00067), by the Spanish Ministry of Science and Innovation (contracts TIN2015-65316-P), by Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), by the RoMoL ERC Advanced Grant (GA 321253) and the European HiPEAC Network of Excellence. M. Moretó has been partially supported by the Ministry of Economy and Competitiveness under Juan de la Cierva postdoctoral fellowship number JCI-2012-15047. M. Casas is supported by the Secretary for Universities and Research of the Ministry of Economy and Knowledge of the Government of Catalonia and the Cofund programme of the Marie Curie Actions of the 7th R&D Framework Programme of the European Union (Contract 2013 BP B 00243). This work was also partially performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-CONF-689878). Finally, the authors are grateful to the reviewers for their valuable comments, to the RoMoL team, to Xavier Teruel and Kallia Chronaki from the Programming Models group of BSC and the Computation Department of LLNL for their technical support and useful feedback. Peer Reviewed. Postprint (published version).

    Interpretable statistics for complex modelling: quantile and topological learning

    As the complexity of our data has increased exponentially in the last decades, so has our need for interpretable features. This thesis revolves around two paradigms to approach this quest for insights. In the first part we focus on parametric models, where the problem of interpretability can be seen as one of “parametrization selection”. We introduce a quantile-centric parametrization and show the advantages of our proposal in the context of regression, where it allows us to bridge the gap between classical generalized linear (mixed) models and increasingly popular quantile methods. The second part of the thesis, concerned with topological learning, tackles the problem from a non-parametric perspective. As topology can be thought of as a way of characterizing data in terms of their connectivity structure, it allows us to represent complex and possibly high-dimensional data through a few features, such as the number of connected components, loops, and voids. We illustrate how the emerging branch of statistics devoted to recovering topological structures in data, Topological Data Analysis, can be exploited for both exploratory and inferential purposes, with a special emphasis on kernels that preserve the topological information in the data. Finally, we show with an application how these two approaches can borrow strength from one another in the identification and description of brain activity, using fMRI data from the ABIDE project.
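The simplest topological feature the abstract mentions, the number of connected components, can be illustrated with a toy sketch: link every pair of points closer than a scale ε and count the resulting clusters. The union-find helper and the point cloud below are illustrative assumptions, far simpler than the thesis's actual TDA pipeline:

```python
# Count connected components of the eps-neighborhood graph of a 2D point
# cloud, using a small union-find. Varying eps and tracking when components
# merge is the basic idea behind persistent homology in dimension 0.

def count_components(points, eps):
    """Number of clusters when points within distance eps are linked."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2:
                parent[find(i)] = find(j)  # merge the two clusters
    return len({find(i) for i in range(len(points))})

# Two well-separated clusters: one component count per scale eps.
cloud = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5)]
```

At a small ε every point is its own component; at a moderate ε the two clusters emerge; at a very large ε everything merges into one.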

    BioDMET: a physiologically based pharmacokinetic simulation tool for assessing proposed solutions to complex biological problems

    We developed a detailed, whole-body physiologically based pharmacokinetic (PBPK) modeling tool for calculating the distribution of pharmaceutical agents in the various tissues and organs of a human or animal as a function of time. Ordinary differential equations (ODEs) represent the circulation of body fluids through organs and tissues at the macroscopic level, and the biological transport mechanisms and biotransformations within cells and their organelles at the molecular scale. Each major organ in the body is modeled as composed of one or more tissues. Tissues are made up of cells and fluid spaces. The model accounts for the circulation of arterial and venous blood as well as lymph. Since its development was fueled by the need to accurately predict the pharmacokinetic properties of imaging agents, BioDMET is more complex than most PBPK models. The anatomical details of the model are important for the imaging simulation endpoints. Model complexity has also been crucial for quickly adapting the tool to different problems without the need to generate a new model for every problem. When simpler models are preferred, the non-critical compartments can be dynamically collapsed to reduce unnecessary complexity. BioDMET has been used for imaging feasibility calculations in oncology, neurology, cardiology, and diabetes. For this purpose, the time–concentration data generated by the model are fed into a physics-based image simulator to establish imageability criteria. These are then used to define agent and physiology property ranges required for successful imaging. BioDMET has lately been adapted to aid the development of antimicrobial therapeutics.
Given its range of built-in features and its inherent flexibility for customization, the model can be used to study a variety of pharmacokinetic and pharmacodynamic problems, such as the effects of inter-individual differences and disease states on drug pharmacokinetics and pharmacodynamics, dosing optimization, and inter-species scaling. While developing a tool to aid imaging-agent and drug development, we aimed to accelerate the acceptance and broad use of PBPK modeling by providing a free, mechanistic PBPK software tool that is user friendly, easy to adapt to a wide range of problems even by non-programmers, and provided with ready-to-use parameterized models and benchmarking data collected from the peer-reviewed literature.
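The ODE bookkeeping a PBPK model performs can be sketched in miniature with a classic two-compartment system: drug moves between plasma and tissue, with first-order elimination from plasma. The rate constants and Euler scheme below are illustrative assumptions, vastly simpler than BioDMET's organ-level model:

```python
# Minimal two-compartment pharmacokinetic sketch:
#   dA_p/dt = k21*A_t - (k12 + ke)*A_p     (plasma amount)
#   dA_t/dt = k12*A_p - k21*A_t            (tissue amount)
# where k12/k21 are exchange rates and ke is first-order elimination.

def two_compartment(a_plasma, a_tissue, k12, k21, ke, dt=0.001, t_end=10.0):
    """Euler-integrate the two-compartment ODEs; return final amounts."""
    t = 0.0
    while t < t_end:
        flow_out = k12 * a_plasma      # plasma -> tissue
        flow_back = k21 * a_tissue     # tissue -> plasma
        eliminated = ke * a_plasma     # cleared from plasma
        a_plasma += dt * (flow_back - flow_out - eliminated)
        a_tissue += dt * (flow_out - flow_back)
        t += dt
    return a_plasma, a_tissue

# With no elimination (ke = 0), total drug amount is conserved and the
# compartments equilibrate at the ratio A_t / A_p = k12 / k21.
p, q = two_compartment(a_plasma=100.0, a_tissue=0.0, k12=1.0, k21=0.5, ke=0.0)
```

A whole-body PBPK model is the same pattern scaled up: one mass-balance ODE per tissue or fluid space, coupled by blood and lymph flows.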