
    Structure of Triadic Relations in Multiplex Networks

    Recent advances in the study of networked systems have highlighted that our interconnected world is composed of networks that are coupled to each other through different "layers" that each represent one of many possible subsystems or types of interactions. Nevertheless, it is traditional to aggregate multilayer networks into a single weighted network in order to take advantage of existing tools. This is admittedly convenient, but it is also extremely problematic, as important information can be lost as a result. It is therefore important to develop multilayer generalizations of network concepts. In this paper, we analyze triadic relations and generalize the idea of transitivity to multiplex networks. By focusing on triadic relations, which yield the simplest type of transitivity, we generalize the concept and computation of clustering coefficients to multiplex networks. We show how the layered structure of such networks introduces a new degree of freedom that has a fundamental effect on transitivity. We compute multiplex clustering coefficients for several real multiplex networks and illustrate why one must take great care when generalizing standard network concepts to multiplex networks. We also derive analytical expressions for our clustering coefficients for ensemble averages of networks in a family of random multiplex networks. Our analysis illustrates that social networks have a strong tendency to promote redundancy by closing triads at every layer and that they thereby have a different type of multiplex transitivity from transportation networks, which do not exhibit such a tendency. These insights are invisible if one only studies aggregated networks. Comment: Main text + Supplementary Material included in a single file. Published in New Journal of Physics.
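    The layer degree of freedom can be made concrete in a few lines. The sketch below implements one simplified, hypothetical variant of a multiplex clustering coefficient (not the paper's exact family of coefficients): 2-paths and the triangles that close them may draw each edge from any layer, so a triad split across layers still counts as closed.

```python
import numpy as np

def multiplex_global_clustering(layers):
    """Simplified multiplex transitivity (illustrative variant, not the
    paper's definition): ratio of closed 3-cycles to 2-paths, where every
    edge may come from any layer via the layer-summed matrix."""
    A = [np.asarray(a, dtype=float) for a in layers]
    S = sum(A)                       # edges pooled across layers
    triangles = np.trace(S @ S @ S)  # closed walks of length 3
    two_paths = (S @ S).sum() - np.trace(S @ S)  # ordered 2-paths
    return triangles / two_paths if two_paths else 0.0

# toy example: a triangle whose three edges live in two different layers
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # edges 0-1 and 1-2
A2 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])   # edge 0-2 closes the triad
print(multiplex_global_clustering([A1, A2]))        # -> 1.0
```

    On the aggregated (layer-summed) network this reduces to the standard global transitivity trace(A³)/(Σ(A²) − trace(A²)), which is exactly the kind of information-losing collapse the abstract warns about: the per-layer structure of the triad is no longer visible.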

    Mathematical Formulation of Multi-Layer Networks

    A network representation is useful for describing the structure of a large variety of complex systems. However, most real and engineered systems have multiple subsystems and layers of connectivity, and the data produced by such systems is very rich. Achieving a deep understanding of such systems necessitates generalizing "traditional" network theory, and the newfound deluge of data now makes it possible to test increasingly general frameworks for the study of networks. In particular, although adjacency matrices are useful to describe traditional single-layer networks, such a representation is insufficient for the analysis and description of multiplex and time-dependent networks. One must therefore develop a more general mathematical framework to cope with the challenges posed by multi-layer complex systems. In this paper, we introduce a tensorial framework to study multi-layer networks, and we discuss the generalization of several important network descriptors and dynamical processes (including degree centrality, clustering coefficients, eigenvector centrality, modularity, von Neumann entropy, and diffusion) for this framework. We examine the impact of different choices in constructing these generalizations, and we illustrate how to obtain known results for the special cases of single-layer and multiplex networks. Our tensorial approach will be helpful for tackling pressing problems in multi-layer complex systems, such as inferring who is influencing whom (and by which media) in multichannel social networks and developing routing techniques for multimodal transportation systems. Comment: 15 pages, 5 figures.
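    As a concrete illustration of the tensorial bookkeeping, the hedged sketch below builds a small fourth-order adjacency tensor M[i, a, j, b] (node i in layer a linked to node j in layer b), flattens it to the supra-adjacency matrix, and reads off an eigenvector centrality for node-layer pairs. All sizes, edges, and couplings are illustrative toy choices, not taken from the paper.

```python
import numpy as np

N, L = 3, 2                            # toy sizes: 3 nodes, 2 layers
M = np.zeros((N, L, N, L))             # adjacency tensor M[i, a, j, b]
M[0, 0, 1, 0] = M[1, 0, 0, 0] = 1.0    # intra-layer edge 0-1 in layer 0
M[1, 1, 2, 1] = M[2, 1, 1, 1] = 1.0    # intra-layer edge 1-2 in layer 1
for i in range(N):                     # couple each node's layer replicas
    M[i, 0, i, 1] = M[i, 1, i, 0] = 1.0

# flatten node-layer pairs: supra[i*L + a, j*L + b] = M[i, a, j, b]
supra = M.reshape(N * L, N * L)

# eigenvector centrality of node-layer pairs, then aggregated per node
vals, vecs = np.linalg.eigh(supra)     # eigenvalues in ascending order
centrality = np.abs(vecs[:, -1])       # leading (Perron) eigenvector
node_centrality = centrality.reshape(N, L).sum(axis=1)
print(node_centrality.argmax())        # -> 1: node 1 is active in both layers
```

    The flattening makes standard spectral tools applicable while the tensor keeps intra-layer and inter-layer terms distinguishable, which is the point of the framework.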

    Curses, Tradeoffs, and Scalable Management: Advancing Evolutionary Multiobjective Direct Policy Search to Improve Water Reservoir Operations

    Optimal management policies for water reservoir operation are generally designed via stochastic dynamic programming (SDP). Yet, the adoption of SDP in complex real-world problems is challenged by the three curses of dimensionality, modeling, and multiple objectives. These three curses considerably limit SDP’s practical application. Alternatively, this study focuses on the use of evolutionary multiobjective direct policy search (EMODPS), a simulation-based optimization approach that combines direct policy search, nonlinear approximating networks, and multiobjective evolutionary algorithms to design Pareto-approximate closed-loop operating policies for multipurpose water reservoirs. This analysis explores the technical and practical implications of using EMODPS through a careful diagnostic assessment of the effectiveness and reliability of the overall EMODPS solution design as well as of the resulting Pareto-approximate operating policies. The EMODPS approach is evaluated using the multipurpose Hoa Binh water reservoir in Vietnam, where water operators are seeking to balance the conflicting objectives of maximizing hydropower production and minimizing flood risks. A key choice in the EMODPS approach is the selection of alternative formulations for flexibly representing reservoir operating policies. This study distinguishes between the relative performance of two widely-used nonlinear approximating networks, namely artificial neural networks (ANNs) and radial basis functions (RBFs). The results show that RBF solutions are more effective than ANN ones in designing Pareto-approximate policies for the Hoa Binh reservoir. Given the approximate nature of EMODPS, the diagnostic benchmarking uses SDP to evaluate the overall quality of the attained Pareto-approximate results. Although the Hoa Binh test case’s relative simplicity should maximize the potential value of SDP, the results demonstrate that EMODPS successfully dominates the solutions derived via SDP.
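    The core idea of direct policy search with an RBF parameterization can be sketched compactly. The example below is a hedged, generic illustration (state variables, shapes, and names are invented for the sketch, not the paper's Hoa Binh setup): the release decision is a weighted sum of Gaussian kernels of the normalized reservoir state, and the evolutionary algorithm would tune the centers, radii, and weights.

```python
import numpy as np

def rbf_release_policy(state, centers, radii, weights):
    """Generic RBF operating policy for direct policy search (illustrative):
    maps a normalized state vector, e.g. [storage, sin(doy), cos(doy)],
    to a normalized release decision in [0, 1]."""
    phi = np.exp(-np.sum(((state - centers) / radii) ** 2, axis=1))
    u = float(weights @ phi)            # weighted sum of Gaussian kernels
    return min(max(u, 0.0), 1.0)        # clip to the feasible release range

# toy parameters standing in for what the MOEA would optimize
rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 basis functions, 3 state vars
radii   = rng.uniform(0.2, 1.0, size=(4, 3))
weights = rng.uniform(0.0, 1.0, size=4)
u = rbf_release_policy(np.array([0.5, 0.1, 0.9]), centers, radii, weights)
```

    Because the policy is a smooth, closed-loop map from state to decision, a simulation model can evaluate any parameter vector against the hydropower and flood objectives, which is what lets a multiobjective evolutionary algorithm search the policy space directly.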

    Genetically Predicted Type 2 Diabetes Mellitus Liability, Glycated Hemoglobin and Cardiovascular Diseases: A Wide-Angled Mendelian Randomization Study

    (1) Aim: To investigate the causal effects of T2DM liability and glycated haemoglobin (HbA1c) levels on various cardiovascular disease outcomes, both in the general population and in non-diabetic individuals specifically. (2) Methods: We selected 243 variants as genetic instruments for T2DM liability and 536 variants for HbA1c. Linear Mendelian randomization analyses were performed to estimate the associations of genetically-predicted T2DM liability and HbA1c with 12 cardiovascular disease outcomes in 367,703 unrelated UK Biobank participants of European ancestries. We performed secondary analyses in participants without diabetes (HbA1c < 6.5% with no diagnosed diabetes), and in participants without diabetes or pre-diabetes (HbA1c < 5.7% with no diagnosed diabetes). (3) Results: Genetically-predicted T2DM liability was positively associated (p < 0.004, i.e., 0.05/12) with peripheral vascular disease, aortic valve stenosis, coronary artery disease, heart failure, ischaemic stroke, and any stroke. Genetically-predicted HbA1c was positively associated with coronary artery disease and any stroke. Mendelian randomization estimates generally shifted towards the null when excluding diabetic and pre-diabetic participants from analyses. (4) Conclusions: This genetic evidence supports causal effects of T2DM liability and HbA1c on a range of cardiovascular diseases, suggesting that improving glycaemic control could reduce cardiovascular risk in a general population, with greatest benefit in individuals with diabetes.

    Genetic Determinants of Lipids and Cardiovascular Disease Outcomes: A Wide-Angled Mendelian Randomization Investigation.

    BACKGROUND: Evidence from randomized trials has shown that therapies that lower LDL (low-density lipoprotein)-cholesterol and triglycerides reduce coronary artery disease (CAD) risk. However, there is still uncertainty about their effects on other cardiovascular outcomes. We therefore performed a systematic investigation of causal relationships between circulating lipids and cardiovascular outcomes using a Mendelian randomization approach. METHODS: In the primary analysis, we performed 2-sample multivariable Mendelian randomization using data from participants of European ancestry. We also conducted univariable analyses using inverse-variance weighted and robust methods, and gene-specific analyses using variants that can be considered as proxies for specific lipid-lowering medications. We obtained associations with lipid fractions from the Global Lipids Genetics Consortium, a meta-analysis of 188 577 participants, and genetic associations with cardiovascular outcomes from 367 703 participants in UK Biobank. RESULTS: For LDL-cholesterol, in addition to the expected positive associations with CAD risk (odds ratio [OR] per 1 SD increase, 1.45 [95% CI, 1.35-1.57]) and other atheromatous outcomes (ischemic cerebrovascular disease and peripheral vascular disease), we found independent associations of genetically predicted LDL-cholesterol with abdominal aortic aneurysm (OR, 1.75 [95% CI, 1.40-2.17]) and aortic valve stenosis (OR, 1.46 [95% CI, 1.25-1.70]). Genetically predicted triglyceride levels were positively associated with CAD (OR, 1.25 [95% CI, 1.12-1.40]), aortic valve stenosis (OR, 1.29 [95% CI, 1.04-1.61]), and hypertension (OR, 1.17 [95% CI, 1.07-1.27]), but inversely associated with venous thromboembolism (OR, 0.79 [95% CI, 0.67-0.93]) and hemorrhagic stroke (OR, 0.78 [95% CI, 0.62-0.98]). We also found positive associations of genetically predicted LDL-cholesterol and triglycerides with heart failure that appeared to be mediated by CAD. 
CONCLUSIONS: Lowering LDL-cholesterol is likely to prevent abdominal aortic aneurysm and aortic stenosis, in addition to CAD and other atheromatous cardiovascular outcomes. Lowering triglycerides is likely to prevent CAD and aortic valve stenosis but may increase thromboembolic risk.
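    The multivariable and robust methods used in the study build on the basic inverse-variance-weighted (IVW) Mendelian randomization estimator. A textbook sketch of the univariable fixed-effect version, on invented toy data rather than the study's, is:

```python
import numpy as np

def ivw_estimate(beta_x, beta_y, se_y):
    """Fixed-effect IVW causal estimate: weighted regression (through the
    origin) of variant-outcome effects beta_y on variant-exposure effects
    beta_x, with weights 1/se_y**2. Standard textbook form, not the
    paper's full multivariable pipeline."""
    beta_x, beta_y, se_y = map(np.asarray, (beta_x, beta_y, se_y))
    w = 1.0 / se_y ** 2
    return np.sum(w * beta_x * beta_y) / np.sum(w * beta_x ** 2)

# toy data consistent with a true causal effect of 0.5
bx = np.array([0.10, 0.20, 0.30])   # variant-lipid associations
by = 0.5 * bx                       # variant-outcome associations
se = np.array([0.05, 0.05, 0.05])
print(ivw_estimate(bx, by, se))     # -> ~0.5
```

    The multivariable extension replaces the single exposure column with one column per lipid fraction, which is what allows LDL-cholesterol and triglyceride effects to be estimated jointly despite shared genetic variants.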

    Multicriteria Optimization Model to Generate on-DEM Optimal Channel Networks

    The theory of optimal channel networks (OCNs) explains the existence of self‐similarities in river networks by multiple optimality principles, namely, (i) the minimum energy expenditure in any link, (ii) the equal energy expenditure per unit area of channel anywhere, and (iii) the minimum total energy expenditure (TEE). These principles have been used to generate OCNs from 2‐D networks. The existing notion of OCN considers the concavity of river longitudinal profiles as an a priori condition. Attempts to generate OCNs starting from a random 3‐D digital elevation model (DEM) and minimizing solely TEE have failed to reproduce concave profiles. Yet alternative approaches can be devised from the three optimality principles, for instance, focusing on the local energy expenditure (LEE). In this paper, we propose a Multiobjective modeling framework for Riverscape Exploration (MoRE) via simultaneous optimization of multiple OCN criteria. MoRE adopts a multiobjective evolutionary algorithm and radial basis functions to efficiently guide the DEM elevation variations required to shape 3‐D OCNs. By minimizing both TEE and the variance in LEE, MoRE successfully reproduces realistic on‐DEM, OCN‐based riverscapes for the first time. Simulated networks possess scaling laws of upstream area and length and river longitudinal profiles resembling those of real river networks. The profile concavity of generated on‐DEM OCNs emerges as a consequence of the minimization of TEE constrained to the equalization of LEE. Minimizing TEE under this condition generates networks that possess specific patterns of LEE, where the scaling of slope with basin area resembles the patterns observed in real river networks.
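    The TEE objective has a compact classical form: over unit-length links, TEE = Σᵢ Aᵢ^γ with γ ≈ 0.5, where Aᵢ is the drainage area feeding link i. The sketch below computes it on a tiny flow-direction grid; it is illustrative of the classic OCN functional, not the MoRE formulation, and all names are invented.

```python
def drainage_area(downstream, j):
    """Number of cells whose flow path passes through cell j (incl. j).
    'downstream' maps each cell index to the cell it drains to (-1 = outlet)."""
    count = 0
    for i in range(len(downstream)):
        k = i
        while k != -1:
            if k == j:
                count += 1
                break
            k = downstream[k]
    return count

def total_energy_expenditure(downstream, gamma=0.5):
    """Classic OCN functional: one unit-length link per non-outlet cell,
    each dissipating energy proportional to A**gamma (illustrative)."""
    return sum(drainage_area(downstream, j) ** gamma
               for j in range(len(downstream)) if downstream[j] != -1)

# Y-shaped network: cells 0 and 1 both drain into 2, which drains to outlet 3
tee_branched = total_energy_expenditure([2, 2, 3, -1])
# straight chain 0 -> 1 -> 2 -> 3 draining the same four cells
tee_chain = total_energy_expenditure([1, 2, 3, -1])
print(tee_branched, tee_chain)
```

    The branched configuration yields the lower TEE (1 + 1 + √3 versus 1 + √2 + √3), a small-scale hint of why TEE minimization favors the tree-like, aggregating drainage patterns that OCN theory predicts.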

    A First Search for coincident Gravitational Waves and High Energy Neutrinos using LIGO, Virgo and ANTARES data from 2007

    We present the results of the first search for gravitational wave bursts associated with high energy neutrinos. Together, these messengers could reveal new, hidden sources that are not observed by conventional photon astronomy, particularly at high energy. Our search uses neutrinos detected by the underwater neutrino telescope ANTARES in its 5-line configuration during the period January - September 2007, which coincided with the fifth and first science runs of LIGO and Virgo, respectively. The LIGO-Virgo data were analysed for candidate gravitational-wave signals coincident in time and direction with the neutrino events. No significant coincident events were observed. We place limits on the density of joint high energy neutrino and gravitational wave emission events in the local universe, and compare them with densities of merger and core-collapse events. Comment: 19 pages, 8 figures; science summary page at http://www.ligo.org/science/Publication-S5LV_ANTARES/index.php. Public access area to figures and tables at https://dcc.ligo.org/cgi-bin/DocDB/ShowDocument?docid=p120000

    Association of LPA Variants With Risk of Coronary Disease and the Implications for Lipoprotein(a)-Lowering Therapies: A Mendelian Randomization Analysis.

    IMPORTANCE: Human genetic studies have indicated that plasma lipoprotein(a) (Lp[a]) is causally associated with the risk of coronary heart disease (CHD), but randomized trials of several therapies that reduce Lp(a) levels by 25% to 35% have not provided any evidence that lowering Lp(a) level reduces CHD risk. OBJECTIVE: To estimate the magnitude of the change in plasma Lp(a) levels needed to have the same evidence of an association with CHD risk as a 38.67-mg/dL (ie, 1-mmol/L) change in low-density lipoprotein cholesterol (LDL-C) level, a change that has been shown to produce a clinically meaningful reduction in the risk of CHD. DESIGN, SETTING, AND PARTICIPANTS: A mendelian randomization analysis was conducted using individual participant data from 5 studies and with external validation using summarized data from 48 studies. Population-based prospective cohort and case-control studies featured 20 793 individuals with CHD and 27 540 controls with individual participant data, whereas summarized data included 62 240 patients with CHD and 127 299 controls. Data were analyzed from November 2016 to March 2018. EXPOSURES: Genetic LPA score and plasma Lp(a) mass concentration. MAIN OUTCOMES AND MEASURES: Coronary heart disease. RESULTS: Of the included study participants, 53% were men, all were of white European ancestry, and the mean age was 57.5 years. The association of genetically predicted Lp(a) with CHD risk was linearly proportional to the absolute change in Lp(a) concentration. A 10-mg/dL lower genetically predicted Lp(a) concentration was associated with a 5.8% lower CHD risk (odds ratio [OR], 0.942; 95% CI, 0.933-0.951; P = 3 × 10⁻³⁷), whereas a 10-mg/dL lower genetically predicted LDL-C level estimated using an LDL-C genetic score was associated with a 14.5% lower CHD risk (OR, 0.855; 95% CI, 0.818-0.893; P = 2 × 10⁻¹²).
Thus, a 101.5-mg/dL change (95% CI, 71.0-137.0) in Lp(a) concentration had the same association with CHD risk as a 38.67-mg/dL change in LDL-C level. The association of genetically predicted Lp(a) concentration with CHD risk appeared to be independent of changes in LDL-C level owing to genetic variants that mimic the relationship of statins, PCSK9 inhibitors, and ezetimibe with CHD risk. CONCLUSIONS AND RELEVANCE: The clinical benefit of lowering Lp(a) is likely to be proportional to the absolute reduction in Lp(a) concentration. Large absolute reductions in Lp(a) of approximately 100 mg/dL may be required to produce a clinically meaningful reduction in the risk of CHD similar in magnitude to what can be achieved by lowering LDL-C level by 38.67 mg/dL (ie, 1 mmol/L).
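    The headline 101.5-mg/dL figure can be reproduced approximately from the two reported odds ratios under the log-linear dose-response relationship the abstract describes:

```python
import math

# Equate the log-OR of a 38.67-mg/dL (1-mmol/L) LDL-C reduction with the
# log-OR of an x-mg/dL Lp(a) reduction, using the abstract's per-10-mg/dL ORs.
or_lpa_per_10 = 0.942   # OR per 10-mg/dL lower Lp(a)
or_ldl_per_10 = 0.855   # OR per 10-mg/dL lower LDL-C

target_log_or = (38.67 / 10) * math.log(or_ldl_per_10)
lpa_change = 10 * target_log_or / math.log(or_lpa_per_10)
print(round(lpa_change, 1))   # ~101 mg/dL, consistent with the reported 101.5
```

    The small residual difference from 101.5 reflects rounding of the published odds ratios to three decimal places.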

    Principal components analysis based control of a multi-dof underactuated prosthetic hand

    Background: Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled as the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degrees-of-freedom (DoF) hand to achieve dexterity requires selectively modulating many different EMG signals so that each joint moves independently, which can demand significant cognitive effort from the user. Methods: A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms mostly used in Activities of Daily Living (ADLs). The principal components set was derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and was subsequently used for control. Results: Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions: This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis.
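    The PCA dimensionality-reduction idea behind this control scheme can be sketched in a few lines. Here random synthetic data stand in for the CyberHand's recorded 16-DoF grasp postures, and all names are illustrative; the point is the round trip from a 2-D control input to a full joint posture along the first two principal components.

```python
import numpy as np

# synthetic stand-in for 50 recorded grasps x 16 joint angles
rng = np.random.default_rng(1)
postures = rng.normal(size=(50, 16))
mean = postures.mean(axis=0)

# principal components via SVD of the centered posture matrix
_, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
pc = vt[:2]                            # 2 x 16: first two components

def posture_from_2d(u):
    """Map a 2-D control input (e.g. from two EMG channels) to a full
    16-DoF joint posture along the first two principal components."""
    return mean + np.asarray(u) @ pc

q = posture_from_2d([1.5, -0.3])       # a 16-dimensional posture vector
```

    Because the component rows are orthonormal, projecting a generated posture back onto them recovers the original 2-D input, so the low-dimensional control space is an exact coordinate system for the postures it can reach.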

    Body mass index and all cause mortality in HUNT and UK Biobank studies: linear and non-linear mendelian randomisation analyses

    Objective To investigate the shape of the causal relation between body mass index (BMI) and mortality. Design Linear and non-linear mendelian randomisation analyses. Setting Nord-Trøndelag Health (HUNT) Study (Norway) and UK Biobank (United Kingdom). Participants Participants of European descent in middle to early late adulthood: 56 150 from the HUNT Study and 366 385 from UK Biobank. Main outcome measures All cause and cause specific (cardiovascular, cancer, and non-cardiovascular non-cancer) mortality. Results 12 015 and 10 344 participants died during a median of 18.5 and 7.0 years of follow-up in the HUNT Study and UK Biobank, respectively. Linear mendelian randomisation analyses indicated an overall positive association between genetically predicted BMI and the risk of all cause mortality. An increase of 1 unit in genetically predicted BMI led to a 5% (95% confidence interval 1% to 8%) higher risk of mortality in overweight participants (BMI 25.0-29.9) and a 9% (4% to 14%) higher risk of mortality in obese participants (BMI ≥30.0), but a 34% (16% to 48%) lower risk in underweight (BMI <18.5) and a 14% (−1% to 27%) lower risk in low normal weight participants (BMI 18.5-19.9). Non-linear mendelian randomisation indicated a J shaped relation between genetically predicted BMI and the risk of all cause mortality, with the lowest risk at a BMI of around 22-25 for the overall sample. Subgroup analyses by smoking status, however, suggested an always-increasing relation of BMI with mortality in never smokers and a J shaped relation in ever smokers. Conclusions The previously observed J shaped relation between BMI and risk of all cause mortality appears to have a causal basis, but subgroup analyses by smoking status revealed that the BMI-mortality relation is likely composed of at least two distinct curves, rather than one J shaped relation. An increased risk of mortality for being underweight was only evident in ever smokers.