
    Bayesian model selection for exponential random graph models via adjusted pseudolikelihoods

    Models with intractable likelihood functions arise in areas including network analysis and spatial statistics, especially those involving Gibbs random fields. Posterior parameter estimation in these settings is termed a doubly-intractable problem because both the likelihood function and the posterior distribution are intractable. The comparison of Bayesian models is often based on the statistical evidence, the integral of the un-normalised posterior distribution over the model parameters, which is rarely available in closed form. For doubly-intractable models, estimating the evidence adds another layer of difficulty. Consequently, selecting the model that best describes an observed network among a collection of exponential random graph models is a daunting task. Pseudolikelihoods offer a tractable approximation to the likelihood but should be treated with caution because they can lead to unreasonable inferences. This paper specifies a method to adjust pseudolikelihoods in order to obtain a reasonable, yet tractable, approximation to the likelihood. This allows implementation of widely used computational methods for evidence estimation and the pursuit of Bayesian model selection of exponential random graph models for the analysis of social networks. Empirical comparisons to existing methods show that our procedure yields similar evidence estimates, but at a lower computational cost.
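
    The pseudolikelihood at the heart of the method has a simple concrete form for ERGMs: each dyad's edge indicator is modelled conditionally on the rest of the graph, which reduces to a logistic regression on the change statistics. A minimal sketch of that baseline for an edges-only model (our own toy code, not the paper's adjustment procedure):

```python
import numpy as np

def log_pseudolikelihood(theta, adj, change_stats):
    """ERGM log-pseudolikelihood: Bernoulli log-likelihood of each dyad
    given the rest of the graph. change_stats has one row per dyad holding
    the change in each sufficient statistic when that dyad is toggled on."""
    iu = np.triu_indices(adj.shape[0], k=1)   # one entry per undirected dyad
    y = adj[iu]                               # observed edge indicators
    eta = change_stats @ theta                # conditional log-odds of an edge
    return np.sum(y * eta - np.log1p(np.exp(eta)))

# toy usage: edges-only model, where every dyad's change statistic is 1
rng = np.random.default_rng(0)
n = 10
upper = np.triu((rng.random((n, n)) < 0.3).astype(int), k=1)
adj = upper + upper.T                         # symmetric 0/1 adjacency matrix
change_stats = np.ones((n * (n - 1) // 2, 1))
print(log_pseudolikelihood(np.array([-0.5]), adj, change_stats))
```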

    Variance reduction techniques for estimating quantiles and value-at-risk

    Quantiles, as a performance measure, arise in many practical contexts. In finance, quantiles are called values-at-risk (VaRs), and they are widely used in the financial industry to measure portfolio risk. When the cumulative distribution function is unknown, the quantile cannot be computed exactly and must be estimated. In addition to computing a point estimate for the quantile, it is important to also provide a confidence interval for the quantile as a way of indicating the error in the estimate. A problem with crude Monte Carlo is that the resulting confidence interval may be large, which is often the case when estimating extreme quantiles. This motivates applying variance-reduction techniques (VRTs) to try to obtain more efficient quantile estimators. Much of the previous work on estimating quantiles using VRTs did not provide methods for constructing asymptotically valid confidence intervals. This research developed asymptotically valid confidence intervals for quantiles that are estimated using simulation with VRTs. The VRTs considered were importance sampling (IS), stratified sampling (SS), antithetic variates (AV), and control variates (CV). The method of proving the asymptotic validity was to first show that the quantile estimators obtained with VRTs satisfy a Bahadur-Ghosh representation. This was then employed to prove central limit theorems (CLTs) and to obtain consistent estimators of the variances in the CLTs, which were used to construct confidence intervals. After the theoretical framework was established, explicit algorithms were presented to construct confidence intervals for quantiles when applying IS+SS, AV, and CV. An empirical study of the finite-sample behavior of the confidence intervals was also performed on two stochastic models: a standard normal/bivariate normal distribution and a stochastic activity network (SAN).
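
    As a point of reference for what the VRT-based intervals improve on, the crude Monte Carlo baseline already admits a distribution-free confidence interval built from order statistics. A hedged sketch (our own illustration; the function name and the standard-normal loss model are assumptions, not the thesis's code):

```python
import numpy as np
from scipy import stats

def mc_quantile_ci(samples, p, alpha=0.05):
    """Crude Monte Carlo p-quantile estimate with a nonparametric
    (1 - alpha) confidence interval from order statistics."""
    x = np.sort(samples)
    n = len(x)
    q_hat = x[int(np.ceil(p * n)) - 1]        # point estimate of the p-quantile
    # order-statistic indices bracketing the quantile, from the binomial
    # distribution of the number of samples falling below it
    lo = int(stats.binom.ppf(alpha / 2, n, p))
    hi = int(stats.binom.ppf(1 - alpha / 2, n, p))
    return q_hat, x[lo], x[min(hi, n - 1)]

rng = np.random.default_rng(1)
losses = rng.standard_normal(100_000)         # stand-in portfolio loss model
print(mc_quantile_ci(losses, p=0.99))         # 99% VaR estimate with 95% CI
```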

    Functional brain networks before the onset of psychosis: a prospective fMRI study with graph theoretical analysis

    Individuals with an at-risk mental state (ARMS) have a risk of developing a psychotic disorder significantly greater than that of the general population. However, it is not currently possible to predict which ARMS individuals will develop psychosis from clinical assessment alone. Comparison of ARMS subjects who do, and do not, develop psychosis can reveal which factors are critical for the onset of illness. In the present study, 37 patients with an ARMS were followed clinically at least 24 months subsequent to initial referral. Functional MRI data were collected at the beginning of the follow-up period during performance of an executive task known to recruit frontal lobe networks and to be impaired in psychosis. Graph theoretical analysis was used to compare the organization of a functional brain network in ARMS patients who developed a psychotic disorder following the scan (ARMS-T) to those who did not become ill during the same follow-up period (ARMS-NT) and age-matched controls. The global properties of each group's representative network were studied (density, efficiency, global average path length) as well as regionally-specific contributions of network nodes to the organization of the system (degree, farness-centrality, betweenness-centrality). We focused our analysis on the dorsal anterior cingulate cortex (ACC), a region known to support executive function that is structurally and functionally impaired in ARMS patients. In the absence of between-group differences in global network organization, we report a significant reduction in the topological centrality of the ACC in the ARMS-T group relative to both ARMS-NT and controls. These results provide evidence that abnormalities in the functional organization of the brain predate the onset of psychosis, and suggest that loss of ACC topological centrality is a potential biomarker for transition to psychosis.
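
    The nodal measures named above are standard graph-theoretic quantities. A brief sketch of how they can be computed with networkx on a toy graph (our own illustration, not the fMRI data; farness-centrality is shown via its reciprocal, closeness):

```python
import networkx as nx

G = nx.karate_club_graph()                    # stand-in for a functional network
degree = dict(G.degree())                     # number of edges at each node
betweenness = nx.betweenness_centrality(G)    # share of shortest paths through a node
closeness = nx.closeness_centrality(G)        # reciprocal of farness-centrality

# node with the highest topological centrality in this toy network
node = max(betweenness, key=betweenness.get)
print(node, degree[node], betweenness[node], closeness[node])
```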

    Time-Uncertainty Analysis by Using Simulation in Project Scheduling Networks

    Risks are inherently present in all construction projects. Quite often, construction projects fail to achieve their time, quality and budget goals. Risk management is a subject that has grown in popularity during the last decade. It is a formal, orderly process for systematically identifying, analysing and responding to risks associated with construction projects so as to reduce the effects of these risks to an acceptable level. Risk analysis is primarily concerned with evaluating uncertainties. The purpose of risk analysis is to enable a decision-maker to take an appropriate response in advance against a possible occurrence of a problem. In this study, Monte Carlo simulation was used as a tool of risk analysis. The merge event bias, one of the essential problems associated with PERT, is discussed, along with models and approaches developed by other researchers, namely, the Probabilistic Network Evaluation Technique (PNET algorithm), Modified PNET, the Back-Forward Uncertainty Estimation procedure (BFUE) and a concept based on the robust reliability idea. These approaches are more reliable in planning construction projects than PERT because they attempt to handle the merge event bias problem. In addition, this study demonstrates a number of benefits, the most significant among them being that: (1) Formal risk management techniques are rarely used in construction. Dealing with risk management in construction is now essential for minimizing losses and enhancing profitability. (2) It is very dangerous to rely only on PERT/CPM conventional techniques in scheduling projects. (3) Using floats, as stated by the traditional resource allocation method, is not practicable. (4) For a project network, the probability of completing the project by a given date is equal to the product of the probabilities, taken separately, that each path completes by that date; this statement is now validated using simulation. (5) The computation error of a project's likely completion date is less than 10 percent if any path whose float is greater than twice the larger of its own standard deviation and that of the critical path is dropped from the calculation, and (6) An effective risk response framework is introduced to help contractors systematically manage the risk in scheduling their projects.
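
    Finding (4) and the merge event bias are easy to see in a simulation: when two paths must both finish, the completion probability is the product of the per-path probabilities (for independent paths), and a single-path PERT estimate is optimistic. A minimal sketch with toy triangular activity durations (our own numbers, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs = 100_000
# two parallel paths, each a sum of triangular(low, mode, high) activities
path_a = rng.triangular(8, 10, 15, n_runs) + rng.triangular(4, 5, 9, n_runs)
path_b = rng.triangular(7, 11, 14, n_runs) + rng.triangular(3, 5, 8, n_runs)
completion = np.maximum(path_a, path_b)   # merge event: both paths must finish
target = 18.0

p_a = np.mean(path_a <= target)           # single-path (PERT-style) estimate
p_b = np.mean(path_b <= target)
print("P(project done by target):", np.mean(completion <= target))
print("product of path probabilities:", p_a * p_b)  # finding (4), indep. paths
print("single-path estimate (optimistic):", p_a)
```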

    Characterization of gradient estimators for stochastic activity networks

    This thesis aims to characterize the statistical properties of Monte Carlo simulation-based gradient estimation techniques for performance measures in stochastic activity networks (SANs), using the estimators' variance as the comparison criterion. When analyzing SANs, both performance measures and their sensitivities (gradient, Hessian) are important. This thesis focuses on analyzing three direct gradient estimation techniques: infinitesimal perturbation analysis, the score function or likelihood ratio method, and weak derivatives. To investigate how the statistical properties of the different gradient estimation techniques depend on characteristics of the SAN, we carry out both theoretical analyses and numerical experiments. The objective of these studies is to provide guidelines for selecting which technique to use for particular classes of SANs based on features such as complexity, size, shape and interconnectivity. The results reveal that a specific weak derivatives-based method with common random numbers outperforms the other direct techniques in nearly every network configuration tested.
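
    Two of the three techniques can be contrasted on the smallest possible example. For an activity with duration X = theta*E, where E ~ Exp(1), infinitesimal perturbation analysis differentiates the sample path (dX/dtheta = X/theta), while the likelihood ratio method weights the performance measure by the score of the input density. A hedged sketch for a two-activity series network, differentiating E[T] with respect to the mean of activity 1 (our own toy code, not the thesis's implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, theta2, n = 2.0, 3.0, 200_000
x1 = theta * rng.exponential(1.0, n)      # activity 1 ~ Exp(mean theta)
x2 = theta2 * rng.exponential(1.0, n)     # activity 2 ~ Exp(mean theta2)
T = x1 + x2                               # series network: T = X1 + X2

ipa = x1 / theta                          # IPA: dT/dtheta = dX1/dtheta = X1/theta
score = (x1 - theta) / theta**2           # d/dtheta log f(X1; theta), exponential
lr = T * score                            # likelihood-ratio/score-function estimator

print("true dE[T]/dtheta = 1")
print("IPA mean:", ipa.mean(), " LR mean:", lr.mean())
print("IPA var:", ipa.var(), " LR var:", lr.var())  # LR is noisier here
```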

    The Outage Probability of a Finite Ad Hoc Network in Nakagami Fading

    An ad hoc network with a finite spatial extent and number of nodes or mobiles is analyzed. The mobile locations may be drawn from any spatial distribution, and interference-avoidance protocols or protection against physical collisions among the mobiles may be modeled by placing an exclusion zone around each radio. The channel model accounts for the path loss, Nakagami fading, and shadowing of each received signal. The Nakagami m-parameter can vary among the mobiles, taking any positive value for each of the interference signals and any positive integer value for the desired signal. The analysis is governed by a new exact expression for the outage probability, defined to be the probability that the signal-to-interference-and-noise ratio (SINR) drops below a threshold, and is conditioned on the network geometry and shadowing factors, which have dynamics over much slower timescales than the fading. By averaging over many network and shadowing realizations, the average outage probability and transmission capacity are computed. Using the analysis, many aspects of the network performance are illuminated. For example, one can determine the influence of the choice of spreading factors, the effect of the receiver location within the finite network region, and the impact of both the fading parameters and the attenuation power laws. (To appear in IEEE Transactions on Communications.)
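
    Although the paper's contribution is an exact closed-form expression, the conditional outage probability it computes is straightforward to check by simulation: conditioned on fixed mobile locations, Nakagami-m power gains are Gamma(m, 1/m) with unit mean. A Monte Carlo sketch under assumed toy parameters (the distances, m-values, threshold and SNR are our own illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha, snr = 100_000, 3.5, 10.0
m0, mi = 3, 1.0                            # integer m for desired signal; m=1 interferers
r0 = 1.0                                   # desired link distance
ri = np.array([1.5, 2.0, 2.5])             # interferer distances (fixed geometry)

g0 = rng.gamma(m0, 1.0 / m0, n)            # desired fading power gain, unit mean
gi = rng.gamma(mi, 1.0 / mi, (n, len(ri))) # interferer fading power gains
signal = g0 * r0 ** -alpha                 # path loss: r^(-alpha)
interference = (gi * ri ** -alpha).sum(axis=1)
sinr = signal / (interference + 1.0 / snr) # noise power normalized to 1/SNR

beta = 1.0                                 # SINR outage threshold
print("conditional outage probability:", np.mean(sinr < beta))
```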

    Matching and network effects

    The matching of individuals in teams is a key element in the functioning of an economy. The network of social ties can potentially transmit important information on abilities and reputations and also help mitigate matching frictions by facilitating interactions among "screened" individuals. We conjecture that the probability of i and j forming a team is falling in the distance between i and j in the network of existing social ties. The objective of this paper is to empirically test this conjecture. We examine the formation of coauthor relations among economists over a twenty-year period. Our principal finding is that a new collaboration emerges faster between two researchers if they are "closer" in the existing coauthor network among economists. This proximity effect on collaboration is strong: being at a network distance of 2 instead of 3, for instance, raises the probability of initiating a collaboration by 27 percent. Research collaboration takes place in an environment where fairly detailed information concerning individual ability and productivity (reflected in publications, employment history, etc.) is publicly available. Our finding that social networks are powerful even in this setting suggests that they must affect matching processes more generally.
    Keywords: coauthorship network, matching, network effects, network formation.
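
    The distance variable behind the 27 percent figure is simply the shortest-path length between two researchers in the coauthor graph. A toy sketch of how such distances can be tabulated (networkx on an invented miniature network, not the paper's data):

```python
import networkx as nx
from collections import Counter

# invented miniature coauthor network: an edge means a past collaboration
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F")])
dist = dict(nx.all_pairs_shortest_path_length(G))
# tally unordered pairs by network distance; distance 1 = already coauthors
counts = Counter(d for u in G for v, d in dist[u].items() if u < v)
print(sorted(counts.items()))
```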