1,164 research outputs found

    Capturing and Treating Unobserved Heterogeneity by Response Based Segmentation in PLS Path Modeling. A Comparison of Alternative Methods by Computational Experiments

    Get PDF
    Segmentation within the PLS path modeling framework is a critical issue in the social sciences: the assumption that the data are collected from a single homogeneous population is often unrealistic. Sequential clustering techniques applied at the level of the manifest variables are ineffective at accounting for heterogeneity in path model estimates. Three statistical approaches related to PLS path modeling have been developed as solutions to this problem. The purpose of this paper is to present a study on sets of simulated data with different characteristics that allows a primary assessment of these methodologies.
    Keywords: Partial Least Squares; Path Modeling; Unobserved Heterogeneity
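
    To make the heterogeneity problem concrete, the following sketch (illustrative only; the two-segment setup and coefficient values are assumptions, not the paper's simulation design or any of the three compared methods) simulates two latent segments whose structural path coefficients differ and shows how the pooled estimate misrepresents both:

```python
import numpy as np

# Minimal sketch: two latent segments whose structural path
# coefficients differ. Pooling the data hides this heterogeneity.
rng = np.random.default_rng(0)
n = 500
xi = rng.normal(size=2 * n)                      # exogenous latent score
beta = np.r_[np.full(n, 0.8), np.full(n, -0.4)]  # segment-specific paths
eta = beta * xi + rng.normal(scale=0.3, size=2 * n)

pooled = np.polyfit(xi, eta, 1)[0]
seg1 = np.polyfit(xi[:n], eta[:n], 1)[0]
seg2 = np.polyfit(xi[n:], eta[n:], 1)[0]
print(f"pooled path: {pooled:.2f}, segment paths: {seg1:.2f}, {seg2:.2f}")
# The pooled estimate (around 0.2) misrepresents both segments (0.8 and
# -0.4), which is why response-based segmentation is needed.
```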

    A Hierarchical Multivariate Two-Part Model for Profiling Providers' Effects on Healthcare Charges

    Get PDF
    Procedures for analyzing and comparing healthcare providers' effects on health services delivery and outcomes have been referred to as provider profiling. In a typical profiling procedure, patient-level responses are measured for clusters of patients treated by providers that, in turn, can be regarded as statistically exchangeable. Thus, a hierarchical model naturally represents the structure of the data. When provider effects on multiple responses are profiled, a multivariate model, rather than a series of univariate models, can capture associations among responses at both the provider and patient levels. When responses are in the form of charges for healthcare services and sampled patients include non-users of services, charge variables are a mix of zeros and highly skewed positive values that present a modeling challenge. For analysis of regressor effects on charges for a single service, a frequently used approach is a two-part model (Duan, Manning, Morris, and Newhouse 1983) that combines logistic or probit regression on any use of the service and linear regression on the log of positive charges given use of the service. Here, we extend the two-part model to the case of charges for multiple services, using a log-linear model and a general multivariate log-normal model, and employ the resultant multivariate two-part model as the within-provider component of a hierarchical model. The log-linear likelihood is reparameterized as proposed by Fitzmaurice and Laird (1993), so that regressor effects on any use of each service are marginal with respect to any use of other services. The general multivariate log-normal likelihood is constructed in such a way that variances of log of positive charges for each service are provider-specific, but correlations between log of positive charges for different services are uniform across providers. A data augmentation step is included in the Gibbs sampler used to fit the hierarchical model, in order to accommodate the fact that values of log of positive charges are undefined for unused services. We apply this hierarchical, multivariate, two-part model to analyze the effects of primary care physicians on their patients' annual charges for two services, primary care and specialty care. Along the way, we also demonstrate an approach for incorporating prior information about the effects of patient morbidity on response variables, to improve the accuracy of provider profiles that are based on patient samples of limited size.
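
    A minimal single-service two-part model on simulated data may help fix ideas. This sketch follows the Duan et al. (1983) structure referenced above, not the paper's hierarchical multivariate extension; all variable names and parameter values are assumptions:

```python
import numpy as np
import statsmodels.api as sm

# Minimal single-service two-part model on simulated data.
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                       # patient-level regressor
X = sm.add_constant(x)

use = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * x))))   # any use?
log_charge = 4.0 + 0.5 * x + rng.normal(scale=0.7, size=n)   # given use

# Part 1: logistic regression on any use of the service.
part1 = sm.Logit(use, X).fit(disp=0)

# Part 2: linear regression on log positive charges, users only.
part2 = sm.OLS(log_charge[use == 1], X[use == 1]).fit()

# Expected charge combines both parts; the normal-theory retransformation
# exp(mu + sigma^2 / 2) is one simple choice (Duan's smearing is another).
p_use = part1.predict(X)
mu = part2.predict(X)
expected = p_use * np.exp(mu + part2.scale / 2)
print(expected[:5])
```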

    Nuni: A case study

    Get PDF

    Experimental analysis of computer system dependability

    Get PDF
    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
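
    Of the statistical techniques listed, importance sampling is easy to demonstrate in a few lines. The sketch below is a generic textbook example, not taken from FOCUS, DEPEND, or any tool named above; it estimates a rare failure probability that plain Monte Carlo misses at the same sample budget:

```python
import numpy as np

# Importance sampling to accelerate Monte Carlo: estimate the rare
# failure probability P(X > 6) for X ~ N(0, 1). Sampling from a shifted
# proposal N(6, 1) and reweighting by the likelihood ratio concentrates
# samples where the failure event occurs.
rng = np.random.default_rng(2)
n = 100_000
threshold = 6.0

# Plain Monte Carlo: essentially always returns 0 at this sample size.
plain = (rng.normal(size=n) > threshold).mean()

# Importance sampling from the shifted density g = N(threshold, 1).
y = rng.normal(loc=threshold, size=n)
weights = np.exp(-threshold * y + threshold**2 / 2)  # phi(y) / g(y)
is_est = np.mean((y > threshold) * weights)

print(f"plain MC: {plain:.2e}, importance sampling: {is_est:.2e}")
# The true value is about 9.87e-10; the weighted estimate reaches it with
# the same budget that leaves plain Monte Carlo empty-handed.
```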

    A Framework to Quantify Network Resilience and Survivability

    Get PDF
    The significance of resilient communication networks in modern society is well established. Resilience and survivability mechanisms in current networks are limited and domain-specific. Consequently, existing evaluation methods are either qualitative assessments or context-specific metrics. There is a need for rigorous quantitative evaluation of network resilience. We propose a service-oriented framework to characterize the resilience of networks to a number of faults and challenges at any abstraction level. This dissertation presents methods to quantify the operational state and the expected service of the network using functional metrics. We formalize resilience as transitions of the network state in a two-dimensional state space quantifying network characteristics, from which network service performance parameters can be derived. One dimension represents the network as normally operating, partially degraded, or severely degraded. The other dimension represents network service as acceptable, impaired, or unacceptable. Our goal is to initially understand how to characterize network resilience, and ultimately how to guide network design and engineering toward increased resilience. We apply the proposed framework to evaluate the resilience of various topologies and routing protocols. Furthermore, we present several mechanisms to improve the resilience of networks to various challenges.
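
    A hedged sketch of the two-dimensional state space might look as follows; the class names, metrics, and thresholds are illustrative assumptions, not the dissertation's actual formalization:

```python
from enum import Enum

# Two-dimensional resilience state space: one axis for the network's
# operational state, one for the service delivered to applications.
class OperationalState(Enum):
    NORMAL = 0
    PARTIALLY_DEGRADED = 1
    SEVERELY_DEGRADED = 2

class ServiceState(Enum):
    ACCEPTABLE = 0
    IMPAIRED = 1
    UNACCEPTABLE = 2

def classify(delivery_ratio: float, delay_ms: float) -> tuple:
    """Map functional metrics to a point in the 2D state space."""
    if delivery_ratio > 0.99:
        op = OperationalState.NORMAL
    elif delivery_ratio > 0.90:
        op = OperationalState.PARTIALLY_DEGRADED
    else:
        op = OperationalState.SEVERELY_DEGRADED
    if delay_ms < 50:
        svc = ServiceState.ACCEPTABLE
    elif delay_ms < 200:
        svc = ServiceState.IMPAIRED
    else:
        svc = ServiceState.UNACCEPTABLE
    return op, svc

# A challenge appears as a trajectory through the state space; resilience
# can then be scored by how far the network strays and how fast it returns.
samples = [(0.999, 20), (0.93, 120), (0.80, 400), (0.999, 25)]
for point in (classify(r, d) for r, d in samples):
    print(point)
```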

    A class of theory-decidable inference systems

    Get PDF
    In the last two decades, the Internet has brought a new dimension to communications: it is now possible to communicate with anyone, anywhere, at any time, in a few seconds. While some distributed communications, such as e-mail and chat, are rather informal and require no security at all, others, such as military or medical information exchange and electronic commerce, are highly formal and require quite strong security. To achieve security goals in distributed communications, it is common to use cryptographic protocols. However, the informal design and analysis of such protocols are error-prone; some protocols were shown to be deficient many years after their conception. It is now well known that formal methods are the only hope of designing completely secure cryptographic protocols. This thesis contributes to the field of cryptographic protocol analysis in the following ways:
    • A classification of the formal methods used in cryptographic protocol analysis.
    • The use of inference systems to model cryptographic protocols.
    • The definition of a class of theory-decidable inference systems.
    • The proposition of a decision procedure for a wide class of cryptographic protocols.
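
    As a rough illustration of modeling protocol knowledge with an inference system (the rule set and term encoding below are assumptions in the Dolev-Yao spirit, not the thesis's formalism), the following sketch saturates an intruder's knowledge under projection and decryption rules:

```python
# Terms are tuples: ("pair", a, b) and ("enc", msg, key); atoms are strings.

def close(knowledge):
    """Saturate a set of terms under projection and decryption rules."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if isinstance(t, tuple) and t[0] == "pair":
                # Projection: from <a, b> derive a and b.
                for part in t[1:]:
                    if part not in known:
                        known.add(part)
                        changed = True
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                # Decryption: from {m}_k and k derive m.
                if t[1] not in known:
                    known.add(t[1])
                    changed = True
    return known

# The intruder sees an encrypted pair and later learns the key k:
observed = {("enc", ("pair", "nonce_A", "secret"), "k"), "k"}
print(close(observed))
# Deciding whether a term like "secret" is derivable is exactly the kind
# of question the theory-decidability result above addresses.
```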

    Tools for climate change adaptation in water management - inventory and assessment of methods and tools

    Get PDF
    This report summarizes an inventory of methods and tools for assessing climate change impacts, vulnerability, and adaptation options, focusing on the water sector. Two questions are central: What are the opportunities for international application of Dutch methods and tools? And which methods and tools available abroad are suitable for application in the Netherlands?

    CER Computers as Weapons of Mass Disruption: The Yugoslav Computer Industry in the 1960s

    Get PDF
    The article investigates the history of the CER-10, the first Yugoslav electronic computer, and the subsequent failed attempt to establish a computer industry during the 1960s. While the CER-10 was an important milestone on the Yugoslav road to technological modernization, the aftermath of the project revealed a myriad of problems in the entire Yugoslav state system, including the simultaneous implementation of conflicting economic policies, the heavy hand of Aleksandar Ranković and the Yugoslav secret police in the country's economy, and the channeling of federal funds into Serbian companies without much economic rationale, all of which eventually brought the establishment of this high-tech industrial sector to a halt.