
    COPS: Cluster optimized proximity scaling

    Proximity scaling (i.e., multidimensional scaling and related methods) is a versatile statistical method whose general idea is to reduce the multivariate complexity in a data set by employing suitable proximities between the data points and finding low-dimensional configurations where the fitted distances optimally approximate these proximities. The ultimate goal, however, is often not only to find the optimal configuration but to infer statements about the similarity of objects in the high-dimensional space based on the similarity in the configuration. Since these two goals are somewhat at odds, it can happen that the resulting optimal configuration makes inferring similarities rather difficult. In that case the solution lacks "clusteredness" in the configuration (which we call "c-clusteredness"). We present a version of proximity scaling, coined cluster optimized proximity scaling (COPS), which solves the conundrum by introducing a more clustered appearance into the configuration while adhering to the general idea of multidimensional scaling. In COPS, an arbitrary MDS loss function is parametrized by monotonic transformations and combined with an index that quantifies the c-clusteredness of the solution. This index, the OPTICS cordillera, has intuitively appealing properties with respect to measuring c-clusteredness. This combination of MDS loss and index is called "cluster optimized loss" (coploss) and is minimized to push any configuration towards a more clustered appearance. The effect of the method will be illustrated with various examples: assessing similarities of countries based on the history of banking crises in the last 200 years, scaling Californian counties with respect to the projected effects of climate change and their social vulnerability, and preprocessing a data set of handwritten digits for subsequent classification by nonlinear dimension reduction. (authors' abstract)
    Series: Discussion Paper Series / Center for Empirical Research Method
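The coploss idea described above, an MDS badness-of-fit term combined with a weighted clusteredness index, can be sketched in a few lines. The following is a simplified illustration, not the authors' implementation: `c_index` is a crude stand-in for the OPTICS cordillera (it contrasts overall spread with nearest-neighbour distances), and the weight `v` and all function names are assumptions made for the sketch.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def stress(config, proximities):
    # Raw stress: squared error between fitted and target distances
    return np.sum((pdist(config) - proximities) ** 2)

def c_index(config, k=2):
    # Crude stand-in for the OPTICS cordillera: contrast between the
    # overall spread and the k-nearest-neighbour distances of the points
    # (larger value => more clustered appearance)
    dm = squareform(pdist(config))
    np.fill_diagonal(dm, np.inf)
    knn = np.sort(dm, axis=1)[:, :k].mean()
    return pdist(config).mean() - knn

def coploss_sketch(config, proximities, v=0.5):
    # "Cluster optimized loss": MDS badness-of-fit minus a weighted
    # clusteredness index, so minimising it trades fit for clusteredness
    return stress(config, proximities) - v * c_index(config)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))            # a candidate 2-D configuration
prox = pdist(rng.normal(size=(10, 5)))  # proximities from 5-D data
print(coploss_sketch(X, prox))
```

Minimising `coploss_sketch` over `config` (e.g. by gradient descent) would then pull the configuration towards a fit that also looks clustered, which is the trade-off the abstract describes.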

    Religion in Scots Law: Report of an Audit at the University of Glasgow

    No abstract available


    Developing and Measuring IS Scales Using Item Response Theory

    Information Systems (IS) research frequently uses survey data to measure the interplay between technological systems and human beings. Researchers have developed sophisticated procedures to build and validate multi-item scales that measure latent constructs. Most studies use classical test theory (CTT), which suffers from several theoretical shortcomings. We discuss these problems and present item response theory (IRT) as a viable alternative. Subsequently, we use the CTT approach as well as Rasch models (a class of restrictive IRT models) to develop a scale for measuring the hedonic aspects of websites. The results illustrate that IRT not only can be successfully applied in IS research but also provides improved results over CTT approaches.

    Breaking Free from the Limitations of Classical Test Theory: Developing and Measuring Information Systems Scales Using Item Response Theory

    Information systems (IS) research frequently uses survey data to measure the interplay between technological systems and human beings. Researchers have developed sophisticated procedures to build and validate multi-item scales that measure latent constructs. The vast majority of IS studies use classical test theory (CTT), but this approach suffers from three major theoretical shortcomings: (1) it assumes a linear relationship between the latent variable and observed scores, which rarely represents the empirical reality of behavioral constructs; (2) the true score can either not be estimated directly or only by making assumptions that are difficult to meet; and (3) parameters such as reliability, discrimination, location, or factor loadings depend on the sample being used. To address these issues, we present item response theory (IRT) as a collection of viable alternatives for measuring continuous latent variables by means of categorical indicators (i.e., measurement variables). IRT offers several advantages: (1) it assumes nonlinear relationships; (2) it allows more appropriate estimation of the true score; (3) it can estimate item parameters independently of the sample being used; (4) it allows the researcher to select items that are in accordance with a desired model; and (5) it applies and generalizes concepts such as reliability and internal consistency, and thus allows researchers to derive more information about the measurement process. We use a CTT approach as well as Rasch models (a special class of IRT models) to demonstrate how a scale for measuring hedonic aspects of websites is developed under both approaches. The results illustrate how IRT can be successfully applied in IS research and provide better scale results than CTT. We conclude by explaining the most appropriate circumstances for applying IRT, as well as the limitations of IRT.
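The nonlinear relationship IRT assumes, in contrast to CTT's linear one, is easy to see in the Rasch model the abstract mentions: the probability of a positive item response is a logistic function of the difference between the person's latent trait θ and the item's difficulty b. A minimal sketch (function names are our own, not from the paper):

```python
import numpy as np

def rasch_prob(theta, b):
    # Rasch model: probability of a positive response to an item of
    # difficulty b by a person with latent trait theta (logistic link,
    # i.e. nonlinear in theta rather than CTT's linear relation)
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Item characteristic curves for three items of increasing difficulty
thetas = np.linspace(-3, 3, 7)
for b in (-1.0, 0.0, 1.0):
    print(f"b={b:+.1f}:", np.round(rasch_prob(thetas, b), 2))
```

Because b and θ are on the same scale and can be estimated separately, item parameters do not depend on the particular sample, which is advantage (3) in the abstract.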

    Findings from a pilot randomised trial of an asthma internet self-management intervention (RAISIN)

    Objective: To evaluate the feasibility of a phase 3 randomised controlled trial (RCT) of a website (Living Well with Asthma) to support self-management.
    Design and setting: Phase 2, parallel group RCT; participants recruited from 20 general practices across Glasgow, UK. Randomisation through automated voice response, after baseline data collection, to website access for a minimum of 12 weeks or usual care.
    Participants: Adults (age ≥16 years) with physician-diagnosed, symptomatic asthma (Asthma Control Questionnaire (ACQ) score ≥1). People with unstable asthma or other lung disease were excluded.
    Intervention: 'Living Well with Asthma' is a desktop/laptop-compatible interactive website designed with input from asthma and behaviour-change specialists and from adults with asthma. It aims to support optimal medication management, promote use of action plans, encourage attendance at asthma reviews, and increase physical activity.
    Outcome measures: Primary outcomes were recruitment/retention, website use, ACQ and mini-Asthma Quality of Life Questionnaire (AQLQ). Secondary outcomes included patient activation, prescribing, adherence, spirometry, lung inflammation, and health service contacts after 12 weeks. Blinding post-randomisation was not possible.
    Results: Recruitment target met. 51 participants randomised (25 to the intervention group). Age range 16–78 years; 75% female; 28% from the most deprived quintile. 45/51 (88%; 20 in the intervention group) followed up. 19 (76% of the intervention group) used the website, for a mean of 18 min (range 0–49); 17 went beyond the 2 'core' modules. Median number of logins was 1 (IQR 1–2, range 0–7). No significant difference in the prespecified primary efficacy measures of ACQ scores (−0.36; 95% CI −0.96 to 0.23; p=0.225) and mini-AQLQ scores (0.38; −0.13 to 0.89; p=0.136). No adverse events.
    Conclusions: Recruitment and retention confirmed feasibility; trends towards improved outcomes suggest use of Living Well with Asthma may improve self-management in adults with asthma and merits further development followed by investigation in a phase 3 trial.

    On Feynman--Kac training of partial Bayesian neural networks

    Recently, partial Bayesian neural networks (pBNNs), which consider only a subset of the parameters to be stochastic, were shown to perform competitively with full Bayesian neural networks. However, pBNNs are often multi-modal in the latent-variable space and thus challenging to approximate with parametric models. To address this problem, we propose an efficient sampling-based training strategy, wherein the training of a pBNN is formulated as simulating a Feynman--Kac model. We then describe variations of sequential Monte Carlo samplers that allow us to simultaneously estimate the parameters and the latent posterior distribution of this model at a tractable computational cost. We show on various synthetic and real-world datasets that our proposed training scheme outperforms the state of the art in terms of predictive performance.
    Comment: Under review
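The sequential Monte Carlo machinery the abstract refers to can be sketched on a toy problem. This is not the paper's Feynman--Kac training scheme: the sketch below targets the posterior of a single stochastic parameter in a linear model (standing in for a pBNN's stochastic subset), reweighting a particle population batch by batch and resampling with jitter when the effective sample size degenerates; all names and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = w_true * x + noise; w plays the "stochastic" parameter
w_true = 2.0
x = rng.normal(size=100)
y = w_true * x + 0.5 * rng.normal(size=100)

# SMC over minibatches: reweight particles by the batch likelihood,
# resample (with jitter) when the effective sample size degenerates
n_particles = 500
particles = rng.normal(0.0, 3.0, size=n_particles)  # samples from prior
weights = np.full(n_particles, 1.0 / n_particles)

for start in range(0, len(x), 20):
    xb, yb = x[start:start + 20], y[start:start + 20]
    # Gaussian log-likelihood of the batch under each particle
    resid = yb[None, :] - particles[:, None] * xb[None, :]
    logw = -0.5 * np.sum(resid ** 2, axis=1) / 0.25  # noise var = 0.25
    weights *= np.exp(logw - logw.max())
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights ** 2)
    if ess < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx] + 0.05 * rng.normal(size=n_particles)
        weights = np.full(n_particles, 1.0 / n_particles)

posterior_mean = np.sum(weights * particles)
print(round(posterior_mean, 2))  # close to w_true
```

In the paper's setting the deterministic network weights would be optimised jointly while the particles track the multi-modal posterior over the stochastic subset; the batch-wise reweight/resample loop is the common core.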