
    Looking at the Schizophrenia Spectrum Through the Prism of Self-disorders: An Empirical Study

    Nonpsychotic anomalies of subjective experience were emphasized in both classic literature and phenomenological psychiatry as essential clinical features of schizophrenia. However, only in recent years has their relevance to the construct validity of the schizophrenia-spectrum concept been explicitly acknowledged, mainly as a consequence of the increasing focus on early detection and prevention of psychosis. The current study tested the hypothesis of a specific aggregation of self-disorders (SDs, various anomalies of self-awareness) in schizophrenia-spectrum conditions, comparing different diagnostic groups: 305 subjects, previously assessed in the Copenhagen Schizophrenia Linkage Study, were grouped into 4 experimental samples according to their Diagnostic and Statistical Manual of Mental Disorders (Third Edition Revised) main diagnosis: schizophrenia (n = 29), schizotypal personality disorder (n = 61), other mental illness not belonging to the schizophrenia spectrum (n = 112), and no mental illness (n = 103). The effect of diagnostic grouping on the level of SDs was explored via a general linear model and logistic regression. The diagnoses of schizophrenia and schizotypy predicted higher levels of SDs, and SD scores differed significantly between spectrum and nonspectrum samples; the likelihood of experiencing SDs also increased with diagnostic severity. The findings support the assumption that SDs are a discriminating psychopathological feature of the schizophrenia spectrum and suggest their incorporation to strengthen its construct validity, with potential benefit for both early detection and pathogenetic research.

    The Silent Side of the Spectrum: Schizotypy and the Schizotaxic Self

    The identification of individuals carrying unexpressed genetic liability to schizophrenia is crucial for both etiological research and clinical risk stratification. Subclinical psychopathological features detectable in the nonpsychotic part of the schizophrenia spectrum could improve the delineation of informative vulnerability phenotypes. Inspired by Meehl's schizotaxia-schizotypy heuristic model, we tested anomalous subjective experiences (self-disorders, SDs) as a candidate vulnerability phenotype in a sample of nonpsychotic, genetically high-risk subjects. A total of 218 unaffected members of 6 extended multiplex families (assessed between 1989 and 1999 during the Copenhagen Schizophrenia Linkage Study) were stratified into 4 groups of increasing psychopathological expressivity: no mental illness (NMI), no mental illness with schizotypal traits (NMI-ST), personality disorders other than SPD (OPDs), and schizotypal personality disorder (SPD). We tested the distribution of SDs among the subgroups, the effect of SDs on the risk of belonging to the different subgroups, and the effect of experimental grouping and concomitant psychopathology (ie, negative symptoms [NSs] and subpsychotic formal thought disorder [FTD]) on the chances of experiencing SDs. The SD distribution followed an incremental pattern from NMI to SPD. SDs were associated with a markedly increased risk of NMI-ST, OPDs, or SPD. The odds of SDs increased as a function of the diagnostic category assignment, independently of sociodemographics and concomitant subclinical psychopathology (NSs and FTD). The results support SDs as an expression of schizotaxic vulnerability and indicate a multidimensional model of schizotypy, characterized by SDs, NSs, and FTD, as a promising heuristic construct to address liability phenotypes in genetically high-risk studies.

    Can we avoid high coupling?

    It is considered good software design practice to organize source code into modules and to favour within-module connections (cohesion) over between-module connections (coupling), leading to the oft-repeated maxim "low coupling/high cohesion". Prior research into network theory and its application to software systems has found evidence that many important properties of real software systems, including coupling, exhibit approximately scale-free structure; researchers have claimed that such scale-free structures are ubiquitous. This implies that high coupling must be unavoidable, statistically speaking, apparently contradicting standard ideas about software structure. We present a model that leads to the simple predictions that approximately scale-free structures ought to arise for both between-module connectivity and overall connectivity, and not as the result of poor design or optimization shortcuts. These predictions are borne out by our large-scale empirical study. Hence we conclude that high coupling is not avoidable, and that this is in fact quite reasonable.
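    The paper's own growth model is not reproduced here, but the general mechanism it appeals to can be illustrated with a minimal preferential-attachment sketch, which shows how an approximately scale-free degree distribution can emerge from a simple growth rule rather than from poor design. All names and parameters below are illustrative, not the authors' model:

```python
import random
from collections import Counter

def preferential_attachment(n: int, m: int = 2, seed: int = 0) -> Counter:
    """Grow an undirected graph in which each new node attaches m edges
    to existing nodes chosen with probability proportional to degree.
    Returns a Counter mapping node -> degree."""
    rng = random.Random(seed)
    # Seed graph: a ring of m + 1 nodes, so every node starts with degree 2.
    repeated = []  # each node appears here once per incident edge end
    for v in range(m + 1):
        repeated += [v, (v + 1) % (m + 1)]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:  # degree-biased sampling of m distinct targets
            targets.add(rng.choice(repeated))
        for t in targets:
            repeated += [new, t]
    return Counter(repeated)

degrees = preferential_attachment(5000)
print("max degree:", max(degrees.values()))
print("mean degree:", sum(degrees.values()) / len(degrees))
```

    The heavy tail (a few nodes with degree far above the mean of roughly 2m) is the signature of the approximately scale-free connectivity the abstract describes.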

    Contracts in Practice

    Contracts are a form of lightweight formal specification embedded in the program text. Being executable parts of the code, they encourage programmers to devote proper attention to specifications, and help maintain consistency between specification and implementation as the program evolves. The present study investigates how contracts are used in the practice of software development. Based on an extensive empirical analysis of 21 contract-equipped Eiffel, C#, and Java projects totaling more than 260 million lines of code over 7700 revisions, it explores, among other questions: 1) which kinds of contract elements (preconditions, postconditions, class invariants) are used more often; 2) how contracts evolve over time; 3) the relationship between implementation changes and contract changes; and 4) the role of inheritance in the process. Among other results, it finds that the percentage of program elements that include contracts is above 33% for most projects and tends to be stable over time; that there is no strong preference for a certain type of contract element; that contracts are quite stable compared to implementations; and that inheritance does not significantly affect qualitative trends of contract usage.
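    As a minimal illustration of the three contract kinds the study counts (preconditions, postconditions, class invariants), here is a hypothetical Python sketch using plain assert statements; the studied projects used the native contract mechanisms of Eiffel, C#, and Java instead, and the class below is invented for illustration:

```python
class BankAccount:
    """Hypothetical example of the three contract kinds the study measures:
    a precondition, a postcondition, and a class invariant (via assert)."""

    def __init__(self, balance: int = 0):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # Class invariant: the balance never goes negative.
        assert self.balance >= 0, "invariant violated"

    def withdraw(self, amount: int) -> int:
        # Precondition: the request must be positive and covered.
        assert 0 < amount <= self.balance, "precondition violated"
        old = self.balance
        self.balance -= amount
        # Postcondition: exactly `amount` was removed.
        assert self.balance == old - amount, "postcondition violated"
        self._check_invariant()
        return self.balance

acct = BankAccount(100)
print(acct.withdraw(30))  # → 70
```

    Because the contracts execute with the code, a violation surfaces immediately, which is what keeps specification and implementation consistent as the program evolves.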

    Punctuated Equilibrium in Software Evolution

    An approach based on the paradigm of self-organized criticality is proposed for the experimental investigation and theoretical modelling of software evolution. The dynamics of modifications are studied for three free, open-source programs, Mozilla, FreeBSD, and Emacs, using data from their version control systems. Scaling laws typical of self-organized criticality are found. A model of software evolution embodying the natural-selection principle is proposed, and the results of numerical and analytical investigation of the model are presented. They are in good agreement with the data collected for the real-world software.

    Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results

    Testability is defined as the probability that a program will fail a test, conditional on the program containing some fault. In this paper, we show that statements about the testability of a program can be described more simply in terms of assumptions on the probability distribution of the failure intensity of the program. We can thus state general acceptance conditions in clear mathematical terms using Bayesian inference. We develop two scenarios: one for software whose reliability requirement is that it be completely fault-free, and another for requirements stated as an upper bound on the acceptable failure probability.
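    The paper's exact Bayesian machinery is not reproduced here, but the flavour of this kind of acceptance reasoning can be sketched under a simple two-point model: the program is fault-free with prior probability p, and otherwise each test fails independently with probability equal to the testability. Passing n tests then updates the posterior by Bayes' rule. This is a hypothetical simplification, not the paper's formulation:

```python
def posterior_fault_free(prior: float, testability: float, n_passed: int) -> float:
    """P(fault-free | n independent tests passed), under the two-point model:
    fault-free with probability `prior`; otherwise each test fails
    independently with probability `testability`."""
    p_pass_if_faulty = (1.0 - testability) ** n_passed
    return prior / (prior + (1.0 - prior) * p_pass_if_faulty)

# Higher testability makes a clean test run stronger evidence:
# a faulty but highly testable program is very unlikely to pass many tests.
print(posterior_fault_free(0.5, 0.01, 1000))
print(posterior_fault_free(0.5, 0.001, 1000))
```

    The dependence on testability is the key point: the same clean test run justifies acceptance much more strongly when the program, if faulty, would have been likely to fail.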

    Testing probability distributions underlying aggregated data

    In this paper, we analyze and study a hybrid model for testing and learning probability distributions. Here, in addition to samples, the testing algorithm is provided with one of two different types of oracle access to the unknown distribution D over [n]. More precisely, we define both the dual and cumulative dual access models, in which the algorithm A can both sample from D and, respectively, for any i ∈ [n], query the probability mass D(i) (query access), or get the total mass of {1, …, i}, i.e., ∑_{j=1}^{i} D(j) (cumulative access). These two models, by generalizing the previously studied sampling and query oracle models, allow us to bypass the strong lower bounds established for a number of problems in these settings, while capturing several interesting aspects of these problems and providing new insight into the limitations of the models. Finally, we show that while the testing algorithms can in most cases be strictly more efficient, some tasks remain hard even with this additional power.
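    A toy sketch of the oracle types may help fix ideas; the class and method names below are illustrative, not from the paper. It also shows how cumulative access subsumes sampling, via inverse-transform lookup on the CDF:

```python
import bisect
import random

class CumulativeDualOracle:
    """Toy model of cumulative dual access to a distribution D over {1,...,n}:
    the algorithm can draw samples and query cumulative masses. The class and
    method names are illustrative only."""

    def __init__(self, probs, seed=0):
        self._cdf = []
        total = 0.0
        for p in probs:
            total += p
            self._cdf.append(total)
        self._rng = random.Random(seed)

    def cumulative(self, i: int) -> float:
        """Cumulative access: the total mass of {1, ..., i}."""
        return self._cdf[i - 1]

    def mass(self, i: int) -> float:
        """Query access: the probability mass D(i)."""
        return self._cdf[i - 1] - (self._cdf[i - 2] if i > 1 else 0.0)

    def sample(self) -> int:
        # Cumulative access subsumes sampling: invert the CDF by binary
        # search (inverse-transform sampling).
        u = self._rng.random()
        return bisect.bisect_left(self._cdf, u) + 1

oracle = CumulativeDualOracle([0.5, 0.25, 0.25])
print(oracle.cumulative(2), oracle.mass(2), oracle.sample())
```

    The point of the generalization in the paper is that a tester holding such an oracle has strictly more power than one restricted to samples alone, which is what lets it evade sampling-only lower bounds.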

    Interface, a dispersed architecture

    Past and current specification techniques use timing diagrams and written text to describe the phenomenology of an interface. This paper treats an interface as the architecture of a number of processes, which are dispersed over the related system parts and the message path. This approach yields a precise definition of an interface. With this definition as a starting point, the inherent structure of an interface is developed. A horizontal and vertical partitioning strategy, based on one functional entity per partition and described by a state description, is used to specify the structure. This method allows unambiguous specification, interpretation, and implementation, and allows a much easier judgement of the quality of an interface. The method has been applied to a number of widely used interfaces.
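    As an illustration of "one functional entity per partition, described by a state description", here is a hypothetical request/acknowledge handshake partition expressed as an explicit state machine; the protocol and all names are invented for illustration, not taken from the paper:

```python
from enum import Enum, auto

class WireState(Enum):
    IDLE = auto()
    REQUESTED = auto()
    ACKED = auto()

class HandshakePartition:
    """One functional entity of a dispersed interface (a hypothetical
    request/acknowledge handshake), captured as a state description:
    every legal event is an entry in the transition table, so illegal
    sequences are rejected unambiguously."""

    TRANSITIONS = {
        (WireState.IDLE, "req"): WireState.REQUESTED,
        (WireState.REQUESTED, "ack"): WireState.ACKED,
        (WireState.ACKED, "release"): WireState.IDLE,
    }

    def __init__(self):
        self.state = WireState.IDLE

    def step(self, event: str) -> WireState:
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event {event!r} illegal in state {self.state.name}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

    Because the state description enumerates every legal transition, it supports the unambiguous interpretation and implementation the abstract claims for the method.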

    Return of the Great Spaghetti Monster: Learnings from a Twelve-Year Adventure in Web Software Development

    The widespread adoption of the World Wide Web has fundamentally changed the landscape of software development. Only ten years ago, very few developers would write software for the Web, let alone consider using JavaScript or other web technologies for writing any serious software applications. In this paper, we reflect upon a twelve-year adventure in web development that began with the development of the Lively Kernel system at Sun Microsystems Labs in 2006. Back then, we also published papers that identified important challenges in web-based software development based on established software engineering principles. We revisit our earlier findings and compare the state of the art in web development today to our earlier learnings, followed by some reflections and suggestions for the road forward.

    Coordination Implications of Software Coupling in Open Source Projects

    The effect of software coupling on the quality of software has been studied quite widely since the seminal paper on software modularity by Parnas [1]. However, the effect of an increase in software coupling on the coordination of developers has not been researched as much. In commercial software development environments there are normally coordination mechanisms in place to manage the coordination requirements arising from software dependencies. But in the case of Open Source software, such coordination mechanisms are harder to implement, as the developers tend to rely solely on electronic means of communication. Hence, an understanding of the changing coordination requirements is essential to the management of an Open Source project. In this paper we study the effect of changes in software coupling on coordination requirements in a case study of a popular Open Source project called JBoss.