
    INVESTIGATING THE LONGITUDINAL RELATIONSHIP BETWEEN SOCIAL MOTIVATION AND DEPRESSION IN AUTISTIC ADULTS

    Autism affects individuals across the lifespan, yet research and services for autistic adults remain limited. This is especially concerning given that autistic adults have high mental health needs, with depression being one of the most common and clinically significant co-occurring conditions. We explored the longitudinal relationships between social motivation, social access (i.e., having opportunities for meaningful social interactions), loneliness, and depression in N=303 autistic adults ages 18-65. Participants completed online surveys about social behavior and wellbeing three times over 3–4 months. We hypothesized that an interaction between higher social motivation and lower social access at Time 1 would predict depressive symptoms at Time 3 via the mediator of loneliness at Time 2. Our hypothesis was not supported, though loneliness significantly mediated the relationship between low social access at Time 1 and depression at Time 3. We discuss the non-significant interaction in light of the challenges of measuring social motivation, of defining and measuring "social access," and of possible bidirectional effects between social motivation and depressed mood that may have unfolded before the study. The findings nonetheless highlight the importance of social access for mood in this population, and of supporting meaningful social opportunities for all autistic adults, not just those who express a desire for social experiences.
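    The significant indirect path reported here (low social access at Time 1 → loneliness at Time 2 → depression at Time 3) is the kind of effect commonly tested with a percentile-bootstrap mediation analysis. Below is a minimal sketch of that test in Python, run on simulated stand-in data with hypothetical variable names; the study's actual measures and modeling choices are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 303  # matches the reported sample size

# Simulated stand-ins for the survey scores (illustrative only)
social_access_t1 = rng.normal(size=n)
loneliness_t2 = -0.5 * social_access_t1 + rng.normal(size=n)
depression_t3 = 0.6 * loneliness_t2 + rng.normal(size=n)

def indirect_effect(x, m, y):
    """a*b indirect effect: regress m on x, then y on both x and m."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

# Percentile bootstrap for the indirect effect
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    boot.append(indirect_effect(social_access_t1[i], loneliness_t2[i],
                                depression_t3[i]))

print("indirect effect:", indirect_effect(social_access_t1, loneliness_t2,
                                          depression_t3))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```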

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. It is, however, one of the most expensive, tedious, and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practiced for almost four decades. Model-based testing (MBT) is a relatively new approach in which software models, rather than other artifacts (e.g., source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of that test suite.
    The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral UML model and, since the major revision in UML 2.x, has a Petri-net-like semantics. It has a wide application scope, including embedded, workflow, and web-service systems; for this reason the thesis concentrates on AD models. The informal semantics of UML in general, and of AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. The thesis proposes a three-step transformation methodology for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support.
    Test case generation is one of the most critical and labor-intensive activities in the testing process. The flow-oriented semantics of AD suits the modeling of both sequential and concurrent systems, and the thesis presents a novel technique to generate test cases from an AD using a stochastic algorithm. To determine whether the generated test suite is adequate, two adequacy analysis techniques based on structural coverage and mutation are proposed. For structural coverage, two separate criteria evaluate the adequacy of the test suite from the sequential and the concurrent perspectives. Mutation analysis is a fault-based technique that determines whether the test suite can detect particular types of faults; four categories of mutation operators are defined to seed specific faults into the mutant model.
    Another focus of the thesis is improving test suite efficiency without compromising effectiveness, in particular by identifying and removing redundant test cases. It is shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary-computation-based minimization technique is developed to address it, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. Because redundancy is contextual, different criteria and their combinations can significantly change the solution test suite; the last part of the thesis therefore investigates multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models. Empirical results show that the techniques developed within the framework are effective in model-based test suite generation and optimization.
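    The minimization step described above reduces to a set-cover-style combinatorial problem, which evolutionary algorithms typically attack with a bit-string encoding: each bit selects one test case, and fitness penalizes lost coverage before rewarding smaller suites. A minimal sketch in Python under those assumptions follows; the coverage data, encoding, and GA parameters are illustrative, not the thesis's actual configuration.

```python
import random

# Hypothetical coverage data: test case -> set of covered model elements
coverage = {
    "t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"a", "c", "d"},
    "t4": {"d"}, "t5": {"a", "b", "c", "d"},
}
tests = sorted(coverage)
required = set().union(*coverage.values())

def fitness(bits):
    """Lower is better: lost coverage is penalized before suite size."""
    covered = set().union(*(coverage[t] for t, b in zip(tests, bits) if b))
    return 100 * len(required - covered) + sum(bits)

def minimize(pop_size=30, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in tests] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(tests))  # one-point crossover
            child = p1[:cut] + p2[cut:]
            children.append([b ^ (random.random() < p_mut) for b in child])
        pop = parents + children
    best = min(pop, key=fitness)
    return [t for t, b in zip(tests, best) if b]

print(minimize())  # e.g. ['t5'], since t5 alone covers every element
```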

    Leverage analysis: A method for locating points of influence in systemic design decisions

    Many systemic design processes include the development and analysis of systems models that represent the issue(s) at hand. In causal loop diagram models, phenomena are graphed as nodes, with connections between them indicating a control relationship. Such models provide powerful mechanisms for stakeholder collaboration, problem finding, and generative insight. These functions are valued in design thinking, but the potential of these models may yet be unfulfilled. We introduce the notion of "leverage measures" to systemic design, adapting techniques from social network analysis and system dynamics to uncover key structures, relationships, and latent leverage positions of modelled phenomena. We demonstrate their utility in a pilot study. By rethinking the logics of leverage, we make better arguments for change and find the place from which to move the world.
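    The abstract does not name the specific leverage measures used. One plausible reading, adapting social network analysis to a causal loop diagram, is to score each node with standard centrality metrics: betweenness (a node brokering many causal paths) and influence computed on the reversed graph (a node that many downstream phenomena ultimately depend on). A small illustrative sketch in Python with networkx, on a made-up diagram:

```python
import networkx as nx

# Hypothetical causal loop diagram: edges run from cause to effect;
# "sign" records whether the influence is reinforcing (+) or balancing (-)
G = nx.DiGraph()
G.add_edges_from([
    ("funding", "services", {"sign": "+"}),
    ("services", "wellbeing", {"sign": "+"}),
    ("wellbeing", "advocacy", {"sign": "+"}),
    ("advocacy", "funding", {"sign": "+"}),
    ("wellbeing", "stigma", {"sign": "-"}),
    ("stigma", "funding", {"sign": "-"}),
])

# Two simple leverage proxies: betweenness centrality, and PageRank on the
# reversed graph as a rough measure of upstream reach
betweenness = nx.betweenness_centrality(G)
upstream = nx.pagerank(G.reverse())

for node in sorted(G, key=betweenness.get, reverse=True):
    print(f"{node:10s} betweenness={betweenness[node]:.2f} "
          f"upstream={upstream[node]:.2f}")
```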

    How to Think About Resilient Infrastructure Systems

    Resilience is emerging as the preferred way to improve the protection of infrastructure systems beyond established risk management practices. Massive damage experienced during tragedies like Hurricane Katrina showed that risk analysis is incapable of preventing unforeseen infrastructure failures and shifted expert focus towards resilience, the capacity to absorb and recover from adverse events. Recent, exponential growth in research is now producing consensus on how to think about infrastructure resilience, centered on definitions and models from influential organizations like the US National Academy of Sciences. Despite widespread efforts, massive infrastructure failures in 2017 demonstrate that resilience is still not working, raising the question: are the ways people think about resilience producing resilient infrastructure systems? This dissertation argues that established thinking harbors misconceptions about infrastructure systems that diminish attempts to improve their resilience. Widespread efforts based on the current canon focus on improving data analytics, establishing resilience goals, reducing failure probabilities, and measuring cascading losses. Unfortunately, none of these pursuits changes the resilience of an infrastructure system, because none of them results in knowledge about how data is used, goals are set, or failures occur. Through the examination of each misconception, this dissertation develops practical, new approaches for infrastructure systems to respond to unforeseen failures via sensing, adapting, and anticipating processes. Specifically, infrastructure resilience is improved by sensing when data analytics include the modeler-in-the-loop, adapting to stress contexts by switching between multiple resilience strategies, and anticipating crisis coordination activities prior to experiencing a failure. Overall, the results demonstrate that current resilience thinking needs to change because it does not differentiate resilience from risk. Most research treats resilience as a property a system has, like a noun, when resilience is really an action a system does, like a verb. Treating resilience as a noun only strengthens commitment to risk-based practices that do not protect infrastructure from unknown events. Switching to thinking about resilience as a verb instead overcomes prevalent misconceptions about data, goals, systems, and failures, and may bring a necessary, radical change to the way infrastructure is protected in the future.

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 60 regular papers presented in these volumes were carefully reviewed and selected from 155 submissions. The papers are organized in topical sections as follows: Part I: Program Verification; SAT and SMT; Timed and Dynamical Systems; Verifying Concurrent Systems; Probabilistic Systems; Model Checking and Reachability; and Timed and Probabilistic Systems. Part II: Bisimulation; Verification and Efficiency; Logic and Proof; Tools and Case Studies; Games and Automata; and SV-COMP 2020.

    Model-based reasoning for power system management using KATE and the SSM/PMAD

    The overall goal of this research effort has been the development of a software system that automates tasks related to monitoring and controlling electrical power distribution in spacecraft electrical power systems. The resulting software system is called the Intelligent Power Controller (IPC). The specific tasks performed by the IPC include continuous monitoring of the flow of power from a source to a set of loads; fast detection of anomalous behavior indicating a fault in one of the components of the distribution system; generation of diagnoses (explanations) of anomalous behavior; isolation of the faulty object from the remainder of the system; and maintenance of the flow of power to critical loads and systems (e.g., life support) despite the presence of fault conditions (recovery). The IPC system has evolved out of KATE (Knowledge-based Autonomous Test Engineer), developed at NASA-KSC. KATE consists of a set of software tools for developing and applying structure and behavior models to monitoring, diagnostic, and control applications.
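    The monitoring-and-diagnosis loop described here compares live measurements against values predicted by structure and behavior models and flags components whose residuals exceed tolerance. A minimal sketch of that idea in Python; all component names, readings, model parameters, and tolerances below are hypothetical, not KATE's actual models.

```python
def predicted_load_current(bus_voltage: float, load_resistance: float) -> float:
    """Behavioral model of a resistive load (Ohm's law)."""
    return bus_voltage / load_resistance

def monitor(readings, model_params, tolerance=0.1):
    """Return components whose measured current deviates from the model."""
    anomalies = []
    for name, measured in readings.items():
        v, r = model_params[name]
        expected = predicted_load_current(v, r)
        if abs(measured - expected) > tolerance * expected:
            anomalies.append((name, measured, expected))
    return anomalies

# Hypothetical 28 V bus with three loads; currents in amps
readings = {"pump": 2.1, "heater": 0.4, "comms": 1.0}
model_params = {"pump": (28.0, 14.0), "heater": (28.0, 28.0), "comms": (28.0, 28.0)}

print(monitor(readings, model_params))
# heater draws 0.4 A where the model expects 1.0 A -> candidate open fault
```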

    An Outlier Detection Approach for PCB Testing Based on Principal Component Analysis

    Capacitive Lead Frame Testing, a widely used approach for printed circuit board testing, is very effective for open solder detection. The approach, however, is affected by mechanical variations during testing and by tolerances of the electrical parameters of components, making it difficult to use threshold-based techniques for defect detection. A novel approach is presented in this thesis for identifying board runs that are likely to be outliers. Based on Principal Component Analysis (PCA), this approach treats the set of capacitance measurements of individual connectors or sockets in a holistic manner to overcome the measurement and component parameter variations inherent in test data. The effectiveness of the method is evaluated using measurements on different types of boards. Based on multiple analyses of different measurement datasets, the most suitable statistics for outlier detection and the relevant parameter values are also identified. Enhancements to the PCA-based technique using the concept of test-pin windows are presented to increase the resolution of the analysis. When applied to one test window at a time, PCA is able to detect the physical position of potential defects; combining the basic and enhanced techniques improves the effectiveness of outlier detection. The PCA-based approach is extended to detect and compensate for systematic variation of measurement data caused by tilt or shift of the sense plate. This scheme promises to enhance the accuracy of outlier detection when measurements come from different fixtures. Compensation approaches are introduced to correct the 'abnormal' measurements due to sense-plate variations to a 'normal' and consistent baseline. The effectiveness of this approach in the presence of the two common forms of mechanical variation is illustrated, and the potential of PCA-based analysis to estimate the relative amount of tilt and shift in the sense plate is demonstrated.
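    The core idea, treating each board run's vector of capacitance measurements holistically so that leading principal components absorb common-mode fixture variation while the residual exposes defects, can be sketched with a reconstruction-error (Q-statistic) score. A minimal illustration in Python with scikit-learn on simulated data; the measurement model, component count, and control limit are assumptions, not the thesis's tuned values.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Simulated data: rows are board runs, columns are per-pin capacitance
# readings; a per-run common-mode shift mimics sense-plate variation
normal = rng.normal(100.0, 1.0, size=(50, 20)) + rng.normal(0.0, 3.0, size=(50, 1))
defective = normal[:2].copy()
defective[:, 7] -= 15.0        # an open solder joint lowers one pin's reading
X = np.vstack([normal, defective])

# The leading component absorbs the common-mode shift; score each run by
# its residual reconstruction error (the Q statistic)
pca = PCA(n_components=1).fit(X)
residual = X - pca.inverse_transform(pca.transform(X))
q = (residual ** 2).sum(axis=1)

threshold = q.mean() + 3 * q.std()   # a simple, assumed control limit
print("outlier runs:", np.where(q > threshold)[0])  # expect the last two rows
```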