29 research outputs found

    An algebra to represent task flow models

    This paper presents an algebraic representation of the Task Flow model in the Discovery Method. The metamodel is based on simple and compound tasks structured using operators such as sequence, selection, and parallel composition; recursion and encapsulation are also considered. The axioms of the algebra are presented, along with a set of examples showing how basic elements combine into expressions.
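
    The operator vocabulary the abstract lists (sequence, selection, parallel composition) maps naturally onto an algebraic data type. The following Python sketch is a hypothetical rendering of such a task-flow algebra; the constructor names and structure are illustrative assumptions, not the paper's actual metamodel.

        # Hypothetical task-flow algebra; names are illustrative only.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Task:
            name: str          # a simple (atomic) task

        @dataclass(frozen=True)
        class Seq:
            left: "Expr"       # sequential composition: left ; right
            right: "Expr"

        @dataclass(frozen=True)
        class Sel:
            left: "Expr"       # selection (choice): left + right
            right: "Expr"

        @dataclass(frozen=True)
        class Par:
            left: "Expr"       # parallel composition: left || right
            right: "Expr"

        Expr = Task | Seq | Sel | Par

        # Example expression: (a ; b) || (c + d)
        expr = Par(Seq(Task("a"), Task("b")), Sel(Task("c"), Task("d")))

    Compound tasks are just nested expressions, so axioms such as associativity of sequence can be stated as equalities between trees of this type.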

    Formal Testing of Timed and Probabilistic Systems

    This talk reviews some of my contributions to formal testing of timed and probabilistic systems, focusing on methodologies that allow their users to decide whether these systems are correct with respect to a formal specification. The consideration of time and probability complicates the definition of these frameworks, since there is no obvious way to define correctness. For example, in one situation it might be desirable that a system is as fast as possible, while in a different application it might be required that the performance of the system is exactly equal to the one given by the specification. All the methodologies share the assumption that the system under test is a black box and that the specification is described as a timed and/or probabilistic extension of the finite state machine formalism.
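
    To make the black-box setting concrete, here is a minimal sketch of one possible representation of a timed and probabilistic finite state machine, where each (state, input) pair maps to probabilistic branches carrying an output, a duration, and a target state. This representation is an assumption for illustration, not the formalism used in the talk.

        # Transitions: (state, input) -> [(probability, output, duration, next_state)]
        fsm = {
            ("s0", "req"): [
                (0.9, "ack", 2, "s1"),   # fast response with probability 0.9
                (0.1, "ack", 5, "s1"),   # slow response with probability 0.1
            ],
        }

        def check_probabilities(machine):
            """Sanity check: branch probabilities per (state, input) sum to 1."""
            return all(abs(sum(p for p, *_ in ts) - 1.0) < 1e-9
                       for ts in machine.values())

        assert check_probabilities(fsm)

    A testing methodology then compares observed (output, duration) samples from the black box against the distributions the specification allows.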

    SAM-SoS: A stochastic software architecture modeling and verification approach for complex System-of-Systems

    A System-of-Systems (SoS) is a complex, dynamic system whose Constituent Systems (CSs) are not known precisely at design time, and the environment in which they operate is uncertain. SoS behavior is unpredictable due to underlying architectural characteristics such as autonomy and independence. Although the stochastic composition of CSs is vital to achieving SoS missions, their unknown behaviors and impact on system properties are unavoidable. Moreover, unknown conditions and volatility have significant effects on crucial Quality Attributes (QAs) such as performance, reliability, and security. Hence, the structure and behavior of a SoS must be modeled and validated quantitatively to foresee any potential impact on the properties critical for achieving the missions. Current modeling approaches lack the syntax and semantics required to model and verify SoS behaviors at design time and cannot offer alternative design choices for better design decisions. Therefore, the majority of existing techniques fail to provide qualitative and quantitative verification of SoS architecture models. Consequently, we have proposed an approach to model and verify Non-Deterministic (ND) SoS at design time by extending current algebraic notations for formal models into a hybrid stochastic formalism to specify and reason about architectural elements with the required semantics. A formal stochastic model is developed using a hybrid approach for architectural descriptions of SoS with behavioral constraints. Through a model-driven approach, stochastic models are then translated into PRISM using formal verification rules. The effectiveness of the approach has been tested with an end-to-end case study design of an emergency response SoS for dealing with a fire situation. Architectural analysis is conducted on the stochastic model, using various qualitative and quantitative measures for SoS missions. Experimental results reveal critical aspects of the SoS architecture model that, using the proposed approach, facilitate better achievement of missions and QAs with improved design.
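
    The abstract describes a model-driven translation of stochastic architecture models into PRISM. As a rough illustration of that step, the sketch below emits a tiny PRISM DTMC module from a transition table; the mapping is a hypothetical stand-in for the paper's formal verification rules, not a reproduction of them.

        # Hypothetical translation step: stochastic transition table -> PRISM module.
        def to_prism(name, transitions):
            """transitions: list of (source_state, [(probability, target_state), ...])."""
            lines = [f"module {name}", "  s : [0..2] init 0;"]
            for src, branches in transitions:
                update = " + ".join(f"{p} : (s'={dst})" for p, dst in branches)
                lines.append(f"  [] s={src} -> {update};")
            lines.append("endmodule")
            return "\n".join(lines)

        # Example: a constituent system that completes its task with probability 0.95
        print(to_prism("responder", [(0, [(0.95, 1), (0.05, 2)])]))

    Quantitative queries (e.g., the probability of mission success) would then be checked against such modules with PRISM's property language.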

    Preschool screening for the early identification of children with learning disabilities

    The intent of this paper is to look briefly at the criteria employed in determining that a preschool child is learning disabled, and then to focus on the instruments being used to screen preschoolers for these established criteria.

    Stroke outcome measurements from electronic medical records: cross-sectional study on the effectiveness of neural and nonneural classifiers

    Background: With the rapid adoption of electronic medical records (EMRs), there is an ever-increasing opportunity to collect data and extract knowledge from EMRs to support patient-centered stroke management. Objective: This study aims to compare the effectiveness of state-of-the-art automatic text classification methods in classifying data to support the prediction of clinical patient outcomes and the extraction of patient characteristics from EMRs. Methods: Our study addressed the computational problems of information extraction and automatic text classification. We identified essential tasks to be considered in an ischemic stroke value-based program. The 30 selected tasks were classified (manually labeled by specialists) according to the following value agenda: tier 1 (achieved health care status), tier 2 (recovery process), care related (clinical management and risk scores), and baseline characteristics. The analyzed data set was retrospectively extracted from the EMRs of patients with stroke from a private Brazilian hospital between 2018 and 2019. A total of 44,206 sentences from free-text medical records in Portuguese were used to train and develop 10 supervised machine learning methods, including state-of-the-art neural and nonneural methods, along with ontological rules. As an experimental protocol, we used a 5-fold cross-validation procedure repeated 6 times, along with subject-wise sampling. A heatmap was used to display comparative result analyses according to the best algorithmic effectiveness (F1 score), supported by statistical significance tests. A feature importance analysis was conducted to provide insights into the results. Results: The top-performing models were support vector machines trained with lexical and semantic textual features, showing the importance of dealing with noise in EMR textual representations. The support vector machine models produced statistically superior results in 71% (17/24) of tasks, with an F1 score >80% on care-related tasks (patient treatment location, fall risk, thrombolytic therapy, and pressure ulcer risk), the process of recovery (ability to feed orally or ambulate and communicate), health care status achieved (mortality), and baseline characteristics (diabetes, obesity, dyslipidemia, and smoking status). Neural methods were largely outperformed by more traditional nonneural methods, given the characteristics of the data set. Ontological rules were also effective in tasks such as baseline characteristics (alcoholism, atrial fibrillation, and coronary artery disease) and the Rankin scale. The complementarity in effectiveness among models suggests that a combination of models could enhance the results and cover more tasks in the future. Conclusions: Advances in information technology capacity are essential for scalability and agility in measuring health status outcomes. This study allowed us to measure effectiveness and identify opportunities for automating the classification of outcomes of specific tasks related to clinical conditions of stroke victims, and thus ultimately assess the possibility of proactively using these machine learning techniques in real-world situations.
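
    The headline result, SVMs over textual features evaluated with F1 under 5-fold cross-validation repeated 6 times, can be approximated with standard scikit-learn components. The sketch below is a simplified, assumption-laden version: it uses TF-IDF lexical features only and omits the semantic features and subject-wise sampling described in the study.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        def evaluate(sentences, labels):
            """Mean/std F1 of a linear SVM over TF-IDF features, 5-fold CV x 6 repeats."""
            model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
            cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=6, random_state=0)
            scores = cross_val_score(model, sentences, labels, scoring="f1", cv=cv)
            return scores.mean(), scores.std()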

    Measuring the complexity of product configuration systems

    The complexity of product configuration systems (PCS) is an important indicator of both the development and maintenance effort of such systems. Existing literature proposes a couple of effort estimation approaches for configurator projects. However, these approaches do not address the issues of comprehensibility and modifiability of a configuration model. Therefore, this article proposes a metric to measure the total cognitive complexity of the configuration model corresponding to a product configuration system, expressed in the form of a UML class diagram. This metric takes into account the number and type of attributes, constraints, and relationships between classes in a UML class diagram. The proposed metric can be used to compare two configuration models in terms of their cognitive complexity. Moreover, a relation between the development time for a PCS project and the total cognitive complexity of the corresponding configuration model is established using linear regression. To validate the proposed approach, a case study is conducted in which the cognitive complexity is calculated for two configuration models.
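
    Since the metric counts attributes, constraints, and relationships in a UML class diagram, a toy version is easy to state. The weights below are invented for illustration; the article defines its own weighting of attribute and relationship types.

        # Illustrative cognitive-complexity metric; weights are assumptions.
        ATTR_WEIGHT = {"boolean": 1, "integer": 2, "string": 2, "enum": 3}
        REL_WEIGHT = {"association": 2, "aggregation": 3, "composition": 4}

        def cognitive_complexity(classes, relationships, constraints):
            """classes: {name: [attribute_type, ...]}; relationships: [rel_type, ...]."""
            attrs = sum(ATTR_WEIGHT.get(t, 1) for ts in classes.values() for t in ts)
            rels = sum(REL_WEIGHT.get(r, 1) for r in relationships)
            return attrs + rels + 5 * len(constraints)

    Comparing two configuration models then reduces to comparing two such scores, and regressing recorded development times on the scores gives the effort relation mentioned above.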

    Flexible temporal constraint management in modularized processes

    Managing temporal process constraints in modularized processes is an important task, both during design, as it allows the reuse of temporal (child) process models, and during the checking of temporal properties of processes, as it avoids the need to "unfold" child processes within the main process model. Taking into account the capability of providing modular solutions, modeling and checking temporal features of processes is still an open problem in the context of process-aware information systems. In this paper, we present and discuss a novel approach to represent flexible temporal constraints in modularized time-aware BPMN process models. To support temporal flexibility, allowed task durations are represented through guarded ranges that permit a limited (guarded) restriction of task durations during process execution, if necessary, to guarantee the satisfaction of all temporal constraints. We then propose how to derive a compact representation of the overall temporal behavior of such time-aware BPMN models. This compact representation of child processes allows us to check the dynamic controllability (DC) of a parent time-aware process model without "unfolding" the child process models. Dynamic controllability guarantees that process models can have process instances (i.e., executions) satisfying all the temporal constraints for any possible combination of allowed durations of tasks and child processes. Possible approaches for even more flexibility, by solving some kinds of DC violations, are then introduced. We use a real process model from the healthcare domain as a motivating example, and we also present a proof-of-concept prototype confirming the concrete applicability of the proposed solutions, followed by an experimental evaluation.
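
    A guarded range can be pictured as a designed duration range plus guards that bound how far the range may be tightened at run time. The sketch below is a minimal illustration under that assumption; the names and the exact semantics in the paper may differ.

        # Minimal "guarded range" sketch; semantics are assumed for illustration.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class GuardedRange:
            lo: int    # designed minimum duration
            hi: int    # designed maximum duration
            glo: int   # guard: the minimum may be raised at most to glo
            ghi: int   # guard: the maximum may be lowered at most to ghi

            def restrict(self, new_lo, new_hi):
                """Tighten the range without crossing the guards."""
                if new_lo < self.lo or new_hi > self.hi:
                    raise ValueError("can only restrict, never widen")
                if new_lo > self.glo or new_hi < self.ghi:
                    raise ValueError("restriction crosses a guard")
                return GuardedRange(new_lo, new_hi, self.glo, self.ghi)

        # A task designed for 5..20 minutes, restrictable at most to 8..15
        r = GuardedRange(5, 20, 8, 15).restrict(6, 18)

    Dynamic controllability checking then asks whether an execution strategy exists for every combination of durations these (possibly restricted) ranges still allow.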

    Co-evolutionary automatic programming for software development

    Since the 1970s, the goal of generating programs automatically (i.e., Automatic Programming) has been sought. A user would just define what they expect from the program (i.e., the requirements), and it would be automatically generated by the computer without the help of any programmer. Unfortunately, this task is much harder than expected. Although transformation methods are usually employed to address this problem, they cannot be employed if the gap between the specification and the actual implementation is too wide. In this paper we introduce a novel conceptual framework for evolving programs from their specification. We use genetic programming to evolve the programs, and at the same time we exploit the specification to co-evolve sets of unit tests. Programs are rewarded by how many tests they do not fail, whereas unit tests are rewarded by how many programs they make fail. We present and analyse seven different problems on which this novel technique is successfully applied.
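
    The competitive fitness at the heart of the approach is simple to state: programs score by the tests they pass, tests score by the programs they break. The sketch below assumes a program is a callable and a test is an (input, expected output) pair; the genetic operators themselves are omitted.

        # Co-evolutionary fitness sketch; representations are assumptions.
        def passes(program, test):
            inp, expected = test
            try:
                return program(inp) == expected
            except Exception:
                return False   # a crashing program fails the test

        def program_fitness(program, tests):
            """Programs are rewarded by how many tests they do not fail."""
            return sum(passes(program, t) for t in tests)

        def test_fitness(test, programs):
            """Unit tests are rewarded by how many programs they make fail."""
            return sum(not passes(p, test) for p in programs)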

    Residential Property Tax Abatement: Testing a Model of Neighborhood Impact

    Using a quasi-experimental research design, this study examines the relationship between residential property tax abatement for new construction and urban neighborhoods in four Ohio cities. Neighborhoods were defined as census tracts. The purpose of this research is to determine whether there is a statistically significant relationship at p < .05 between residential property tax abatement programs for new construction and several different measures of neighborhood outcomes. The neighborhood outcome measures can be grouped under the broad concepts of increased private investment, blight removal, decreased criminal activity, and property tax equity. Subsequent questions investigated are the direction of these relationships and the existence of a threshold level at which relationships become significant. The use of a comparable comparison group addresses the counterfactual scenario. Independence-of-samples tests and multivariate cubic regression are employed to answer the research questions. Results indicate that there are no discernible effects between residential property tax abatement and the indicators of neighborhood change as defined in the study. Second, there appears to be no threshold at which the number of tax-abated residential units becomes significantly associated with the indicators of neighborhood change. Third, there were no significant differences on the indicators of neighborhood change between subject and comparison groups. In essence, there are no effects from residential tax abatement policy seen at the neighborhood level.
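
    The cubic-regression step can be illustrated in univariate form: fit indicator = b0 + b1*x + b2*x^2 + b3*x^3, where x is the number of tax-abated units in a tract. The sketch below is a hypothetical simplification of the multivariate analysis actually used.

        import numpy as np

        def fit_cubic(abated_units, indicator):
            """Least-squares cubic fit; returns coefficients, highest degree first."""
            return np.polyfit(abated_units, indicator, deg=3)

    A significant nonzero higher-order term would hint at a threshold effect; the study found none at p < .05.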