
    COMPLEXITY MEASURES IN SYSTEM DEVELOPMENT

    Complexity measurement algorithms for information systems schemas are considered. Graph representations, based on an object-relation paradigm and linguistic models, are discussed. Software science metrics are evaluated as complexity measures, as is the cyclomatic complexity measure. The deficiencies of current measures are highlighted. An alternative structural complexity metric is proposed that reflects propagation effects. The system development life cycle is used to determine realms of complexity that provide a framework for evaluating the complexity of designs and for projecting complexity between system development life cycle phases.
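
    The cyclomatic complexity measure evaluated in this abstract has a standard graph formulation, V(G) = E - N + 2P. The sketch below is a minimal, hypothetical illustration of that formula on an edge-list graph; the representation and function names are assumptions chosen for illustration, not the paper's algorithms.

    ```python
    # Minimal sketch (not from the paper): McCabe's cyclomatic complexity of a graph,
    # V(G) = E - N + 2P, where E = edges, N = nodes, P = connected components.
    # The edge-list representation is an assumption chosen for illustration.
    def cyclomatic_complexity(nodes, edges):
        """Return V(G) = E - N + 2P for an undirected graph."""
        parent = {n: n for n in nodes}

        def find(n):
            while parent[n] != n:
                parent[n] = parent[parent[n]]   # path halving
                n = parent[n]
            return n

        for a, b in edges:                      # union-find to count connected components
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

        components = len({find(n) for n in nodes})
        return len(edges) - len(nodes) + 2 * components

    # A four-node ring: E = 4, N = 4, P = 1, so V(G) = 4 - 4 + 2 = 2.
    print(cyclomatic_complexity({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (4, 1)]))
    ```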

    Entropic measures of individual mobility patterns

    Understanding human mobility from a microscopic point of view may represent a fundamental breakthrough for the development of a statistical physics for cognitive systems, and it can shed light on the applicability of macroscopic statistical laws to social systems. Even though the complexity of individual behaviors prevents a true microscopic approach, the introduction of mesoscopic models allows the study of the dynamical properties of the non-stationary states of the considered system. We propose to compute various entropy measures of the individual mobility patterns obtained from GPS data that record the movements of private vehicles in the Florence district, in order to point out new features of human mobility related to the use of time and space and to define the dynamical properties of a stochastic model that could generate similar patterns. Moreover, we can relate the predictability properties of human mobility to the distribution of the time passed between two successive trips. Our analysis suggests the existence of a hierarchical structure in the mobility patterns which divides the performed activities into three different categories, according to their time cost, with different information contents. We show that a Markov process defined using the individual mobility network is not able to reproduce this hierarchy, which seems to be the consequence of different strategies in the activity choice. Our results could contribute to the development of governance policies for sustainable mobility in modern cities.
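
    The entropy measures themselves are not spelled out in the abstract; a common baseline for this kind of analysis is the Shannon entropy of the empirical distribution of visited locations. The sketch below computes that baseline on a hypothetical trip log; the function and data are illustrative assumptions, not the paper's exact estimators.

    ```python
    # Minimal sketch (illustrative, not the paper's exact estimators): Shannon entropy of
    # the empirical distribution of an individual's visited locations.
    from collections import Counter
    from math import log2

    def location_entropy(visits):
        """Shannon entropy (in bits) of the location-visit frequencies."""
        counts = Counter(visits)
        total = len(visits)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # Hypothetical trip log; labels stand for locations extracted from GPS traces.
    trips = ["home", "work", "home", "shop", "home", "work", "home", "gym"]
    print(f"entropy = {location_entropy(trips):.3f} bits")   # 1.750 bits for this log
    ```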

    Assembly Time Modeling Through Connective Complexity Metrics

    This paper presents an approach for developing surrogate models that predict the assembly time of a system from complexity metrics of the physical system architecture when detailed geometric information is unavailable. A convention for modelling physical architecture is presented, followed by a sample of 10 analysed systems used for training and three systems used for validation. These systems are evaluated on complexity metrics developed from graph-theoretic measures. An example model is developed based on a series of regressions of trends observed within the sample data and is validated against the three systems not used to develop the model. The model uses average path length, part count and path length density to approximate assembly time within the standard deviation of the subjective variation possible in Boothroyd and Dewhurst design for assembly (DFA) analysis. While the specific example model is generalisable only to systems similar to those in the sample set, the capability to develop mappings between physical architecture and assembly time in early-stage design is demonstrated.
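
    The metrics named in the abstract (part count, average path length, path length density) are graph-theoretic quantities of the part-connectivity graph. The sketch below extracts the first two with networkx and applies a placeholder linear surrogate; the coefficients and the example assembly are made up, since the fitted regression comes from the paper's training sample.

    ```python
    # Minimal sketch (assumptions, not the paper's tooling): graph metrics of a
    # part-connectivity graph, plus a placeholder linear surrogate for assembly time.
    import networkx as nx

    def architecture_metrics(connections):
        """connections: iterable of (part_a, part_b) physical connections."""
        g = nx.Graph()
        g.add_edges_from(connections)
        return {
            "part_count": g.number_of_nodes(),
            "avg_path_length": nx.average_shortest_path_length(g),
        }

    # Hypothetical four-part assembly: a base connecting three mounted parts.
    metrics = architecture_metrics([("base", "bracket"), ("base", "motor"), ("base", "cover")])

    # Placeholder surrogate with made-up coefficients, shown only to indicate the model's
    # shape: assembly_time ~ b0 + b1*part_count + b2*avg_path_length.
    b0, b1, b2 = 10.0, 4.0, 6.0
    estimate = b0 + b1 * metrics["part_count"] + b2 * metrics["avg_path_length"]
    print(metrics, f"estimated assembly time (arbitrary units): {estimate:.1f}")
    ```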

    Evaluating follow-up and complexity in cancer clinical trials (EFACCT): an eDelphi study of research professionals’ perspectives.

    Objectives: To evaluate patient follow-up and complexity in cancer clinical trial delivery, using consensus methods to: (1) identify research professionals’ priorities, (2) understand localised challenges, (3) define study complexity and workloads supporting the development of a trial rating and complexity assessment tool (TRACAT). Design: A classic eDelphi completed in three rounds, conducted as the launch study to a multiphase national project (evaluating follow-up and complexity in cancer clinical trials). Setting: Multicentre online survey involving professionals at National Health Service secondary care hospital sites in Scotland and England, varied in scale, geographical location and patient populations. Participants: Principal investigators at 13 hospitals across nine clinical research networks recruited 33 participants using pre-defined eligibility criteria to form a multidisciplinary panel. Main outcome measures: Statements achieving a consensus level of 70% on a 7-point Likert-type scale and ranked trial rating indicators (TRIs) developed by research professionals. Results: The panel developed 75 consensus statements illustrating factors contributing to complexity, follow-up intensity and operational performance in trial delivery, and specified 14 ranked TRIs. Seven open questions in the first qualitative round generated 531 individual statements. The iterative survey rounds achieved response rates of 82%, 82% and 93%. Conclusions: Clinical trials operate within a dynamic, complex healthcare and innovation system where rapid scientific advances present opportunities and challenges for delivery organisations and professionals. Panellists highlighted cultural and organisational factors limiting the profession’s potential to support growing trial complexity and patient follow-up. Enhanced communication, interoperability, funding and capacity have emerged as key priorities. Future operational models should test dialectic Singerian-based approaches respecting open dialogue and shared values. Research capacity building should prioritise innovative, collaborative approaches embedding validated review and evaluation models to understand changing operational needs and challenges. TRACAT provides a mechanism for continual knowledge assimilation to improve decision-making.
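
    The abstract specifies a 70% consensus level on a 7-point Likert-type scale but not how agreement is operationalised; the sketch below assumes one common convention (the share of ratings falling in an agreement band of 5 to 7) purely for illustration.

    ```python
    # Illustrative assumption, not the study's published rule: a statement reaches
    # consensus when at least 70% of panellists rate it in the assumed agreement band 5-7
    # of the 7-point Likert-type scale.
    def reaches_consensus(ratings, agreement_band=(5, 6, 7), threshold=0.70):
        in_band = sum(1 for r in ratings if r in agreement_band)
        return in_band / len(ratings) >= threshold

    panel = [7, 6, 5, 7, 4, 6, 7, 5, 3, 6]   # hypothetical ratings from 10 panellists
    print(reaches_consensus(panel))           # 8/10 = 80% in band -> True
    ```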

    Toward a hybrid dynamo model for the Milky Way

    (Abridged) Based on the rapidly increasing all-sky data of Faraday rotation measures and polarised synchrotron radiation, the Milky Way's magnetic field is now modelled with an unprecedented level of detail and complexity. We aim to complement this heuristic approach with a physically motivated, quantitative Galactic dynamo model -- a model that moreover allows for the evolution of the system as a whole, instead of just solving the induction equation for a fixed static disc. Building on the framework of mean-field magnetohydrodynamics and extending it to the realm of a hybrid evolution, we perform three-dimensional global simulations of the Galactic disc. Closure coefficients embodying the mean-field dynamo are calibrated against resolved box simulations of supernova-driven interstellar turbulence. The emerging dynamo solutions comprise a mixture of the dominant axisymmetric S0 mode, with even parity, and a subdominant A0 mode, with odd parity. Notably, such a superposition of modes creates a strong localised vertical field on one side of the Galactic disc. We moreover find significant radial pitch angles, which decay with radius -- explained by flaring of the disc. In accordance with previous work, magnetic instabilities appear to be restricted to the less-stirred outer Galactic disc. Their main effect is to create strong fields at large radii such that the radial scale length of the magnetic field increases from 4 kpc (for the case of a mean-field dynamo alone) to about 10 kpc in the hybrid models. There remain aspects (e.g., spiral arms, X-shaped halo fields, fluctuating fields) that are not captured by the current model and that will require further development towards a fully dynamical evolution. Nevertheless, the work presented demonstrates that a hybrid modelling of the Galactic dynamo is feasible and can serve as a foundation for future efforts. Comment: 12 pages, 12 figures, 2 tables, accepted for publication in A&A.
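
    The "closure coefficients embodying the mean-field dynamo" can be read against the standard mean-field induction equation, written below in its textbook form; this is the generic closure with an alpha effect and turbulent diffusivity, not necessarily the exact formulation used in the paper.

    ```latex
    % Textbook mean-field induction equation (generic form, not necessarily the paper's
    % exact closure): \alpha is the alpha effect and \eta_t the turbulent diffusivity
    % acting on the mean magnetic field \bar{\mathbf{B}}.
    \begin{equation}
      \frac{\partial \bar{\mathbf{B}}}{\partial t}
      = \nabla \times \left( \bar{\mathbf{U}} \times \bar{\mathbf{B}}
        + \alpha \bar{\mathbf{B}}
        - \eta_t \, \nabla \times \bar{\mathbf{B}} \right)
    \end{equation}
    ```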

    Development and Validation of a Rule-based Time Series Complexity Scoring Technique to Support Design of Adaptive Forecasting DSS

    Evidence from forecasting research gives reason to believe that understanding time series complexity can enable the design of adaptive forecasting decision support systems (FDSSs) that positively support forecasting behaviors and the accuracy of outcomes. Yet such FDSS design capabilities have not been formally explored because there exists no systematic approach to identifying series complexity. This study describes the development and validation of a rule-based complexity scoring technique (CST) that generates a complexity score for time series using 12 rules that rely on 14 features of the series. The rule-based schema was developed on 74 series and validated on 52 holdback series using well-accepted forecasting methods as benchmarks. A supporting experimental validation was conducted with 14 participants who generated 336 structured judgmental forecasts for sets of series classified as simple or complex by the CST. Benchmark comparisons validated the CST by confirming, as hypothesized, that forecasting accuracy was lower for series the technique scored as complex than for those it scored as simple. The study concludes with a comprehensive framework for the design of FDSS that can integrate the CST to adaptively support forecasters under varied conditions of series complexity. The framework is founded on the concepts of restrictiveness and guidance and offers specific recommendations on how these elements can be built into FDSS to support forecasting under complexity.
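
    The 12 rules and 14 features behind the CST are not listed in the abstract, so the sketch below only illustrates the general pattern of a rule-based scoring technique: extract series features, fire threshold rules, and sum their weights into a score. All features, thresholds and weights here are invented for illustration.

    ```python
    # Illustrative sketch only: the CST's actual rules and features are not given in the
    # abstract; these features, thresholds and weights are made up to show the pattern.
    import statistics

    def series_features(y):
        diffs = [b - a for a, b in zip(y, y[1:])]
        return {
            "cv": statistics.stdev(y) / abs(statistics.mean(y)),      # relative variability
            "trend": (y[-1] - y[0]) / len(y),                          # crude linear trend
            "sign_changes": sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0),
        }

    def complexity_score(y):
        f = series_features(y)
        rules = [
            (f["cv"] > 0.25, 2),                  # hypothetical: high variability adds 2
            (abs(f["trend"]) > 1.0, 1),           # hypothetical: strong trend adds 1
            (f["sign_changes"] > len(y) / 3, 2),  # hypothetical: frequent reversals add 2
        ]
        return sum(weight for fired, weight in rules if fired)

    demand = [100, 104, 98, 140, 95, 102, 160, 99, 101, 150]   # hypothetical series
    print(complexity_score(demand))
    ```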

    A Generalized Process for the Verification and Validation of Models and Simulation Results

    With technologies increasing rapidly, symbolic, quantitative modeling and computer-based simulation (M&S) have become affordable and easy-to-apply tools in numerous application areas such as supply chain management, pilot training, car safety improvement, design of industrial buildings, or theater-level war gaming. M&S help to reduce the resources required for many types of projects, accelerate the development of technical systems, and enable the control and management of systems of high complexity. However, as the impact of M&S on the real world grows, the danger of adverse effects of erroneous or unsuitable models or simulation results also increases. These effects may range from the delayed delivery of an item ordered by mail to hundreds of avoidable casualties caused by the simulation-based acquisition (SBA) of a malfunctioning communication system for rescue teams. In order to benefit from advancing M&S, countermeasures against M&S disadvantages and drawbacks must be taken. Verification and Validation (V&V) of models and simulation results are intended to ensure that only correct and suitable models and simulation results are used. However, during the development of any technical system, including models for simulation, numerous errors may occur. The later they are detected, and the further they have propagated through the model development process, the more resources they require to correct; their propagation should therefore be avoided. If the errors remain undetected, and major decisions are based on incorrect or unsuitable models or simulation results, no benefit is gained from M&S, but a disadvantage. This thesis proposes a structured and rigorous approach to support the verification and validation of models and simulation results by a) identifying the most significant current deficiencies of model development (design and implementation) and use, including the need for more meaningful model documentation and the lack of quality assurance (QA) as an integral part of the model development process; b) giving an overview of current quality assurance measures in M&S and in related areas; the transferability of concepts like the capability maturity model for software (SW-CMM) and the ISO9000 standard is discussed, and potentials and limits of documents such as the VV&A Recommended Practices Guide of the US Defense Modeling and Simulation Office are identified; c) analyzing quality assurance measures and so-called V&V techniques for similarities and differences, to amplify their strengths and to reduce their weaknesses; and d) identifying and discussing the influences that drive the required rigor and intensity of V&V measures (the risk involved in using models and simulation results) on the one hand, and that limit the maximum reliability of V&V activities (knowledge about both the real system and the model) on the other. This finally leads to the specification of a generalized V&V process, the V&V Triangle. It illustrates the dependencies between numerous V&V objectives, which are derived from specific potential errors that occur during model development, and provides guidance for achieving these objectives by associating V&V techniques, required input, and the evidence made available. The V&V Triangle is applied to an M&S sample project, and the lessons learned from evaluating the results lead to the formulation of future research objectives in M&S V&V.
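
    The abstract describes the V&V Triangle as associating V&V objectives (derived from potential model-development errors) with V&V techniques, required inputs, and the evidence they produce, but it does not list those elements. The sketch below is only one hypothetical way to record such an association; every entry is a made-up placeholder, not content from the thesis.

    ```python
    # Illustrative placeholder only: one way to record the kind of objective-to-technique
    # association the abstract describes; none of these entries come from the thesis.
    from dataclasses import dataclass, field

    @dataclass
    class VVObjective:
        name: str                                          # potential error to rule out
        techniques: list = field(default_factory=list)     # V&V techniques addressing it
        required_inputs: list = field(default_factory=list)
        evidence: list = field(default_factory=list)       # artifacts produced when met

    objectives = [
        VVObjective(
            name="conceptual model matches stated requirements",          # hypothetical
            techniques=["structured walkthrough", "face validation"],
            required_inputs=["requirements document", "conceptual model"],
            evidence=["review protocol"],
        ),
        VVObjective(
            name="simulation results are plausible for known scenarios",  # hypothetical
            techniques=["comparison with historical data"],
            required_inputs=["executable model", "reference data"],
            evidence=["comparison report"],
        ),
    ]

    for obj in objectives:
        print(f"{obj.name}: {', '.join(obj.techniques)}")
    ```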

    Institutional theory and legislatures

    Institutionalism has become one of the dominant strands of theory within contemporary political science. Beginning with the challenge to behavioral and rational choice theory issued by March and Olsen, institutional analysis has developed into an important alternative to more individualistic approaches to theory and analysis. This body of theory has developed in a number of ways, and perhaps the most commonly applied version in political science is historical institutionalism, which stresses the importance of path dependency in shaping institutional behaviour. The fundamental question addressed in this book is whether institutionalism is useful for the various sub-disciplines within political science to which it has been applied, and to what extent the assumptions inherent to institutional analysis can be useful for understanding the range of behavior of individuals and structures in the public sector. The volume will also examine the relative utility of different forms of institutionalism within the various sub-disciplines. The book consists of a set of strong essays by noted international scholars from a range of sub-disciplines within the field of political science, each analyzing their area of research from an institutionalist perspective and assessing what contributions this form of theorizing has made, and can make, to that research. The result is a balanced and nuanced account of the role of institutions in contemporary political science, and a set of suggestions for the further development of institutional theory.
