
    An Introduction to Complex Systems Science and its Applications

    Full text link
    The standard assumptions that underlie many conceptual and quantitative frameworks do not hold for many complex physical, biological, and social systems. Complex systems science clarifies when and why such assumptions fail and provides alternative frameworks for understanding the properties of complex systems. This review introduces some of the basic principles of complex systems science, including complexity profiles, the tradeoff between efficiency and adaptability, the necessity of matching the complexity of systems to that of their environments, multi-scale analysis, and evolutionary processes. Our focus is on the general properties of systems as opposed to the modeling of specific dynamics; rather than provide a comprehensive review, we pedagogically describe a conceptual and analytic approach for understanding and interacting with the complex systems of our world. With the exception of a few footnotes, this paper assumes only a high school mathematical and scientific background, so that it may be accessible to academics in all fields, decision-makers in industry, government, and philanthropy, and anyone who is interested in systems and society.

    VINYL: The VIrtual Neutron and x-raY Laboratory and its applications

    Get PDF
    Experiments conducted at large scientific research infrastructures, such as synchrotrons, free-electron lasers, and neutron sources, are becoming increasingly complex. Such experiments, often investigating complex physical systems, are usually performed under strict time limitations and may depend critically on experimental parameters. To prepare and analyze these complex experiments, a virtual laboratory that provides start-to-end simulation tools can help experimenters predict experimental results under real or close-to-real instrument conditions. As part of the PaNOSC (Photon and Neutron Open Science Cloud) project, the VIrtual Neutron and x-raY Laboratory (VINYL) is designed as a cloud-service framework implementing start-to-end simulations for these scientific facilities. In this paper, we introduce the virtual laboratory framework and discuss its applications to the design and optimization of experimental setups, as well as to the estimation of experimental artifacts in an X-ray experiment.
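    As a concrete illustration of what "start-to-end" means here, such a simulation can be pictured as a chain of stages from source to detector. The stage names, parameters, and numbers below are invented placeholders for illustration, not part of VINYL's actual API:

```python
def source():
    # idealized photon source: flux and photon energy
    return {"photons": 1e12, "energy_keV": 9.0}

def transport(beam, transmission=0.3):
    # beamline optics attenuate the beam
    beam = dict(beam)
    beam["photons"] *= transmission
    return beam

def sample(beam, scattering_fraction=1e-6):
    # only a tiny fraction of incident photons scatter off the sample
    return {"scattered_photons": beam["photons"] * scattering_fraction}

def detector(signal, quantum_efficiency=0.8):
    # detector converts scattered photons into recorded counts
    return signal["scattered_photons"] * quantum_efficiency

def run_pipeline():
    # start-to-end: source -> transport -> sample -> detector
    return detector(sample(transport(source())))

print(run_pipeline())  # detected counts for this toy parameter set
```

    Chaining the stages this way is what lets a virtual laboratory estimate, before beamtime, whether a planned setup will yield enough signal at the detector.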

    Study on Forward Chaining and Reverse Chaining in Expert System

    Full text link
    Expert systems are part of a general category of computer applications known as artificial intelligence. They are designed to solve complex, expert-level problems within a particular domain. To do so, an expert system needs efficient access to a substantial domain knowledge base and a reasoning mechanism for applying that knowledge to the problems it is given. Usually it must also be able to explain, to the users who rely on it, how it reached its decisions, much as a human expert, a person who can solve a problem using domain knowledge, would. This paper introduces the structure and applications of expert systems, explains exactly what chaining means, and contrasts forward chaining with backward chaining. ETL tools extract, transform, and load data from one system into another, but our experts advise that they are not optimal for application-to-application communication. In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. AI technology has become very advanced, and it is only a matter of time before machines can learn almost anything. Machine learning algorithms are already very capable; processing power was the main obstacle over the last decade, and the big data and distributed computing revolutions have largely removed it. Many programmers and developers can now build their own robots and other gadgets. Artificial intelligence is a science and technology based on disciplines such as computer science, biology, psychology, linguistics, mathematics, and engineering. A major thrust of AI is the development of computer functions associated with human intelligence, such as reasoning, learning, and problem solving.
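    The contrast between the two chaining strategies can be sketched with a toy rule base. The rules and facts below are hypothetical examples, not taken from the paper:

```python
# Toy rule base: IF all premises hold THEN conclusion (hypothetical rules)
RULES = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "needs_rest"),
]

def forward_chain(facts):
    """Data-driven: keep firing rules whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: prove a goal by recursively proving some rule's premises."""
    if goal in facts:
        return True
    return any(
        all(backward_chain(p, facts) for p in premises)
        for premises, conclusion in RULES
        if conclusion == goal
    )

print(sorted(forward_chain({"has_fever", "has_cough"})))
# ['has_cough', 'has_fever', 'has_flu', 'needs_rest']
```

    Forward chaining works from the data toward whatever conclusions follow; backward chaining starts from the conclusion it is asked to prove and works back to the facts.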

    Synthesis and characterization of a tetrathia[7]helicene-based rhenium(I) complex

    Get PDF
    Tetrathia[7]helicenes (7-TH), formed by thiophene and benzene rings ortho-fused in an alternating fashion, are emerging as one of the most popular classes of chiral helical-shaped molecules, thanks to their peculiar electronic and chiroptical properties, which suit manifold applications in different areas of science [1]. In particular, transition metal-based 7-TH systems are an extremely appealing class of complexes, in which the coordination of metals with the π-helical ligand, bearing appropriate coordinating functionalities, provides original chiral architectures. Indeed, the effective functionalization of the α-position(s) of the terminal thiophene ring(s) of the 7-TH scaffold allows the introduction of a variety of substituents, including those with efficient coordinating ability (e.g. cyano [2], phosphane [3], phosphine oxide [4]). For example, Rh(I) [5] and Au(I) [6] complexes based on 7-TH phosphanes have been successfully used in homogeneous transition metal catalysis. In our ongoing studies on 7-TH-based organometallic complexes, we have focused on a novel field of investigation concerning the development of rhenium-based polynuclear complexes containing 7-TH phosphine oxide ligands. In this communication, we describe the synthesis and characterization of a novel dinuclear rhenium(I) complex (Figure 1), along with the elucidation of its three-dimensional structure by single-crystal X-ray diffraction studies.

    Presenting a comprehensive multi-scale evaluation framework for participatory modelling programs: a scoping review

    Get PDF
    INTRODUCTION: Systems modelling and simulation can improve understanding of complex systems to support decision making and better manage system challenges. Advances in technology have made modelling accessible to diverse stakeholders, allowing them to engage with and contribute to the development of systems models (participatory modelling). However, despite its increasing application across a range of disciplines, there is a growing need for better evaluation efforts that effectively report on the quality, importance, and value of participatory modelling. This paper aims to identify and assess evaluation frameworks, criteria, and/or processes, and to synthesize the findings into a comprehensive multi-scale framework for participatory modelling programs. MATERIALS AND METHODS: A scoping review approach was utilized, involving a systematic literature search via Scopus, in consultation with experts, to identify and appraise records that described an evaluation framework, criteria, and/or process in the context of participatory modelling. This scoping review is registered with the Open Science Framework. RESULTS: The review identified 11 studies, which varied in evaluation purposes, terminologies, levels of examination, and time points. The review highlighted areas of overlap and opportunities for further development, which prompted the development of a comprehensive multi-scale evaluation framework to assess participatory modelling programs across disciplines and systems modelling methods. The framework consists of four categories (Feasibility, Value, Change/Action, Sustainability) with 30 evaluation criteria, broken down across project-, individual-, group-, and system-level impacts. DISCUSSION & CONCLUSION: The presented novel framework brings together a significant knowledge base into a flexible, cross-sectoral evaluation effort that considers the whole participatory modelling process. Developed through the rigorous synthesis of multidisciplinary expertise from existing studies, the framework can help practitioners understand practical future implications, such as which aspects are particularly important for policy decisions, community learning, and the ongoing improvement of participatory modelling methods.
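    One way to picture such a multi-scale framework in practice is as structured data keyed by category and level. The four category names and four levels below come from the abstract; the example criteria themselves are invented placeholders, not the paper's actual 30 criteria:

```python
from dataclasses import dataclass
from typing import Optional

LEVELS = ("project", "individual", "group", "system")
CATEGORIES = ("Feasibility", "Value", "Change/Action", "Sustainability")

@dataclass
class Criterion:
    category: str                 # one of CATEGORIES
    level: str                    # one of LEVELS
    name: str
    score: Optional[int] = None   # e.g. a 1-5 rating, unset until evaluated

# Placeholder criteria, one per category, for illustration only
criteria = [
    Criterion("Feasibility", "project", "Stakeholder recruitment achieved"),
    Criterion("Value", "individual", "Participant learning gains"),
    Criterion("Change/Action", "group", "Consensus actions adopted"),
    Criterion("Sustainability", "system", "Model reuse after the program ends"),
]

def by_level(items, level):
    """Select the criteria assessed at one level of the multi-scale framework."""
    return [c for c in items if c.level == level]
```

    Keeping category and level as explicit fields makes it straightforward to report results per scale (project, individual, group, system), which is the framework's central idea.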

    A unifying mathematical definition enables the theoretical study of the algorithmic class of particle methods

    Get PDF
    Mathematical definitions provide a precise, unambiguous way to formulate concepts. They also provide a common language between disciplines. Thus, they are the basis for a well-founded scientific discussion. In addition, mathematical definitions allow for deeper insights into the defined subject based on mathematical theorems that are incontrovertible under the given definition. Besides their value in mathematics, mathematical definitions are indispensable in other sciences like physics, chemistry, and computer science. In computer science, they help to derive the expected behavior of a computer program and provide guidance for the design and testing of software. Therefore, mathematical definitions can be used to design and implement advanced algorithms. One class of widely used algorithms in computer science is the class of particle-based algorithms, also known as particle methods. Particle methods can solve complex problems in various fields, such as fluid dynamics, plasma physics, or granular flows, using diverse simulation methods, including Discrete Element Methods (DEM), Molecular Dynamics (MD), Reproducing Kernel Particle Methods (RKPM), Particle Strength Exchange (PSE), and Smoothed Particle Hydrodynamics (SPH). Despite the increasing use of particle methods driven by improved computing performance, the relation between these algorithms remains formally unclear. In particular, particle methods lack a unifying mathematical definition and precisely defined terminology. This prevents determining whether an algorithm belongs to the class and what distinguishes the class. Here we present a rigorous mathematical definition of particle methods and demonstrate its importance by applying it to several canonical algorithms, including some not previously recognized as particle methods. Furthermore, we base proofs of theorems about parallelizability and computational power on it and use it to develop scientific computing software.
Our definition unifies, for the first time, the previously loosely connected notions of particle methods. Thus, it marks the necessary starting point for a broad range of joint formal investigations and applications across fields.

Contents:
1 Introduction
  1.1 The Role of Mathematical Definitions
  1.2 Particle Methods
  1.3 Scope and Contributions of this Thesis
2 Terminology and Notation
3 A Formal Definition of Particle Methods
  3.1 Introduction
  3.2 Definition of Particle Methods
    3.2.1 Particle Method Algorithm
    3.2.2 Particle Method Instance
    3.2.3 Particle State Transition Function
  3.3 Explanation of the Definition of Particle Methods
    3.3.1 Illustrative Example
    3.3.2 Explanation of the Particle Method Algorithm
    3.3.3 Explanation of the Particle Method Instance
    3.3.4 Explanation of the State Transition Function
  3.4 Conclusion
4 Algorithms as Particle Methods
  4.1 Introduction
  4.2 Perfectly Elastic Collision in Arbitrary Dimensions
  4.3 Particle Strength Exchange
  4.4 Smoothed Particle Hydrodynamics
  4.5 Lennard-Jones Molecular Dynamics
  4.6 Triangulation Refinement
  4.7 Conway's Game of Life
  4.8 Gaussian Elimination
  4.9 Conclusion
5 Parallelizability of Particle Methods
  5.1 Introduction
  5.2 Particle Methods on Shared Memory Systems
    5.2.1 Parallelization Scheme
    5.2.2 Lemmata
    5.2.3 Parallelizability
    5.2.4 Time Complexity
    5.2.5 Application
  5.3 Particle Methods on Distributed Memory Systems
    5.3.1 Parallelization Scheme
    5.3.2 Lemmata
    5.3.3 Parallelizability
    5.3.4 Bounds on Time Complexity and Parallel Scalability
  5.4 Conclusion
6 Turing Powerfulness and Halting Decidability
  6.1 Introduction
  6.2 Turing Machine
  6.3 Turing Powerfulness of Particle Methods Under a First Set of Constraints
  6.4 Turing Powerfulness of Particle Methods Under a Second Set of Constraints
  6.5 Halting Decidability of Particle Methods
  6.6 Conclusion
7 Particle Methods as a Basis for Scientific Software Engineering
  7.1 Introduction
  7.2 Design of the Prototype
  7.3 Applications, Comparisons, Convergence Study, and Run-time Evaluations
  7.4 Conclusion
8 Results, Discussion, Outlook, and Conclusion
  8.1 Problem
  8.2 Results
  8.3 Discussion
  8.4 Outlook
  8.5 Conclusion
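    The generic structure the thesis formalizes, a state of particles advanced by pairwise interaction and per-particle evolution functions until a stop condition holds, can be sketched as follows. The toy averaging dynamics and the function names are illustrative placeholders, not the thesis's exact notation:

```python
def interact(p, q):
    # pairwise contribution: a toy attraction of particle p toward particle q
    return p + 0.1 * (q - p)

def evolve(p):
    # per-particle update applied after all interactions (here: just rounding)
    return round(p, 6)

def particle_method(particles, steps):
    """Advance the particle state until the stop condition (a step count) holds."""
    for _ in range(steps):
        particles = [
            evolve(sum(interact(p, q) for q in particles) / len(particles))
            for p in particles
        ]
    return particles
```

    In this toy instance each particle relaxes toward the population mean, so scalar "positions" converge to a common value; the point is the shared algorithmic skeleton (state, interact, evolve, stop), which is what a unifying definition pins down.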

    Digital Ecosystems: Ecosystem-Oriented Architectures

    Full text link
    We view Digital Ecosystems as the digital counterparts of biological ecosystems. Here, we are concerned with the creation of these Digital Ecosystems, exploiting the self-organising properties of biological ecosystems to evolve high-level software applications. We therefore created the Digital Ecosystem, a novel optimisation technique inspired by biological ecosystems, in which the optimisation works at two levels: a first optimisation, the migration of agents distributed in a decentralised peer-to-peer network, operating continuously in time; this process feeds a second optimisation, based on evolutionary computing, that operates locally on single peers and aims to find solutions satisfying locally relevant constraints. The Digital Ecosystem was then measured experimentally through simulations, with measures originating from theoretical ecology, evaluating its likeness to biological ecosystems. This included its responsiveness to requests for applications from the user base, as a measure of ecological succession (ecosystem maturity). Overall, we have advanced the understanding of Digital Ecosystems, creating Ecosystem-Oriented Architectures where the word ecosystem is more than just a metaphor.
    Comment: 39 pages, 26 figures, journal
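    The second, local level of optimisation described above can be sketched as a small evolutionary loop running on a single peer. The bit-string encoding and the fitness function are invented placeholders; in the paper, the evolving agents represent software services:

```python
import random

def local_evolution(fitness, pool, generations=50, seed=0):
    """Evolve a pool of bit-string 'agents' against a locally relevant fitness."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    for _ in range(generations):
        pool = sorted(pool, key=fitness, reverse=True)
        survivors = pool[: len(pool) // 2]         # selection keeps the best half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(len(child))] ^= 1  # one-point mutation
            children.append(child)
        pool = survivors + children                # elitism: survivors carry over
    return max(pool, key=fitness)

# Locally relevant constraint (placeholder): maximise the number of 1-bits
best = local_evolution(lambda agent: sum(agent), [[0] * 8 for _ in range(8)])
```

    In the architecture described above, such a loop would run independently on each peer, while agent migration across the peer-to-peer network supplies each peer with fresh genetic material.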