
    What Can Artificial Intelligence Do for Scientific Realism?

    The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting the epistemic warrants of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather as a reinforcement of it, since the synthesis rejects the retrospective interpretations of scientific progress that brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for unconceived alternatives, providing modal knowledge of what is possible therein. As a result, the epistemic warrant of synthesised realist theories should emerge bolstered as the underdetermination by available evidence is reduced. In shifting the realist commitment away from theoretical artefacts towards modalities of the possibility spaces, the synthesis comes out as a kind of perspectival modelling.

    A Manifesto for the Equifinality Thesis.

    This essay discusses some of the issues involved in the identification and prediction of hydrological models given some calibration data. The reasons for the incompleteness of traditional calibration methods are discussed. The argument is made that the potential for multiple acceptable models as representations of hydrological and other environmental systems (the equifinality thesis) should be given more serious consideration than hitherto. The essay proposes some techniques for an extended GLUE (generalised likelihood uncertainty estimation) methodology to make it more rigorous, and outlines some of the research issues still to be resolved.
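    To make the equifinality idea concrete, the Monte Carlo core of a GLUE analysis can be sketched as follows. The toy linear-reservoir model, the Nash-Sutcliffe threshold of 0.7, and the parameter range are hypothetical illustrations, not taken from the essay: GLUE retains every sampled parameter set whose informal likelihood exceeds the threshold, rather than a single "best" calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(k, rain):
        # Toy linear-reservoir model (hypothetical stand-in for a real
        # hydrological model): storage fills with rain, drains at rate s/k.
        s, q = 0.0, []
        for r in rain:
            s += r
            out = s / k
            s -= out
            q.append(out)
        return np.array(q)

    def nse(sim, obs):
        # Nash-Sutcliffe efficiency, used here as the informal likelihood measure.
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rain = rng.gamma(2.0, 1.0, size=50)
    obs = model(5.0, rain) + rng.normal(0.0, 0.05, size=50)  # synthetic "observations"

    # GLUE step: sample parameters, keep ALL models above the behavioural threshold.
    samples = rng.uniform(1.0, 20.0, size=2000)
    scores = np.array([nse(model(k, rain), obs) for k in samples])
    behavioural = samples[scores > 0.7]  # the equifinal set of acceptable models

    print(behavioural.size, behavioural.min(), behavioural.max())
    ```

    The behavioural set typically spans a range of parameter values, which is the point of the equifinality thesis: many models are consistent with the calibration data, and later GLUE steps weight predictions across the whole set.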

    Support for collaborative component-based software engineering

    Collaborative system composition during design has been poorly supported by traditional CASE tools (which have usually concentrated on supporting individual projects) and has focused almost exclusively on static composition. Little support has been developed for maintaining large distributed collections of heterogeneous software components across a number of projects. The CoDEEDS project addresses the collaborative determination, elaboration, and evolution of design spaces that describe both static and dynamic compositions of software components from sources such as component libraries, software service directories, and reuse repositories. The GENESIS project has focused, in the development of OSCAR, on the creation and maintenance of large software artefact repositories. The most recent extensions explicitly address the provision of cross-project global views of large software collections and historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR and CoDEEDS are widely adopted, and steps to facilitate this are described.

    This book continues to provide a forum, started by a recent book, Software Evolution with UML and XML, where expert insights are presented on the subject. In that book, initial efforts were made to link together three current phenomena: software evolution, UML, and XML. This book focuses on the practical side of linking them, that is, how UML and XML and their related methods and tools can assist software evolution in practice. Considering that nowadays software starts evolving before it is delivered, an apparent feature of software evolution is that it happens across all stages and all aspects, so all possible techniques should be explored. This book explores techniques based on UML and XML and combinations of them with other techniques (that is, techniques spanning theory to tools).

    Software evolution happens at all stages: chapters in this book describe software evolution issues that arise during architecting, modelling/specifying, assessing, coding, validating, design recovery, program understanding, and reuse. Software evolution happens in all aspects: chapters illustrate that software evolution issues are involved in web applications, embedded systems, software repositories, component-based development, object models, development environments, software metrics, UML use case diagrams, system models, legacy systems, safety-critical systems, user interfaces, software reuse, evolution management, and variability modelling. Software evolution needs to be facilitated with all possible techniques: chapters demonstrate techniques such as formal methods, program transformation, empirical study, tool development, standardisation, and visualisation to control system changes and meet organisational and business objectives in a cost-effective way. On the journey through the grand challenge posed by software evolution, a journey we all have to make, the contributing authors of this book have already made further advances.

    Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective

    Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with widespread application in combinatorial optimization, constraint satisfaction, relational reasoning, and other scientific domains. The need for improved explainability, interpretability, and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state of the art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing. Comment: Updated version, draft of accepted IJCAI 2020 survey paper.
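    The message-passing computation that GNN variants share can be illustrated with a minimal mean-aggregation layer in NumPy. The graph, one-hot features, and weights below are hypothetical; real GNN layers learn the weight matrix and stack several such layers.

    ```python
    import numpy as np

    def gnn_layer(adj, h, w):
        # One message-passing step: h' = ReLU(D^-1 (A + I) h W),
        # i.e. mean-aggregate neighbour features (with self-loops),
        # then apply a linear transform and a ReLU nonlinearity.
        a_hat = adj + np.eye(adj.shape[0])      # add self-loops
        deg = a_hat.sum(axis=1, keepdims=True)  # node degrees for mean pooling
        msgs = (a_hat / deg) @ h                # aggregate neighbour features
        return np.maximum(0.0, msgs @ w)        # transform + ReLU

    # Tiny 4-node path graph whose edges encode a symbolic relation.
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    h = np.eye(4)                # one-hot node features
    w = np.full((4, 2), 0.5)     # illustrative weight matrix

    out = gnn_layer(adj, h, w)
    print(out)                   # shape (4, 2); each entry 0.5 for this input
    ```

    Stacking such layers lets information propagate over multi-hop relations, which is why GNNs fit relational-reasoning and constraint-satisfaction tasks like those surveyed in the paper.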

    Regularization of context data of autonomous power supply systems

    To improve the quality of decision-making in the control of autonomous power supply systems, an algorithm for the regularization of context data was created and tested. It reduced the forecast error of context time series from 5-6 % to 1.5-2 % and lowered the number of operations needed to form control rules for the semiconductor power converters in the network. Context data are formed from time series (TS) whose values are recorded by sensors at given time intervals.

    In this work we present a regularization of context data of autonomous power supply systems. The autonomous power supply system is a context-awareness framework that aims to provide a comprehensive solution for reasoning about context from the level of sensor data up to high-level situation awareness (actuators or devices). The paper describes these challenges and presents data-management solutions as a module of context data analysis for the energy control system. These solutions include sensor data acquisition and time series forecasting, an ontology model, and a context prediction model for analytical query processing over past and future context data. Context prediction requires preliminary time series processing, which consists of detecting anomalous values in the series and smoothing the series. The randomness of the commutation, though, leads to disturbances in the power consumption characteristics. Keeping a record of the time points and values of the disturbances complicates the forecasting process and can lead to erroneous results. Filtration, or smoothing, of context time series is therefore the necessary preliminary stage for obtaining trends. Thus, the first step of the context data analysis module is filtration and the second step is prediction. There are three distinct groups of smoothing methods: averaging methods (moving average, weighted moving average); exponential smoothing methods (simple, weighted, exponential, double); and the Kalman filter.
    There are likewise three groups of prediction methods: interpolation (linear, polynomial, spline); extrapolation (linear, polynomial, French curve, conic); and linear prediction. If the predicted value falls outside the confidence range of prediction errors, the task of regularizing sample n of the prediction method is performed. By sample regularization we understand altering the sample value up to the value that brings the predicted value back into the confidence range. The proposed approach to regularization (adaptation) of time series for the forecasting method reduces the forecasting error from 5-6 % to 1.5-2 %, as the test results showed.
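    The pipeline described above (smoothing, prediction, and sample regularization) can be sketched minimally as follows. The moving-average window, the confidence bounds, and the clipping rule are assumptions for illustration, not the authors' exact scheme; the point is that regularizing an out-of-range sample pulls the next prediction back into the confidence range.

    ```python
    import numpy as np

    def moving_average(x, window=3):
        # Averaging filter: the simplest of the smoothing groups listed above.
        kernel = np.ones(window) / window
        return np.convolve(x, kernel, mode="valid")

    def linear_predict(series):
        # One-step linear extrapolation from the last two samples.
        return 2.0 * series[-1] - series[-2]

    def regularize(series, lower, upper):
        # Sample regularization: alter any sample outside the confidence
        # range (here by clipping, an assumed rule) so that the next
        # prediction returns to the range.
        return np.clip(np.asarray(series, dtype=float), lower, upper)

    # Context time series with a commutation disturbance at t = 3.
    x = np.array([2.0, 2.1, 2.05, 7.5, 2.1])

    print(moving_average(x))        # smoothing step
    print(linear_predict(x))        # naive prediction: -3.3, far out of range
    xr = regularize(x, 1.5, 2.5)    # sample regularization
    print(linear_predict(xr))       # prediction back in range: 1.7
    ```

    Without regularization, the disturbance at t = 3 drags the linear extrapolation to a physically meaningless negative value; after clipping the sample into the confidence range, the one-step prediction lands back inside it.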