7,324 research outputs found

    An Introduction to Ontology

    Analytical philosophy of the last one hundred years has been heavily influenced by a doctrine to the effect that one can arrive at a correct ontology by paying attention to certain superficial (syntactic) features of first-order predicate logic as conceived by Frege and Russell. More specifically, it is a doctrine to the effect that the key to the ontological structure of reality is captured syntactically in the ‘Fa’ (or, in more sophisticated versions, in the ‘Rab’) of first-order logic, where ‘F’ stands for what is general in reality and ‘a’ for what is individual. Hence “f(a)ntology”. Because predicate logic has exactly two syntactically different kinds of referring expressions—‘F’, ‘G’, ‘R’, etc., and ‘a’, ‘b’, ‘c’, etc.—reality must consist of exactly two correspondingly different kinds of entity: the general (properties, concepts) and the particular (things, objects), the relation between these two kinds of entity being revealed in the predicate-argument structure of atomic formulas in first-order logic.

    Seafarers, Silk, and Science: Oceanographic Data in the Making

    This thesis comprises an empirical case study of scientific data production in oceanography and a philosophical analysis of the relations between newly created scientific data and the natural world. Based on qualitative interviews with researchers, I reconstruct research practices that lead to the ongoing production of digital data related to long-term developments of plankton biodiversity in the oceans. My analysis is centred on four themes: materiality, scientific representing with data, methodological continuity, and the contribution of non-scientists to epistemic processes. These are critically assessed against the background of today’s data-intensive sciences and increased automation and remoteness in oceanographic practices. Sciences of the world’s oceans have by and large been disregarded in philosophical scholarship thus far. My thesis opens this field for philosophical analysis and reveals various conditions and constraints of data practices that are largely uncontrollable by ocean scientists. I argue that the creation of useful scientific data depends on the implementation and preservation of material, methodological, and social continuities. These allow scientists to repeatedly transform visually perceived characteristics of research samples into meaningful scientific data stored in a digital database. In my case study, data are not collected but result from active intervention and subsequent manipulation and processing of newly created material objects. My discussion of scientific representing with data suggests that scientists do not extract or read any intrinsic representational relation between data and a target, but make data gradually more computable and compatible with already existing representations of natural systems. 
My arguments shed light on the epistemological significance of materiality, on limiting factors of scientific agency, and on an inevitable balance between changing conditions of concrete research settings and long-term consistency of data practices.
European Research Council

    A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems

    As geographical observational data capture, storage and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance and heterogeneity remain open issues due to the complexity of geo-spatial data and information representation. Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures, and so they often do not fully capture the true meaning of the associated datasets. Furthermore, the data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way. The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health), two categories or levels of domain concepts exist: those that remain stable over a long period of time, and those that are prone to change as the domain knowledge evolves and new discoveries are made. Over many years, health informaticians have developed a sophisticated two-level modelling approach to systems design for electronic health documentation and, with the use of archetypes, have shown how data, information, and knowledge interoperability among heterogeneous systems can be achieved.
This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches. A detailed review of state-of-the-art SDIs, geo-spatial standards and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium’s (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level method to show how existing historical ocean observing datasets can be expressed semantically and harmonized using two-level modelling. The flexibility of the approach was also investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system. This work has demonstrated that two-level modelling can be translated to the geospatial domain and then further developed for use within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies and Internet of Things-based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios. This has been demonstrated through the re-purposing of selected existing geospatial data models and standards.
However, it was found that re-using existing standards requires careful ontological analysis per domain concept, and so caution is recommended in assuming the wider applicability of the approach. While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, translation to a new domain was found to be complex, and this complexity was found to be a barrier to adoption, especially in commercially driven projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods and fundamental geo-archetypes has been developed. However, during this work it was not possible to form the rich community of supporters required to fully validate the geo-archetypes. The findings of this work are therefore not exhaustive, and the archetype models produced are only indicative; they can nonetheless be used as a basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, this work recommends further development and evaluation of the approach, building on the positive results thus far and on the base software artefacts developed to support it.
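The two-level idea the thesis translates can be illustrated with a minimal sketch (hypothetical class and archetype names, not the thesis's actual framework or the openEHR archetype formalism): a stable, generic reference-model class at level one, with evolving domain concepts expressed as archetype data at level two, so that new concepts can be added without changing the software.

```python
class Observation:
    """Level 1: a stable, generic reference-model class."""
    def __init__(self, observed_property, value, unit):
        self.observed_property = observed_property
        self.value = value
        self.unit = unit

# Level 2: an archetype is data, not code, so domain concepts can evolve
# without touching the reference model. (Hypothetical constraints.)
SEA_TEMP_ARCHETYPE = {
    "observed_property": "sea_surface_temperature",
    "unit": "degC",
    "value_range": (-2.0, 40.0),
}

def conforms(obs, archetype):
    """Check an Observation instance against an archetype's constraints."""
    low, high = archetype["value_range"]
    return (obs.observed_property == archetype["observed_property"]
            and obs.unit == archetype["unit"]
            and low <= obs.value <= high)

obs = Observation("sea_surface_temperature", 14.3, "degC")
print(conforms(obs, SEA_TEMP_ARCHETYPE))  # True
```

Interoperability then reduces to sharing archetypes: any system holding the same archetype can validate and interpret the same observations.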

    Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    We introduce the \texttt{pyunicorn} (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. \texttt{pyunicorn} is a fully object-oriented and easily parallelizable package written in Python. It allows for the construction of functional networks, such as climate networks in climatology or functional brain networks in neuroscience, representing the structure of statistical interrelationships in large data sets of time series, and for the subsequent investigation of this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics or network surrogates. Additionally, \texttt{pyunicorn} provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis (RQA), recurrence networks, visibility graphs and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.
Comment: 28 pages, 17 figures
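For readers unfamiliar with one of the methods mentioned, the natural visibility graph can be sketched in a few lines of plain Python (an illustrative re-implementation of the standard criterion, not pyunicorn's own API): each time index becomes a node, and two samples are linked when the straight line between them clears every sample in between.

```python
def visibility_edges(x):
    """Natural visibility graph of a time series: nodes are time indices;
    (a, b) is an edge when the line segment joining samples a and b lies
    strictly above every intermediate sample."""
    n = len(x)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.append((a, b))
    return edges

series = [1.0, 3.0, 2.0, 4.0]
print(visibility_edges(series))  # [(0, 1), (1, 2), (1, 3), (2, 3)]
```

The point (0) cannot "see" point (3) because the peak at index 1 blocks the line of sight; libraries like pyunicorn then apply network measures to the resulting graph.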

    The International Land Model Benchmarking (ILAMB) System: Design, Theory, and Implementation

    The increasing complexity of Earth system models has inspired efforts to quantitatively assess model fidelity through rigorous comparison with the best available measurements and observational data products. Earth system models exhibit a high degree of spread in predictions of land biogeochemistry, biogeophysics, and hydrology, which are sensitive to forcing from other model components. Based on insights from prior land model evaluation studies and community workshops, the authors developed an open source model benchmarking software package that generates graphical diagnostics and scores model performance in support of the International Land Model Benchmarking (ILAMB) project. Employing a suite of in situ, remote sensing, and reanalysis data sets, the ILAMB package performs comprehensive model assessment across a wide range of land variables and generates a hierarchical set of web pages containing statistical analyses and figures designed to provide the user with insights into strengths and weaknesses of multiple models or model versions. Described here are the benchmarking philosophy and mathematical methodology embodied in the most recent implementation of the ILAMB package. Comparison methods unique to a few specific data sets are presented, and guidelines for configuring an ILAMB analysis and interpreting the resulting model performance scores are discussed. ILAMB is being adopted by modeling teams and centers during model development and for model intercomparison projects, and community engagement is sought for extending evaluation metrics and adding new observational data sets to the benchmarking framework.
Key Point: The ILAMB benchmarking system broadly compares models to observational data sets and provides a synthesis of overall performance.
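Schematically, benchmarking scores of this kind map a normalised model-observation error onto a bounded skill value. The following toy function illustrates the idea only; the function name and the exact normalisation are assumptions for this sketch, not ILAMB's actual scoring methodology.

```python
import math

def relative_error_score(model, obs):
    """Toy benchmarking score: normalise the RMSE between model and
    observations by the spread of the observations, then map the result
    to (0, 1] with exp(-error). Illustrative only, not ILAMB's formula."""
    n = len(obs)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    obs_mean = sum(obs) / n
    obs_std = math.sqrt(sum((o - obs_mean) ** 2 for o in obs) / n)
    return math.exp(-rmse / obs_std)

obs = [1.0, 2.0, 3.0, 4.0]
print(relative_error_score(obs, obs))                    # 1.0 (perfect agreement)
print(relative_error_score([1.1, 2.1, 3.1, 4.1], obs) < 1.0)  # True
```

A perfect model scores 1.0, and the score decays smoothly toward 0 as the normalised error grows, which makes scores comparable across variables with different units and magnitudes.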

    Semantic Mediation of Environmental Observation Datasets through Sensor Observation Services

    A large volume of environmental observation data is being generated as a result of the observation of many properties at the Earth’s surface. In parallel, there is a clear interest in accessing data from different providers related to the same property in order to solve concrete problems, and hence an increasing interest in publishing such data through open interfaces in the scope of Spatial Data Infrastructures. There have been important advances in the definition of open standards by the Open Geospatial Consortium (OGC) that enable interoperable access to sensor data. Among the proposed interfaces, the Sensor Observation Service (SOS) is having an important impact. Currently, however, no available solution provides integrated access to various data sources through a SOS interface. This problem has two main facets. On the one hand, the heterogeneity among different data sources has to be resolved. On the other hand, semantic conflicts that arise during the integration process must also be resolved with the help of relevant domain expert knowledge. To address these problems, the main goal of this thesis is to design and develop a semantic data mediation framework to access any kind of environmental observation dataset, including both relational data sources and multidimensional arrays.
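The heterogeneity facet of the mediation task can be sketched with a deliberately simple example (the provider schemas, field names, and conversion are hypothetical, not taken from the thesis): two providers report the same observed property under different names and units, and the mediator maps both into one common representation.

```python
def from_provider_a(rec):
    """Provider A (hypothetical) reports water temperature in Celsius."""
    return {"property": "water_temperature",
            "value_degC": rec["temp_c"],
            "time": rec["t"]}

def from_provider_b(rec):
    """Provider B (hypothetical) reports the same property in Fahrenheit
    under a different field name; the mediator resolves both conflicts."""
    return {"property": "water_temperature",
            "value_degC": (rec["waterTempF"] - 32.0) * 5.0 / 9.0,
            "time": rec["timestamp"]}

records = [
    from_provider_a({"temp_c": 10.0, "t": "2020-01-01T00:00Z"}),
    from_provider_b({"waterTempF": 50.0, "timestamp": "2020-01-01T01:00Z"}),
]
print([r["value_degC"] for r in records])  # [10.0, 10.0]
```

A real mediation framework generalises this by driving such mappings from declarative configuration and domain ontologies rather than hand-written functions, so that new sources can be integrated without new code.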

    Understanding drawing: a cognitive account of observational process

    This thesis contributes to theorising observational drawing from a cognitive perspective. Our current understanding of drawing is developing rapidly through artistic and scientific enquiry. However, it remains fragmented because the frames of reference of those modes of enquiry do not coincide, so the foundations for a truly interdisciplinary understanding of observational drawing are still incipient. This thesis seeks to add to those foundations by bridging artistic and scientific perspectives on observational process and the cognitive aptitudes underpinning it. The project is based on four case studies of experienced artists’ drawing processes, with quantitative and qualitative data gathered: timing of eye and hand movements, and the artists’ verbal reports. The data sets are analysed with a generative approach, using behavioural and protocol analysis methods to yield comparative models that describe cognitive strategies for drawing. This forms a grounded framework that elucidates the cognitive activities and competences observational process entails. Cognitive psychological theory is consulted to explain the observed behaviours, and the combined evidence is applied to understanding apparent discrepancies in existing accounts of drawing. In addition, the use of verbal reporting methods in drawing studies is evaluated. The study observes how the drawing process involves a segregation of activities that enables efficient use of limited and parametrically constrained cognitive resources. Differing drawing strategies are shown to share common key characteristics, including a staged use of selective visual attention and the capacity to temporarily postpone critical judgement in order to engage fully in periods of direct perception and action. The autonomy and regularity of those activities, demonstrated by the artists studied, indicate that drawing ability entails tacit self-knowledge concerning the cognitive and perceptual capacities described in this thesis.
This thesis presents drawing as a skill that involves strategic use of visual deconstruction, comparison, analogical transfer and repetitive cycles of construction, evaluation and revision. I argue that drawing skill acquisition and transfer can be facilitated by the elucidation of these processes. As such, this framework for describing and understanding drawing is offered to those who seek to understand, learn or teach observational practice, and to those who are taking a renewed interest in drawing as a tool for thought.

    Bayesian participatory-based decision analysis: an evolutionary, adaptive formalism for integrated analysis of complex challenges to social-ecological system sustainability

    Includes bibliographical references (pages 379–400).
    This dissertation responds to the need for integration between researchers and decision-makers who are dealing with complex social-ecological system sustainability and decision-making challenges. To this end, we propose a new approach, called Bayesian Participatory-based Decision Analysis (BPDA), which makes use of graphical causal maps and Bayesian networks to facilitate integration at the appropriate scales and levels of description. The BPDA approach is not a predictive approach; rather, it caters for a wide range of future scenarios in anticipation of the need to adapt to unforeseeable changes as they occur. We argue that graphical causal models and Bayesian networks constitute an evolutionary, adaptive formalism for integrating research and decision-making for sustainable development. The approach was implemented in a number of different interdisciplinary case studies concerned with social-ecological system scale challenges and problems, culminating in a study where the approach was implemented with decision-makers in Government. This dissertation introduces the BPDA approach and shows how it helps identify critical cross-scale and cross-sector linkages and sensitivities, and addresses critical requirements for understanding system resilience and adaptive capacity.

    Examining the decision-relevance of climate model information for the insurance industry

    The insurance industry is becoming increasingly exposed to the adverse impacts of climate variability and climate change. In developing policies and adapting strategies to better manage climate risk, insurers and reinsurers are therefore engaging directly with the climate modelling community to further understand the predictive capabilities of climate models and to develop techniques to utilise climate model output. With an inherent interest in the present and future frequency and magnitude of extreme climate-related loss events, insurers rely on the climate modelling community to provide informative model projections at the spatial and temporal scales relevant for insurance decisions. Furthermore, given the high economic stakes associated with enacting strategies to address climate change, it is essential that climate model experiments are designed to thoroughly explore the multiple sources of uncertainty. Determining the reliability of model-based projections is a precursor to examining their relevance to the insurance industry and, more widely, to the climate change adaptation community. Designing experiments which adequately account for uncertainty therefore requires careful consideration of the nonlinear and chaotic properties of the climate system. Using the well-developed concepts of dynamical systems theory, simple nonlinear chaotic systems are investigated to further understand what is meant by climate under climate change. The thesis questions the conventional paradigm in which long-term climate prediction is treated purely as a boundary value problem (predictability of the second kind). Using simple climate-like models to draw analogies to the climate system, results are presented which support the emerging view that climate prediction ought to be treated as both an initial value problem and a boundary condition problem on all time scales.
The research also examines the application of the ergodic assumption in climate modelling and climate change adaptation decisions. Idealised model experiments illustrate situations in which the ergodic assumption breaks down, and consideration is given to alternative experimental designs which do not rely on ergodicity. Experimental results are presented which support the view that large initial-condition ensembles are required to detail the changing distribution of climate under altered forcing conditions. It is argued that chaos and nonlinear dynamic behaviour ought to have more prominence in the discussion of the forecasting capabilities of climate prediction.
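The case for initial-condition ensembles can be demonstrated with the logistic map, a textbook chaotic system (an illustrative stand-in for the thesis's idealised models, not one of them): two trajectories whose starting points differ by one part in a hundred million diverge to order-one separation within a few dozen steps.

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two trajectories with initial conditions differing by 1e-8.
x, y = 0.2, 0.2 + 1e-8
max_sep = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# Sensitive dependence on initial conditions: the tiny initial difference
# is amplified to order one, so a single trajectory cannot characterise
# the system's behaviour and an ensemble of initial conditions is needed.
print(max_sep > 0.1)  # True
```

Under altered forcing (a changed parameter r), it is the distribution of such an ensemble, rather than any individual trajectory, that characterises the changed "climate" of the system.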

    Discovering Causal Relations and Equations from Data

    Physics is a field of science that has traditionally used the scientific method to answer questions about why natural phenomena occur and to make testable models that explain the phenomena. Discovering equations, laws and principles that are invariant, robust and causal explanations of the world has been fundamental in the physical sciences throughout the centuries. Discoveries emerge from observing the world and, when possible, performing interventional studies in the system under study. With the advent of big data and the use of data-driven methods, the fields of causal and equation discovery have grown and made progress in computer science, physics, statistics, philosophy, and many applied fields. All these domains are intertwined and can be used to discover causal relations, physical laws, and equations from observational data. This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of physics and outlines the most important challenges and promising future lines of research. We also provide a taxonomy for observational causal and equation discovery, point out connections, and showcase a complete set of case studies in Earth and climate sciences, fluid dynamics and mechanics, and the neurosciences. This review demonstrates that discovering fundamental laws and causal relations by observing natural phenomena is being revolutionised with the efficient exploitation of observational data, modern machine learning algorithms and interaction with domain knowledge. Exciting times are ahead, with many challenges and opportunities to improve our understanding of complex systems.
Comment: 137 pages
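At its simplest, equation discovery means recovering the parameters of a hidden law from observational data. A minimal sketch (synthetic data and a hypothetical "hidden law", fitted with ordinary least squares in pure Python):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: the simplest instance of
    recovering an equation's parameters from observations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Synthetic "observations" generated by a hidden law y = 2x + 3 plus small noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
noise = [0.01, -0.02, 0.0, 0.02, -0.01]
ys = [2.0 * x + 3.0 + e for x, e in zip(xs, noise)]

a, b = fit_line(xs, ys)
print(round(a, 1), round(b, 1))  # 2.0 3.0
```

Modern equation-discovery methods reviewed in the paper generalise this idea: they search over libraries of candidate terms and favour sparse, physically plausible combinations rather than fitting a single pre-chosen form.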
