
    A Qualitative Representation of Spatial Scenes in R2 with Regions and Lines

    Regions and lines are common abstractions for geographic objects. Collections of regions, lines, and other representations of spatial objects, together with their relations, form a spatial scene. For instance, the states of Maine and New Hampshire can be represented by a pair of regions and related based on their topological properties. These two states are adjacent (i.e., they meet along their shared boundary), whereas Maine and Florida are not adjacent (i.e., they are disjoint). A detailed model for qualitatively describing spatial scenes should capture the essential properties of a configuration such that a description of the represented objects and their relations can be generated. Such a description should then be able to reproduce a scene in a way that preserves all topological relationships, but without regard to metric details. Coarse approaches to qualitative spatial reasoning may underspecify certain relations. For example, if two objects meet, it is unclear whether they meet along an edge, at a single point, or multiple times along their boundaries. A location where the boundaries of spatial objects converge is called a spatial intersection. This thesis develops a model for spatial scene descriptions primarily through sequences of detailed spatial intersections and object containment, capturing how complex spatial objects relate. With a theory of complex spatial scenes developed, a tool that automatically generates a formal description of a spatial scene is prototyped, enabling the described objects to be analyzed. The strengths and weaknesses of the proposed model are discussed relative to other models of spatial scene description, along with further refinements.
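
    A minimal sketch of the kind of topological test described above, using the third-party shapely library; the polygon coordinates are invented placeholders rather than actual state geometries:

        from shapely.geometry import Polygon

        # Invented rectangles standing in for state outlines; real geometries
        # would come from a GIS data source.
        maine = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
        new_hampshire = Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])  # shares an edge with maine
        florida = Polygon([(10, 0), (12, 0), (12, 2), (10, 2)])    # far away

        # "meet" relation: boundaries intersect while interiors do not.
        print(maine.touches(new_hampshire))  # True  -> adjacent
        print(maine.disjoint(florida))       # True  -> not adjacent

        # The shared boundary itself; its geometry type distinguishes meeting
        # along an edge (LineString) from meeting at a single point (Point).
        print(maine.intersection(new_hampshire).geom_type)  # LineString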

    Verification and validation of knowledge-based systems with an example from site selection.

    In this paper, the verification and validation of Knowledge-Based Systems (KBS) using decision tables (DTs) is one of the central issues. It is illustrated using real-market data taken from industrial site selection problems. One of the main problems of KBS is that often a lot of anomalies remain after the knowledge has been elicited. As a consequence, the quality of the KBS will degrade. Evaluating a KBS consists mainly of two parts: verification and validation (V&V). To distinguish between the two, the following phrase is regularly used: verification deals with 'building the system right', while validation involves 'building the right system'. In the context of DTs, it has been claimed from the early years of DT research onwards that DTs are very well suited for V&V purposes. Therefore, it is explained how V&V of the modelled knowledge can be performed. In this respect, use is made of stated response modelling design techniques to select decision rules from a DT. Our approach is illustrated using a case study dealing with the locational problem of a (petro)chemical company in a port environment. The KBS developed has been named Matisse, which is an acronym of Matching Algorithm, a Technique for Industrial Site Selection and Evaluation.
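
    A minimal, hypothetical sketch of the kind of anomaly check that a decision-table representation makes mechanical: with a finite condition space, missing or conflicting rules can be detected automatically. The conditions and actions below are invented for illustration and are not taken from the Matisse system:

        from itertools import product

        # Each rule maps a combination of condition values to an action.
        conditions = {
            "deep_water_access": [True, False],
            "pipeline_nearby":   [True, False],
        }
        rules = {
            (True,  True):  "accept site",
            (True,  False): "accept site",
            (False, True):  "reject site",
            # (False, False) intentionally omitted to show gap detection
        }

        # Completeness check: every combination of condition values must be covered.
        all_combinations = set(product(*conditions.values()))
        missing = all_combinations - set(rules)
        print("uncovered condition combinations:", missing)

        # Consistency check: no combination may map to more than one action.
        # A dict guarantees this; a rule list would need an explicit check.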

    What Is Strategy In Urban Mobility Systems?

    Institute of Transport and Logistics Studies. Faculty of Economics and Business. The University of Sydney.

    Feature-based validation reasoning for intent-driven engineering design

    Feature-based modelling represents the future of CAD systems. However, operations such as modelling and editing can corrupt the validity of a feature-based model representation. Feature interactions are a consequence of feature operations and the existence of a number of features in the same model. Feature interaction affects not only the solid representation of the part, but also the functional intentions embedded within features. A technique is thus required to assess the integrity of a feature-based model from various perspectives, including that of functional intent, and this technique must take into account the problems brought about by feature interactions and operations. The understanding, reasoning and resolution of invalid feature-based models require an understanding of the feature interaction phenomena, as well as the characterisation of these functional intentions. A system capable of such assessment is called a feature-based representation validation system. This research studies feature interaction phenomena and the designer's intents embedded in features as a medium to achieve a feature-based representation validation system. [Continues.]
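
    A hypothetical sketch of the basic idea of a representation validation check: each feature carries a functional intent expressed as a predicate over the model, and every intent is re-evaluated after a modelling or editing operation. The feature names and predicates below are invented and stand in for the much richer representation a real CAD validation system would use:

        from dataclasses import dataclass
        from typing import Callable, Dict, List

        Model = Dict[str, float]   # placeholder for a real solid model

        @dataclass
        class Feature:
            name: str
            # The functional intent is a predicate that must hold on the model.
            intent: Callable[[Model], bool]

        def validate(model: Model, features: List[Feature]) -> List[str]:
            """Return the names of features whose intent the current model violates."""
            return [f.name for f in features if not f.intent(model)]

        # Example: a through-hole's intent is violated if a later operation
        # makes the plate thicker than the hole is deep.
        features = [
            Feature("through_hole", lambda m: m["hole_depth"] >= m["plate_thickness"]),
        ]
        model = {"hole_depth": 5.0, "plate_thickness": 8.0}   # state after an edit
        print(validate(model, features))   # ['through_hole']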

    Hierarchical Graphs as Organisational Principle and Spatial Model Applied to Pedestrian Indoor Navigation

    In this thesis, hierarchical graphs are investigated from two different angles – as a general modelling principle for (geo)spatial networks and as a practical means to enhance navigation in buildings. The topics addressed are of interest from a multi-disciplinary point of view, ranging from Computer Science in general, through Artificial Intelligence and Computational Geometry in particular, to other fields such as Geographic Information Science. Some hierarchical graph models have been previously proposed by the research community, e.g. to cope with the massive size of road networks, or as a conceptual model for human wayfinding. However, there has not yet been a comprehensive, systematic approach for modelling spatial networks with hierarchical graphs. One particular problem is the gap between conceptual models and models which can be readily used in practice. Geospatial data is commonly modelled - if at all - only as a flat graph. Therefore, from a practical point of view, it is important to address the automatic construction of a graph hierarchy based on the predominant data models. The work presented deals with this problem: an automated method for construction is introduced and explained. A particular contribution of my thesis is the proposition to use hierarchical graphs as the basis for an extensible, flexible architecture for modelling various (geo)spatial networks. The proposed approach complements classical graph models very well in the sense that their expressiveness is extended: various graphs originating from different sources can be integrated into a comprehensive, multi-level model. This more sophisticated kind of architecture allows for extending navigation services beyond the borders of one single spatial network to a collection of heterogeneous networks, thus establishing a meta-navigation service. Another point of discussion is the impact of the hierarchy and distribution on graph algorithms, which have to be adapted to properly operate on multi-level hierarchies. By investigating indoor navigation problems in particular, the guiding principles are demonstrated for modelling networks at multiple levels of detail. Complex environments like large public buildings are ideally suited to demonstrate the versatile use of hierarchical graphs and thus to highlight the benefits of the hierarchical approach. Starting from a collection of floor plans, I have developed a systematic method for constructing a multi-level graph hierarchy. The nature of indoor environments, especially their inherent diversity, poses an additional challenge: among others, one must deal with complex, irregular, and/or three-dimensional features. The proposed method is also motivated by practical considerations, such as not only finding shortest/fastest paths across rooms and floors, but also providing descriptions for these paths which are easily understood by people. Beyond this, two novel uses of the hierarchy are discussed. One is as an informed heuristic exploiting the specific characteristics of indoor environments in order to enhance classical, general-purpose graph search techniques; as a convenient by-product of this method, clusters such as sections and wings can be detected. The other is to better deal with irregular, complex-shaped regions in a way that instructions can also be provided for these spaces. Previous approaches have not considered this problem.
In summary, the main results of this work are:
    • Hierarchical graphs are introduced as a general spatial data infrastructure. In particular, this architecture allows us to integrate different spatial networks originating from different sources. A small but useful set of operations is proposed for integrating these networks. In order to work in a hierarchical model, classical graph algorithms are generalised. This finding also has implications for the possible integration of separate navigation services and systems.
    • A novel set of core data structures and algorithms has been devised for modelling indoor environments. They cater to the unique characteristics of these environments and can be specifically used to provide enhanced navigation in buildings. Tested on models of several real buildings from our university, some preliminary but promising results were gained from a prototypical implementation and its application to the models.
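
    A minimal sketch of the basic ingredients described above: a fine-grained indoor graph, an assignment of its nodes to coarser clusters (floors, wings), and a shortest-path search whose result can be read at either level. Node names, weights, and cluster labels are invented; the thesis's hierarchy construction and hierarchy-aware search are considerably richer:

        import heapq
        from collections import defaultdict

        # Fine level: rooms, corridors, and stairs with traversal costs.
        fine_edges = {
            ("r101", "corridor1", 3), ("corridor1", "stairs_a", 4),
            ("stairs_a", "corridor2", 6), ("corridor2", "r205", 2),
        }
        # Coarse level: each fine node belongs to a cluster (e.g. a floor or wing).
        cluster_of = {
            "r101": "floor1", "corridor1": "floor1", "stairs_a": "stairs",
            "corridor2": "floor2", "r205": "floor2",
        }

        def build_graph(edges):
            g = defaultdict(list)
            for a, b, w in edges:
                g[a].append((b, w))
                g[b].append((a, w))
            return g

        def dijkstra(graph, start, goal):
            """Plain shortest-path search; a hierarchical search would restrict it
            to the clusters selected by a coarse-level plan."""
            dist, queue = {start: 0}, [(0, start, [start])]
            while queue:
                d, node, path = heapq.heappop(queue)
                if node == goal:
                    return d, path
                for nxt, w in graph[node]:
                    if d + w < dist.get(nxt, float("inf")):
                        dist[nxt] = d + w
                        heapq.heappush(queue, (d + w, nxt, path + [nxt]))
            return float("inf"), []

        fine = build_graph(fine_edges)
        cost, path = dijkstra(fine, "r101", "r205")
        print(cost, path)                    # 15 ['r101', 'corridor1', 'stairs_a', 'corridor2', 'r205']
        print([cluster_of[n] for n in path]) # ['floor1', 'floor1', 'stairs', 'floor2', 'floor2']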

    Method versatility in analysing human attitudes towards technology

    Various research domains are facing new challenges brought about by growing volumes of data. To make optimal use of them, and to increase the reproducibility of research findings, method versatility is required. Method versatility is the ability to flexibly apply widely varying data analytic methods depending on the study goal and the dataset characteristics. Method versatility is an essential characteristic of data science, but in other areas of research, such as educational science or psychology, its importance is yet to be fully accepted. Versatile methods can enrich the repertoire of specialists who validate psychometric instruments, conduct data analysis of large-scale educational surveys, and communicate their findings to the academic community, which corresponds to three stages of the research cycle: measurement, research per se, and communication. In this thesis, studies related to these stages have a common theme of human attitudes towards technology, as this topic becomes vitally important in our age of ever-increasing digitization. The thesis is based on four studies, in which method versatility is introduced in four different ways: the consecutive use of methods, the toolbox choice, the simultaneous use, and the range extension. In the first study, different methods of psychometric analysis are used consecutively to reassess the psychometric properties of a recently developed scale measuring affinity for technology interaction. In the second, the random forest algorithm and hierarchical linear modeling, as tools from the machine learning and statistical toolboxes, are applied to data analysis of a large-scale educational survey related to students’ attitudes to information and communication technology. In the third, the challenge of selecting the number of clusters in model-based clustering is addressed by the simultaneous use of model fit, cluster separation, and stability of partition criteria, so that generalizable, separable clusters can be selected in the data related to teachers’ attitudes towards technology. The fourth reports the development and evaluation of a scholarly knowledge graph-powered dashboard aimed at extending the range of scholarly communication means. The findings of the thesis can be helpful for increasing method versatility in various research areas. They can also facilitate the methodological advancement of academic training in data analysis and aid further development of scholarly communication in accordance with open science principles.
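
    A minimal sketch of the third study's idea of combining criteria when choosing the number of clusters, here using scikit-learn's Gaussian mixture models with BIC for model fit and the silhouette score for cluster separation. The stability criterion and the real attitude data are omitted; the synthetic data below are placeholders:

        from sklearn.datasets import make_blobs
        from sklearn.mixture import GaussianMixture
        from sklearn.metrics import silhouette_score

        # Synthetic stand-in for survey data with an unknown number of groups.
        X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

        for k in range(2, 6):
            gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
            labels = gmm.predict(X)
            # Model fit (lower BIC is better) and cluster separation (higher
            # silhouette is better) are inspected together rather than relying
            # on a single criterion.
            print(k, round(gmm.bic(X), 1), round(silhouette_score(X, labels), 3))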

    Social support at work and at home: Dual-buffering effects in the work-family conflict process

    Using experience-sampling methodology, the present study offers a within-individual test of the buffering model of social support in the daily work-family conflict process. Building on the conceptualization of social support as a volatile resource, we examine how daily fluctuations in social support at work and at home influence the process through which work interferes with family life. A total of 112 employees participated in the study and were asked to respond to daily surveys in the work and home domains. Results showed that social support at work and at home—as volatile resources—buffered the daily work-family conflict process within their respective domains. First, a supportive supervisor mitigated the within-individual effect of workload on emotional exhaustion. Second, a supportive spouse protected the strained employee from the effect of emotional exhaustion on work-family conflict, and spousal support also moderated the indirect effect from workload to work-family conflict through emotional exhaustion. The findings suggest that enacting a dual social support system can effectively reduce the adverse effects of excessive job demands on exhaustion and work-family conflict, but buffering effects are highly dependent on the timely availability of social support.
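
    A hypothetical sketch of the kind of within-individual moderation test implied above, using a linear mixed model in statsmodels with daily observations nested in employees. The column names and synthetic data are invented; the study's actual multilevel specification is more involved:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in for person-day diary data (one row per employee per day).
        rng = np.random.default_rng(0)
        n_emp, n_days = 50, 10
        daily = pd.DataFrame({
            "employee_id": np.repeat(np.arange(n_emp), n_days),
            "workload": rng.normal(size=n_emp * n_days),
            "supervisor_support": rng.normal(size=n_emp * n_days),
        })
        daily["exhaustion"] = (
            0.5 * daily["workload"]
            - 0.3 * daily["workload"] * daily["supervisor_support"]  # simulated buffering
            + rng.normal(size=len(daily))
        )

        # Buffering hypothesis: supervisor support weakens the daily
        # workload -> emotional exhaustion slope within persons.
        model = smf.mixedlm(
            "exhaustion ~ workload * supervisor_support",
            data=daily,
            groups=daily["employee_id"],
        )
        print(model.fit().summary())   # the interaction term carries the buffering test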

    The advantages and cost effectiveness of database improvement methods

    Relational databases have proved inadequate for supporting new classes of applications, and as a consequence, a number of new approaches have been taken (Blaha 1998), (Harrington 2000). The most salient alternatives are denormalisation and conversion to an object-oriented database (Douglas 1997). Denormalisation can provide better performance but has deficiencies with respect to data modelling. Object-oriented databases can provide increased performance efficiency but without the deficiencies in data modelling (Blaha 2000). Although there have been various benchmark tests reported, none of these tests have compared normalised, object-oriented and denormalised databases. This research shows that a non-normalised database for data containing type code complexity would be normalised in the process of conversion to an object-oriented database. This helps to correct badly organised data and so gives the performance benefits of denormalisation while improving data modelling. The costs of conversion from relational databases to object-oriented databases were also examined. Costs were based on published benchmark tests, a benchmark carried out during this study, and case studies. The benchmark tests were based on an engineering database benchmark. Engineering problems such as computer-aided design and manufacturing have much to gain from conversion to object-oriented databases. Costs were calculated for coding and development, and also for operation. It was found that conversion to an object-oriented database was not usually cost effective, as many of the performance benefits could be achieved by the far cheaper process of denormalisation, by using the performance-improving facilities provided by many relational database systems such as indexing or partitioning, or by simply upgrading the system hardware. It is concluded therefore that while object-oriented databases are a better alternative for databases built from scratch, the conversion of a legacy relational database to an object-oriented database is not necessarily cost effective.
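
    A minimal sketch of the two cheaper alternatives mentioned above, denormalisation and indexing, using the standard-library sqlite3 module. The schema and data are invented for illustration only:

        import sqlite3

        con = sqlite3.connect(":memory:")
        cur = con.cursor()

        # Normalised form: parts and their type codes live in separate tables.
        cur.execute("CREATE TABLE part_type (type_id INTEGER PRIMARY KEY, name TEXT)")
        cur.execute("CREATE TABLE part (part_id INTEGER PRIMARY KEY, type_id INTEGER)")
        cur.execute("INSERT INTO part_type VALUES (1, 'bolt'), (2, 'bracket')")
        cur.execute("INSERT INTO part VALUES (10, 1), (11, 2)")

        # Denormalised form: the type name is copied into the part table,
        # trading redundancy for fewer joins at query time.
        cur.execute("""CREATE TABLE part_denorm AS
                       SELECT p.part_id, t.name AS type_name
                       FROM part p JOIN part_type t USING (type_id)""")

        # Indexing: often enough to recover query performance without restructuring.
        cur.execute("CREATE INDEX idx_part_type ON part(type_id)")

        print(cur.execute("SELECT * FROM part_denorm").fetchall())  # [(10, 'bolt'), (11, 'bracket')]
        con.close()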

    Conceptual roles of data in program: analyses and applications

    Program comprehension is a prerequisite for many software evolution and maintenance tasks. Currently, research falls short in addressing how to build tools that can use domain-specific knowledge to provide powerful capabilities for extracting valuable information to facilitate program comprehension. Such capabilities are critical for working with large and complex programs, where program comprehension often is not possible without the help of domain-specific knowledge. Our research advances the state of the art in program analysis techniques based on domain-specific knowledge. Program artifacts, including variables and methods, are carriers of domain concepts that provide the key to understanding programs. Our program analysis is directed by domain knowledge stored as domain-specific rules. The analysis is iterative and interactive; it is based on flexible inference rules and interchangeable, extensible information storage. We designed and developed a comprehensive software environment, SeeCORE, based on our knowledge-centric analysis methodology. The SeeCORE tool provides multiple views and abstractions to assist in understanding complex programs. The case studies demonstrate the effectiveness of our method. We demonstrate the flexibility of our approach by analyzing two legacy programs in distinct domains.
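
    A hypothetical sketch of the knowledge-centric idea described above: domain concepts are attached to program identifiers by simple, extensible rules, and the resulting annotations can then drive further analysis. The rules and identifiers are invented; the SeeCORE environment itself is far more elaborate:

        import re

        # Domain-specific rules: pattern over an identifier -> domain concept.
        rules = [
            (re.compile(r"(acct|account)", re.I), "Account"),
            (re.compile(r"(bal|balance)", re.I), "Balance"),
            (re.compile(r"(txn|transaction)", re.I), "Transaction"),
        ]

        def annotate(identifiers):
            """Map each variable or method name to the domain concepts it carries."""
            return {
                name: [concept for pattern, concept in rules if pattern.search(name)]
                for name in identifiers
            }

        # Identifiers as they might be extracted from a legacy program.
        print(annotate(["acctBal", "postTxn", "tmp1"]))
        # {'acctBal': ['Account', 'Balance'], 'postTxn': ['Transaction'], 'tmp1': []}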