395 research outputs found

    Query processing in temporal object-oriented databases

    This PhD thesis is concerned with historical data management in the context of object-oriented databases. An extensible approach to processing temporal object queries within a uniform query framework has been explored. By a uniform framework, we mean that temporal queries can be processed within the existing object-oriented framework, itself an extension of the relational framework, by extending the query processing techniques and strategies developed for OODBs and RDBs. The unified model of OODBs and RDBs in UmSQL/X has been adopted as a basis for this purpose. A temporal object data model is thereby defined by incorporating a time dimension into this unified model of OODBs and RDBs, forming temporal relational-like cubes with the addition of aggregation and inheritance hierarchies. A query algebra, which accesses objects through these associations of aggregation, inheritance and time-reference, is then defined as a general query model/language. Owing to the extensive features of our data model and the reducibility of the algebra, a layered query processor is presented that provides a uniform framework for processing temporal object queries. Within this framework, query transformation is carried out on the basis of a set of identified transformation rules, which include the known relational and object rules plus those pertaining to the time dimension. To evaluate a temporal query involving a path with time-reference, a decomposition strategy is proposed. That is, evaluation of an enhanced path, which extends a path with time-reference, is decomposed by first dividing the path into two sub-paths: one containing the time-stamped class, which can be optimized by exploiting the ordering information of temporal data, and an ordinary sub-path (without time-stamped classes), which can be further decomposed and evaluated using different algorithms. The intermediate results of traversing the two sub-paths are then joined together to produce the query output. Algorithms for processing the decomposed query components, i.e., time-related operation algorithms and four join algorithms (nested-loop forward join, sort-merge forward join, nested-loop reverse join and sort-merge reverse join) together with their modifications, are presented with cost analyses and implemented with stream processing techniques in C++. Simulation results are also provided. Both the cost analysis and the simulation show the effect of time on the query processing algorithms: the join time cost increases linearly with the number of time-epochs (the time dimension in the case of a regular TS). It is also shown that heuristics that exploit time information can lead to significant savings in time cost. Query processing with incomplete temporal data is also discussed.
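    To make the join step concrete, the sketch below shows a sort-merge style join of the two sub-path results in Python. The record layouts, field names, and the assumption that the temporal input is already sorted by object identifier are illustrative choices, not details taken from the thesis, which implements the algorithms in C++ with stream processing.

```python
# Hedged sketch of a sort-merge style join between the results of the
# time-stamped sub-path and the ordinary sub-path. Record layouts and field
# names are illustrative assumptions.

def sort_merge_forward_join(temporal_rows, ordinary_rows):
    """Join time-stamped sub-path results with ordinary sub-path results.

    temporal_rows: list of (oid, start, end, payload), sorted by oid
    ordinary_rows: list of (oid, payload), in arbitrary order
    """
    ordinary_sorted = sorted(ordinary_rows, key=lambda r: r[0])
    out, i, j = [], 0, 0
    t, o = temporal_rows, ordinary_sorted
    while i < len(t) and j < len(o):
        if t[i][0] < o[j][0]:
            i += 1
        elif t[i][0] > o[j][0]:
            j += 1
        else:
            # Emit every pairing that shares the object id; the time interval
            # is carried through so later operators can still apply
            # time-reference predicates.
            k = j
            while k < len(o) and o[k][0] == t[i][0]:
                out.append((t[i][0], t[i][1], t[i][2], t[i][3], o[k][1]))
                k += 1
            i += 1
    return out

# Tiny illustrative inputs (not from the thesis).
history = [(1, 2000, 2005, "deptA"), (2, 2001, 2003, "deptB")]
current = [(2, "employeeX"), (1, "employeeY")]
print(sort_merge_forward_join(history, current))
```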

    Semantic Similarity of Spatial Scenes

    The formalization of similarity in spatial information systems can unleash their functionality and contribute technology that is not only useful but also desirable to broad groups of users. As a paradigm for information retrieval, similarity supersedes tedious querying techniques and unveils novel ways of user-system interaction by naturally supporting modalities such as speech and sketching. As a tool within the scope of a broader objective, it can facilitate such diverse tasks as data integration, landmark determination, and prediction making. This potential has motivated the development of several similarity models within the geospatial and computer science communities. Despite the merit of these studies, their cognitive plausibility can be limited by the neglect of well-established psychological principles about the properties and behaviors of similarity. Moreover, such approaches are typically guided by experience, intuition, and observation, thereby often relying on narrow perspectives or restrictive assumptions that produce inflexible and incompatible measures. This thesis consolidates such fragmentary efforts and integrates them, along with novel formalisms, into a scalable, comprehensive, and cognitively sensitive framework for similarity queries in spatial information systems. Three conceptually different similarity queries, at the levels of attributes, objects, and scenes, are distinguished. An analysis of the relationship between similarity and change provides a unifying basis for the approach and a theoretical foundation for measures satisfying important similarity properties such as asymmetry and context dependence. The classification of attributes into categories with common structural and cognitive characteristics drives the implementation of a small core of generic functions able to perform any type of attribute value assessment. Appropriate techniques combine such atomic assessments to compute similarities at the object level and to handle more complex inquiries with multiple constraints. These techniques, along with a solid graph-theoretical methodology adapted to the particularities of the geospatial domain, provide the foundation for reasoning about scene similarity queries. Provisions are made so that all methods comply with major psychological findings about people's perceptions of similarity. An experimental evaluation supplies the main result of this thesis, which separates psychological findings with a major impact on the results from those that can be safely incorporated into the framework through computationally simpler alternatives.
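    As one illustration of how asymmetry and context dependence can be built into a similarity measure, the sketch below uses a Tversky-style feature-contrast ratio. This is a generic, well-known formulation chosen for illustration; it is not the attribute, object, or scene measures defined in the thesis.

```python
# Illustrative only: a Tversky-style ratio measure, a standard way to obtain
# asymmetric, direction-sensitive similarity over feature sets.

def tversky_similarity(query_features, target_features, alpha=0.8, beta=0.2):
    """Asymmetric similarity of a query to a target over sets of features.

    With alpha > beta, features of the query that the target lacks count more
    than the converse, so sim(a, b) != sim(b, a) in general.
    """
    q, t = set(query_features), set(target_features)
    common = len(q & t)
    denom = common + alpha * len(q - t) + beta * len(t - q)
    return common / denom if denom else 1.0

# A sparse sketched scene matches a richer stored scene better than vice versa.
print(tversky_similarity({"lake", "road"}, {"lake", "road", "forest", "bridge"}))
print(tversky_similarity({"lake", "road", "forest", "bridge"}, {"lake", "road"}))
```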

    Functional inferences over heterogeneous data

    Inference enables an agent to create new knowledge from old or to discover implicit relationships between concepts in a knowledge base (KB), provided that appropriate techniques are employed to deal with ambiguous, incomplete and sometimes erroneous data. The ever-increasing volume of KBs on the web, available for use by automated systems, presents an opportunity to leverage the available knowledge in order to improve the inference process in automated query answering systems. This thesis focuses on the FRANK (Functional Reasoning for Acquiring Novel Knowledge) framework, which responds to queries for which no suitable answer is readily contained in any available data source, using a variety of inference operations. Most question answering and information retrieval systems assume that answers to queries are stored in some form in the KB, thereby limiting the range of answers they can find. We take an approach motivated by rich forms of inference that uses techniques such as regression for prediction. For instance, FRANK can answer "what country in Europe will have the largest population in 2021?" by decomposing Europe geo-spatially, using regression on country populations for past years, and selecting the country with the largest predicted value. Our technique, which we refer to as Rich Inference, combines heuristics, logic and statistical methods to infer novel answers to queries. It also determines what facts are needed for inference, searches for them, and then integrates the diverse facts and their formalisms into a local query-specific inference tree. Our primary contribution in this thesis is the inference algorithm on which FRANK works. This includes (1) the process of recursively decomposing queries in a way that allows variables in the query to be instantiated by facts in KBs; (2) the use of aggregate functions to perform arithmetic and statistical operations (e.g. prediction) to infer new values from child nodes; and (3) the estimation and propagation of uncertainty values into the returned answer, based on errors introduced by noise in the KBs or by the aggregate functions. We also discuss many of the core concepts and modules that constitute FRANK. We explain FRANK's internal "alist" representation, which gives it the flexibility required to tackle different kinds of problems with minimal changes to its internal representation. We discuss the grammar of a simple query language that allows users to express queries in a formal way, so that we avoid the complexities of natural language queries, a problem that falls outside the scope of this thesis. We evaluate the framework with datasets from open sources.
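    The sketch below illustrates the decompose-predict-aggregate pattern behind the population example. The country names, population figures and helper functions are placeholders invented for illustration; FRANK's alist representation, KB lookups and uncertainty propagation are not modelled here.

```python
# Minimal sketch of the pattern: decompose a query geo-spatially, predict each
# sub-answer by regression over past values, then aggregate with max.
# All data below is placeholder data, not real KB content.

import statistics

def linear_regression(xs, ys):
    """Ordinary least squares with one predictor; returns (slope, intercept)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

def predict_value(history, year):
    """Extrapolate a time series {year: value} to the requested year."""
    slope, intercept = linear_regression(list(history), list(history.values()))
    return slope * year + intercept

# Placeholder sub-queries produced by geo-spatial decomposition of "Europe".
kb = {
    "CountryA": {2015: 80.7, 2016: 81.0, 2017: 81.4, 2018: 81.6},
    "CountryB": {2015: 64.4, 2016: 64.7, 2017: 65.0, 2018: 65.3},
}

predictions = {country: predict_value(series, 2021) for country, series in kb.items()}
print(max(predictions, key=predictions.get), predictions)
```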

    PPP - personalized plan-based presenter


    Foundations of secure computation

    Issued as Workshop proceedings and Final report, Project no. G-36-61

    Interoperability of Traffic Infrastructure Planning and Geospatial Information Systems

    Building Information Modelling (BIM), as a model-based design approach, makes it possible to investigate multiple solutions in the infrastructure planning process. The most important reason for implementing model-based design is to help designers and to increase communication between the different design parties. It decentralizes and coordinates team collaboration and facilitates faster and lossless exchange and management of project data across extended teams and external partners over the project lifecycle. Infrastructure comprises the fundamental facilities, services, and installations needed for the functioning of a community or society, such as transportation, roads, communication systems, water and power networks, as well as power plants. Geospatial Information Systems (GIS), as digital representations of the world, are systems for maintaining, managing, modelling, analyzing, and visualizing data about the world, including infrastructure. High-level infrastructure suites mostly support analysing infrastructure designs against international or user-defined standards. This regulation-based design minimizes errors, reduces costly design conflicts, saves time and provides consistent project quality, yet mostly in standalone solutions. Infrastructure tasks usually require both model-based and regulation-based design packages and deal with cross-domain information, while the corresponding data is split across several domain models. In addition, infrastructure projects demand many decisions at governmental as well as private level that involve different data models. A lossless flow of project data, as well as of documents such as regulations, across the project team, stakeholders, and governmental and private levels is therefore highly important. Yet infrastructure projects were largely absent from product modelling discourse for a long time. Thus, as explained in chapter 2, interoperability is needed in infrastructure processes. The Multimodel (MM) is an interoperability method that enables heterogeneous data models from various domains to be bundled together into a container while keeping their original formats. Existing interoperability methods, including existing MM solutions, cannot satisfactorily fulfil the typical demands of infrastructure information processes, such as dynamic data resources and a huge number of inter-model relations. Chapter 3 therefore develops the concept of infrastructure information modelling, a method for loose, rule-based coupling of exchangeable heterogeneous information spaces. This hypothesis extends the existing MM to a rule-based Multimodel, named extended Multimodel (eMM), with semantic rules instead of static links. The semantic rules are used to describe relations between data elements of the various models dynamically in a link database. Much of the confusion about geospatial data models arises from their diversity: in some of these data models spatial IDs are the basic identities of entities, while in others there are no IDs at all. For geospatial data, the data structure is therefore more important than the data model, and there are always spatial indexes that enable access to the geodata. The most important property unifying the data models involved in infrastructure projects is spatiality. As explained in chapter 4, the infrastructure information modelling method for interoperation in spatial domains generates interlinks through the spatial identity of entities.
Match finding through spatial links enables any data models that share a spatial property to be interlinked. Through such spatial links, each entity receives from other data models the information that relates to it by virtue of an equivalent spatial index; this information becomes the virtual properties of the object. The thesis uses a nearest-neighbourhood algorithm for spatial match finding and applies filtering and refining approaches. For the abstraction of the spatial matching results, hierarchical filtering techniques are used to refine the virtual properties. These approaches focus on two main application areas: the product model and the Level of Detail (LoD). For the eMM suggested in this thesis, a rule-based interoperability method between arbitrary data models of the spatial domain has been developed. The implementation of this method enables transactions of data in spatial domains to run losslessly. The system architecture and the implementation, applied to the case study of this thesis, namely infrastructure and geospatial data models, are described in chapter 5. Achieving the aforementioned aims reduces costs over the whole project lifecycle, increases the reliability of the comprehensive fundamental information, and consequently supports independent, cost-effective, aesthetically pleasing, and environmentally sensitive infrastructure design. Outline: Introduction (general view, problem statement, objectives, approach, structure of the thesis); Interoperability in Infrastructure Engineering (state of interoperability, interoperability of GIS and BIM, interoperability of GIS and infrastructure, main challenges and related work, infrastructure modelling in the geospatial context, LandXML infrastructure data standards, CityGML geospatial data standards, LandXML and CityGML, interoperability and Multimodel technology, limitations of existing approaches); Infrastructure Information Modelling (Multimodel for geospatial and infrastructure data models; linking approach, querying and filtering; virtual properties via the link model; Multimodel as an interdisciplinary method; using Level of Detail for filtering); Spatial Modelling and Processing (spatial identifiers, spatial indexes, tree-based spatial indexes, nearest neighbourhood as a basic link method, hierarchical filtering, other functional link methods, advances and limitations of functional link methods); Implementation of the Proposed IIM Method (implementation, case study); Conclusion (summary, discussion of results, future work); Bibliography.
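    The sketch below illustrates the spatial linking idea in a minimal form: entities from two data models are matched by a nearest-neighbour search on their coordinates, and each match contributes virtual properties to the target entity. The field names, the distance threshold, and the brute-force search (which a tree-based spatial index would replace in practice) are assumptions made for illustration, not the thesis implementation.

```python
# Hedged sketch: nearest-neighbour spatial match finding between two data
# models, attaching the matched entity's attributes as "virtual properties".
# Data layout and threshold are illustrative assumptions.

from math import dist

def link_by_nearest_neighbour(targets, sources, max_distance):
    """For each target entity, link the nearest source entity within range.

    targets, sources: lists of dicts with an 'xy' coordinate and a 'props' dict.
    Returns a link table of (target index, source index, distance).
    """
    links = []
    for ti, target in enumerate(targets):
        best, best_d = None, float("inf")
        # Brute-force scan; a tree-based spatial index would replace this.
        for si, source in enumerate(sources):
            d = dist(target["xy"], source["xy"])
            if d < best_d:
                best, best_d = si, d
        if best is not None and best_d <= max_distance:
            links.append((ti, best, best_d))
            # Virtual properties: information imported from the other model.
            target.setdefault("virtual_props", {}).update(sources[best]["props"])
    return links

roads = [{"xy": (10.0, 5.0), "props": {"type": "road"}}]
parcels = [{"xy": (10.5, 5.2), "props": {"landuse": "residential"}}]
print(link_by_nearest_neighbour(roads, parcels, max_distance=2.0))
print(roads[0].get("virtual_props"))
```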

    International Conference on Computer Science and Communication Engineering

    The UBT Annual International Conference is the 8th international interdisciplinary peer-reviewed conference; it publishes the work of scientists as well as practitioners in the areas where UBT is active in education, research and development. UBT aims to implement an integrated strategy to establish itself as an internationally competitive, research-intensive university, committed to the transfer of knowledge and the provision of a world-class education to the most talented students from all backgrounds. The main purpose of the conference is to bring together scientists and practitioners from different disciplines in the same place, make them aware of recent advances in different research fields, and provide them with a unique forum in which to share their experiences. It is also a venue that supports new academic staff in doing research and publishing their work to international standards. The conference consists of sub-conferences in different fields: Computer Science and Communication Engineering; Management, Business and Economics; Mechatronics, System Engineering and Robotics; Energy Efficiency Engineering; Information Systems and Security; Architecture and Spatial Planning; Civil Engineering, Infrastructure and Environment; Law; Political Science; Journalism, Media and Communication; Food Science and Technology; Pharmaceutical and Natural Sciences; Design; Psychology; Education and Development; Fashion; Music; Art and Digital Media; Dentistry; Applied Medicine; and Nursing. The conference is the major scientific event of UBT. It is organized annually, always in cooperation with partner universities from the region and Europe. We thank all authors, partners, sponsors and the conference organizing team for making this a truly international scientific event. Edmond Hajrizi, President of UBT. UBT – Higher Education Institution

    Towards Practical Predicate Analysis

    Software model checking is a successful technique for automated program verification. Several of the most widely used approaches for software model checking are based on solving first-order-logic formulas over predicates using SMT solvers, e.g., predicate abstraction, bounded model checking, k-induction, and lazy abstraction with interpolants. We define a configurable framework for predicate-based analyses that allows expressing each of these approaches. This unifying framework highlights the differences between the approaches, produces new insights, and facilitates research into further algorithms and their combinations, as witnessed by several research projects that have been conducted on top of this framework. In addition to this theoretical contribution, we provide a mature implementation of our framework in a software verifier, which allows all of the mentioned approaches to be applied in practice. This implementation is used by other research groups, e.g., to find bugs in the Linux kernel, and has proven its competitiveness by winning gold medals in the International Competition on Software Verification. Tools and approaches for software model checking, like our predicate analysis, are typically evaluated by performance benchmarking on large sets of verification tasks. We have identified several pitfalls that can silently arise during benchmarking, and we have found that the benchmarking techniques and tools used by many researchers do not guarantee valid results in practice, but may produce arbitrarily large measurement errors. Furthermore, certain hardware characteristics can also have a nondeterministic influence on the measurements. In order to properly evaluate our framework for software verification, we study the effects of these hardware characteristics and define a list of the most important requirements that need to be ensured for reliable benchmarking. As a solution, we present the open-source benchmarking framework BenchExec, which, in contrast to other benchmarking tools, fulfills all our requirements and aims to make reliable benchmarking easy. BenchExec has already been adopted by several research groups and by the International Competition on Software Verification. Using the power of BenchExec, we conduct an experimental evaluation of our unifying framework for predicate analysis. We study the effect of varying the SMT solver and the way program semantics are encoded in formulas across several verification algorithms, and find that these technical choices can significantly influence the results of experimental studies of verification approaches. This is valuable information both for researchers who study verification approaches and for users who apply them in practice. Our comprehensive study of 120 different configurations would not have been possible without our highly flexible and configurable unifying framework for predicate analysis, and it shows that the latter is a valuable basis for conducting experiments. Furthermore, we show, by a comparison against top-ranking verifiers from the International Competition on Software Verification, that our implementation is highly competitive and can outperform the state of the art.
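    To make concrete what solving predicate formulas with an SMT solver looks like, the sketch below performs bounded model checking of a toy transition system using the Z3 Python bindings (assuming the z3-solver package is installed). The system, the property and the encoding are invented for illustration; they are not the framework, encoding or benchmarks from the thesis.

```python
# Illustrative sketch: bounded model checking of a toy system with Z3.
# System: x starts at 0 and increases by 3 each step; property: x < 10.
# The unrolled formula is satisfiable exactly when the property can be
# violated within the bound, in which case the model is a counterexample.

from z3 import Int, Solver, sat

def bmc(bound):
    xs = [Int(f"x_{i}") for i in range(bound + 1)]
    for k in range(bound + 1):
        s = Solver()
        s.add(xs[0] == 0)                                   # initial state
        s.add([xs[i + 1] == xs[i] + 3 for i in range(k)])   # k unrolled steps
        s.add(xs[k] >= 10)                                  # negated property
        if s.check() == sat:
            return k, s.model()                             # counterexample found
    return None, None

k, model = bmc(6)
print(f"property violated after {k} steps: {model}" if model else "no counterexample up to bound")
```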
