
    A 3d geoscience information system framework

    Two-dimensional geographical information systems are extensively used in the geosciences to create and analyse maps. However, these systems are unable to represent the Earth's subsurface in three spatial dimensions. The objective of this thesis is to overcome this deficiency, to provide a general framework for a 3d geoscience information system (GIS), and to contribute to the public discussion about the development of an infrastructure for geological observation data, geomodels, and geoservices. Following this objective, the requirements for a 3d GIS are analysed. Based on these requirements, new geologically sensible query functionality for geometrical, topological and geological properties has been developed, and the integration of 3d geological modeling and data management system components into a generic framework has been accomplished. The 3d geoscience information system framework presented here is characterized by the following features:
    - Storage of geological observation data and geomodels in an XML database server. According to a new data model, geological observation data can be referenced by a set of geomodels.
    - Functionality for querying observation data and 3d geomodels based on their 3d geometrical, topological, material, and geological properties, developed and implemented as a plug-in for a 3d geomodeling user application.
    - For database queries, the standard XML query language has been extended with 3d spatial operators. The spatial database query operations are computed by an XML application server developed for this specific purpose. This technology allows sophisticated 3d spatial and geological database queries.
    Using the developed methods, queries like the following can be answered: "Select all sandstone horizons which are intersected by the set of faults F". This request combines a topological and a geological material parameter. The combination of queries with other GIS methods, such as visual and statistical analysis, enables geoscience investigations in a novel 3d GIS environment. More generally, a 3d GIS enables geologists to read and understand a 3d digital geomodel just as they read a conventional 2d geological map.
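
    The query style described in this abstract can be pictured with a small sketch. The Python fragment below is purely illustrative (the thesis implements its operators inside an XML application server, not in Python); the classes, the bounding-box intersection test, and all names are invented for the example:

```python
# Hypothetical sketch of a geological query combining a material filter
# with a topological predicate, mirroring the abstract's example query.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class Surface:
    name: str
    material: str          # geological material, e.g. "sandstone"
    vertices: List[Point]  # 3d geometry, simplified to a point set here

def bbox(s: Surface):
    xs, ys, zs = zip(*s.vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def may_intersect(a: Surface, b: Surface) -> bool:
    # Crude stand-in for a real 3d intersection test: axis-aligned
    # bounding boxes overlap. A real system would follow up with an
    # exact mesh-mesh intersection computation.
    (al, ah), (bl, bh) = bbox(a), bbox(b)
    return all(al[i] <= bh[i] and bl[i] <= ah[i] for i in range(3))

def select_horizons(horizons, faults, material="sandstone"):
    """'Select all sandstone horizons intersected by the set of faults F'."""
    return [h for h in horizons
            if h.material == material
            and any(may_intersect(h, f) for f in faults)]

hz = Surface("H1", "sandstone", [(0, 0, 0), (2, 2, 0)])
ft = Surface("F1", "fault", [(1, 1, -1), (1, 1, 1)])
print(select_horizons([hz], [ft]))  # -> [hz]: bounding boxes overlap
```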

    Qualitative Spatial Configuration Queries: Towards Next Generation Access Methods for GIS

    For a long time, the survey, management, and provision of geographic information in Geographic Information Systems (GIS) have mainly had an authoritative nature. Today the trend is changing, and such authoritative geographic information sources are now accompanied by public and freely available ones, usually referred to as Volunteered Geographic Information (VGI). The term VGI in fact refers not only to the geographic information itself but, more generally, to the whole process, which assumes the engagement of volunteers to collect and maintain such information in freely accessible GIS. The quick spread of VGI gives new relevance to a well-known challenge: developing new methods and techniques to ease the interaction between users and GIS. Indeed, in spite of continuous improvements, GIS mainly provide interfaces tailored for experts, denying the casual user, usually a non-expert, the possibility of accessing VGI. One main obstacle resides in the different ways GIS and humans deal with spatial information: GIS mainly encode spatial information in a quantitative format, whereas human beings typically prefer a qualitative and relational approach. For example, we use expressions like "the lake is to the right-hand side of the wood" or "is there a supermarket close to the university?", which qualitatively locate one spatial entity with respect to another. Nowadays, this representation gap has to be bridged by the user, who has to learn about the system structure and to encode his requests in a form suitable to the system. Conversely, enabling GIS to explicitly deal with qualitative spatial information allows the translation effort to be shifted to the system side. Thus, to facilitate interaction with human beings, GIS have to be enhanced with tools for efficiently handling qualitative spatial information. The work presented in this thesis addresses the problem of enabling Qualitative Spatial Configuration Queries (QSCQs) in GIS. A QSCQ is a spatial database query which allows for an automatic mapping of spatial descriptions produced by humans: a user naturally expresses a request for spatial information by drawing a sketch map or producing a verbal description. The qualitative information conveyed by such descriptions is automatically extracted and encoded into a QSCQ. The focus of this work is on two main challenges: first, the development of a framework for managing, in a spatial database, the variety of spatial aspects that might be enclosed in a spatial description produced by a human; second, the conception of Qualitative Spatial Access Methods (QSAMs): algorithms and data structures tailored for efficiently solving QSCQs. The main objective of a QSAM is to counter the exponential explosion in storage space that occurs when switching from a quantitative to a qualitative spatial representation, while keeping query response time acceptable.
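
    To make the representation gap concrete, the sketch below derives a qualitative "close" relation from quantitative coordinates and answers the supermarket/university question from the abstract. The entities, the distance threshold, and the relation definition are all assumptions made for illustration; the thesis's QSAM data structures are far more elaborate:

```python
# Illustrative only (not the thesis's actual QSAM): mapping quantitative
# coordinates to a qualitative relation and answering a tiny
# configuration query such as "is there a supermarket close to the
# university?". Thresholds and names are invented.
import math

entities = {
    "university": (0.0, 0.0),
    "supermarket": (0.3, 0.1),
    "lake": (5.0, 2.0),
}
categories = {"university": "university",
              "supermarket": "supermarket",
              "lake": "water"}

def close(a, b, threshold=1.0):
    # Qualitative 'close' derived from a quantitative distance.
    return math.dist(entities[a], entities[b]) <= threshold

def query_close(category, anchor):
    """All entities of a category that are qualitatively close to anchor."""
    return [e for e, c in categories.items()
            if c == category and e != anchor and close(e, anchor)]

print(query_close("supermarket", "university"))  # -> ['supermarket']
```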

    An Empirical Analysis of Scheduling Techniques for Real-Time Cloud-Based Data Processing

    In this paper, we explore the challenges and needs of current cloud infrastructures in better supporting cloud-based data-intensive applications that are not only latency-sensitive but also require strong timing guarantees. These applications have strict deadlines (e.g., to perform time-dependent mission-critical tasks or to complete real-time control decisions with a human in the loop), and deadline misses are undesirable. To highlight the challenges in this space, we provide a case study of the online scheduling of MapReduce jobs executed by Hadoop. Our evaluations on Amazon EC2 show that the existing Hadoop scheduler is ill-equipped to handle jobs with deadlines. However, by adapting existing multiprocessor scheduling techniques to the cloud environment, we observe significant performance improvements in minimizing missed deadlines and tardiness. Based on our case study, we discuss a range of challenges in this domain posed by virtualization and scale, and propose a research agenda centered on the application of advanced real-time scheduling techniques in the cloud environment.
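
    As one concrete illustration of the family of multiprocessor real-time techniques such work adapts, the sketch below orders ready jobs by earliest deadline first (EDF), a classical policy; the paper's actual scheduler modifications may differ. The job model and the single-worker, non-preemptive simplification are assumptions:

```python
# Minimal EDF sketch: pick the ready job with the tightest deadline.
# Single sequential worker, non-preemptive; real cloud schedulers
# (e.g. for Hadoop tasks) are far more involved.
import heapq

def edf_schedule(jobs, now=0.0):
    """jobs: list of (job_id, release_time, exec_time, deadline).
    Returns (execution order, jobs that missed their deadline)."""
    ready = [(deadline, jid, exec_time)
             for jid, rel, exec_time, deadline in jobs if rel <= now]
    heapq.heapify(ready)                 # min-heap keyed on deadline
    t, order, missed = now, [], []
    while ready:
        deadline, jid, exec_time = heapq.heappop(ready)
        t += exec_time                   # run the job to completion
        order.append(jid)
        if t > deadline:
            missed.append(jid)           # tardiness: finished too late
    return order, missed

order, missed = edf_schedule([("j1", 0, 2, 5), ("j2", 0, 1, 2), ("j3", 0, 4, 12)])
print(order, missed)  # ['j2', 'j1', 'j3'] [] -- tightest deadline runs first
```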

    Semantically defined Analytics for Industrial Equipment Diagnostics

    In this age of digitalization, industries everywhere accumulate massive amounts of data, which has become the lifeblood of the global economy. This data may come from various heterogeneous systems, equipment, components, sensors, and applications in many varieties (diversity of sources), velocities (high rate of change) and volumes (sheer data size). Despite significant advances in the ability to collect, store, manage and filter data, the real value lies in analytics. Raw data is meaningless unless it is properly processed into actionable (business) insights. Those who know how to harness data effectively have a decisive competitive advantage: raising performance by making faster and smarter decisions, improving short- and long-term strategic planning, offering more user-centric products and services, and fostering innovation. Two distinct paradigms can be discerned in the practice of analytics: semantic-driven (deductive) and data-driven (inductive). The first emphasizes logic as a way of representing domain knowledge encoded in rules or ontologies, which are often carefully curated and maintained. However, these models are often highly complex and require intensive knowledge-processing capabilities. Data-driven analytics employs machine learning (ML) to learn a model directly from the data with minimal human intervention. However, such models are tuned to their training data and context, making them difficult to adapt. Industries that want to create value from data must master these paradigms in combination. There is thus a great need in data analytics to seamlessly combine semantic-driven and data-driven processing techniques in an efficient and scalable architecture that allows extracting actionable insights from an extreme variety of data. In this thesis, we address these needs by providing:
    • A unified representation of domain-specific and analytical semantics, in the form of ontology models called the TechOnto Ontology Stack. It is a highly expressive, platform-independent formalism that captures the conceptual semantics of industrial systems, such as technical system hierarchies and component partonomies, together with their analytical functional semantics.
    • A new ontology language, Semantically defined Analytical Language (SAL), on top of the ontology model, which extends DatalogMTL (a Horn fragment of Metric Temporal Logic) with analytical functions as first-class citizens.
    • A method for generating semantic workflows using our SAL language, which helps in authoring, reusing and maintaining complex analytical tasks and workflows in an abstract fashion.
    • A multi-layer architecture that fuses knowledge-driven and data-driven analytics into a federated and distributed solution.
    To our knowledge, this thesis is one of the first works to introduce and investigate the use of semantically defined analytics in an ontology-based data access setting for industrial analytical applications. We focus our work and evaluation on industrial data because of (i) the adoption of semantic technology by industry in general, and (ii) the common need, in the literature and in practice, to let domain expertise drive data analytics on semantically interoperable sources while still harnessing the power of analytics to enable real-time data insights. Given the evaluation results of three use-case studies, our approach surpasses state-of-the-art approaches in most application scenarios.
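
    The flavour of SAL's temporal-analytical rules can be approximated in plain Python. The rule below (a windowed mean of temperature readings exceeding a limit implies an overheating diagnosis) is an invented example; SAL itself is a declarative ontology language extending DatalogMTL, not Python, and the names and threshold here are assumptions:

```python
# Hedged illustration of the *idea* of an analytical function inside a
# temporal rule. Informally: overheating(E) holds at time t if the mean
# of temp(E) over the window [t-3, t] exceeds 90. All values invented.
from statistics import mean

# temp(equipment, time) -> value, as a small fact table
temps = {("pump1", t): v for t, v in enumerate([85, 88, 92, 95, 97, 96])}

def overheating(equipment, t, window=3, limit=90.0):
    readings = [temps[(equipment, s)]
                for s in range(max(0, t - window), t + 1)
                if (equipment, s) in temps]
    return bool(readings) and mean(readings) > limit

print([t for t in range(6) if overheating("pump1", t)])  # -> [4, 5]
```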

    Probabilistic Inference Using Partitioned Bayesian Networks: Introducing a Compositional Framework

    Probability theory offers an intuitive and formally sound way to reason in situations that involve uncertainty. The automation of probabilistic reasoning has many applications such as predicting future events or prognostics, providing decision support, action planning under uncertainty, dealing with multiple uncertain measurements, making a diagnosis, and so forth. Bayesian networks in particular have been used to represent probability distributions that model the various applications of uncertainty reasoning. However, present-day automated reasoning approaches involving uncertainty struggle when models increase in size and complexity to fit real-world applications. In this thesis, we explore and extend a state-of-the-art automated reasoning method, called inference by Weighted Model Counting (WMC), when applied to increasingly complex Bayesian network models. WMC is comprised of two distinct phases: compilation and inference. The computational cost of compilation has limited the applicability of WMC. To overcome this limitation we have proposed theoretical and practical solutions that have been tested extensively in empirical studies using real-world Bayesian network models. We have proposed a weighted variant of OBDDs, called Weighted Positive Binary Decision Diagrams (WPBDD), which in turn is based on the new notion of positive Shannon decomposition. WPBDDs are particularly well suited to represent discrete probabilistic models. The conciseness of WPBDDs leads to a reduction in the cost of probabilistic inference. We have introduced Compositional Weighted Model Counting (CWMC), a language-agnostic framework for probabilistic inference that partitions a Bayesian network into subproblems. These subproblems are then compiled and subsequently composed in order to perform inference. This approach significantly reduces the cost of compilation, yet increases the cost of inference. The best results are obtained by seeking a partitioning that allows compilation to (barely) become feasible, but no more, as compilation cost can be amortized over multiple inference queries. Theoretical concepts have been implemented in a readily available open-source tool called ParaGnosis. Further implementational improvements have been found through parallelism, by exploiting independencies that are introduced by CWMC. The proposed methods combined push the boundaries of WMC, allowing this state-of-the-art method to be used on much larger models than before.
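
    For intuition about what WMC computes, here is a deliberately naive weighted model counter that enumerates every assignment of a CNF formula. This exhaustive enumeration exhibits exactly the exponential behaviour that compilation to decision diagrams such as WPBDDs is designed to avoid; the encoding and weights below are invented for the example:

```python
# Toy weighted model counter by brute-force enumeration (for intuition
# only; real WMC compiles the formula into a decision diagram first).
from itertools import product

def wmc(clauses, weights):
    """clauses: CNF as a list of lists of signed ints (DIMACS-style).
    weights: {var: (weight_when_false, weight_when_true)}."""
    variables = sorted(weights)
    total = 0.0
    for bits in product([0, 1], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        # Keep only satisfying assignments of the CNF.
        if all(any(assign[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            w = 1.0
            for v in variables:
                w *= weights[v][assign[v]]
            total += w
    return total

# A -> B encoded loosely as the clause (-A or B); weights as probabilities.
print(wmc([[-1, 2]], {1: (0.4, 0.6), 2: (0.3, 0.7)}))  # -> 0.82
```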

    Emergent relational schemas for RDF


    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC: to understand the aspects of MTC applications that can be used to characterize the domain, and to understand the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication-intensive or data-intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain the full benefit of the HPC hardware.
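
    The MTC task model described above (graphs of discrete tasks whose explicit input/output dependencies form the edges) can be sketched minimally as follows. The sequential dispatch loop and all task names are illustrative assumptions; a real MTC runtime adds parallel workers and low-overhead dispatching:

```python
# Minimal DAG executor sketch: dispatch each task as soon as all of its
# prerequisites have completed. Sequential for clarity; MTC systems
# dispatch many such (often very short) tasks concurrently.
from collections import deque

def run_dag(tasks, deps):
    """tasks: {name: callable}; deps: {name: set of prerequisite names}."""
    indeg = {t: len(deps.get(t, set())) for t in tasks}
    children = {t: set() for t in tasks}
    for t, reqs in deps.items():
        for r in reqs:
            children[r].add(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    while ready:
        t = ready.popleft()
        tasks[t]()                       # dispatch the task
        for c in children[t]:            # release dependents
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)

run_dag({"fetch": lambda: print("fetch"),
         "parse": lambda: print("parse"),
         "merge": lambda: print("merge")},
        {"parse": {"fetch"}, "merge": {"parse"}})
# prints: fetch, parse, merge (dependency order)
```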