35 research outputs found

    30th International Conference on Information Modelling and Knowledge Bases

    Get PDF
    Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who share a common interest in understanding and solving problems in information modelling and knowledge bases, and in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Philosophy and logic, cognitive science, knowledge management, linguistics, and management science are therefore relevant areas as well. The conference features three categories of presentations: full papers, short papers, and position papers

    Configurable nD-visualization for complex Building Information Models

    Get PDF
    With the ongoing development of building information modelling (BIM) towards a comprehensive coverage of all construction project information in a semantically explicit way, visual representations have become decoupled from the building information models. While traditional construction drawings implicitly contained the visual representation alongside the information, today representations are generated on the fly, hard-coded in software applications dedicated to other tasks such as analysis, simulation, structural design or communication. Due to the abstract nature of information models and the increasing amount of digital information captured during construction projects, visual representations are essential for humans to access, understand, and engage with the information. At the same time, digital media open up the new field of interactive visualizations. The full potential of BIM can only be unlocked with customized, task-specific visualizations, with engineers and architects actively involved in the design and development process of these visualizations. The visualizations must be reusable and reliably reproducible during communication processes. Further, to support creative problem solving, it must be possible to modify and refine them. This thesis aims at reconnecting building information models and their visual representations: on a theoretical level, on the level of methods, and in terms of tool support. First, the research seeks to improve the knowledge about visualization generation in conjunction with current BIM developments such as the multimodel. The approach is based on the reference model of the visualization pipeline and addresses structural as well as quantitative aspects of visualization generation. Second, based on this theoretical foundation, a method is derived to construct visual representations from given visualization specifications. To this end, the idea of a domain-specific language (DSL) is employed. Finally, a software prototype proves the concept: using the visualization framework, visual representations can be generated from a specific building information model and a specific visualization description
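    The specification-driven approach described above can be illustrated with a minimal sketch in which declarative rules map building model elements to visual properties. The element attributes, rule format, and function names below are hypothetical simplifications, not the thesis's actual DSL.

```python
# Minimal sketch of a declarative visualization specification, assuming a
# simplified building model. The rule format is hypothetical: each rule has
# a selector (predicate over an element) and a style to apply.

def apply_visualization_spec(elements, spec):
    """Map each model element to visual properties via the first matching rule."""
    styled = []
    for element in elements:
        for rule in spec:
            if rule["selector"](element):
                styled.append({**element, **rule["style"]})
                break
    return styled

# Hypothetical building information model fragment.
elements = [
    {"id": "w1", "type": "Wall", "fire_rating": 90},
    {"id": "w2", "type": "Wall", "fire_rating": 30},
    {"id": "d1", "type": "Door", "fire_rating": 30},
]

# A task-specific visualization: highlight walls with a low fire rating.
spec = [
    {"selector": lambda e: e["type"] == "Wall" and e["fire_rating"] < 60,
     "style": {"color": "red", "opacity": 1.0}},
    {"selector": lambda e: True,  # fallback rule: de-emphasize everything else
     "style": {"color": "grey", "opacity": 0.3}},
]

styled = apply_visualization_spec(elements, spec)
print([(e["id"], e["color"]) for e in styled])
# → [('w1', 'grey'), ('w2', 'red'), ('d1', 'grey')]
```

    Separating the selection logic from the styling in this way is what makes a visualization specification reusable across different building models, which is the reusability property the abstract calls for.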

    Building the knowledge base for environmental action and sustainability

    Get PDF

    31st International Conference on Information Modelling and Knowledge Bases

    Get PDF
    Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who share a common interest in understanding and solving problems in information modelling and knowledge bases, and in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Philosophy and logic, cognitive science, knowledge management, linguistics, and management science are therefore relevant areas as well. The conference features three categories of presentations: full papers, short papers, and position papers

    EC3 2019, July 10-12, 2019, Chania, Crete, Greece

    Get PDF

    Towards An Improved Long-term Data Record From The Advanced Very-high Resolution Radiometer: Evaluation, Atmospheric Correction, And Intercalibration

    Get PDF
    Long-term data records from satellite observations are crucial for the study of land surface properties and their long-term dynamics. The AVHRR long-term data record (LTDR) is an ongoing effort to generate a consistent climate record of daily atmospherically corrected observations with global coverage that is suitable for long-term studies of the Earth's surface. In this dissertation, I identified three areas for the improvement of the LTDR: (1) the comprehensive evaluation of the LTDR's performance and the characterization of its uncertainties; (2) the retrieval of water vapor information from AVHRR data for a more accurate atmospheric correction; (3) the recalibration of the record to address inconsistency issues. The first study consisted of a global long-term evaluation of the LTDR against matched observations from the Landsat-5 Thematic Mapper instrument. Results from this evaluation showed that the record's performance was close to the proposed specification. The second study proposed a method for the retrieval of water vapor from AVHRR data, which provides a crucial input for the atmospheric correction process. Evaluation of the retrieved values against reference datasets showed excellent results, with a water vapor error lower than 0.45 g/cm². Finally, the last chapter proposed a novel method for the selection of stable areas suitable for satellite intercalibration and for the derivation of recalibration coefficients. The evaluation of the original and recalibrated records showed that in most cases the recalibrated record performed better
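    The recalibration step described above can be sketched minimally: given coincident reflectances over assumed-stable targets, linear recalibration coefficients (gain, offset) are derived by ordinary least squares. The data and function names are hypothetical illustrations; the dissertation's actual site-selection and intercalibration method is not reproduced here.

```python
# Hedged sketch: derive linear recalibration coefficients by regressing
# reference reflectances against sensor reflectances over stable targets.

def recalibration_coefficients(sensor, reference):
    """Fit reference ≈ gain * sensor + offset by ordinary least squares."""
    n = len(sensor)
    mean_s = sum(sensor) / n
    mean_r = sum(reference) / n
    cov = sum((s - mean_s) * (r - mean_r) for s, r in zip(sensor, reference))
    var = sum((s - mean_s) ** 2 for s in sensor)
    gain = cov / var
    offset = mean_r - gain * mean_s
    return gain, offset

# Synthetic example: the sensor reads 5% low with a small additive bias.
reference = [0.10, 0.20, 0.30, 0.40]          # reflectances over stable sites
sensor = [r * 0.95 + 0.01 for r in reference]  # degraded-sensor readings

gain, offset = recalibration_coefficients(sensor, reference)
recalibrated = [gain * s + offset for s in sensor]  # ≈ reference values
```

    Applying the fitted gain and offset to the full record is then a per-observation linear correction, which is what makes this kind of recalibration tractable over a multi-decade archive.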

    Development of Semantics-Based Distributed Middleware for Heterogeneous Data Integration and its Application for Drought

    Get PDF
    Drought is a complex environmental phenomenon that affects millions of people and communities all over the globe and is too elusive to be accurately predicted. This is mostly due to the scale and variability of the web of environmental parameters that directly or indirectly cause the onset of different categories of drought. Since the dawn of man, efforts have been made to understand the natural indicators that provide signs of likely environmental events. These indicators, in the form of indigenous knowledge systems, have been used for generations. Likewise, since the dawn of modern science, different drought prediction and forecasting models and indices have been developed, which usually incorporate data from sparsely located weather stations and therefore produce less accurate results, lacking the desired coverage in the input datasets. The intricate complexity of drought has always been a major stumbling block for accurate drought prediction and forecasting systems. Recently, scientists in the fields of ethnoecology, agriculture and environmental monitoring have been discussing the integration of indigenous knowledge and scientific knowledge into a more accurate environmental forecasting system, in order to incorporate diverse environmental information for a reliable drought forecast. Hence, the core objective of this research is the development of a semantics-based data integration middleware that encompasses and integrates heterogeneous data models of local indigenous knowledge and sensor data towards an accurate drought forecasting system for the study areas of the KwaZulu-Natal province of South Africa and the Mbeere District of Kenya. For the study areas, the local indigenous knowledge on drought, gathered from domain experts and local elderly farmers, is transformed into rules used for performing deductive inference in conjunction with sensor data to determine the onset of drought through an automated inference generation module of the middleware. The semantic middleware incorporates, inter alia, a distributed architecture that consists of a streaming data processing engine based on Apache Kafka for real-time stream processing, a rule-based reasoning module, and an ontology module for the semantic representation of the knowledge bases. The sub-systems of the semantic middleware together produce a combined output in the form of drought forecast advisory information (DFAI). The DFAI is disseminated across multiple channels for use by policy-makers in developing mitigation strategies to combat the effects of drought and in their drought-related decision-making processes
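    The deductive inference step described above, in which indigenous knowledge transformed into rules is evaluated against incoming sensor facts, can be sketched as follows. The fact names, thresholds, and rule format are hypothetical assumptions; the middleware's actual ontology and Kafka pipeline are not shown.

```python
# Hedged sketch of rule-based inference over combined sensor and
# indigenous-knowledge facts. Each rule is a (condition, conclusion) pair.

def infer(facts, rules):
    """Fire every rule whose condition holds over the facts; return conclusions."""
    conclusions = set()
    for condition, conclusion in rules:
        if condition(facts):
            conclusions.add(conclusion)
    return conclusions

# Hypothetical sensor observations merged with a local indigenous indicator.
facts = {
    "rainfall_mm_30d": 4.0,         # from weather sensors
    "soil_moisture": 0.08,          # volumetric fraction, from soil probes
    "acacia_early_flowering": True, # indigenous knowledge indicator
}

# Rules: one derived from scientific thresholds, one from indigenous knowledge.
rules = [
    (lambda f: f["rainfall_mm_30d"] < 10 and f["soil_moisture"] < 0.1,
     "meteorological_drought_onset"),
    (lambda f: f["acacia_early_flowering"],
     "ik_dry_season_expected"),
]

print(infer(facts, rules))
```

    In the middleware such conclusions would feed the DFAI output; here the point is only that rules from both knowledge sources can be evaluated uniformly once they share a common fact representation.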

    Segmentation sémantique des contenus audio-visuels

    Get PDF
    In this work we developed a method for the semantic segmentation of audiovisual content applicable to consumer electronics storage devices. We first designed a service-oriented distributed multimedia content analysis framework composed of individual content analysis modules, the Service Units. One of these was dedicated to identifying non-content inserts, i.e. commercial blocks, and reached high performance. In a subsequent step we benchmarked various shot boundary detectors and implemented the best-performing one as a Service Unit. Thereafter, our study of production rules, i.e. film grammar, provided insights into Parallel Shot sequences, i.e. cross-cuttings and shot-reverse-shots. We benchmarked four similarity-based clustering methods, two colour-based and two feature-point-based, in order to retain the best one for the final solution. Finally, we investigated several scene boundary detection methods and achieved the best results by combining a colour-based method with a shot-length criterion. This scene boundary detector identified semantic scene boundaries with a robustness of 66% for movies and 80% for series, which proved sufficient for our envisioned application, Advanced Content Navigation
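    The combination of a colour-based measure with a shot-length criterion can be sketched as follows. The histogram representation and both thresholds are illustrative assumptions, not the thesis's tuned parameters.

```python
# Hedged sketch: declare a scene boundary between consecutive shots when
# their colour histograms differ strongly AND the current scene is already
# long enough (the shot-length criterion suppresses spurious boundaries).

def histogram_distance(h1, h2):
    """L1 distance between normalised colour histograms, in [0, 1]."""
    return sum(abs(a - b) for a, b in zip(h1, h2)) / 2.0

def scene_boundaries(shots, colour_thresh=0.5, min_scene_len=3):
    """Return indices of shots that start a new scene."""
    boundaries = []
    scene_len = 0
    for i in range(1, len(shots)):
        scene_len += 1
        far_apart = histogram_distance(shots[i - 1]["hist"],
                                       shots[i]["hist"]) > colour_thresh
        if far_apart and scene_len >= min_scene_len:
            boundaries.append(i)
            scene_len = 0
    return boundaries

# Toy example: three shots in one colour world, then three in another.
shots = ([{"hist": [1.0, 0.0, 0.0]}] * 3
         + [{"hist": [0.0, 1.0, 0.0]}] * 3)
print(scene_boundaries(shots))
# → [3]
```

    Requiring both conditions is the design choice reported above: colour change alone over-segments fast-cut material, while the length criterion alone misses boundaries between visually similar scenes.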

    Web 2.0 for social learning in higher education

    Get PDF