372 research outputs found

    Security plane for data authentication in information-centric networks

    Advisors: Maurício Ferreira Magalhães, Jussi Kangasharju. Doctoral thesis (doutorado), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
Abstract: Information security is responsible for protecting information against unauthorized access, use, modification, or destruction. In order to protect data against such attacks, many security protocols have been developed, for example Internet Protocol Security (IPSec) and Transport Layer Security (TLS), providing mechanisms for data authentication, integrity and confidentiality for users. These protocols use the IP address as the host identifier on the Internet, making it the reference and identifier when establishing secure connections for data exchange between applications on the network. With the advent of the Web and the exponential increase in content consumption (e.g., video and audio), there is evidence of a gradual shift in the predominant usage of the Internet, moving the emphasis from connections between hosts to the retrieval of content from the network, a paradigm known as information-centric networking. In this paradigm, users look for documents and resources on the Internet without explicit knowledge of where the content is located. As a result, the IP address previously used as the reference point of a data provider becomes merely an ephemeral identifier of where the content is stored, with implications for correct data authentication. In this context, simply authenticating an IP address does not guarantee the authenticity of the data, because the hosting server identified by a given IP address is not necessarily the one producing the requested content. In information-centric networks, several proposals in the literature provide authentication mechanisms based on the content itself, for example digital signatures over each data block or hash trees built over the data blocks. The main idea of these approaches is to attach information from the original provider to the transported data blocks, for example a digital signature, enabling data authentication directly with the original provider regardless of the host from which the data was obtained. Although such mechanisms allow this verification, the procedure is very costly in terms of processing, especially when the number of blocks is large, making it unfeasible in practice. This thesis proposes a new authentication mechanism using hash trees that provides efficient data authentication, explicitly tied to the original provider and independent of the host from which the data was obtained. We propose two hash-tree-based techniques, called skewed hash tree (SHT) and composite hash tree (CHT), for data authentication in information-centric networks. Once the trees are created, part of the authentication data is stored in a security plane and the remaining part stays attached to the data itself, allowing verification based on the content rather than on the source host. In addition, this thesis presents the formal model, specification, and implementation of the two hash tree techniques for data authentication in information-centric networks through a security plane. Finally, it details the instantiation of the proposed security plane model in two data authentication scenarios: 1) Peer-to-Peer networks and 2) parallel data authentication over HTTP. Doctorate in Computer Engineering (degree: Doctor in Electrical Engineering).
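
    As a rough illustration of the hash-tree idea described above, the following is a minimal sketch of a plain binary Merkle tree in Python (it is not the thesis's SHT or CHT construction): the content provider would sign only the root, and each block travels with the sibling hashes needed to recompute that root, so a consumer can authenticate any block with the original provider regardless of which host served it.

        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def build_tree(blocks):
            """Return the list of tree levels, leaf hashes first, root last."""
            level = [h(b) for b in blocks]
            levels = [level]
            while len(level) > 1:
                if len(level) % 2:                   # duplicate the last hash on odd-sized levels
                    level = level + [level[-1]]
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
                levels.append(level)
            return levels

        def auth_path(levels, index):
            """Sibling hashes needed to recompute the root for one block."""
            path = []
            for level in levels[:-1]:
                if len(level) % 2:
                    level = level + [level[-1]]
                sibling = index ^ 1                  # the sibling of node i is i XOR 1
                path.append((index % 2, level[sibling]))
                index //= 2
            return path

        def verify(block, path, root):
            node = h(block)
            for is_right, sibling in path:
                node = h(sibling + node) if is_right else h(node + sibling)
            return node == root

        blocks = [f"block-{i}".encode() for i in range(5)]
        levels = build_tree(blocks)
        root = levels[-1][0]                         # the provider would sign only this root
        assert verify(blocks[3], auth_path(levels, 3), root)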

    NON-LINEAR AND SPARSE REPRESENTATIONS FOR MULTI-MODAL RECOGNITION

    In the first part of this dissertation, we address the problem of representing 2D and 3D shapes. In particular, we introduce a novel implicit shape representation based on Support Vector Machine (SVM) theory. Each shape is represented by an analytic decision function obtained by training an SVM, with a Radial Basis Function (RBF) kernel, so that the interior shape points are given higher values. This gives the support vector shape (SVS) representation several advantages. First, the representation uses a sparse subset of feature points determined by the support vectors, which significantly improves the discriminative power against noise, fragmentation and other artifacts that often come with the data. Second, the use of the RBF kernel provides scale, rotation, and translation invariant features, and allows a shape to be represented accurately regardless of its complexity. Finally, the decision function can be used to select reliable feature points. These features are described using gradients computed from highly consistent decision functions instead of conventional edges. Our experiments on 2D and 3D shapes demonstrate promising results. The availability of inexpensive 3D sensors like Kinect necessitates the design of new representations for this type of data. We present a 3D feature descriptor that represents local topologies within a set of folded concentric rings by distances from local points to a projection plane. This feature, called Concentric Ring Signature (CORS), possesses similar computational advantages to point signatures yet provides more accurate matches. CORS produces compact and discriminative descriptors, which makes it more robust to noise and occlusions. It is also well known to computer vision researchers that there is no universal representation that is optimal for all types of data or tasks. Sparsity has proved to be a good criterion for working with natural images. This motivates us to develop efficient sparse and non-linear learning techniques for automatically extracting useful information from visual data. Specifically, we present dictionary learning methods for sparse and redundant representations in a high-dimensional feature space. Using the kernel method, we describe how well-known dictionary learning approaches such as the method of optimal directions and KSVD can be made non-linear. We analyse their kernel constructions and demonstrate their effectiveness through several experiments on classification problems. It is shown that non-linear dictionary learning approaches can provide significantly better discrimination compared to their linear counterparts and kernel PCA, especially when the data is corrupted by different types of degradations. Visual descriptors are often high dimensional. This results in high computational complexity for sparse learning algorithms. Motivated by this observation, we introduce a novel framework, called sparse embedding (SE), for simultaneous dimensionality reduction and dictionary learning. We formulate an optimization problem for learning a transformation from the original signal domain to a lower-dimensional one in a way that preserves the sparse structure of data. We propose an efficient optimization algorithm and present its non-linear extension based on kernel methods. One of the key features of our method is that it is computationally efficient, as the learning is done in the lower-dimensional space, and it discards the irrelevant part of the signal that derails the dictionary learning process.
Various experiments show that our method is able to capture the meaningful structure of data and can perform significantly better than many competitive algorithms on signal recovery and object classification tasks. In many practical applications, we are often confronted with the situation where the data used to train our models differs from the data presented during testing. In the final part of this dissertation, we present a novel framework for domain adaptation using a sparse and hierarchical network (DASH-N), which makes use of the old data to improve the performance of a system operating on a new domain. Our network jointly learns a hierarchy of features together with transformations that rectify the mismatch between different domains. The building block of DASH-N is the latent sparse representation. It employs a dimensionality reduction step that can prevent the data dimension from increasing too fast as one traverses deeper into the hierarchy. Experimental results show that our method consistently outperforms the current state of the art by a significant margin. Moreover, we found that a multi-layer DASH-N has an edge over the single-layer DASH-N.
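
    As a rough sketch of the implicit shape representation described in the first part, the snippet below uses scikit-learn (an assumed stand-in; the dissertation's own SVS training procedure is not reproduced here) to fit an RBF-kernel SVM so that interior points of a toy 2D shape receive higher decision values than exterior points; the resulting decision function acts as the shape representation, and its gradients can serve as features.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Toy 2D shape: a disk of radius 1. Interior points are labelled +1, exterior points -1.
        points = rng.uniform(-2, 2, size=(2000, 2))
        labels = np.where(np.linalg.norm(points, axis=1) < 1.0, 1, -1)

        # RBF-kernel SVM: the learned decision function is an analytic, implicit
        # representation of the shape; its support vectors are a sparse subset of points.
        svm = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(points, labels)
        print("support vectors used:", len(svm.support_))

        # Decision values: higher inside the shape, lower outside.
        queries = np.array([[0.0, 0.0], [0.5, 0.5], [1.8, 1.8]])
        print(svm.decision_function(queries))

        # Gradients of the decision function (here via finite differences) can be used
        # as features at selected points instead of conventional edge-based features.
        def grad(f, p, eps=1e-3):
            e = np.eye(2) * eps
            return np.array([(f(p + e[i]) - f(p - e[i])) / (2 * eps) for i in range(2)])

        f = lambda p: svm.decision_function(p.reshape(1, -1))[0]
        print(grad(f, np.array([0.9, 0.0])))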

    Privacy-Enhanced Query Processing in a Cloud-Based Encrypted DBaaS (Database as a Service)

    In this dissertation, we researched techniques to support trustable and privacy-enhanced solutions for online applications accessing “always encrypted” data in remote DBaaS (database-as-a-service) or Cloud SQL-enabled backend solutions. Although solutions for SQL querying of encrypted databases have been proposed in recent research, they fail to provide: (i) flexible multimodal query facilities, including online image searching and retrieval as extensions to conventional SQL-based searches, (ii) searchable cryptographic constructions for image indexing, searching and retrieval operations, (iii) reusable client appliances for transparent integration with multimodal applications, and (iv) performance and effectiveness validations for Cloud-based DBaaS integrated deployments. At the same time, the study of partial homomorphic encryption and multimodal searchable encryption constructions is still an ongoing research field. In this research direction, a study and practical evaluation of such cryptographic constructions is essential in order to assess these methods and techniques towards the materialization of effective solutions for practical applications. The objective of the dissertation is to design, implement and experimentally evaluate a security middleware solution, implementing a client/client-proxy/server appliance software architecture, to support the execution of applications requiring online multimodal queries on “always encrypted” data maintained in outsourced cloud DBaaS backends. This objective includes support for SQL-based text queries enhanced with searchable encrypted image-retrieval capabilities. We implemented a prototype of the proposed solution and conducted an experimental benchmarking evaluation to observe the effectiveness, latency and performance conditions in supporting those queries. The evaluation includes an experimental analysis of the efficiency and impact of the different searchable encryption constructions and partially homomorphic ciphers used as components of the proposed solution. The dissertation presents the envisaged security middleware as an experimental and usable solution that can be extended in future test-bench evaluations using different real cloud DBaaS deployments, as offered by well-known cloud providers.
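
    As a small, hedged illustration of the partially homomorphic building block mentioned above, the sketch below uses the open-source python-paillier (phe) package (an assumption; the dissertation does not necessarily use this library): Paillier ciphertexts can be added together, and multiplied by plaintext constants, without decryption, which is what allows an untrusted DBaaS backend to aggregate “always encrypted” values.

        # pip install phe
        from phe import paillier

        public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

        # Client side: encrypt column values before sending them to the DBaaS backend.
        salaries = [1200, 1850, 990]
        encrypted = [public_key.encrypt(s) for s in salaries]

        # Server side: the untrusted backend aggregates ciphertexts directly.
        # Paillier is additively homomorphic: Enc(a) + Enc(b) = Enc(a + b),
        # and Enc(a) * k = Enc(a * k) for a plaintext constant k.
        encrypted_sum = encrypted[0] + encrypted[1] + encrypted[2]
        encrypted_scaled = encrypted[0] * 2

        # Client side: only the key holder can decrypt the aggregated results.
        assert private_key.decrypt(encrypted_sum) == sum(salaries)
        assert private_key.decrypt(encrypted_scaled) == 2 * salaries[0]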

    Evaluation and performance of reading from big data formats

    The emergence of new application profiles has caused a steep surge in the volume of data generated nowadays. Data heterogeneity is a modern trend, as unstructured types of data, such as videos and images, and semi-structured types, such as JSON and XML files, are becoming increasingly widespread. Consequently, new challenges arise related to analyzing and extracting important insights from huge bodies of information. The field of big data analytics has been developed to address these issues. Performance plays a key role in analytical scenarios, as it empowers applications to generate value in a more efficient and less time-consuming way. In this context, files are used to persist large quantities of information, which can be accessed later by analytic queries. Text files have the advantage of providing an easier interaction with the end user, whereas binary files propose structures that enhance data access. Among them, Apache ORC and Apache Parquet are formats that present characteristics such as column-oriented organization and data compression, which are used to achieve better query performance. The objective of this project is to assess the usage of such files by SAP Vora, a distributed database management system, in order to draw out processing techniques used in big data analytics scenarios and apply them to improve the performance of queries executed upon CSV files in Vora. Two techniques were employed to achieve this goal: file pruning, which allows Vora’s relational engine to ignore files that contain no information relevant to the query, and block pruning, which, while processing a file, disregards individual blocks that do not contain data targeted by the query. Results demonstrate that these modifications enhance the efficiency of analytical workloads executed upon CSV files in Vora, thus narrowing the performance gap between queries executed upon this format and those targeting file formats tailored for big data scenarios, such as Apache Parquet and Apache ORC. The project was developed during an internship at SAP in Walldorf, Germany.
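
    The following is a simplified, generic sketch of the two pruning techniques described above (plain Python with made-up data; it is not SAP Vora's engine code): per-block minimum/maximum statistics over a filter column let a scan skip blocks, and analogously whole files, that cannot contain rows matching a query predicate.

        from dataclasses import dataclass

        @dataclass
        class BlockStats:
            offset: int     # index of the first row of the block
            length: int     # number of rows in the block
            min_val: float  # minimum of the filter column within the block
            max_val: float  # maximum of the filter column within the block

        def build_stats(rows, column, block_size=1000):
            """One pass over the data to collect per-block min/max statistics."""
            stats = []
            for start in range(0, len(rows), block_size):
                block = rows[start:start + block_size]
                values = [float(r[column]) for r in block]
                stats.append(BlockStats(start, len(block), min(values), max(values)))
            return stats

        def scan_equal(rows, stats, column, target):
            """Equality scan with block pruning: skip blocks whose [min, max] range excludes the target."""
            hits = []
            for s in stats:
                if target < s.min_val or target > s.max_val:
                    continue  # block pruned: its rows are never read
                hits.extend(r for r in rows[s.offset:s.offset + s.length]
                            if float(r[column]) == target)
            return hits

        # File pruning works the same way one level up: keep one (min, max) pair per
        # CSV file and skip files whose range excludes the predicate before opening them.
        rows = [{"id": i, "price": float(i)} for i in range(10_000)]
        stats = build_stats(rows, "price", block_size=500)
        print(len(scan_equal(rows, stats, "price", 7.0)))  # 1 match; only one block is scanned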

    Automated Assessment of the Aftermath of Typhoons Using Social Media Texts

    Disasters are one of the major threats to economies and human societies, causing substantial losses of human lives, properties and infrastructure. There have been persistent endeavors to understand, prevent and reduce such disasters, and the popularization of social media offers new opportunities to enhance disaster management through a crowd-sourcing approach. However, social media data is also characterized by its brevity, intense noise, and informal language. The existing literature has not fully addressed these disadvantages, or it devotes vast manual effort to tackling them. The major focus of this research is on constructing a holistic framework to exploit social media data in typhoon damage assessment. The scope of this research covers data collection, relevance classification, location extraction and damage assessment, and assorted approaches are utilized to overcome the disadvantages of social media data. Moreover, semi-supervised or unsupervised approaches are prioritized in forming the framework to minimize manual intervention. In data collection, a query expansion strategy is adopted to optimize the search recall of typhoon-relevant information retrieval. Multiple filtering strategies are developed to screen the keywords and maintain relevance to the search topics during keyword updates. A classifier based on a convolutional neural network is presented for relevance classification, with hashtags and word clusters as extra input channels to augment the information. In location extraction, a model is constructed by integrating Bidirectional Long Short-Term Memory and Conditional Random Fields. Feature noise correction layers and label smoothing are leveraged to handle the noisy training data. Finally, a multi-instance multi-label classifier identifies damage relations in four categories, and the damage categories of a message are combined with a damage description score to obtain a damage severity score for the message. A case study is conducted to verify the effectiveness of the framework. The outcomes indicate that the approaches and models developed in this study significantly improve the classification of social media texts, especially under a semi-supervised or unsupervised learning framework. Moreover, the damage assessment results derived from social media data are remarkably consistent with the official statistics, which demonstrates the practicality of the proposed damage scoring scheme.
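
    As a bare-bones sketch of the relevance-classification step described above, the snippet below uses tf.keras (an assumption; the thesis's actual model additionally feeds hashtags and word clusters through extra input channels, which are omitted here) to build a single-channel convolutional classifier that labels short texts as typhoon-relevant or not.

        import numpy as np
        import tensorflow as tf
        from tensorflow.keras import layers

        VOCAB_SIZE, MAX_LEN = 20_000, 40          # social media posts are short; 40 tokens suffices here

        model = tf.keras.Sequential([
            layers.Input(shape=(MAX_LEN,)),
            layers.Embedding(VOCAB_SIZE, 128),
            layers.Conv1D(128, kernel_size=3, activation="relu"),  # tri-gram convolution filters
            layers.GlobalMaxPooling1D(),
            layers.Dense(64, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(1, activation="sigmoid"),                 # P(message is typhoon-relevant)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

        # Dummy tokenized data stands in for the crawled, keyword-filtered messages.
        x = np.random.randint(0, VOCAB_SIZE, size=(256, MAX_LEN))
        y = np.random.randint(0, 2, size=(256, 1))
        model.fit(x, y, epochs=1, batch_size=32, verbose=0)
        print(model.predict(x[:3], verbose=0))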

    Automatic Generation of Trace Links in Model-driven Software Development

    Traceability data provides knowledge of the dependencies and logical relations that exist among the artefacts created during software development. By reasoning over traceability data, conclusions can be drawn that increase the quality of software. The paradigm of Model-driven Software Development (MDSD) promotes the generation of software from models, which are specified with different modelling languages. In subsequent model transformations, these models are used to generate programming code automatically. Traceability data for the artefacts involved in an MDSD process can be used to increase software quality by providing the necessary knowledge described above. Existing traceability solutions in MDSD generate traceability data from the model mappings recorded during transformation execution. Yet, these solutions still entail a wide range of open challenges. One challenge is that the collected traceability data does not adhere to a unified formal definition, which leads to poorly integrated traceability data and complicates reasoning over it. Furthermore, these traceability solutions all depend on the existence of a transformation engine. However, a transformation engine cannot be accessed in all MDSD settings, for example when proprietary transformation engines or manually implemented transformations are used. In these cases it is not possible to instrument the transformation engine to generate traceability data, resulting in a lack of traceability data. In this work, we address these shortcomings. We propose a generic traceability framework for augmenting arbitrary transformation approaches with a traceability mechanism. To integrate traceability data from different transformation approaches, our approach features a methodology for augmentation based on a design pattern. The design pattern supplies the engineer with recommendations for designing the traceability mechanism and for modelling traceability data. Additionally, to provide a traceability mechanism for inaccessible transformation engines, we leverage parallel model matching to generate traceability data for arbitrary source and target models. This approach is based on a language-agnostic concept of three similarity measures for matching. To realise the similarity measures, we exploit metamodel matching techniques for graph-based model matching. Finally, we evaluate our approach on a set of transformations from an SAP business application and from the MDSD domain.
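
    A toy sketch of similarity-based model matching for trace-link generation is shown below (plain Python; the element names, the two similarity measures and the weighting are illustrative assumptions, not the dissertation's metamodel-matching implementation): source and target model elements are linked whenever a combined name and structural similarity exceeds a threshold.

        from difflib import SequenceMatcher

        # Illustrative source (model) and target (generated code) elements:
        # each element has a name and the names of its neighbours (structure).
        source = {
            "Customer": {"neighbours": {"Order", "Address"}},
            "Order":    {"neighbours": {"Customer", "OrderItem"}},
        }
        target = {
            "CustomerImpl": {"neighbours": {"OrderImpl", "AddressImpl"}},
            "OrderImpl":    {"neighbours": {"CustomerImpl", "OrderItemImpl"}},
        }

        def name_similarity(a, b):
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def structural_similarity(a_nb, b_nb):
            """Jaccard overlap of neighbour names after a crude suffix normalisation."""
            strip = lambda names: {n.removesuffix("Impl").lower() for n in names}
            a_set, b_set = strip(a_nb), strip(b_nb)
            return len(a_set & b_set) / len(a_set | b_set) if a_set | b_set else 0.0

        def trace_links(source, target, threshold=0.6, w_name=0.5, w_struct=0.5):
            links = []
            for s_name, s in source.items():
                for t_name, t in target.items():
                    score = (w_name * name_similarity(s_name, t_name)
                             + w_struct * structural_similarity(s["neighbours"], t["neighbours"]))
                    if score >= threshold:
                        links.append((s_name, t_name, round(score, 2)))
            return links

        print(trace_links(source, target))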

    Towards Personalized and Human-in-the-Loop Document Summarization

    The ubiquitous availability of computing devices and the widespread use of the internet continuously generate large amounts of data. As a result, the amount of available information on any given topic is far beyond humans' capacity to process it properly, causing what is known as information overload. To cope efficiently with large amounts of information and generate content of significant value to users, we need to identify, merge and summarise information. Data summaries can help gather related information and collect it into a shorter format that enables answering complicated questions, gaining new insight and discovering conceptual boundaries. This thesis focuses on three main challenges to alleviate information overload using novel summarisation techniques. It further intends to facilitate the analysis of documents to support personalised information extraction. The thesis separates the research issues into four areas, covering (i) feature engineering in document summarisation, (ii) traditional static and inflexible summaries, (iii) traditional generic summarisation approaches, and (iv) the need for reference summaries. We propose novel approaches to tackle these challenges by (i) enabling automatic intelligent feature engineering, (ii) enabling flexible and interactive summarisation, and (iii) utilising intelligent and personalised summarisation approaches. The experimental results demonstrate the efficiency of the proposed approaches compared to other state-of-the-art models. We further propose solutions to the information overload problem in different domains through summarisation, covering network traffic data, health data and business process data. Comment: PhD thesis.
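
    As a minimal, generic illustration of extractive summarisation (frequency-based sentence scoring in plain Python; it is not the thesis's personalised or human-in-the-loop approach), the sketch below selects the highest-scoring sentences of a document while preserving their original order.

        import re
        from collections import Counter

        STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "for", "on", "with", "that"}

        def summarise(text, n_sentences=2):
            """Score sentences by normalised word frequency and return the top-n in document order."""
            sentences = re.split(r"(?<=[.!?])\s+", text.strip())
            words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
            freq = Counter(words)

            def score(sentence):
                tokens = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
                return sum(freq[t] for t in tokens) / (len(tokens) or 1)

            ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
            keep = sorted(ranked[:n_sentences])   # restore original sentence order
            return " ".join(sentences[i] for i in keep)

        doc = ("Information overload makes it hard to follow any topic. "
               "Summarisation condenses documents into shorter texts. "
               "Personalised summarisation adapts the summary to a user's interests. "
               "The weather was pleasant yesterday.")
        print(summarise(doc, n_sentences=2))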

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.

    Configurable nD-visualization for complex Building Information Models

    With the ongoing development of building information modelling (BIM) towards a comprehensive coverage of all construction project information in a semantically explicit way, visual representations have become decoupled from the building information models. While traditional construction drawings implicitly contained the visual representation alongside the information, nowadays visual representations are generated on the fly, hard-coded in software applications dedicated to other tasks such as analysis, simulation, structural design or communication. Due to the abstract nature of information models and the increasing amount of digital information captured during construction projects, visual representations are essential for humans in order to access the information, to understand it, and to engage with it. At the same time, digital media open up the new field of interactive visualizations. The full potential of BIM can only be unlocked with customized, task-specific visualizations, with engineers and architects actively involved in the design and development process of these visualizations. The visualizations must be reusable and reliably reproducible during communication processes. Further, to support creative problem solving, it must be possible to modify and refine them. This thesis aims at reconnecting building information models and their visual representations: at the theoretical level, at the level of methods, and in terms of tool support. First, the research seeks to improve the knowledge about visualization generation in conjunction with current BIM developments such as the multimodel. The approach is based on the reference model of the visualization pipeline and addresses structural as well as quantitative aspects of visualization generation. Second, based on this theoretical foundation, a method is derived to construct visual representations from given visualization specifications. To this end, the idea of a domain-specific language (DSL) is employed. Finally, a software prototype proves the concept. Using the visualization framework, visual representations can be generated from a specific building information model and a specific visualization description.
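
    The snippet below is a toy illustration of the specification-driven idea described above (plain Python with invented element types and visual properties; it is not the thesis's visualisation framework or DSL): a declarative specification maps filters over building-model elements to visual properties, and a small interpreter turns a model plus a specification into a scene description.

        # A toy "visualization specification": rules map a filter over model elements
        # to visual properties, kept separate from the building information model itself.
        SPEC = [
            {"filter": {"type": "Wall"},                       "visual": {"color": "grey", "opacity": 1.0}},
            {"filter": {"type": "Wall", "fire_rating": "F90"}, "visual": {"color": "red",  "opacity": 1.0}},
            {"filter": {"type": "Window"},                     "visual": {"color": "blue", "opacity": 0.3}},
        ]

        MODEL = [
            {"id": "w1", "type": "Wall", "fire_rating": "F30"},
            {"id": "w2", "type": "Wall", "fire_rating": "F90"},
            {"id": "o1", "type": "Window"},
            {"id": "d1", "type": "Door"},                      # no rule matches: not rendered
        ]

        def matches(element, flt):
            return all(element.get(k) == v for k, v in flt.items())

        def render(model, spec):
            """Interpret the specification: later rules override earlier ones for matching elements."""
            scene = {}
            for element in model:
                for rule in spec:
                    if matches(element, rule["filter"]):
                        scene[element["id"]] = {**scene.get(element["id"], {}), **rule["visual"]}
            return scene

        print(render(MODEL, SPEC))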