4 research outputs found

    A Computational Model of Creative Design as a Sociocultural Process Involving the Evolution of Language

    The aim of this research is to investigate the mechanisms of creative design within the context of an evolving language through computational modelling. Computational Creativity is a subfield of Artificial Intelligence that focuses on modelling creative behaviours. Typically, research in Computational Creativity has treated language as a medium, e.g., poetry, rather than as an active component of the creative process. Previous research studying the role of language in creative design has relied on interviewing human participants, limiting opportunities for computational modelling. This thesis explores the potential for language to play an active role in computational creativity by connecting computational models of the evolution of artificial languages and creative design processes. Multi-agent simulations based on the Domain-Individual-Field-Interaction framework are employed to evolve artificial languages with features that may support creative designing, including ambiguity, incongruity, exaggeration and elaboration. The simulation process consists of three steps: (1) constructing representations associating topics, meanings and utterances; (2) structured communication of utterances and meanings through the playing of “language games”; and (3) evaluation of design briefs and works. The use of individual agents with different evaluation criteria, preferences and roles enriches the scope and diversity of the simulations. The results of experiments conducted with artificial creative language systems demonstrate the expansion of design spaces through the generation of compositional utterances representing novel concepts among design agents, using language features and weighted context-free grammars. These systems can be used to computationally explore the roles of language in creative design, and may point to computational applications. Understanding the evolution of artificial languages may also provide insights into human languages, especially those features that support creativity.
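
    To make the simulation pipeline above more concrete, the following Python sketch samples compositional utterances from a toy weighted context-free grammar, in the spirit of steps (1) and (2). The nonterminals, rules and weights are hypothetical illustrations, not the grammar or agent model used in the thesis.

    import random

    # Illustrative toy weighted context-free grammar (WCFG): each nonterminal
    # maps to a list of (expansion, weight) rules. All symbols are hypothetical.
    WCFG = {
        "UTTERANCE": [(["FORM", "FORM"], 2.0), (["FORM"], 1.0)],
        "FORM": [(["ba"], 1.0), (["du"], 1.0), (["ki"], 0.5), (["mo"], 0.5)],
    }

    def expand(symbol, grammar):
        """Recursively expand a symbol, sampling rules proportionally to their weights."""
        if symbol not in grammar:  # terminal symbol: emit as-is
            return [symbol]
        rules = grammar[symbol]
        r = random.uniform(0, sum(w for _, w in rules))
        for expansion, weight in rules:
            r -= weight
            if r <= 0:
                break
        return [part for s in expansion for part in expand(s, grammar)]

    # Compositional utterances emerge from combining forms; in a multi-agent
    # setting the rule weights would be updated through repeated language games.
    for _ in range(5):
        print("".join(expand("UTTERANCE", WCFG)))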

    Structural Performance Comparison of Parallel Software Applications

    With the rising complexity of high performance computing systems and their parallel software, performance analysis and optimization have become essential in the development of efficient applications. The comparison of performance data is a key operation required in performance analysis. An analyst may conduct different types of comparisons in order to understand the performance properties of an application. One use case is comparing performance data from multiple measurements. Typical examples of such comparisons are before/after comparisons when applying optimizations or changing code versions. Besides comparing performance between multiple runs, comparing performance characteristics across the parallel execution streams of an application is also essential for detecting performance problems. This is typically useful to detect imbalances, outliers, or changing runtime behavior during the execution of an application. While such comparisons are straightforward for the aggregated data in performance profiles, only limited solutions exist for comparing event traces. Trace-based analysis, i.e., the collection of fine-grained information on individual application events with timestamps and application context, has proven to be a powerful technique. The detailed performance information included in event traces makes them very suitable for performance analysis. However, this level of detail also presents a challenge because it implies a large and overwhelming amount of data. Currently, users need to perform manual comparison of event traces, which is extremely challenging and time consuming because of the large volume of detailed data and the need to correctly line up trace events. To fill the gap of missing solutions for automatic comparison of event traces, this work proposes a set of techniques that automatically align traces. The alignment allows their structural comparison and the highlighting of differences between them. A set of novel metrics provides the user with an objective measure of the differences between traces, both in terms of differences in the event stream and timing differences across events. An additional important aspect of trace-based analysis is the visualization of performance data in event timelines. This has proven to be a powerful approach for the detection of various types of performance problems. However, visualization of large numbers of event timelines quickly hits the limits of available display resolution. Likewise, identifying performance problems is challenging within the large amount of visualized performance data. To alleviate these problems, this work proposes two new approaches for event timeline visualization. First, novel folding strategies for event timelines facilitate visual scalability while providing powerful overviews of performance data. Second, this work presents an effective approach that automatically identifies and highlights several types of performance-critical sections in an application run. This approach identifies time-dominant functions of an application and subsequently uses them to analyze runtime imbalances throughout the application run. Intuitive visualizations present the resulting runtime variations and guide the analyst to performance hot spots. Evaluations with benchmarks and real-world applications assess all introduced techniques. The effectiveness of the comparison approaches is demonstrated by showing automatically detected performance issues and structural differences between different versions of applications and across parallel execution streams. Case studies showcase the capabilities of the event timeline visualization techniques by demonstrating scalable performance data visualizations and detecting performance problems and code inefficiencies in real-world applications.
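
    As a rough illustration of what such a trace comparison produces, the following Python sketch aligns two tiny, hypothetical event traces with a generic sequence alignment (difflib) and reports structural and timing differences. It is a stand-in using a standard-library matcher, not the alignment techniques or metrics introduced in this work.

    import difflib

    # Hypothetical traces: (event name, duration in microseconds) per event.
    trace_a = [("init", 120), ("compute", 950), ("mpi_allreduce", 300),
               ("io_write", 210), ("finalize", 40)]
    trace_b = [("init", 130), ("compute", 1400), ("mpi_allreduce", 310),
               ("finalize", 45)]

    names_a = [name for name, _ in trace_a]
    names_b = [name for name, _ in trace_b]

    # Align the two event streams by event name.
    matcher = difflib.SequenceMatcher(a=names_a, b=names_b)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            # Events present in both traces: report the timing difference.
            for (name, t_a), (_, t_b) in zip(trace_a[i1:i2], trace_b[j1:j2]):
                print(f"match  {name:15s} delta = {t_b - t_a:+d} us")
        elif tag == "delete":
            for name, _ in trace_a[i1:i2]:
                print(f"only in trace A: {name}")
        elif tag == "insert":
            for name, _ in trace_b[j1:j2]:
                print(f"only in trace B: {name}")
        else:  # "replace": structurally differing sub-sequences
            print(f"structural difference: {names_a[i1:i2]} vs {names_b[j1:j2]}")

    # A crude aggregate score, loosely analogous to a structural similarity metric.
    print(f"similarity ratio: {matcher.ratio():.2f}")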

    OM-2017: Proceedings of the Twelfth International Workshop on Ontology Matching

    Ontology matching is a key interoperability enabler for the semantic web, as well as a useful tactic in some classical data integration tasks dealing with the semantic heterogeneity problem. It takes ontologies as input and determines as output an alignment, that is, a set of correspondences between the semantically related entities of those ontologies. These correspondences can be used for various tasks, such as ontology merging, data translation, query answering or navigation on the web of data. Thus, matching ontologies enables the knowledge and data expressed with the matched ontologies to interoperate.
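
    The following Python sketch illustrates the notion of an alignment as a set of correspondences (entity_1, entity_2, relation, confidence). It uses a naive label-similarity matcher over two hypothetical mini ontologies; the matching systems evaluated at the workshop rely on far richer lexical, structural and semantic techniques.

    from difflib import SequenceMatcher

    # Two hypothetical mini "ontologies", reduced to lists of class labels.
    onto_1 = ["Person", "Author", "Paper", "Conference"]
    onto_2 = ["Human", "Writer", "Article", "Paper", "Meeting"]

    def label_similarity(a, b):
        """Crude lexical similarity between two entity labels."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match(entities_1, entities_2, threshold=0.6):
        """Return correspondences whose label similarity exceeds the threshold."""
        alignment = []
        for e1 in entities_1:
            for e2 in entities_2:
                confidence = label_similarity(e1, e2)
                if confidence >= threshold:
                    # "=" marks an equivalence correspondence.
                    alignment.append((e1, e2, "=", round(confidence, 2)))
        return alignment

    for correspondence in match(onto_1, onto_2):
        print(correspondence)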