20 research outputs found

    Evaluating Extensible 3D (X3D) Graphics For Use in Software Visualisation

    No full text
    3D web software visualisation has always been expensive, special-purpose, and hard to program. Most of the technologies used require large amounts of scripting, are unreliable across platforms, use binary formats, or are no longer maintained. We can make end-user web software visualisation of object-oriented programs cheap, portable, and easy by using Extensible 3D (X3D) Graphics, a new open standard. In this thesis we outline our experience with X3D and discuss the suitability of X3D as an output format for software visualisation.
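    As an illustrative sketch of why X3D keeps web software visualisation cheap to program, a scene can be assembled from plain XML strings. The mapping below (box height encoding a class's method count) is a hypothetical example, not the thesis's actual encoding:

```python
# Sketch: emit an X3D scene where each class becomes a Box whose height
# encodes its method count. The mapping (height = methods * 0.5) and the
# colour are illustrative assumptions, not the thesis's visualisation rules.

def class_to_x3d(name, method_count, x_offset=0.0):
    height = method_count * 0.5
    return (
        f'<Transform translation="{x_offset} {height / 2} 0">'
        f'<Shape><Box size="1 {height} 1"/>'
        f'<Appearance><Material diffuseColor="0.2 0.4 0.8"/></Appearance>'
        f'</Shape></Transform>'
    )

def scene(classes):
    # classes: list of (name, method_count) pairs, laid out along the x axis
    boxes = "".join(class_to_x3d(n, m, i * 2.0)
                    for i, (n, m) in enumerate(classes))
    return f'<X3D version="3.2"><Scene>{boxes}</Scene></X3D>'

print(scene([("Parser", 4), ("Lexer", 2)]))
```

    Because the output is declarative XML rather than a script, any X3D-capable browser plugin or viewer can render it without platform-specific code.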

    Visualization of the Static aspects of Software: a survey

    Get PDF
    Software is usually complex and always intangible. In practice, the development and maintenance processes are time-consuming activities, mainly because software complexity is difficult to manage. Graphical visualization of software has the potential to result in a better and faster understanding of its design and functionality, saving time and providing valuable information to improve its quality. However, visualizing software is not an easy task because of the huge amount of information contained in the software. Furthermore, the information content increases significantly once the time dimension is taken into account to visualize the evolution of the software. Human perception of information and cognitive factors must thus be taken into account to improve the understandability of the visualization. In this paper, we survey visualization techniques, both 2D- and 3D-based, representing the static aspects of the software and its evolution. We categorize these techniques according to the issues they focus on, in order to help compare them and identify the most relevant techniques and tools for a given problem.

    Software composition with templates

    Get PDF
    Software composition systems are systems that concentrate on the composition of components. These systems represent a growing subfield of software engineering. Traditional software composition approaches define components as black-boxes. Black-boxes are characterised by their visible behaviour, but not their visible structure. They describe what can be done, rather than how it can be done. Basically, black-boxes are structurally monolithic units that can be composed together via provided interfaces. The growing complexity of software systems and dynamically changing requirements on these systems demand better parameterisation of components. State-of-the-art approaches have tried to increase the parameterisation of systems with so-called grey-box components (grey-boxes). These types of components introduced structural configurability of components. Grey-boxes could improve the composability, reusability, extensibility and adaptability of software systems. However, there is still a big gap between grey-box approaches and business. We see two main reasons for this. Firstly, the structurally non-monolithic nature of grey-boxes results in a significantly increased number of components and relationships that may form a software system. This makes grey-box approaches more complex and their development more expensive. There is a lack of tools to decrease the complexity of grey-box approaches. Secondly, grey-box composition approaches are oriented to experts with a technical background in programming languages and software architectures. Up to now, state-of-the-art approaches have not addressed the question of their efficient applicability by domain experts with no technical background in programming languages. We consider that the structural visibility of grey-boxes offers a chance to provide better externalisation of business logic, so that even a non-expert in programming languages could design a software system for his/her special domain.
In this thesis, we propose a holistic approach, called the Neurath Composition Framework, to compose software systems according to well-defined requirements which have been externalised, giving ownership of the design to the end-user. We show how externalisation of business logic can be achieved using grey-box composition systems augmented with domain-specific visual interfaces. We define our own grey-box composition system based on the Parametric Code Templates component model and the Molecular Operations composition technique. With this composition system, awareness of a design, comprehensive development, and the reuse of program code templates can be achieved. Finally, we present a sample implementation that shows the applicability of the composition framework to solve real-life business tasks.
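    A parametric code template in the grey-box spirit can be sketched as follows; the template text, the `compose` helper, and the parameter name `field` are hypothetical illustrations of the idea, not the actual Parametric Code Templates model:

```python
from string import Template

# Sketch: unlike a black-box, this component's structure is visible and
# parameterised. Instantiating the template with different field names
# yields concrete code; concatenation stands in for a composition operation.

getter_template = Template(
    "def get_$field(self):\n"
    "    return self._$field\n"
)

def compose(fields):
    # Stand-in for a "molecular operation": join instantiated templates.
    return "".join(getter_template.substitute(field=f) for f in fields)

print(compose(["name", "age"]))
```

    The point of the sketch is that the end-user configures the visible parameters (here, the field names) without touching the template's internals.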

    Software Visualization in 3D: Implementation, Evaluation, and Applicability

    Get PDF
    The focus of this thesis is the implementation, the evaluation, and the useful application of the third dimension in software visualization. Software engineering is characterized by a complex interplay of different stakeholders that produce and use several artifacts. Software visualization is used as one means to address this increasing complexity. It provides role- and task-specific views of artifacts that contain information about the structure, behavior, and evolution of a software system in its entirety. The main potential of the third dimension is the possibility of providing multiple views covering all three aspects in one software visualization. However, empirical findings concerning the role of the third dimension in software visualization are rare. Furthermore, there are only few 3D software visualizations that provide multiple views of a software system including all three aspects. Finally, current tool support lacks the ability to automatically generate easily integrable, scalable, and platform-independent 2D, 2.5D, and 3D software visualizations. Hence, the objective is to develop a software visualization that represents all important structural entities and relations of a software system, that can also display behavioral and evolutionary aspects of a software system, and that can be generated automatically. In order to achieve this objective, the following research methods are applied: a literature study is conducted, a software visualization generator is conceptualized and prototypically implemented, a structured approach to plan and design controlled experiments in software visualization is developed, and a controlled experiment is designed and performed to investigate the role of the third dimension in software visualization.
The main contributions are an overview of the state of the art in 3D software visualization; a structured approach, including a theoretical model, to control influence factors during controlled experiments in software visualization; an Eclipse-based generator for automatically producing role- and task-specific 2D, 2.5D, and 3D software visualizations; the controlled experiment investigating the role of the third dimension in software visualization; and the recursive disk metaphor, which combines these findings with a focus on the structure of software and includes useful applications of the third dimension regarding behavior and evolution.
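    The recursive disk idea can be sketched as a size computation over a package/class hierarchy; the sizing rule below (disk area proportional to the summed areas of the nested disks, plus padding) is an illustrative assumption, not the exact layout algorithm of the metaphor:

```python
import math

# Sketch of a recursive-disk sizing pass: packages are disks that contain
# class disks, whose areas are proportional to method counts. The 25%
# padding factor is a hypothetical choice for illustration.

def disk_radius(node):
    """node: (name, [children]) for a package, (name, method_count) for a class."""
    name, content = node
    if isinstance(content, int):              # a class: area ~ method count
        return math.sqrt(content / math.pi)
    child_area = sum(math.pi * disk_radius(c) ** 2 for c in content)
    return math.sqrt(1.25 * child_area / math.pi)   # padding for nesting

pkg = ("app", [("Parser", 4), ("Lexer", 2), ("util", [("Log", 1)])])
print(round(disk_radius(pkg), 3))   # the outermost package disk's radius
```

    A real layout would additionally position the child disks inside the parent; the sketch only shows why the representation scales recursively over arbitrary nesting depths.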

    A conceptual framework and a risk management approach for interoperability between geospatial datacubes

    Get PDF
    Today, we observe wide use of geospatial databases that are implemented in many forms (e.g., transactional centralized systems, distributed databases, multidimensional datacubes). Among those possibilities, the multidimensional datacube is the most appropriate to support interactive analysis and to guide an organization's strategic decisions, especially when different epochs and levels of information granularity are involved. However, one may need to use several geospatial multidimensional datacubes, which may be semantically heterogeneous and have different degrees of appropriateness to the context of use. Overcoming the semantic problems related to this heterogeneity and to the difference in appropriateness to the context of use, in a manner that is transparent to users, has been the principal aim of interoperability for the last fifteen years. However, in spite of successful initiatives, today's solutions have evolved in a non-systematic way. Moreover, no solution has been found to address the specific semantic problems related to interoperability between geospatial datacubes. In this thesis, we suppose that it is possible to define an approach that addresses these semantic problems to support interoperability between geospatial datacubes. For that, we first describe interoperability between geospatial datacubes. Then, we define and categorize the semantic heterogeneity problems that may occur during the interoperability process of different geospatial datacubes.
In order to resolve semantic heterogeneity between geospatial datacubes, we propose a conceptual framework that is essentially based on human communication. In this framework, software agents representing the geospatial datacubes involved in the interoperability process communicate with each other. This communication aims at exchanging information about the content of the geospatial datacubes. Then, in order to help the agents make appropriate decisions during the interoperability process, we evaluate a set of indicators of the external quality (fitness-for-use) of geospatial datacube schemas and of the production context (e.g., metadata). Finally, we implement the proposed approach to show its feasibility.

    Integrating generalization algorithms and geometric patterns to create self-generalizing objects (SGO) for improving on-the-fly map generalization

    Get PDF
    The technological development of recent years has made geospatial data widely accessible to the general public. Applications such as web mapping and SOLAP, which provide access to these data, have appeared. Unfortunately, these applications are very limited from a cartographic point of view, because they do not allow flexible personalisation of the maps requested by the user. To generate products better adapted to the needs of the users of these technologies, visualisation tools must, among other things, be able to generate data at variable scales chosen by the user. One solution would be to use automatic cartographic generalization to generate the data at different scales from a single large-scale database. However, given the dynamic and interactive nature of these applications, this generalization must be carried out on-the-fly. Automatic generalization has been an important research topic for more than three decades. Unfortunately, despite considerable recent advances, existing cartographic generalization methods guarantee neither an exhaustive result nor the performance needed for effective on-the-fly generalization. Since it is currently impossible to create maps at arbitrary scales on-the-fly from a single large-scale map, the results of generalization (i.e. the smaller-scale maps generated by cartographic generalization) are stored in a multiple-representation (MR) database for eventual use. However, besides its lack of flexibility (the scales are predefined), MR also introduces redundancy, because several representations of each object are stored in the same database. All of this sometimes prevents users from obtaining data at a level of abstraction that corresponds exactly to their needs.
To improve the on-the-fly generalization process, this thesis proposes an approach based on a new concept called the SGO (self-generalizing object). An SGO encapsulates geometric patterns (generic geometric forms common to several map features), generalization algorithms, and integrity constraints in a single cartographic object. SGOs rely on a database enrichment process that introduces the cartographer's knowledge into the cartographic data, rather than generating it with algorithms as is typically the case. An SGO is created for each individual feature (e.g. a building) or group of features (e.g. aligned buildings). Each created SGO is then transformed into a software agent (SGO agent) to give it autonomy. SGO agents are equipped with behaviours that enable them to self-generalize, i.e. to know how to generalize the feature they represent when the level of abstraction changes (e.g. a change of scale). As a proof of concept, two prototypes based on open-source technologies were developed in this thesis. The first allows the creation of SGOs and the enrichment of the database. The second, based on multi-agent technology, uses the created SGOs to generate data at arbitrary scales through an on-the-fly map generalization process. Real data of Quebec City at scale 1:1000 were used to test the developed prototypes.
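    A self-generalizing object can be sketched as a class bundling geometry, a generalization behaviour, and an integrity constraint; the nth-point thinning rule and the thresholds below are illustrative stand-ins for real generalization algorithms, not those used in the thesis:

```python
# Sketch of an SGO: it bundles a geometry, a generalization rule, and an
# integrity constraint, and decides itself how to appear at a given scale.
# The 1:5000 step threshold and nth-point thinning are hypothetical choices.

class SGO:
    def __init__(self, points):
        self.points = points              # footprint vertices (x, y)

    def generalize(self, scale_denominator):
        # Coarser scales keep fewer vertices (simple nth-point thinning,
        # standing in for a real line-simplification algorithm).
        step = max(1, scale_denominator // 5000)
        kept = self.points[::step]
        if len(kept) < 3:                 # integrity constraint: never
            kept = self.points[:3]        # collapse below a triangle
        return kept

building = SGO([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)])
print(len(building.generalize(1000)), len(building.generalize(20000)))
```

    The key property illustrated is autonomy: the caller only requests a scale, and the object applies its own encapsulated rule and constraint.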

    Acta Polytechnica Hungarica 2015

    Get PDF

    Cognitive Foundations for Visual Analytics

    Full text link

    SEMKIS: A CONTRIBUTION TO SOFTWARE ENGINEERING METHODOLOGIES FOR NEURAL NETWORK DEVELOPMENT

    Get PDF
    Today, there is a high demand for neural network-based software systems supporting humans during their daily activities. Neural networks are computer programs that simulate the behaviour of simplified human brains. These neural networks can be deployed on various devices (e.g. cars, phones, medical devices...) in many domains (e.g. the automotive industry, medicine...). To meet this high demand, software engineers require methods and tools to engineer these software systems for their customers. Neural networks acquire their recognition skills (e.g. recognising voice or image content...) from large datasets during a training process. Therefore, neural network engineering (NNE) should not only be about designing and implementing neural network models, but also about dataset engineering (DSE). In the literature, there are no software engineering methodologies supporting DSE with precise dataset selection criteria for improving neural networks. Most traditional approaches focus only on improving the neural network's architecture, or follow hand-crafted approaches based on augmenting datasets with randomly gathered data. Moreover, they do not consider a comparative evaluation of the neural network's recognition skills against the customer's requirements for building appropriate datasets. In this thesis, we introduce a software engineering methodology (called SEMKIS), supported by a tool, for engineering datasets with precise data selection criteria to improve neural networks. Our method mainly considers the improvement of neural networks through augmenting datasets with synthetic data. SEMKIS has been designed as a rigorous iterative process for guiding software engineers during their neural network-based projects. The SEMKIS process is composed of many activities covering different development phases: requirements specification; dataset and neural network engineering; recognition skills specification; and dataset augmentation with synthesized data.
We introduce the notion of key-properties, used throughout the process in cooperation with the customer, to describe the recognition skills. We define a domain-specific language (called SEMKIS-DSL) for the specification of the requirements and recognition skills. The SEMKIS-DSL grammar has been designed to support a comparative evaluation of the customer's requirements against the key-properties. We define a method for interpreting the specification and deriving a dataset augmentation. Lastly, we apply the SEMKIS process to a complete case study on the recognition of a meter counter. Our experiment shows a successful application of our process in a concrete example.
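    The iterative dataset-improvement step can be sketched as a gap analysis between required key-properties and measured recognition skills, followed by a synthetic-data augmentation plan; the property names, values, and augmentation rule below are illustrative assumptions, not SEMKIS's actual criteria:

```python
# Sketch of a SEMKIS-style iteration: compare measured recognition skills
# against required key-properties, then plan synthetic data for the
# properties that fall short. All names and numbers are hypothetical.

def gap_analysis(required, measured):
    """required/measured: dict mapping key-property -> accuracy in [0, 1]."""
    return {k: required[k] - measured.get(k, 0.0)
            for k in required if measured.get(k, 0.0) < required[k]}

def plan_augmentation(gaps, samples_per_point=100):
    # Larger gaps get more synthetic samples (rounded to whole samples).
    return {k: int(round(g * samples_per_point)) for k, g in gaps.items()}

required = {"digit_0_accuracy": 0.99, "digit_8_accuracy": 0.99}
measured = {"digit_0_accuracy": 0.995, "digit_8_accuracy": 0.92}

print(plan_augmentation(gap_analysis(required, measured)))
```

    In the methodology's terms, the `required` dict plays the role of customer requirements expressed over key-properties, and the plan drives the next dataset-augmentation activity of the process.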