56 research outputs found

    API Evolution and Compatibility: A Data Corpus and Tool Evaluation.

    The evolutionary significance of gene and genome duplications

    Visual approaches to knowledge organization and contextual exploration

    This thesis explores possible visual approaches for the representation of semantic structures, such as zz-structures. Some holistic visual representations of complex domains have been investigated through the proposal of new views - the so-called zz-views - that both make the interconnections between elements visible and support a contextual, multilevel exploration of knowledge. The potential of this approach has been examined in two case studies that led to the creation of two Web applications. The first domain of study regarded the visual representation, analysis and management of scientific bibliographies. In this context, we modeled a Web application, which we called VisualBib, to support researchers in building, refining, analyzing and sharing bibliographies. We adopted a multi-faceted approach integrating features that are typical of three different classes of tools: bibliography visual analysis systems, bibliographic citation indexes and personal research assistants. The evaluation studies carried out on a first prototype highlighted the positive impact of our visual model and encouraged us to improve it and to develop further visual analysis features, which we incorporated in version 3.0 of the application. The second case study concerned the modeling and development of a multimedia catalog of Web and mobile applications. The objective was to provide an overview of a significant number of tools that can help teachers implement technology-supported active learning approaches and design Teaching and Learning Activities (TLAs). We analyzed and documented 281 applications, preparing for each of them a detailed multilingual card and a video presentation, and organizing all the material in an original purpose-based taxonomy, visually represented through a browsable holistic view. The catalog, which we called AppInventory, provides contextual exploration mechanisms based on zz-structures, collects user contributions and evaluations about the apps, and offers visual analysis tools for comparing application data and user evaluations. The results of two user studies carried out on groups of teachers and students showed a very positive impact of our proposal in terms of graphical layout, semantic structure, navigation mechanisms and usability, also in comparison with two similar catalogs.
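    A minimal sketch of the zz-structure idea underlying these views, assuming a simple Python model in which each cell has at most one predecessor and one successor per named dimension; the class and example values are illustrative and not taken from VisualBib or AppInventory:

```python
class ZZCell:
    """A cell in a zz-structure, linked along any number of named dimensions."""

    def __init__(self, value):
        self.value = value
        self.links = {}  # dimension name -> {"pos": next cell, "neg": previous cell}

    def connect(self, other, dimension):
        """Link self -> other along a dimension (one neighbour per direction)."""
        self.links.setdefault(dimension, {})["pos"] = other
        other.links.setdefault(dimension, {})["neg"] = self

    def rank(self, dimension):
        """Walk forward along one dimension, yielding the chain of cells (a rank)."""
        cell = self
        while cell is not None:
            yield cell
            cell = cell.links.get(dimension, {}).get("pos")


# Hypothetical usage: a paper cell connected to an author cell and a keyword cell
paper = ZZCell("Paper P")
author = ZZCell("Author A")
keyword = ZZCell("Keyword K")
paper.connect(author, "d.author")
paper.connect(keyword, "d.keyword")
print([c.value for c in paper.rank("d.author")])  # ['Paper P', 'Author A']
```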

    An automation platform for remote installations – a plan for automating the installation of an EDMS application platform

    Software installation is a recurring task in complex energy data management system (EDMS) platform development projects. Installation verification is part of deployment, Continuous Integration, and Quality Assurance testing efforts. While the procedure in general follows infrequently changing patterns, the task requires technical knowledge and consumes time that is taken away from other development tasks. Moreover, the business is driving the deployment model from large, infrequent deployments towards more frequent incremental deployments, which will demand highly efficient software deployment methods. In this thesis we studied the installation cases of a specific EDMS platform and the problems related to remote installation and installation automation. The goal was to come up with a solution that increases installation efficiency and decreases the human effort required to complete the installation tasks. Based on the findings, we modelled the installation process as an abstracted workflow and designed a multi-agent software platform that can execute installations in remote environments in an automated manner. The design includes fully automated installation execution and result reporting as well as a centralized management interface for all installation processes, and proposes different feedback methods for distributing installation information. To prove that the concepts and ideas presented in the design work well in practice, we built a reference implementation. With hundreds of executed, varying installations that served a real-life purpose, the platform showed that automation improves the efficiency of the installation process and decreases the manual human effort previously required.
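    As a rough illustration of the kind of workflow abstraction described above (a hypothetical sketch, not the platform's actual design), the following models an installation as an ordered list of steps that an agent executes and reports on; all names are invented for the example:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class InstallStep:
    name: str
    run: Callable[[], bool]  # returns True on success


@dataclass
class InstallReport:
    results: List[Tuple[str, bool]] = field(default_factory=list)  # (step name, succeeded)

    @property
    def succeeded(self) -> bool:
        return all(ok for _, ok in self.results)


def execute_workflow(steps: List[InstallStep]) -> InstallReport:
    """Run each step in order, stop at the first failure, and report the outcome."""
    report = InstallReport()
    for step in steps:
        ok = step.run()
        report.results.append((step.name, ok))
        if not ok:
            break
    return report


# Hypothetical usage: a real platform would dispatch such workflows to remote agents
workflow = [
    InstallStep("download package", lambda: True),
    InstallStep("install service", lambda: True),
    InstallStep("verify installation", lambda: True),
]
print(execute_workflow(workflow).succeeded)  # True
```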

    Artificial Intelligence in Oral Health

    This Special Issue is intended to lay the foundation for AI applications focusing on oral health, including general dentistry, periodontology, implantology, oral surgery, oral radiology, orthodontics, and prosthodontics, among others.

    A software architecture for electro-mobility services: a milestone for sustainable remote vehicle capabilities

    To face the tough competition, changing markets, and evolving technologies in the automotive industry, automakers have to be highly innovative. In previous decades, innovations were electronics- and IT-driven, which exponentially increased the complexity of the vehicle’s internal network. Furthermore, the growing expectations and preferences of customers oblige these manufacturers to adapt their business models and to also propose mobility-based services. On the other hand, there is also increasing pressure from regulators to significantly reduce the environmental footprint of transportation and mobility, down to zero in the foreseeable future. This dissertation investigates an architecture for communication and data exchange within a complex and heterogeneous ecosystem. This communication takes place between various third-party entities on one side, and between these entities and the infrastructure on the other. The proposed solution considerably reduces the complexity of vehicle communication and of the interactions among the parties involved in the ODX life cycle. In such a heterogeneous environment, particular attention is paid to the protection of confidential and private data. Confidential data here refers to the OEM’s know-how enclosed in vehicle projects, while the data delivered by a car during a vehicle communication session might contain private customer data. Our solution ensures that every entity of this ecosystem has access only to the data it is entitled to. We designed our solution to avoid coupling to any particular technology, so that it can be implemented on any platform and benefit from the environment best suited to each task. We also proposed a data model for vehicle projects that improves query time during a vehicle diagnostic session. Scalability and backwards compatibility were also taken into account during the design phase. We proposed the necessary algorithms and the workflow to perform efficient vehicle diagnostics with considerably lower latency and substantially better time and space complexity than current solutions. To prove the practicality of our design, we presented a prototypical implementation and analyzed the results of a series of tests performed on several vehicle models and projects. We also evaluated the prototype against quality attributes in software engineering.
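    As an illustration of the data-protection idea mentioned above (a minimal hypothetical sketch, not the dissertation's actual mechanism), each data item of a vehicle project can be tagged with the parties allowed to read it, and a query returns only the items the requesting party is entitled to; all names and items are invented:

```python
# Hypothetical per-item read permissions for vehicle-project data.
vehicle_project = [
    {"item": "diagnostic trouble codes", "readable_by": {"oem", "workshop"}},
    {"item": "ECU calibration data", "readable_by": {"oem"}},
    {"item": "customer trip history", "readable_by": {"customer"}},
]


def visible_items(project, party):
    """Return only the items the requesting party is allowed to read."""
    return [entry["item"] for entry in project if party in entry["readable_by"]]


print(visible_items(vehicle_project, "workshop"))  # ['diagnostic trouble codes']
```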

    Visualization challenges in distributed heterogeneous computing environments

    Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more details and higher resolutions in visualizations. For some years now, the prevailing trend for these large systems has been the utilization of additional processors, like graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, like higher performance or increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction, but existing approaches often entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users are becoming more interested in resilience besides traditional aspects, like performance and usability. While fault tolerance is well researched in general, it is mostly dismissed in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis. Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and visualization in heterogeneous computing environments. The first approach hides details of different processing units and allows using them in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing and for simplifying order-independent transparency in distributed visualization. Traditional methods for fault tolerance in high-performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy. Example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and tuning of volume visualization are evaluated. Challenges in dense graphs, such as visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach for performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis takes a broader look at the issues of distributed visualization on large displays and heterogeneous computing environments for the first time.
    While the presented approaches each solve individual challenges and are successfully employed in this context, together they form a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.
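    The per-pixel linked list technique mentioned above is normally built on the GPU during rasterization; the sketch below, a plain-Python illustration under that assumption, only shows the compositing step: the fragments collected for one pixel are sorted by depth and blended front to back:

```python
def composite_pixel(fragments, background=(0.0, 0.0, 0.0)):
    """Blend one pixel's fragment list front to back.

    fragments: list of (depth, (r, g, b), alpha) tuples; insertion order is
    arbitrary, as it would be in a per-pixel linked list filled during rendering.
    """
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light still passing through the fragments seen so far
    for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0]):  # near to far
        for i in range(3):
            color[i] += transmittance * alpha * rgb[i]
        transmittance *= 1.0 - alpha
    # whatever transmittance remains lets the background show through
    return tuple(color[i] + transmittance * background[i] for i in range(3))


# Two semi-transparent fragments over a black background; the nearer red one dominates
print(composite_pixel([(2.0, (0.0, 0.0, 1.0), 0.5), (1.0, (1.0, 0.0, 0.0), 0.5)]))
# -> (0.5, 0.0, 0.25)
```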