14 research outputs found

    Improvement and Evaluation of a Function for Tracing the Diffusion of Classified Information on KVM

    The increasing amount of classified information managed on personal computers has made the leakage of such information to external computers a major problem. To prevent such leakage, we previously proposed a function for tracing the diffusion of classified information in a guest operating system (OS) using a virtual machine monitor (VMM). The tracing function hooks system calls in the guest OS from the VMM and acquires the related information. By analyzing this information on the VMM side, the tracing function makes it possible to notify the user of the diffusion of classified information. However, this function has a problem in that the administrator of the computer platform cannot grasp how the diffusion of classified information progresses across processes and files. In this paper, we present a solution to this problem and report on its evaluation.
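    The abstract describes the mechanism only at a high level. As a rough illustration of the kind of bookkeeping such a tracing function performs on the VMM side, the following Python sketch propagates "taint" from classified files to the processes and files touched by hooked guest system calls. The class and field names (SyscallEvent, DiffusionTracer) are hypothetical and not taken from the paper.

        # Hypothetical sketch of VMM-side bookkeeping for tracing the diffusion of
        # classified information. Event names and fields are illustrative only; the
        # actual function hooks guest syscalls from KVM, which is not shown here.
        from dataclasses import dataclass

        @dataclass
        class SyscallEvent:
            name: str          # e.g. "read", "write", "fork"
            pid: int           # guest process issuing the syscall
            path: str = ""     # file involved, if any
            child_pid: int = 0 # child process, for "fork"

        class DiffusionTracer:
            def __init__(self, classified_files):
                self.tainted_files = set(classified_files)  # files holding classified data
                self.tainted_pids = set()                   # processes that read such data

            def on_syscall(self, ev: SyscallEvent):
                """Propagate taint according to the observed guest syscall."""
                if ev.name == "read" and ev.path in self.tainted_files:
                    self.tainted_pids.add(ev.pid)           # process now holds classified data
                elif ev.name == "write" and ev.pid in self.tainted_pids:
                    self.tainted_files.add(ev.path)         # data may have diffused to this file
                    self.notify(f"classified data written to {ev.path} by pid {ev.pid}")
                elif ev.name == "fork" and ev.pid in self.tainted_pids:
                    self.tainted_pids.add(ev.child_pid)     # child inherits the parent's taint

            def notify(self, msg: str):
                print("[diffusion]", msg)                   # stand-in for user notification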

    Visualization challenges in distributed heterogeneous computing environments

    Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life, such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more detail and higher resolution in visualizations. For some years now, the prevailing trend for these large systems has been the utilization of additional processors, such as graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, such as higher performance or increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction, but existing approaches frequently entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users are becoming more interested in resilience in addition to traditional aspects such as performance and usability. While fault tolerance is well researched in general, it is largely neglected in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis. Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and for visualization in heterogeneous computing environments. The first approach hides the details of the different processing units and allows using them in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing and for simplifying order-independent transparency in distributed visualization. Traditional methods for fault tolerance in high-performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy. Example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and tuning of volume visualization are evaluated. Challenges in dense graphs, such as visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach for performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis takes a broader look at the issues of distributed visualization on large displays and in heterogeneous computing environments for the first time.
While the presented approaches each solve individual challenges and have been successfully employed in this context, together they form a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.
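    Among the techniques named in the abstract, per-pixel linked lists for order-independent transparency are the most self-contained. The following Python sketch illustrates the general idea only and is not the thesis's implementation: fragments are appended to a per-pixel list in arrival order and are sorted and blended in a separate resolve pass. Function names are illustrative; a real GPU version would use per-pixel head pointers and an atomic fragment buffer in shader code.

        # Minimal CPU-side sketch of per-pixel linked-list compositing for
        # order-independent transparency. Structure and resolve pass are illustrative.

        def make_buffers(width, height):
            heads = [[-1] * width for _ in range(height)]   # head index per pixel (-1 = empty)
            frags = []                                      # global fragment pool: (depth, rgba, next)
            return heads, frags

        def insert_fragment(heads, frags, x, y, depth, rgba):
            """Append a fragment to the pixel's list in arrival order (no sorting yet)."""
            frags.append((depth, rgba, heads[y][x]))
            heads[y][x] = len(frags) - 1

        def resolve_pixel(heads, frags, x, y, background=(0.0, 0.0, 0.0)):
            """Walk the pixel's list, sort by depth, and blend back-to-front."""
            chain, idx = [], heads[y][x]
            while idx != -1:
                depth, rgba, nxt = frags[idx]
                chain.append((depth, rgba))
                idx = nxt
            chain.sort(key=lambda f: f[0], reverse=True)    # farthest fragment first
            color = list(background)
            for _, (r, g, b, a) in chain:
                color = [a * c + (1.0 - a) * dst for c, dst in zip((r, g, b), color)]
            return tuple(color)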

    Design of Function for Tracing Diffusion of Classified Information for IPC on KVM

    Information leakage has increased in recent years. To address this problem, we previously proposed a function for tracing the diffusion of classified information in a guest OS using a virtual machine monitor (VMM). This function makes it possible to grasp the location of classified information and to detect information leakage without modifying the source code of the guest OS. The diffusion of classified information is caused by file operations, child process creation, and inter-process communication (IPC). In a previous study, we implemented the proposed function for file operations and child process creation, excluding IPC, using a kernel-based virtual machine (KVM). In this paper, we describe the design of the proposed function for IPC on a KVM without modifying the guest OS. The proposed function traces local and remote IPC inside the guest OS from the outside in order to follow the information diffusion. Because IPC with an outside computer might cause information leakage, tracing IPC enables the detection of such leakage. We also report evaluation results, including the traceability and performance of the proposed function.
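    As a rough illustration of how IPC tracing extends the diffusion-tracking idea described above, the following Python sketch propagates taint across local IPC endpoints and flags remote IPC from a tainted process as potential leakage. The class and method names (IPCTracer, on_local_ipc, on_remote_ipc) and the example values are hypothetical and not part of the paper's design.

        # Hypothetical sketch: local IPC propagates taint between guest processes,
        # while remote IPC from a tainted process is flagged as potential leakage.

        class IPCTracer:
            def __init__(self, tainted_pids):
                self.tainted_pids = set(tainted_pids)   # processes holding classified data
                self.alerts = []

            def on_local_ipc(self, sender_pid, receiver_pid):
                """Pipe / UNIX-domain socket / message queue inside the guest."""
                if sender_pid in self.tainted_pids:
                    self.tainted_pids.add(receiver_pid)  # classified data diffused to receiver

            def on_remote_ipc(self, sender_pid, remote_addr):
                """Socket communication with a host outside the guest."""
                if sender_pid in self.tainted_pids:
                    self.alerts.append(
                        f"possible leakage: pid {sender_pid} sent data to {remote_addr}"
                    )

        # Example: pid 10 is tainted, sends to pid 20 locally, then pid 20 contacts a remote host.
        tracer = IPCTracer({10})
        tracer.on_local_ipc(10, 20)
        tracer.on_remote_ipc(20, "198.51.100.7:443")
        print(tracer.alerts)   # -> ["possible leakage: pid 20 sent data to 198.51.100.7:443"]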

    Textbook on Scar Management

    This textbook is open access under a CC BY 4.0 license. Written by a group of international experts in the field and the result of over ten years of collaboration, it allows students and readers to gain a detailed understanding of scar and wound treatment, a topic still dispersed among various disciplines. The content is divided into three parts for easy reference. The first part focuses on the fundamentals of scar management, including assessment and evaluation procedures, classification, tools for accurate measurement of all scar-related elements (volume density, color, vascularization), and descriptions of the different evaluation scales. It also features chapters on best practices in electronic-file storage for clinical reevaluation and telemedicine procedures for safe remote evaluation. The second part offers a comprehensive review of treatment and evidence-based technologies, presenting a consensus of the various available guidelines (silicone, surgery, chemical injections, mechanical tools for scar stabilization, lasers). The third part evaluates the full range of emerging technologies offered to physicians as alternative or complementary solutions for wound healing (mechanical, chemical, anti-proliferation). Textbook on Scar Management will appeal to trainees, fellows, residents, and physicians dealing with scar management in plastic surgery, dermatology, surgery, and oncology, as well as to nurses and general practitioners. It offers a comprehensive reference covering the complete field of wound and scar management (semiology, classifications, and scoring); highly educational content for trainees as well as professionals in plastic surgery, dermatology, surgery, and oncology, as well as nurses and general practitioners; fast access to information through key points, take-home messages, highlights, and a wealth of clinical cases; and didactic content enhanced by supplementary material and video.

    Anales del XIII Congreso Argentino de Ciencias de la Computación (CACIC)

    Contents: computer architectures; embedded systems; service-oriented architectures (SOA); communication networks; heterogeneous networks; advanced networks; wireless networks; mobile networks; active networks; administration and monitoring of networks and services; Quality of Service (QoS, SLAs); computer security, authentication, and privacy; infrastructure for digital signatures and digital certificates; vulnerability analysis and detection; operating systems; P2P systems; middleware; grid infrastructure; integration services (Web Services or .NET). Red de Universidades con Carreras en Informática (RedUNCI).