
    Automated gateware discovery using open firmware

    This dissertation describes the design and implementation of a mechanism that automates gateware device detection for reconfigurable hardware. The research facilitates the process of identifying and operating on gateware images by extending the existing infrastructure for probing devices in traditional software using the chosen technology.
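
    The abstract gives no implementation details, but the core idea it names, extending a software-style device-probing mechanism to gateware cores, can be illustrated with a minimal Python sketch. All register offsets, core IDs, and driver names below are hypothetical assumptions, not taken from the dissertation.

```python
# Hypothetical sketch of a probe-and-match loop in the spirit of automated
# gateware discovery; IDs, offsets, and driver names are illustrative only.

# A fabricated "device table" mapping gateware core IDs to handler drivers.
DRIVER_TABLE = {
    0x1A2B: "uart_lite",
    0x3C4D: "spi_master",
}

def read_core_id(base_address, memory):
    """Read the core's identification register (assumed to sit at offset 0)."""
    return memory.get(base_address, 0)

def probe(memory, base_addresses):
    """Walk candidate base addresses and bind a driver to every recognised core."""
    bound = {}
    for base in base_addresses:
        core_id = read_core_id(base, memory)
        if core_id in DRIVER_TABLE:
            bound[hex(base)] = DRIVER_TABLE[core_id]
    return bound

if __name__ == "__main__":
    simulated_bus = {0x4000_0000: 0x1A2B, 0x4001_0000: 0x3C4D, 0x4002_0000: 0xFFFF}
    print(probe(simulated_bus, simulated_bus.keys()))
```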

    SDN based security solutions for multi-tenancy NFV

    The Internet continues to expand drastically as a result of the explosion of mobile devices, content, server virtualization, and the advancement of cloud services. This growth has significantly changed traffic patterns within enterprise data centres. Therefore, advanced technologies are needed to improve traditional network deployments and enable them to handle the changing traffic patterns. Software-defined networking (SDN) and network function virtualisation (NFV) are innovative technologies that enable network flexibility, increase network and service agility, and support service-driven virtual networks using the concepts of virtualisation and softwarisation. Together, these two concepts enable a cloud operator to offer network-as-a-service (NaaS) to multiple tenants in a data-centre deployment. Despite the benefits they bring, these technologies also introduce security challenges that need to be addressed and managed to ensure successful deployment and encourage faster adoption in industry. This dissertation proposes a security solution based on tenant isolation, network access control (NAC) and network reconfiguration that can be implemented in a multi-tenant NFV deployment to guarantee the privacy and security of tenant functions. The evaluation of the proof-of-concept framework shows that the SDN-based tenant isolation solution provides a high level of isolation in a multi-tenant NFV cloud. It also shows that the proposed network reconfiguration greatly reduces the chances of an attacker correctly identifying the location and IP addresses of tenant functions within the cloud environment. Because of resource limitations, the proposed NAC solution was not evaluated; assessing its efficiency for multi-tenant NFV is left as future work.
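
    The abstract does not describe the mechanisms in detail; as a rough illustration of the two ideas it names, tenant isolation through per-tenant forwarding rules and network reconfiguration that invalidates an attacker's knowledge of tenant addresses, here is a minimal Python sketch. Tenant names, subnets, and the rule format are assumptions, not the dissertation's actual design.

```python
# Hypothetical sketch of tenant isolation rules and periodic address remapping.
import ipaddress
import random

TENANTS = {
    "tenant-a": ipaddress.ip_network("10.0.1.0/24"),
    "tenant-b": ipaddress.ip_network("10.0.2.0/24"),
}

def isolation_rules(tenants):
    """Allow traffic only within a tenant's own subnet; drop cross-tenant flows."""
    rules = []
    for name, net in tenants.items():
        rules.append({"tenant": name, "match": {"src": str(net), "dst": str(net)}, "action": "forward"})
    rules.append({"match": {"src": "any", "dst": "any"}, "action": "drop"})  # default deny
    return rules

def remap_addresses(tenants, seed=None):
    """Periodically shuffle tenant subnets so previously observed addresses go stale."""
    rng = random.Random(seed)
    pool = [ipaddress.ip_network(f"10.{rng.randint(1, 254)}.{i}.0/24")
            for i, _ in enumerate(tenants, 1)]
    rng.shuffle(pool)
    return dict(zip(tenants, pool))

if __name__ == "__main__":
    for rule in isolation_rules(TENANTS):
        print(rule)
    print(remap_addresses(TENANTS, seed=42))
```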

    Modern software metrics: design and implementation

    As software becomes more complex, new measurement methods are needed to improve quality, enhance the user experience and reduce energy consumption. This dissertation introduces the Inspector, a software tool for framework-level dynamic software analysis, and Thermal Painting, a new software metric for measuring the performance of the graphical subsystem of a program. The Inspector removes many of the constraints that affected traditional tools such as debuggers and function-level profilers, like the need to alter the source or binary code and the impossibility of profiling already running code that exhibits bad behaviour, and provides a unique work environment for conducting the tests. Thermal Painting is a new software metric that measures the per-pixel energy required to paint a graphical user interface, allowing developers to profile and improve the graphical performance of a program.
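
    The abstract defines the metric only as "per-pixel energy required to paint a graphical user interface"; a rough way to picture that is a per-pixel cost map that accumulates the estimated cost of every repaint. The following Python sketch is an illustration of that idea under assumed names and a made-up cost model, not the tool's actual implementation.

```python
# Hypothetical per-pixel paint-cost map in the spirit of the Thermal Painting metric.
from collections import defaultdict

class PaintCostMap:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.cost = defaultdict(float)  # (x, y) -> accumulated paint cost

    def record_paint(self, x, y, w, h, cost_per_pixel=1.0):
        """Accumulate the estimated cost of repainting a rectangular region."""
        for px in range(x, min(x + w, self.width)):
            for py in range(y, min(y + h, self.height)):
                self.cost[(px, py)] += cost_per_pixel

    def hottest(self, n=3):
        """Pixels repainted most expensively: candidates for optimisation."""
        return sorted(self.cost.items(), key=lambda kv: kv[1], reverse=True)[:n]

if __name__ == "__main__":
    m = PaintCostMap(640, 480)
    m.record_paint(0, 0, 640, 480)        # full-window repaint
    m.record_paint(10, 10, 50, 20, 2.5)   # frequently redrawn, costly widget
    print(m.hottest())
```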

    Clinical genomic research management

    Technological advancement in genomics has propelled research into a new era, in which methods of conducting experiments have been completely reinvented. Riding the wave of information technology and equipped with statistical tools, genomics provides a perspective that was unthinkable in the past. With the completion of the Human Genome Project, we have a common reference for analysis at the level of the complete genome. High-throughput technologies for gene expression, genotyping and sequencing are driving present research, and attempts are now being made to incorporate these methods into health care in a structured way. Clinicians value genomics for assessing disease predisposition and for realizing personalized medical care. As genome sequencing becomes faster and cheaper, public genomic data has grown many-fold, and data from other high-throughput technologies and annotations further increase the storage requirements. Laboratory information management software (LIMS) is becoming the limiting factor as automation and integration increase. Genomics thus faces the challenge of managing this enormous amount of data for varied needs, not only in research laboratories but also in health care institutions and for individual clinicians. Further, there is a growing need for the analysis and visualization of the generated data to be integrated into the same platform for a continuous research experience and systematic supervision. Data security is of prime concern, especially in health care involving human subjects. The interests of clinicians add another management requirement: a delivery system for the subject concerned. Hypertension is a complex disorder with worldwide prevalence. The HYPERGENES project was centred on integrating biological data and processes with hypertension as the disease model, focusing on the definition of a comprehensive genetic epidemiological model of complex traits such as Essential Hypertension (EH) and intermediate phenotypes of hypertension such as Target Organ Damage (TOD). During the project, the challenges mentioned above were examined and evaluated, leading to the present work: an endeavour to provide a generalized, integrated solution for the management of genomic and clinical data in clinical genomic research.
    This PhD thesis describes AD2BioDB, a biological data management platform, and SeqPipe, a dynamic pipeline management tool, developed to meet the challenges posed in clinical genomics. AD2BioDB provides a platform where data generated using different technologies can be managed and analyzed, with reporting and visualization modules for improved understanding of the results among all research collaborators. It is the management environment in which in-silico data can be shared and analyzed; analysis software is connected to AD2BioDB through a plug-in system. SeqPipe allows pipeline workflows for multi-step data analysis to be created dynamically, and its interactive graphical user interface enables code-free pipeline creation and analysis. The tool is especially useful in NGS analysis, where multiple tools with different versions are in use. SeqPipe can be used as standalone software or as a plug-in analysis tool within an application such as AD2BioDB.
    The key features of AD2BioDB can be summarized as:
    - Clinical genomics data management
    - Project management
    - Data security
    - Dynamic creation of graphical representations
    - Distributed workflow analysis
    - Reporting and alert features
    - Dynamic integration of high-throughput technologies
    We developed AD2BioDB as a prototype in our laboratory to support the growing volume of genomic data and the increasing complexity of analysis. The software aims to provide a continuous research experience through a versatile platform that supports data management, analysis and public knowledge integration. Through the integration of SeqPipe into AD2BioDB, the management system becomes robust in providing a distributed analysis environment.
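
    The abstract describes SeqPipe as assembling multi-step, versioned analysis pipelines dynamically. As a rough illustration of that idea, the Python sketch below defines a pipeline as data and runs it step by step; the step names, tool versions, and interface are assumptions and not the actual SeqPipe API.

```python
# Hypothetical sketch of a dynamically assembled analysis pipeline in the spirit
# of the SeqPipe description above; all names and versions are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    tool_version: str
    run: Callable[[str], str]   # takes the previous step's output label

def run_pipeline(steps: List[Step], initial_input: str) -> str:
    data = initial_input
    for step in steps:
        print(f"running {step.name} ({step.tool_version}) on {data}")
        data = step.run(data)
    return data

if __name__ == "__main__":
    pipeline = [
        Step("align", "aligner-1.2", lambda x: x + ".bam"),
        Step("call-variants", "caller-3.0", lambda x: x + ".vcf"),
        Step("annotate", "annotator-0.9", lambda x: x + ".annotated"),
    ]
    print("final output:", run_pipeline(pipeline, "sample.fastq"))
```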

    Just-in-time Analytics Over Heterogeneous Data and Hardware

    Industry and academia are continuously becoming more data-driven and data-intensive, relying on the analysis of a wide variety of datasets to gain insights. At the same time, data variety increases continuously across multiple axes. First, data comes in multiple formats, such as the binary tabular data of a DBMS, raw textual files, and domain-specific formats. Second, different datasets follow different data models, such as the relational and the hierarchical one. Data location also varies: Some datasets reside in a central "data lake", whereas others lie in remote data sources. In addition, users execute widely different analysis tasks over all these data types. Finally, the process of gathering and integrating diverse datasets introduces several inconsistencies and redundancies in the data, such as duplicate entries for the same real-world concept. In summary, heterogeneity significantly affects the way data analysis is performed. In this thesis, we aim for data virtualization: Abstracting data out of its original form and manipulating it regardless of the way it is stored or structured, without a performance penalty. To achieve data virtualization, we design and implement systems that i) mask heterogeneity through the use of heterogeneity-aware, high-level building blocks and ii) offer fast responses through on-demand adaptation techniques. Regarding the high-level building blocks, we use a query language and algebra to handle multiple collection types, such as relations and hierarchies, express transformations between these collection types, as well as express complex data cleaning tasks over them. In addition, we design a location-aware compiler and optimizer that masks away the complexity of accessing multiple remote data sources. Regarding on-demand adaptation, we present a design to produce a new system per query. The design uses customization mechanisms that trigger runtime code generation to mimic the system most appropriate to answer a query fast: Query operators are thus created based on the query workload and the underlying data models; the data access layer is created based on the underlying data formats. In addition, we exploit emerging hardware by customizing the system implementation based on the available heterogeneous processors (CPUs and GPGPUs). We thus pair each workload with its ideal processor type. The end result is a just-in-time database system that is specific to the query, data, workload, and hardware instance. This thesis redesigns the data management stack to natively cater for data heterogeneity and exploit hardware heterogeneity. Instead of centralizing all relevant datasets, converting them to a single representation, and loading them in a monolithic, static, suboptimal system, our design embraces heterogeneity. Overall, our design decouples the type of performed analysis from the original data layout; users can perform their analysis across data stores, data models, and data formats, but at the same time experience the performance offered by a custom system that has been built on demand to serve their specific use case.
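
    The central mechanism named here, runtime code generation that specialises operators and the data access layer to each query and format, can be pictured with a small Python sketch: a scan function is generated at query time for a specific file format and filter. This is an illustration of the general technique, not the thesis system itself; the template and names are assumptions.

```python
# Hypothetical illustration of per-query code generation: a scan specialised at
# runtime to the data format (CSV here) and to the query's filter column/constant.
import csv
import io

TEMPLATE = """
def scan(rows):
    out = []
    for row in rows:
        if float(row[{col}]) {op} {const}:
            out.append(row)
    return out
"""

def generate_scan(col: int, op: str, const: float):
    namespace = {}
    exec(TEMPLATE.format(col=col, op=op, const=const), namespace)
    return namespace["scan"]

if __name__ == "__main__":
    raw = io.StringIO("id,price\n1,9.5\n2,20.0\n3,14.2\n")
    rows = list(csv.reader(raw))[1:]          # skip header
    scan_price_gt_10 = generate_scan(col=1, op=">", const=10.0)
    print(scan_price_gt_10(rows))             # rows with price > 10
```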

    Network-Integrated Multimedia Middleware, Services, and Applications

    Today, there is a strong trend towards networked multimedia devices. However, common multimedia software architectures restrict all processing to a single system. Available software infrastructures for distributed computing, commonly referred to as middleware, only partly provide the facilities needed to support multimedia in distributed and dynamic environments. Approaches from the research community focus only on specific aspects and do not achieve the coverage needed for a full-featured multimedia middleware solution. The Network-Integrated Multimedia Middleware (NMM) presented in this thesis treats the network as an integral part. Despite the inherent heterogeneity of present networking and device technologies, the architecture allows control and cooperation to be extended across the network and enables the development of distributed multimedia applications that transparently use local and remote components in combination. The base architecture of this middleware is augmented by several middleware services that aim in particular at providing additional support for developing complex applications involving mobile users and devices. To this end, previously unavailable services and corresponding abstractions are proposed, realized, and evaluated. The performance and applicability of the developed middleware and its additional services are demonstrated by describing several realized application scenarios.
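
    The abstract's key architectural claim is that local and remote processing components are used uniformly within one application. A minimal Python sketch of that idea follows; the node interface and proxy are illustrative assumptions and do not reflect NMM's actual API.

```python
# Hypothetical sketch of a media flow graph whose nodes may run locally or stand
# in for remote components behind a proxy; interfaces are illustrative only.
class Node:
    def process(self, buffer: bytes) -> bytes:
        raise NotImplementedError

class LocalDecoder(Node):
    def process(self, buffer: bytes) -> bytes:
        return buffer.upper()            # stand-in for real decoding work

class RemoteProxy(Node):
    """Forwards buffers to a remote node; here simulated in-process."""
    def __init__(self, host: str, backend: Node):
        self.host, self.backend = host, backend
    def process(self, buffer: bytes) -> bytes:
        # A real proxy would serialise the buffer and send it to self.host.
        return self.backend.process(buffer)

def run_graph(nodes, buffer: bytes) -> bytes:
    for node in nodes:                   # the application treats all nodes alike
        buffer = node.process(buffer)
    return buffer

if __name__ == "__main__":
    graph = [LocalDecoder(), RemoteProxy("media-server.local", LocalDecoder())]
    print(run_graph(graph, b"frame"))
```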

    A framework for flexible integration in robotics and its applications for calibration and error compensation

    Robotics has been considered a viable automation solution for the aerospace industry to address manufacturing costs. Many existing robot systems augmented with guidance from a large-volume metrology system have proved able to meet the high dimensional-accuracy requirements of aero-structure assembly. However, they have mainly been deployed as costly, dedicated systems, which is not ideal for aerospace manufacturing with its low production rates and long cycle times. The work described in this thesis provides technical solutions to improve the flexibility and cost-efficiency of such metrology-integrated robot systems. To address flexibility, a software framework that supports reconfigurable system integration is developed. The framework provides a design methodology for composing distributed software components which can be integrated dynamically at runtime. This allows the automation devices (robots, metrology, actuators, etc.) controlled by these software components to be assembled on demand for various assembly applications. To reduce the cost of deployment, this thesis proposes a two-stage error compensation scheme for industrial robots that requires only intermittent metrology input, allowing one expensive metrology system to be shared among a number of robots. Robot calibration is employed in the first stage to remove the majority of robot inaccuracy; the metrology then corrects the residual errors. In this work, a new calibration model for serial robots with a parallelogram linkage is developed that takes into account both geometric errors and joint deflections induced by link masses and the weight of the end-effector. Experiments are conducted to evaluate the two pieces of work presented above. The proposed framework is used to create a distributed control system that implements calibration and error compensation for a large industrial robot with a parallelogram linkage. The control system is formed by hot-plugging the control applications of the robot and the metrology system. Experimental results show that the developed error model improved the positional accuracy of the loaded robot from several millimetres to less than one millimetre and halved the time previously required to correct the errors using the metrology alone. The experiments also demonstrate the capability of sharing one metrology system among multiple robots.
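
    The two-stage scheme, a calibration model removing most of the systematic error followed by an intermittent metrology measurement correcting the residual, can be illustrated numerically. The sketch below uses made-up error magnitudes and a single axis; it is not the thesis's calibration model, only a picture of the compensation flow.

```python
# Hypothetical numeric sketch of two-stage error compensation; error values and
# the trivial error model are illustrative assumptions.

def calibrated_correction(nominal_target):
    """Stage 1: model-based correction (geometric errors + joint deflection)."""
    modelled_error = 2.5  # mm, assumed systematic error predicted by the model
    return nominal_target - modelled_error

def metrology_correction(commanded, measured, target):
    """Stage 2: correct the residual using an external metrology measurement."""
    residual = measured - target
    return commanded - residual

if __name__ == "__main__":
    target = 100.0                       # mm, desired position along one axis
    cmd = calibrated_correction(target)  # stage 1
    measured = cmd + 2.5 + 0.3           # robot re-applies its systematic + residual error
    cmd = metrology_correction(cmd, measured, target)
    print(f"final commanded position: {cmd:.2f} mm")
```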

    A framework for adaptive monitoring and performance management of component-based enterprise applications

    Most large-scale enterprise applications are currently built using component-based middleware platforms such as J2EE or .NET. Developers leverage the enterprise services provided by such platforms to speed up development and increase the robustness of their applications. In addition, a component-oriented development model brings benefits such as increased reusability and flexibility in integrating with third-party systems. In order to provide the required services, the application servers implementing the corresponding middleware specifications employ a complex run-time infrastructure that integrates with developer-written business logic. The resulting complexity of the execution environment makes it difficult for architects and developers to fully understand the implications of alternative design options on the performance of the running system. They often make incorrect assumptions about the behaviour of the middleware, which may lead to design decisions that cause severe performance problems after the system has been deployed. This situation is aggravated by the fact that, although application servers vary greatly in performance and capabilities, many advertise a similar set of features, making it difficult to choose the one most appropriate for the task. This thesis presents a methodology and a tool for approaching performance management in enterprise component-based systems. By leveraging the component platform infrastructure, the described solution can non-intrusively instrument running applications and extract performance statistics. The use of component metadata for target analysis, together with standards-based implementation strategies, ensures the complete portability of the instrumentation solution across different application servers. Based on this instrumentation infrastructure, a complete performance management framework including modelling and performance prediction is proposed. Most instrumentation solutions exhibit static behaviour by targeting a specified set of components. For long-running applications, a constant overhead profile is undesirable; typically, such a solution would only be used for the duration of a performance audit, sacrificing the benefits of constantly observing a production system in favour of a reduced performance impact. This thesis addresses the issue with an adaptive approach to monitoring which uses execution models to target profiling operations dynamically at components that exhibit performance degradation; this ensures a negligible overhead when the target application performs as expected and a minimal impact when certain components under-perform. Experimental results obtained with the prototype tool demonstrate the feasibility of the approach in terms of induced overhead. The portable and extensible architecture yields a versatile and adaptive instrumentation facility for a variety of potential applications that need a flexible solution for monitoring long-running enterprise applications.
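
    The adaptive idea described here, lightweight monitoring everywhere with detailed profiling switched on only for components that degrade, can be sketched in a few lines. The component names, baselines, and threshold below are assumptions for illustration, not the framework's actual policy.

```python
# Hypothetical sketch of adaptive monitoring: detailed profiling is enabled only
# when a component's recent latency exceeds its expected baseline.
from collections import defaultdict

BASELINE_MS = {"OrderService": 20.0, "InventoryService": 35.0}
DEGRADATION_FACTOR = 1.5                  # profile when 50% slower than baseline

profiled = set()
samples = defaultdict(list)

def record_call(component: str, duration_ms: float):
    samples[component].append(duration_ms)
    recent = samples[component][-10:]     # small sliding window
    avg = sum(recent) / len(recent)
    if avg > BASELINE_MS[component] * DEGRADATION_FACTOR:
        profiled.add(component)           # switch to detailed profiling
    else:
        profiled.discard(component)       # drop back to lightweight monitoring

if __name__ == "__main__":
    for d in (21, 22, 55, 60, 58, 61):    # simulated latencies for one component
        record_call("OrderService", d)
    print("components under detailed profiling:", profiled)
```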

    Adaptive Query Processing on RAW Data

    Database systems deliver impressive performance for large classes of workloads as the result of decades of research into optimizing database engines. High performance, however, is achieved at the cost of versatility. In particular, database systems only operate efficiently over loaded data, i.e., data converted from its original raw format into the system’s internal data format. At the same time, data volume continues to increase exponentially and data varies increasingly, with an escalating number of new formats. The consequence is a growing impedance mismatch between the original structures holding the data in the raw files and the structures used by query engines for efficient processing. In an ideal scenario, the query engine would seamlessly adapt itself to the data and ensure efficient query processing regardless of the input data formats, optimizing itself to each instance of a file and of a query by leveraging information available at query time. Today’s systems, however, force data to adapt to the query engine during data loading. This paper proposes adapting the query engine to the formats of raw data. It presents RAW, a prototype query engine which enables querying heterogeneous data sources transparently. RAW employs Just-In-Time access paths, which efficiently couple heterogeneous raw files to the query engine and reduce the overhead of traditional general-purpose scan operators. There are, however, inherent overheads in accessing raw data directly that cannot be eliminated, such as converting the raw values. Therefore, RAW also uses column shreds, ensuring that these costs are paid only for the subsets of raw data strictly needed by a query. We use RAW in a real-world scenario and achieve a two-orders-of-magnitude speedup over the existing hand-written solution.
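
    The column-shreds idea, paying raw-value conversion cost only for the columns a query actually needs, can be pictured with a small sketch over a raw CSV file. This is an illustration of the general technique under assumed names, not RAW's implementation.

```python
# Hypothetical sketch of "column shreds": convert a raw column on first use only
# and cache the result; untouched columns stay in their raw textual form.
import csv
import io

RAW_CSV = "id,price,comment\n1,9.5,ok\n2,20.0,late\n3,14.2,ok\n"

class RawTable:
    def __init__(self, text):
        rows = list(csv.reader(io.StringIO(text)))
        self.header, self.rows = rows[0], rows[1:]
        self.shreds = {}                                  # column name -> converted values

    def column(self, name, convert=float):
        """Convert a single column lazily (a 'column shred')."""
        if name not in self.shreds:
            idx = self.header.index(name)
            self.shreds[name] = [convert(r[idx]) for r in self.rows]
        return self.shreds[name]

if __name__ == "__main__":
    t = RawTable(RAW_CSV)
    total = sum(p for p in t.column("price") if p > 10)   # only 'price' is converted
    print(total, list(t.shreds))                          # other columns remain raw
```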
