    Unified System on Chip RESTAPI Service (USOCRS)

    Abstract. This thesis investigates the development of a Unified System on Chip REST API Service (USOCRS) to enhance the efficiency and effectiveness of SoC verification reporting. The research aims to overcome the challenges associated with the transfer, utilization, and interpretation of SoC verification reports by creating a unified platform that integrates various tools and technologies. The research methodology follows a design science approach. A thorough literature review was conducted to explore existing approaches and technologies related to SoC verification reporting, automation, data visualization, and API development. The review revealed gaps in the current state of the field, providing a basis for further investigation. Using the insights gained from the literature review, a system design and implementation plan were developed. The plan employs technologies such as FastAPI, SQL and NoSQL databases, Azure Active Directory for authentication, and cloud services. The Verification Toolbox was employed to validate SoC reports against the organization's standards. The system underwent manual testing, and user satisfaction was evaluated to ensure its functionality and usability. The results of this study demonstrate the successful design and implementation of the USOCRS, offering SoC engineers a unified and secure platform for uploading, validating, storing, and retrieving verification reports. The USOCRS facilitates seamless communication between users and the API, granting easy access to vital information, including successes, failures, and test coverage, derived from submitted SoC verification reports. By automating and standardizing the SoC verification reporting process, the USOCRS eliminates manual and repetitive tasks usually done by developers, thereby enhancing productivity and establishing a robust and reliable framework for report storage and retrieval. Through the integration of diverse tools and technologies, the USOCRS presents a comprehensive solution that adheres to the SoC schema used within the organization. Furthermore, the USOCRS significantly improves the efficiency and effectiveness of SoC verification reporting: it streamlines the submission process, reduces latency through optimized data storage, and enables meaningful extraction and analysis of report data.
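
    As an illustration of the kind of service described above, here is a minimal sketch of an upload/retrieve endpoint pair built with FastAPI (the framework named in the thesis). The endpoint paths, report fields, in-memory storage, and validation rule are illustrative assumptions; the thesis does not publish its SoC schema, and a real deployment would sit behind Azure Active Directory and a SQL/NoSQL storage layer.

        # Hypothetical sketch of a USOCRS-style report service (paths, field
        # names, and the validation rule are illustrative, not from the thesis).
        from fastapi import FastAPI, HTTPException
        from pydantic import BaseModel

        app = FastAPI(title="USOCRS (sketch)")

        class VerificationReport(BaseModel):
            project: str
            block: str
            tests_passed: int
            tests_failed: int
            coverage_pct: float  # functional coverage, 0-100

        REPORTS: dict[int, VerificationReport] = {}  # stand-in for the SQL/NoSQL layer

        @app.post("/reports")
        def upload_report(report: VerificationReport) -> dict:
            # A real deployment would validate the report against the
            # organization's SoC schema (the thesis uses a Verification
            # Toolbox for this) and authenticate callers via Azure AD.
            if not 0.0 <= report.coverage_pct <= 100.0:
                raise HTTPException(status_code=422, detail="coverage out of range")
            report_id = len(REPORTS) + 1
            REPORTS[report_id] = report
            return {"id": report_id, "status": "stored"}

        @app.get("/reports/{report_id}")
        def get_report(report_id: int) -> VerificationReport:
            if report_id not in REPORTS:
                raise HTTPException(status_code=404, detail="unknown report")
            return REPORTS[report_id]

    Served under an ASGI server such as uvicorn, this gives SoC engineers the upload/validate/retrieve loop the abstract describes, reduced to its smallest testable form.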

    Automatic performance optimisation of component-based enterprise systems via redundancy

    Component technologies, such as J2EE and .NET, have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emergent performance of software systems assembled from distinct components. Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, there is no single component implementation or deployment configuration that can yield optimal performance in all possible conditions under which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task. This thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate use of multiple component variants with equivalent functional characteristics, each one optimised for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment, and automatically adapts the application to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information about the components' performance characteristics. System administrators use decision policies to state high-level performance goals and configure system management processes. A framework prototype has been implemented and tested for automatically managing a J2EE application. The results obtained demonstrate the framework's capability to manage a software system without human intervention, while the overheads induced during normal system execution and through management operations indicate the framework's feasibility.
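
    Stripped of the J2EE machinery, the adaptation idea can be sketched in a few lines: several functionally equivalent variants, each profiled for a different load regime, and a manager that routes calls to the variant whose profile is closest to the load that monitoring currently reports. The names, thresholds, and distance rule below are invented for illustration and are not the thesis framework.

        # Toy sketch of redundancy-based adaptation: the manager hot-swaps
        # between functionally equivalent variants based on observed load.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Variant:
            name: str
            impl: Callable[[list[int]], list[int]]
            best_load: float  # load (e.g. requests/s) this variant was tuned for

        class VariantManager:
            def __init__(self, variants: list[Variant]):
                self.variants = variants
                self.current = variants[0]

            def observe_load(self, load: float) -> None:
                # Clustering reduced to its essence: pick the variant whose
                # profiled regime is nearest to what monitoring reports.
                best = min(self.variants, key=lambda v: abs(v.best_load - load))
                if best is not self.current:
                    self.current = best  # swap the active implementation

            def call(self, data: list[int]) -> list[int]:
                return self.current.impl(data)

        # Usage: a variant tuned for light load vs. one tuned for heavy load.
        mgr = VariantManager([
            Variant("light_load", sorted, best_load=10.0),
            Variant("heavy_load", lambda xs: sorted(xs), best_load=1000.0),
        ])
        mgr.observe_load(850.0)  # monitoring reports high load
        print(mgr.current.name)  # -> "heavy_load"

    The real framework adds monitoring, anomaly detection, and administrator-supplied decision policies on top of this selection step; the sketch only shows why redundant variants make automatic adaptation possible.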

    BDEv 3.0: energy efficiency and microarchitectural characterization of Big Data processing frameworks

    This is a post-peer-review, pre-copyedit version of an article published in Future Generation Computer Systems. The final authenticated version is available online at: https://doi.org/10.1016/j.future.2018.04.030

    [Abstract] As the size of Big Data workloads keeps increasing, the evaluation of distributed frameworks becomes a crucial task in order to identify potential performance bottlenecks that may delay the processing of large datasets. While most existing works generally focus only on execution time and resource utilization, analyzing other important metrics is key to fully understanding the behavior of these frameworks. For example, microarchitecture-level events can bring meaningful insights to characterize the interaction between frameworks and hardware. Moreover, energy consumption is also gaining increasing attention as systems scale to thousands of cores. This work discusses the current state of the art in evaluating distributed processing frameworks, while extending our Big Data Evaluator tool (BDEv) to extract energy efficiency and microarchitecture-level metrics from the execution of representative Big Data workloads. An experimental evaluation using BDEv demonstrates its usefulness in extracting meaningful information from popular frameworks such as Hadoop, Spark and Flink.

    Funding: Ministerio de Economía, Industria y Competitividad (TIN2016-75845-P); Ministerio de Educación (FPU14/02805, FPU15/0338)
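
    The kind of measurement the paper adds to BDEv can be approximated on Linux with perf stat, as in the sketch below. The event list, flags, and CSV parsing are assumptions for illustration, not BDEv's implementation: counter names vary by CPU, and the RAPL energy event (power/energy-pkg/) is counted system-wide and typically needs elevated privileges.

        # Run a workload under `perf stat` and collect microarchitectural and
        # RAPL energy counters (illustrative, not BDEv's actual code).
        import subprocess

        EVENTS = "cycles,instructions,cache-misses,power/energy-pkg/"

        def profile(cmd: list[str]) -> dict[str, float]:
            # -a: system-wide counting (RAPL counters are per-socket, not
            # per-process); -x ,: machine-readable CSV output on stderr.
            res = subprocess.run(
                ["perf", "stat", "-a", "-x", ",", "-e", EVENTS, "--"] + cmd,
                capture_output=True, text=True,
            )
            metrics: dict[str, float] = {}
            for line in res.stderr.splitlines():
                fields = line.split(",")
                if len(fields) > 3:  # value, unit, event name, ...
                    try:
                        metrics[fields[2]] = float(fields[0])
                    except ValueError:
                        pass  # "<not counted>", "<not supported>", headers
            return metrics

        m = profile(["sleep", "1"])
        if "instructions" in m and "cycles" in m:
            print("IPC:", m["instructions"] / m["cycles"])
        if "power/energy-pkg/" in m:
            print("Package energy (J):", m["power/energy-pkg/"])

    A tool like BDEv automates this loop across frameworks and workloads, turning raw counters into comparable energy-efficiency and microarchitectural metrics.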

    Towards Scalable, Cloud Based, Confidential Data Stream Processing

    Increasing data availability, velocity, variability, and size have led to the development of new data processing paradigms that offer users different ways to process and manage data specific to their needs. One such paradigm is data stream processing, as managed by Data Stream Processing Systems (DSPS). In contrast to traditional database management systems, in which data is stationary and queries are transient, in stream processing systems data is transient and queries are stationary (that is, continuous and long-running). In such systems, users expect to process temporal data, where each datum is only considered for some period of time and discarded afterwards. Often, as with many other software applications, those who employ such systems outsource computation to third-party platforms such as Amazon, IBM, or Google. The use of third parties outsources not only computation but also hardware and software maintenance costs, relieving the user from having to incur these costs themselves. Moreover, when users outsource their DSPS, they often have a service level agreement that places guarantees on service availability and uptime. Given these benefits, it is clearly desirable for a user to outsource their DSPS computation. Such outsourcing, however, may violate the privacy constraints of those who provide the data stream. Specifically, they may not wish to share their plaintext data with a third party that they may not trust. This leads to an interesting dichotomy between the desire of the user to outsource as much of their computation as possible and the desire of the data stream providers to keep their data private and avoid leaking it to a third-party system. Current work that explores linking the two poles of this dichotomy either limits the expressiveness of supported queries, requires the data provider to trust the third-party systems, or incurs computational or monetary overheads prohibitive for the querier. In this dissertation, we explore methods for shrinking the gap between the poles of this dichotomy and overcome the limitations of state-of-the-art systems by providing data providers and queriers with efficient access control enforcement on untrusted third-party systems over encrypted data. Specifically, we introduce our system PolyStream for executing queries on encrypted data using computation-enabling encryption, with an online key management system. We further introduce Sanctuary to provide computation on any data on third-party systems using trusted hardware. Finally, we introduce Shoal, our query optimizer that considers the heterogeneous nature of streaming systems at optimization time to improve query performance when access controls are enforced on the streaming data. Through the union of the contributions of this dissertation, we show that considering access controls at optimization time can lead to better utilization, performance, and protection for streaming data.
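
    A toy example conveys the idea behind computation-enabling encryption (this is not PolyStream's actual scheme): if the data provider emits deterministic tokens for a sensitive field, an untrusted stream processor can evaluate equality predicates without ever learning the plaintext values. The key, field names, and events below are invented for illustration.

        # Equality filtering over protected values on an untrusted server.
        import hashlib
        import hmac

        KEY = b"provider-secret-key"  # shared by provider and querier, never the server

        def tokenize(value: str) -> str:
            # Deterministic: equal plaintexts yield equal tokens, which enables
            # equality matching; the server cannot invert a token without KEY.
            return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

        # Provider side: protect the sensitive field before streaming it out.
        stream = [{"user": tokenize(u), "latency_ms": t}
                  for u, t in [("alice", 12), ("bob", 48), ("alice", 31)]]

        # Querier side: the predicate "user == alice" is tokenized once...
        target = tokenize("alice")

        # Untrusted server side: ...and matched purely on ciphertext.
        matches = [e for e in stream if e["user"] == target]
        print(len(matches), "events match")  # -> 2 events match

    Determinism leaks equality patterns, which is one reason deployed designs layer further protections: PolyStream pairs computation-enabling encryption with online key management, and the dissertation additionally explores trusted hardware (Sanctuary) and access-control-aware optimization (Shoal).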

    Performance analysis of a database caching system in a grid environment

    Master's thesis. Informatics Engineering. Faculdade de Engenharia. Universidade do Porto. 200