
    Rethinking the Delivery Architecture of Data-Intensive Visualization

    The web has transformed the way people create and consume information. However, data-intensive science applications have so far rarely been able to take full advantage of the web ecosystem. Analysis and visualization have remained tied to large datasets on large servers and desktops because of the vast resources that data-intensive applications require, which hampers the accessibility and on-demand availability of data-intensive science. In this work, I propose a novel architecture for delivering interactive, data-intensive visualization to the web ecosystem. The proposed architecture, codenamed Fabric, keeps the server side oblivious of application logic: it is a set of scalable microservices that 1) manage data and 2) compute data products. Decoupled from application logic, the services allow interactive, data-intensive visualizations to be accessible to many users simultaneously. Meanwhile, the client side of this architecture treats the visualization application as an interaction-in, image-out black box whose sole responsibilities are tracking application state and mapping interactions into well-defined, structured visualization requests. Fabric thus provides a separation of concerns that decouples the otherwise tightly coupled client and server seen in traditional data applications. Initial results show that, as a result, Fabric scales to large audiences, supports scientific reproducibility, and improves control and protection of data products.
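
    As an illustration of the client/server split the abstract describes, the sketch below shows a minimal client that keeps all application state locally, maps a UI interaction onto that state, and turns it into a structured render request for a stateless service. This is a hedged sketch only: the endpoint URL, request schema, and field names are assumptions for illustration, not Fabric's actual API.

        # Hypothetical sketch of the "interaction-in, image-out" client pattern.
        # The endpoint URL, request schema, and field names are assumptions; the
        # abstract does not specify them.
        import json
        from dataclasses import dataclass, asdict

        import requests  # any HTTP client would do

        RENDER_SERVICE = "http://fabric.example.org/api/render"  # hypothetical microservice

        @dataclass
        class ViewState:
            """Application state kept entirely on the client."""
            dataset_id: str
            camera_azimuth: float = 0.0
            camera_elevation: float = 0.0
            zoom: float = 1.0

        def apply_interaction(state: ViewState, interaction: dict) -> ViewState:
            """Map a raw UI interaction (e.g. a drag) onto the client-held state."""
            if interaction["type"] == "rotate":
                state.camera_azimuth += interaction["dx"]
                state.camera_elevation += interaction["dy"]
            elif interaction["type"] == "zoom":
                state.zoom *= interaction["factor"]
            return state

        def request_image(state: ViewState) -> bytes:
            """Send a structured, application-agnostic render request; receive an image."""
            payload = {"view": asdict(state),
                       "output": {"format": "png", "width": 1024, "height": 768}}
            resp = requests.post(RENDER_SERVICE, json=payload, timeout=30)
            resp.raise_for_status()
            return resp.content  # encoded image bytes, ready to display in the browser

        if __name__ == "__main__":
            state = ViewState(dataset_id="turbulence-256")
            state = apply_interaction(state, {"type": "rotate", "dx": 15.0, "dy": -5.0})
            print(json.dumps({"view": asdict(state)}, indent=2))  # the structured request body

    Because the service sees only the structured request and never the application logic, the same rendering microservice can serve many independent clients, which is the decoupling the abstract credits for scalability and reproducibility.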

    10th SC@RUG 2013 proceedings: Student Colloquium 2012-2013

    Machine Learning-Based Anomaly Detection in Cloud Virtual Machine Resource Usage

    Anomaly detection is an important activity in cloud computing systems because it helps identify unusual behaviours or actions that may lead to software glitches, security breaches, and performance problems. Detecting aberrant resource utilization trends in virtual machines (VMs) is a typical application of anomaly detection in cloud computing. Currently, the most serious cyber threat is the distributed denial-of-service attack, which exhausts the afflicted server's resources and network resources, such as bandwidth and buffer capacity, restricting the server's ability to serve legitimate customers. To distinguish attacks from normal events, machine learning techniques such as Quadratic Support Vector Machines (QSVM) and Random Forests, as well as neural network models such as MLPs and autoencoders, are employed. Various machine learning algorithms are applied to the optimised NSL-KDD dataset to provide an efficient and accurate predictor of network intrusions. In this research, we propose a neural network based model and experiment with various central and spiral rearrangements of the features for distinguishing between different types of attacks, supporting our claim that image representations better preserve feature structure. The results are analysed and compared to existing models and prior research. The outcomes of this study have practical implications for improving the security and performance of cloud computing systems, specifically in identifying and mitigating network intrusions.
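
    As a rough illustration of the kind of classifier the abstract mentions, the sketch below trains a small MLP to separate normal traffic from attacks on NSL-KDD-style 41-feature vectors. The synthetic stand-in data, preprocessing, and hyperparameters are assumptions for illustration only; the paper's central and spiral image rearrangements of the features are not reproduced here.

        # Minimal baseline sketch: an MLP separating "normal" from "attack" records.
        # Synthetic data stands in for preprocessed NSL-KDD records (41 numeric
        # features); replace with the real dataset in practice.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(0)

        # Label 0 = normal traffic, 1 = attack traffic.
        X_normal = rng.normal(0.0, 1.0, size=(2000, 41))
        X_attack = rng.normal(1.5, 1.2, size=(2000, 41))
        X = np.vstack([X_normal, X_attack])
        y = np.array([0] * 2000 + [1] * 2000)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, stratify=y, random_state=0)

        # Scale features, then fit a small multi-layer perceptron.
        scaler = StandardScaler().fit(X_train)
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
        clf.fit(scaler.transform(X_train), y_train)

        # Report per-class precision/recall on the held-out split.
        print(classification_report(y_test, clf.predict(scaler.transform(X_test)),
                                    target_names=["normal", "attack"]))

    The abstract's contribution goes further by reshaping the feature vector into 2D image layouts before classification; the flat-vector MLP above is only the conventional baseline such rearrangements would be compared against.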