132 research outputs found

    MAC Based Dynamic VLAN Tagging with OpenFlow for WLAN Access Networks

    Many network device vendors provide vendor-specific VLAN-based access solutions for WLAN clients. These solutions allow network operators to specify WLAN devices that automatically fall into their department-specific networks, giving them access to local resources such as printers. The configuration of these VLAN mappings is usually manufacturer specific and also depends on the local VLAN policies. The presented OpenFlow approach, in contrast, encapsulates this functionality as a network application. An architecture, implementation, and evaluation are presented to demonstrate that this functionality can be easily realized as an OpenFlow network application.
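The mapping logic such a network application might implement can be sketched as a simple lookup from client MAC address to VLAN ID; the table contents, default VLAN, and function name below are invented for illustration, and a real OpenFlow controller would additionally install flow rules pushing the returned tag at the access switch.

```python
DEFAULT_VLAN = 1  # fallback, e.g. an untagged guest network (assumption)

# Department-specific mapping, keyed by client MAC address (example values).
MAC_VLAN_TABLE = {
    "00:11:22:33:44:55": 10,  # e.g. engineering VLAN
    "66:77:88:99:aa:bb": 20,  # e.g. administration VLAN
}

def vlan_for_client(mac: str) -> int:
    """Return the VLAN ID that a WLAN client's traffic should be tagged with.

    Unknown clients fall back to the default VLAN; MAC addresses are
    normalized to lowercase before the lookup.
    """
    return MAC_VLAN_TABLE.get(mac.lower(), DEFAULT_VLAN)
```

Because the mapping lives in the controller application rather than in vendor firmware, the local VLAN policy can be changed without reconfiguring each access point.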

    Image Retrieval with Dynamically Extracted Features


    Towards Benchmarking Power-Performance Characteristics of Federated Learning Clients

    Federated Learning (FL) is a decentralized machine learning approach where local models are trained on distributed clients, allowing privacy-preserving collaboration by sharing model updates instead of raw data. However, the added communication overhead and the increased training time caused by heterogeneous data distributions result in higher energy consumption and carbon emissions than traditional machine learning requires for similar model performance. At the same time, efficient use of the available energy is an important requirement for battery-constrained devices. Because of this, many approaches to energy-efficient and carbon-efficient FL scheduling and client selection have been published in recent years. However, most of this research oversimplifies the power-performance characteristics of clients by assuming that they always require the same amount of energy per processed sample throughout training. This overlooks real-world effects arising from operating devices under different power modes or the side effects of running other workloads in parallel. In this work, we take a first look at the impact of such factors and discuss how better power-performance estimates can improve energy-efficient and carbon-efficient FL scheduling.
    Comment: Machine Learning and Networking Workshop, NetSys 202
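The core observation can be made concrete with a minimal sketch: per-sample energy is power draw divided by training throughput, and both vary with the device's power mode. The modes, wattages, and throughputs below are invented example numbers, not measurements from the paper.

```python
POWER_MODES = {
    # mode name: (average power draw in watts, throughput in samples/second)
    "max_performance": (15.0, 100.0),  # assumed example values
    "power_save": (5.0, 50.0),
}

def energy_per_sample(mode: str) -> float:
    """Joules consumed per processed training sample in the given power mode."""
    watts, samples_per_second = POWER_MODES[mode]
    return watts / samples_per_second
```

With these example numbers, the device spends 0.15 J/sample at full power but only 0.10 J/sample in power-save mode, so a scheduler assuming one fixed energy-per-sample value would misestimate the cost of either mode.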

    OpenIncrement: A Unified Framework for Open Set Recognition and Deep Class-Incremental Learning

    In most work on deep incremental learning, it is assumed that novel samples are pre-identified for neural network retraining. However, practical deep classifiers often misidentify these samples, leading to erroneous predictions. Such misclassifications can degrade model performance. Techniques like open set recognition offer a means to detect these novel samples and represent a significant area of machine learning. In this paper, we introduce a deep class-incremental learning framework integrated with open set recognition. Our approach refines class-incrementally learned features to adapt them for distance-based open set recognition. Experimental results validate that our method outperforms state-of-the-art incremental learning techniques and exhibits superior open set recognition performance compared to baseline methods.
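Distance-based open set recognition, as referenced above, can be sketched as comparing a sample's feature vector against per-class prototypes and rejecting it as novel when it is too far from all of them. The prototypes, feature dimensionality, and threshold here are invented toy values, not the paper's learned representation.

```python
import math

PROTOTYPES = {
    # per-class mean feature vectors from (incremental) training; toy 2-D examples
    "cat": (1.0, 0.0),
    "dog": (0.0, 1.0),
}
THRESHOLD = 0.5  # maximum accepted distance to a known-class prototype (assumption)

def classify(feature):
    """Return the nearest known class, or 'novel' if all prototypes are too far."""
    label, dist = min(
        ((cls, math.dist(feature, proto)) for cls, proto in PROTOTYPES.items()),
        key=lambda pair: pair[1],
    )
    return label if dist <= THRESHOLD else "novel"
```

Samples flagged as "novel" by such a detector are exactly the candidates a class-incremental learner would then use for retraining, instead of assuming they were pre-identified.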

    Predicting Dynamic Memory Requirements for Scientific Workflow Tasks

    With the increasing amount of data available to scientists in disciplines as diverse as bioinformatics, physics, and remote sensing, scientific workflow systems are becoming increasingly important for composing and executing scalable data analysis pipelines. When writing such workflows, users need to specify the resources to be reserved for tasks so that sufficient resources are allocated on the target cluster infrastructure. Crucially, underestimating a task's memory requirements can result in task failures. Therefore, users often resort to overprovisioning, resulting in significant resource wastage and decreased throughput. In this paper, we propose a novel online method that uses monitoring time series data to predict task memory usage in order to reduce the memory wastage of scientific workflow tasks. Our method predicts a task's runtime, divides it into k equally sized segments, and learns the peak memory value for each segment depending on the total file input size. We evaluate the prototype implementation of our method using workflows from the publicly available nf-core repository, showing an average memory wastage reduction of 29.48% compared to the best state-of-the-art approach.
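The per-segment idea described above can be sketched as follows: the predicted runtime is split into k segments, and each segment gets its own linear model of peak memory over total input size, so the allocation can be lowered between memory peaks. The number of segments and the model coefficients below are invented for illustration.

```python
K = 4  # number of equally sized runtime segments (example choice)

# Per-segment (slope, intercept) of peak memory in MB as a linear function
# of total input size in MB; coefficients are assumed example values.
SEGMENT_MODELS = [(0.2, 100.0), (0.8, 150.0), (1.1, 200.0), (0.5, 120.0)]

def predicted_peaks(input_size_mb: float) -> list:
    """Predicted peak memory (MB) for each of the K runtime segments."""
    return [slope * input_size_mb + intercept for slope, intercept in SEGMENT_MODELS]
```

For a 1000 MB input this yields per-segment peaks of 300, 950, 1300, and 620 MB; reserving memory per segment instead of holding the global maximum (1300 MB) for the whole runtime is what reduces wastage.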

    Selecting Efficient Cluster Resources for Data Analytics: When and How to Allocate for In-Memory Processing?

    Distributed dataflow systems such as Apache Spark or Apache Flink enable parallel, in-memory data processing on large clusters of commodity hardware. Consequently, the appropriate amount of memory to allocate to the cluster is a crucial consideration. In this paper, we analyze the challenge of efficient resource allocation for distributed data processing, focusing on memory. We emphasize that in-memory processing with in-memory data processing frameworks can undermine resource efficiency. Based on the findings of our trace data analysis, we compile requirements for an automated solution for efficient cluster resource allocation.
    Comment: 4 pages, 3 figures; ACM SSDBM 202