141,389 research outputs found

    Arrays and References in Resource Aware ML

    Get PDF
    This article introduces a technique for accurately performing static prediction of resource usage for ML-like functional programs with references and arrays. Previous research successfully integrated the potential method of amortized analysis with a standard type system to automatically derive parametric resource bounds. The analysis is naturally compositional, and the resource consumption of functions can be abstracted using potential-annotated types. The soundness theorem of the analysis guarantees that the derived bounds are correct with respect to the resource usage defined by a cost semantics. Type inference can be efficiently automated using off-the-shelf LP solvers, even if the derived bounds are polynomials. However, side effects and aliasing of heap references make it notoriously difficult to derive bounds that depend on mutable structures, such as arrays and references. As a result, existing automatic amortized analysis systems for ML-like programs cannot derive bounds for programs whose resource consumption depends on data in such structures. This article extends the potential method to handle mutable structures with minimal changes to the type rules while preserving the stated advantages of amortized analysis. To do so, we introduce a swap operation for references and arrays that users can employ to make programs suitable for automatic analysis. We prove the soundness of the analysis by introducing a potential-annotated memory typing, which gathers all unique locations reachable from a reference. Apart from the design of the system, the main contribution is the proof of soundness for the extended analysis system
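
    To fix intuitions about the swap primitive, the toy Python sketch below (the Ref class and its method are illustrative inventions, not the paper's formalism) models a reference cell whose contents are exchanged rather than read, so a stored value, and the potential it carries, never ends up with two live owners.

        # Minimal sketch (not from the paper): a reference cell with a 'swap'
        # primitive. In an amortized/affine setting, a plain read would duplicate
        # the potential carried by the contents; 'swap' instead exchanges them,
        # so the stored value has exactly one owner at a time.

        class Ref:
            """A mutable reference cell (hypothetical helper)."""
            def __init__(self, contents):
                self.contents = contents

            def swap(self, new_contents):
                """Exchange the stored value for a new one; return the old one.

                The caller gives one value up and receives the old one back,
                so no aliasing of the potential-carrying contents occurs.
                """
                old = self.contents
                self.contents = new_contents
                return old

        # Usage: process a list stored in a ref without aliasing it.
        r = Ref([1, 2, 3])
        xs = r.swap([])              # take ownership; the ref now holds []
        ys = [x + 1 for x in xs]     # consume the list exactly once
        r.swap(ys)                   # put the processed list back
        print(r.contents)            # [2, 3, 4]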

    Flexible Payload Configuration for Satellites using Machine Learning

    Full text link
    Satellite communications, essential for modern connectivity, extend access to maritime, aeronautical, and remote areas where terrestrial networks are unfeasible. Current GEO systems distribute power and bandwidth uniformly across beams, using multi-beam footprints with fractional frequency reuse. However, recent research reveals the limitations of this approach in heterogeneous traffic scenarios, leading to inefficiencies. To address these limitations, this paper presents a machine learning (ML)-based approach to Radio Resource Management (RRM). We treat the RRM task as a regression ML problem, integrating the RRM objectives and constraints into the loss function that the ML algorithm aims to minimize. Moreover, we introduce a context-aware ML metric that not only evaluates the ML model's performance but also considers the impact of its resource allocation decisions on the overall performance of the communication system. Comment: in review for conference
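
    As a rough illustration of folding RRM objectives and constraints into a single training loss, here is a minimal numpy sketch; the per-beam power variables, the total-budget constraint, and the penalty weight lam are assumptions for illustration, not the paper's formulation.

        import numpy as np

        # Illustrative sketch: regression loss for beam-level power allocation
        # that folds an RRM constraint into the objective as a penalty term.

        def rrm_loss(predicted_power, demanded_power, total_power_budget, lam=10.0):
            """MSE against per-beam demand, plus a penalty when the summed
            allocation exceeds the payload's total power budget."""
            mse = np.mean((predicted_power - demanded_power) ** 2)
            overshoot = max(0.0, predicted_power.sum() - total_power_budget)
            return mse + lam * overshoot ** 2

        pred = np.array([3.0, 2.5, 4.0])     # model output per beam (hypothetical units)
        demand = np.array([3.2, 2.0, 4.1])   # per-beam traffic demand
        print(rrm_loss(pred, demand, total_power_budget=9.0))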

    Verifying and Synthesizing Constant-Resource Implementations with Types

    Full text link
    We propose a novel type system for verifying that programs correctly implement constant-resource behavior. Our type system extends recent work on automatic amortized resource analysis (AARA), a set of techniques that automatically derive provable upper bounds on the resource consumption of programs. We devise new techniques that build on the potential method to achieve compositionality, precision, and automation. A strict global requirement that a program always maintains constant resource usage is too restrictive for most practical applications. It is sufficient to require that the program's resource behavior remain constant with respect to an attacker who is only allowed to observe part of the program's state and behavior. To account for this, our type system incorporates information flow tracking into its resource analysis. This allows our system to certify programs that need to violate the constant-time requirement in certain cases, as long as doing so does not leak confidential information to attackers. We formalize this guarantee by defining a new notion of resource-aware noninterference, and prove that our system enforces it. Finally, we show how our type inference algorithm can be used to synthesize a constant-time implementation from one that cannot be verified as secure, effectively repairing insecure programs automatically. We also show how a second novel AARA system that computes lower bounds on resource usage can be used to derive quantitative bounds on the amount of information that a program leaks through its resource use. We have implemented each of these systems in Resource Aware ML and show that they can be applied to verify constant-time behavior in a number of applications, including encryption and decryption routines, database queries, and other resource-aware functionality. Comment: 30 pages, IEEE S&P 2017
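
    The canonical instance of the property at stake is string comparison. The sketch below, in plain Python rather than the authors' Resource Aware ML, contrasts a timing-leaky comparison with a constant-time one of the kind such a type system is meant to certify.

        import hmac

        # Illustrative sketch of the verified property (not the paper's type
        # system): two string comparisons with different resource behavior.

        def leaky_equals(secret: str, guess: str) -> bool:
            # Early exit: running time depends on how many leading characters
            # match, so an attacker timing this call learns a secret prefix.
            if len(secret) != len(guess):
                return False
            for a, b in zip(secret, guess):
                if a != b:
                    return False
            return True

        def constant_time_equals(secret: str, guess: str) -> bool:
            # Resource usage depends only on public lengths, never on secret
            # contents; this is the shape a constant-resource system accepts.
            return hmac.compare_digest(secret.encode(), guess.encode())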

    Context-aware Collaborative Neuro-Symbolic Inference in Internet of Battlefield Things

    Get PDF
    IoBTs must feature collaborative, context-aware, multi-modal fusion for real-time, robust decision-making in adversarial environments. The integration of machine learning (ML) models into IoBTs has been successful at solving these problems at a small scale (e.g., AiTR), but state-of-the-art ML models grow exponentially with the increasing temporal and spatial scale of the modeled phenomena, and can thus become brittle, untrustworthy, and vulnerable when interpreting large-scale tactical edge data. To address this challenge, we need to develop principles and methodologies for uncertainty-quantified neuro-symbolic ML, where learning and inference exploit symbolic knowledge and reasoning, in addition to multi-modal and multi-vantage sensor data. The approach features integrated neuro-symbolic inference, where symbolic context is used by deep learning, and deep learning models provide atomic concepts for symbolic reasoning. The incorporation of high-level symbolic reasoning improves data efficiency during training and makes inference more robust, interpretable, and resource-efficient. In this paper, we identify the key challenges in developing context-aware collaborative neuro-symbolic inference in IoBTs and review some recent progress in addressing these gaps
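
    As a toy illustration of the integration pattern described above (all names and numbers invented, not from the paper), a learned detector can emit probabilities for atomic concepts while a symbolic rule combines them into a graded, interpretable inference:

        # Toy sketch: a neural model provides atomic concept probabilities,
        # and a symbolic rule composes them into a higher-level inference.

        def neural_detector(sensor_frame):
            """Stand-in for a learned model: concept -> probability."""
            return {"vehicle": 0.92, "moving": 0.85, "friendly_marking": 0.10}

        def symbolic_threat_rule(concepts, threshold=0.5):
            """Rule: threat(x) <- vehicle(x) AND moving(x) AND NOT friendly(x).
            Product fuzzy semantics yields a graded, interpretable score."""
            score = (concepts["vehicle"]
                     * concepts["moving"]
                     * (1.0 - concepts["friendly_marking"]))
            return score, score >= threshold

        concepts = neural_detector(sensor_frame=None)
        score, is_threat = symbolic_threat_rule(concepts)
        print(f"threat score = {score:.2f}, flagged = {is_threat}")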

    Co-Design of Approximate Multilayer Perceptron for Ultra-Resource Constrained Printed Circuits

    Get PDF
    Printed Electronics (PE) offers on-demand, extremely low-cost hardware due to its additive manufacturing process, enabling machine learning (ML) applications in domains with ultra-low-cost, conformity, and non-toxicity requirements that silicon-based systems cannot meet. Nevertheless, large feature sizes in PE prohibit the realization of complex printed ML circuits. In this work, we present, for the first time, an automated printed-aware software/hardware co-design framework that exploits approximate computing principles to enable ultra-resource constrained printed multilayer perceptrons (MLPs). Our evaluation demonstrates that, compared to the state-of-the-art baseline, our circuits feature on average 6x (5.7x) lower area (power) and less than 1% accuracy loss
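
    One concrete approximate-computing knob of the kind such a co-design framework can exploit is weight quantization. The numpy sketch below (not the authors' framework; sizes and bit width are invented) shows how coarsening an MLP's weights to a few bits trades a small output drift for much cheaper arithmetic:

        import numpy as np

        def quantize(w, bits=4):
            """Uniform symmetric quantization of a weight matrix to 2**bits levels."""
            scale = np.max(np.abs(w)) or 1.0
            levels = 2 ** (bits - 1) - 1
            return np.round(w / scale * levels) / levels * scale

        rng = np.random.default_rng(0)
        W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 2))

        def mlp(x, W1, W2):
            h = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
            return h @ W2

        x = rng.normal(size=(1, 8))
        exact = mlp(x, W1, W2)
        approx = mlp(x, quantize(W1), quantize(W2))
        print("output drift:", np.max(np.abs(exact - approx)))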

    Towards Optimising WLANs Power Saving: Novel Context-aware Network Traffic Classification Based on a Machine Learning Approach

    Get PDF
    Energy is a vital resource in wireless computing systems. Despite the increasing popularity of Wireless Local Area Networks (WLANs), one of the most important outstanding issues remains the power consumption caused by the Wireless Network Interface Controller (WNIC). To save energy and reduce the overall power consumption of wireless devices, most approaches proposed to date focus on static and adaptive power saving modes. Existing literature has highlighted several issues and limitations with regard to their power consumption and performance degradation, warranting the need for further enhancements. In this paper, we propose a novel context-aware network traffic classification approach based on Machine Learning (ML) classifiers for optimizing WLAN power saving. The levels of background traffic interaction are contextually exploited when applying the ML classifiers. Finally, the classified output traffic is used to optimize our proposed Context-aware Listen Interval (CALI) power saving modes. A real-world dataset is recorded, based on the network traffic of nine smartphone applications, reflecting different types of network behaviour and interaction. This is used to evaluate the performance of eight ML classifiers in this initial study. Comparative results show that an accuracy of more than 99% can be achieved. Our study indicates that ML classifiers are suited to classifying smartphone applications' network traffic based on levels of interaction in the background
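
    The overall pipeline shape can be sketched as follows; the features, labels, and listen-interval mapping are hypothetical stand-ins, not the paper's dataset or CALI parameters.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Hypothetical sketch: classify flows by background-interaction level,
        # then map each class to a listen-interval setting for power saving.

        rng = np.random.default_rng(0)
        X = rng.random((600, 3))        # e.g., packet rate, mean size, idle gap
        y = rng.integers(0, 3, 600)     # 0=interactive, 1=background, 2=bulk

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print("accuracy (synthetic data, so not meaningful):", clf.score(X_te, y_te))

        LISTEN_INTERVAL_MS = {0: 100, 1: 600, 2: 1000}   # invented CALI-style mapping
        for cls in clf.predict(X_te[:3]):
            print("listen interval:", LISTEN_INTERVAL_MS[int(cls)], "ms")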

    PDM: Privacy-Aware Deployment of Machine-Learning Applications for Industrial Cyber–Physical Cloud Systems

    Get PDF
    Cyber-physical cloud systems (CPCSs) provide powerful capabilities for provisioning complicated industrial services. Due to the advances in machine learning (ML) for attack detection, a wide range of ML applications are involved in industrial CPCSs. However, ensuring the implementation efficiency of these applications while avoiding privacy disclosure of the datasets, which are acquired by different operators, remains challenging in the design of CPCSs. To fill this gap, this article devises a privacy-aware deployment method, named PDM, for hosting ML applications in industrial CPCSs. In PDM, the ML applications are partitioned into multiple computing tasks with a specified execution order, like workflows. Specifically, the deployment problem is formulated as a multiobjective problem for improving implementation performance and resource utility. Then, the most balanced and optimal strategy is selected by leveraging an improved differential evolution technique. Finally, through comprehensive experiments and comparison analysis, PDM is fully evaluated
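
    The sketch below gives an evolutionary, DE-flavored discrete search of the general kind described, with invented costs and a scalarized objective standing in for the paper's multiobjective formulation and improved differential evolution operator.

        import numpy as np

        # Minimal sketch: evolve task-to-node placements, scoring each candidate
        # on execution cost plus a privacy penalty for co-locating tasks whose
        # data belongs to different operators.

        N_TASKS, N_NODES = 6, 3
        exec_cost = np.random.default_rng(1).random((N_TASKS, N_NODES))
        operator = np.array([0, 0, 1, 1, 2, 2])   # owner of each task's data

        def fitness(placement):
            perf = exec_cost[np.arange(N_TASKS), placement].sum()
            privacy = sum(
                placement[i] == placement[j] and operator[i] != operator[j]
                for i in range(N_TASKS) for j in range(i + 1, N_TASKS)
            )
            return perf + 0.5 * privacy   # weighted scalarization of objectives

        rng = np.random.default_rng(0)
        pop = rng.integers(0, N_NODES, (20, N_TASKS))
        for _ in range(100):              # mutate/crossover/select loop
            for k in range(len(pop)):
                a, b = pop[rng.integers(0, 20, 2)]
                trial = np.where(rng.random(N_TASKS) < 0.5, a, b)  # discrete crossover
                if fitness(trial) < fitness(pop[k]):
                    pop[k] = trial        # greedy selection, as in classic DE
        best = min(pop, key=fitness)
        print("best placement:", best, "fitness:", fitness(best))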

    CEFL: Carbon-Efficient Federated Learning

    Full text link
    Federated Learning (FL) distributes machine learning (ML) training across many edge devices to reduce data transfer overhead and protect data privacy. Since FL model training may span millions of devices and is thus resource-intensive, prior work has focused on improving its resource efficiency to optimize time-to-accuracy. However, prior work generally treats all resources the same, while, in practice, they may incur widely different costs, which instead motivates optimizing cost-to-accuracy. To address the problem, we design CEFL, which uses adaptive cost-aware client selection policies to optimize an arbitrary cost metric when training FL models. Our policies extend and combine prior work on utility-based client selection and critical learning periods by making them cost-aware. We demonstrate CEFL by designing carbon-efficient FL, where the carbon intensity of energy is the cost, and show that it i) reduces carbon emissions by 93% and training time by 50% compared to random client selection, and ii) reduces carbon emissions by 80% while increasing training time by only 38% compared to a state-of-the-art approach that optimizes training time
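
    A minimal sketch of cost-aware client selection, assuming a per-client utility score and a carbon-intensity reading (both invented here, not CEFL's actual policy): rank clients by utility per unit of carbon and take the top k.

        import random

        # Minimal sketch: utility-based client selection made cost-aware by
        # dividing each client's training utility by the carbon intensity of
        # its current energy supply, then picking the top-k clients per round.

        clients = [
            {"id": i,
             "utility": random.uniform(0.1, 1.0),            # e.g., loss-based utility
             "carbon_gco2_per_kwh": random.uniform(50, 600)}  # grid carbon intensity
            for i in range(20)
        ]

        def cost_aware_select(clients, k=5):
            def score(c):
                return c["utility"] / c["carbon_gco2_per_kwh"]  # utility per unit carbon
            return sorted(clients, key=score, reverse=True)[:k]

        for c in cost_aware_select(clients):
            print(c["id"], round(c["utility"], 2), round(c["carbon_gco2_per_kwh"]))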