
    Efficient task optimization algorithm for green computing in cloud.

    Maintaining reliable access to cloud infrastructure assets for heterogeneous network servers and applications, while delivering high performance at low cost to global subscribers, is a challenging task. Most existing techniques consider only limited constraints, such as the task deadline, which leads to Service Level Agreement (SLA) violations. In this manuscript, we develop a Hadoop-based Task Scheduling (HTS) algorithm that considers the task deadline, completion time, migration time, and future resource availability of each virtual machine. An Intelligent System (IS) enabled with an adaptive neural computation method assesses all of the above attributes. Specifically, the result of the Prophecy Resource Availability (PRA) method is used to assess the status of each Virtual Machine (VM), which helps reduce resource wastage and improves response time with a low SLA violation rate.

    Designing Technology for Different Scales of Irrigation Scheduling

    Uncertainty in water availability is a significant challenge to the agriculture industry. Farmers and irrigators depend on novel uses of sensors and data to maximize water efficiency. Documented studies have demonstrated that scheduling irrigation is a straightforward, deterministic means of achieving water efficiency. Irrigation scheduling uses several parameters to determine the moment of crop water stress due to the water available in the soil. However, sensors and data for soil moisture and matric potential, a parameter describing the water available to plants, have the potential to train machine learning algorithms to forecast irrigation needs based on previous measurements. Satellite remote sensing is another developing technology that describes the environmental conditions that enable irrigation scheduling and provides data on crop health by allowing for calculations on collected field images. This project trains a machine learning model with soil moisture and home-brew tensiometer data. To create a water management system that avoids exposing crops to stress, the model uses previous soil water conditions to forecast crop water demand, informing the farmer of the moment maximum water depletion will occur and providing the opportunity to irrigate in advance of crop water stress conditions. Additionally, this research evaluates the value of soil moisture, matric potential, and trained machine learning against the characteristics of the specified agricultural undertaking. Because larger agricultural undertakings can be managed with remote sensing of crop health, this research investigates the viability of ground sensing against satellite remote sensing; sensor improvements would be more viable for an urban agriculture system. Understanding scenarios in agriculture to tailor technological development will allow farmers to further maximize crop yield and quality with their increasingly limited water availability.
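The forecasting idea can be illustrated with a minimal stand-in for the trained model: extrapolating recent soil-moisture readings to estimate when a crop-stress threshold will be crossed. The threshold, sampling interval, and linear extrapolation are assumptions for illustration, not the project's actual machine learning model.

```python
def hours_until_stress(readings, threshold, dt_hours=1.0):
    """Estimate hours until soil moisture crosses a crop-stress threshold,
    by fitting a least-squares line to equally spaced readings (oldest first)
    and extrapolating forward from the last reading.
    Returns None when moisture is not declining (no stress forecast)."""
    n = len(readings)
    t_mean = (n - 1) / 2
    y_mean = sum(readings) / n
    num = sum((i - t_mean) * (y - y_mean) for i, y in enumerate(readings))
    den = sum((i - t_mean) ** 2 for i in range(n))
    slope = num / den  # moisture change per sampling step
    if slope >= 0:
        return None  # soil is not drying out
    steps = (threshold - readings[-1]) / slope  # positive when still above threshold
    return steps * dt_hours
```

In the project's framing, a forecast like this is what lets the farmer irrigate in advance of the predicted stress moment rather than reacting after it.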

    Preventive maintenance for heterogeneous industrial vehicles with incomplete usage data

    Large fleets of industrial and construction vehicles require periodic maintenance activities. Scheduling these operations is potentially challenging because the optimal timeline depends on the vehicle characteristics and usage. This paper studies a real industrial case study, in which a company providing telematics services supports fleet managers in scheduling maintenance operations for about 2000 construction vehicles of various types. The heterogeneity of the fleet and the availability of historical data foster the use of data-driven solutions based on machine learning techniques. The paper addresses the learning of per-vehicle predictors aimed at forecasting the next-day utilisation level and the remaining time until the next maintenance. We explore the performance of both linear and non-linear models, showing that machine learning models are able to capture the underlying trends describing non-stationary vehicle usage patterns. We also explicitly consider the lack of data for vehicles that have recently been added to the fleet. Results show that the availability of even a limited portion of past utilisation levels enables the identification of vehicles with similar usage trends and the opportunistic reuse of their historical data.
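The reuse of historical data from similar vehicles can be sketched as a nearest-neighbour match on the short usage series available for a newly added vehicle. The similarity measure here (mean squared difference over the trailing window) is an illustrative assumption, not the paper's actual method.

```python
def most_similar_vehicle(new_usage, fleet_histories):
    """Match a recently added vehicle's short usage series against the
    trailing window of each fleet vehicle's history, using mean squared
    difference, and return the id of the best-matching vehicle."""
    k = len(new_usage)
    best_id, best_err = None, float("inf")
    for vid, history in fleet_histories.items():
        if len(history) < k:
            continue  # not enough history to compare
        window = history[-k:]
        err = sum((a - b) ** 2 for a, b in zip(new_usage, window)) / k
        if err < best_err:
            best_id, best_err = vid, err
    return best_id
```

Once a similar vehicle is identified, its longer history could seed the new vehicle's predictor, which is the "opportunistic reuse" the abstract describes.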

    MOON: MapReduce On Opportunistic eNvironments

    Abstract—MapReduce offers a flexible programming model for processing and generating large data sets on dedicated resources, where only a small fraction of such resources are ever unavailable at any given time. In contrast, when MapReduce is run on volunteer computing systems, which opportunistically harness idle desktop computers via frameworks like Condor, it results in poor performance due to the volatility of the resources, in particular the high rate of node unavailability. Specifically, the data and task replication scheme adopted by existing MapReduce implementations is woefully inadequate for resources with high unavailability. To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. The adaptive task and data scheduling algorithms in MOON distinguish between (1) different types of MapReduce data and (2) different types of node outages in order to strategically place tasks and data on both volatile and dedicated nodes. Our tests demonstrate that MOON can deliver a 3-fold performance improvement over Hadoop in volatile, volunteer computing environments.
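MOON's core observation, that replication must be tuned to node volatility, can be illustrated with a back-of-the-envelope calculation: how many copies on volatile volunteer nodes are needed before at least one replica is likely reachable. This sketch treats dedicated nodes as always available and node outages as independent; both are simplifying assumptions, and this is not MOON's actual placement algorithm.

```python
import math

def replicas_needed(target_availability, node_unavailability, dedicated=False):
    """Smallest number of volatile-node replicas such that the probability
    that at least one replica is reachable meets the target.
    Solves p^r <= 1 - target for r, where p is per-node unavailability.
    With a dedicated replica (assumed always up here), one copy suffices."""
    if dedicated:
        return 1
    p = node_unavailability
    r = math.ceil(math.log(1 - target_availability) / math.log(p))
    return max(r, 1)
```

The gap between the two cases (one dedicated copy versus many volatile copies) is exactly why a hybrid architecture with a small set of dedicated nodes pays off.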

    On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures

    As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the “edge”. The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework designed to provide a dynamic on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. These mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time-critical applications, such as the control tasks, which maintained low-latency, deterministic behavior even during off-nominal conditions.
    The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
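Schedulability analysis over randomly generated task sets, as described above, is commonly grounded in classical tests; a minimal example is the Liu and Layland rate-monotonic utilization bound. This sketch illustrates that style of analysis, not the paper's specific framework.

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) rate-monotonic schedulability test via
    the Liu & Layland utilization bound: n periodic tasks are schedulable
    if total utilization <= n * (2^(1/n) - 1).
    tasks: list of (worst_case_execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound
```

Because the test is only sufficient, a task set that fails it may still be schedulable; a more exact analysis (e.g. response-time analysis) is what a production framework would apply to the borderline sets.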

    Scheduling Jobs in Flowshops with the Introduction of Additional Machines in the Future

    This is the author's peer-reviewed final manuscript, as accepted by the publisher. The published article is copyrighted by Elsevier and can be found at: http://www.journals.elsevier.com/expert-systems-with-applications/.
    The problem of scheduling jobs to minimize total weighted tardiness in flowshops, with the possibility of evolving into hybrid flowshops in the future, is investigated in this paper. As this research is guided by a real problem in industry, the flowshop considered has considerable flexibility, which stimulated the development of an innovative methodology for this research. Each stage of the flowshop currently has one or several identical machines. However, the manufacturing company is planning to introduce additional machines with different capabilities in different stages in the near future. Thus, the algorithm proposed and developed for the problem is capable of solving not only the current flow line configuration but also the potential new configurations that may result in the future. A meta-heuristic search algorithm based on Tabu search is developed to solve this NP-hard, industry-guided problem. Six different initial solution finding mechanisms are proposed. A carefully planned nested split-plot design is performed to test the significance of different factors and their impact on the performance of the different algorithms. To the best of our knowledge, this research is the first of its kind that attempts to solve an industry-guided problem with the concern for future developments.
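A Tabu search for permutation-flowshop weighted tardiness can be sketched as follows; the adjacent-swap neighbourhood, fixed tenure, and evaluation function are generic illustrations of the metaheuristic, not the paper's six initialization mechanisms or its hybrid-flowshop extensions.

```python
def weighted_tardiness(seq, proc, due, weights):
    """Total weighted tardiness of a job permutation in a simple
    permutation flowshop, where proc[job][stage] is the processing time."""
    stages = len(proc[0])
    finish = [0.0] * stages  # completion time of the previous job at each stage
    total = 0.0
    for job in seq:
        t = 0.0
        for s in range(stages):
            t = max(t, finish[s]) + proc[job][s]
            finish[s] = t
        total += weights[job] * max(0.0, t - due[job])
    return total

def tabu_search(proc, due, weights, iters=200, tenure=5):
    """Minimal Tabu search over adjacent-swap neighbourhoods, with a
    short-term memory of forbidden swaps and a best-solution aspiration."""
    n = len(proc)
    current = list(range(n))
    best = current[:]
    best_cost = weighted_tardiness(current, proc, due, weights)
    tabu = {}  # swapped job pair -> iteration until which the swap is tabu
    for it in range(iters):
        candidates = []
        for i in range(n - 1):
            pair = (min(current[i], current[i + 1]), max(current[i], current[i + 1]))
            neighbor = current[:]
            neighbor[i], neighbor[i + 1] = neighbor[i + 1], neighbor[i]
            cost = weighted_tardiness(neighbor, proc, due, weights)
            # aspiration: allow a tabu move when it beats the best known solution
            if tabu.get(pair, -1) < it or cost < best_cost:
                candidates.append((cost, neighbor, pair))
        if not candidates:
            break
        cost, current, pair = min(candidates, key=lambda c: c[0])
        tabu[pair] = it + tenure
        if cost < best_cost:
            best, best_cost = current[:], cost
    return best, best_cost
```

Swapping in a different neighbourhood or initial solution only changes the two marked lines, which is why the paper can compare six initialization mechanisms within one search framework.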