2,068 research outputs found
Dynamic scheduling in a multi-product manufacturing system
To remain competitive in the global marketplace, manufacturing companies need to improve their operational practices. One way to increase competitiveness in manufacturing is to implement a proper scheduling system, which enables job orders to be completed on time, minimizes waiting time, and maximizes utilization of equipment and machinery. The dynamics of real manufacturing systems are complex in nature. Schedules developed with deterministic algorithms are unable to effectively deal with uncertainties in demand and capacity, and significant differences can be found between planned schedules and their actual implementation. This study attempted to develop a scheduling system that reacts quickly and reliably to changes in product demand and manufacturing capacity. As a case study, a 6-by-6 job shop scheduling problem was adapted, with uncertainty elements added to the data sets. A simulation model was designed and implemented in the ARENA simulation package to generate various job shop scheduling scenarios. Their performance was evaluated using three scheduling rules, namely first-in-first-out (FIFO), earliest due date (EDD), and shortest processing time (SPT). An artificial neural network (ANN) model was developed and trained on the scheduling scenarios generated by the ARENA simulation. The experimental results suggest that the ANN scheduling model can provide moderately reliable predictions for limited scenarios when predicting the number of completed jobs, maximum flowtime, average machine utilization, and average queue length. This study has provided a better understanding of the effects of changes in demand and capacity on job shop schedules. Areas for further study include: (i) fine-tuning the proposed ANN scheduling model; (ii) considering a wider variety of job shop environments; (iii) incorporating an expert system for interpretation of results.
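The three dispatching rules compared in the abstract above can be sketched in a few lines; each rule is just a different sort key over the job queue. The job names and numbers below are purely illustrative, not data from the study:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    arrival: int          # order of arrival in the queue
    processing_time: int  # time units required on the machine
    due_date: int         # time unit by which the job should finish

# Hypothetical job queue.
queue = [
    Job("J1", arrival=0, processing_time=5, due_date=12),
    Job("J2", arrival=1, processing_time=2, due_date=6),
    Job("J3", arrival=2, processing_time=8, due_date=20),
]

fifo = sorted(queue, key=lambda j: j.arrival)          # first-in-first-out
edd  = sorted(queue, key=lambda j: j.due_date)         # earliest due date
spt  = sorted(queue, key=lambda j: j.processing_time)  # shortest processing time

print([j.name for j in spt])  # → ['J2', 'J1', 'J3']
```

In a simulation such as the ARENA model described above, each rule would be applied whenever a machine becomes free, and the resulting scenarios (completed jobs, flowtime, utilization, queue length) would form the ANN's training data.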
The theoretical framework proposed in this study can be used as a basis for further investigation.
Statistical Methods and Artificial Neural Networks
Artificial neural networks and statistical methods are applied to real data sets for forecasting, classification, and clustering problems. Hybrid models of the two approaches are examined on different data sets: tourist arrival forecasting for Turkey, a macro-economic problem on the rescheduling of countries' international debts, and grouping twenty-five European Union member and four candidate countries according to macro-economic indicators.
Predicting Scheduling Failures in the Cloud
Cloud computing has emerged as a key technology to deliver and manage computing, platform, and software services over the Internet. Task scheduling algorithms play an important role in the efficiency of cloud computing services, as they aim to reduce the turnaround time of tasks and improve resource utilization. Several task scheduling algorithms have been proposed in the literature for cloud computing systems, the majority relying on the computational complexity of tasks and the distribution of resources. However, many tasks scheduled following these algorithms still fail because of unforeseen changes in the cloud environment. In this paper, using task execution and resource utilization data extracted from the execution traces of real-world applications at Google, we explore the possibility of predicting the scheduling outcome of a task using statistical models. If we can successfully predict task failures, we may be able to reduce the execution time of jobs by rescheduling failed tasks earlier (i.e., before their actual failing time). Our results show that statistical models can predict task failures with a precision of up to 97.4% and a recall of up to 96.2%. We simulate the potential benefits of such predictions using the toolkit GloudSim and find that they can improve the number of finished tasks by up to 40%. We also perform a case study using the Hadoop framework of Amazon Elastic MapReduce (EMR) and the jobs of a gene expression correlation analysis study from breast cancer research. We find that when the Hadoop scheduler is extended with our predictive models, the percentage of failed jobs can be reduced by up to 45%, with an overhead of less than 5 minutes.
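The precision and recall figures quoted above are standard binary-classification metrics; a minimal sketch of how they would be computed from task-failure predictions (the prediction vectors here are invented for illustration, not the paper's data):

```python
def precision_recall(predicted, actual):
    """Precision and recall for binary task-failure predictions.

    `predicted` and `actual` are parallel lists of booleans:
    True = task predicted (or observed) to fail.
    """
    tp = sum(p and a for p, a in zip(predicted, actual))        # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))    # false positives
    fn = sum(not p and a for p, a in zip(predicted, actual))    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical predictions over ten scheduled tasks.
pred   = [True, True, False, True, False, False, True, False, False, False]
actual = [True, True, False, False, False, False, True, True, False, False]
p, r = precision_recall(pred, actual)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

High precision matters here because each predicted failure triggers a rescheduling action, and acting on false positives wastes cluster resources.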
Country risk analysis: an application of logistic regression and neural networks
A research report submitted to the Faculty of Science, School of Statistics and Actuarial Science, in partial fulfilment of the requirements for the degree of Master of Science (Mathematical Statistics), University of the Witwatersrand, Johannesburg, 08 June 2017. Country risk evaluation is a crucial exercise when determining the ability of countries to repay their debts. The global environment is volatile and filled with macro-economic, financial, and political factors that may affect a country's commercial environment, resulting in its inability to service its debt. This research report compares the ability of conventional neural network models and traditional panel logistic regression models in assessing country risk. The models are developed using a set of economic, financial, and political risk factors obtained from the World Bank for the years 1996 to 2013 for 214 economies. These variables are used to assess the debt servicing capacity of the economies, as this has a direct impact on the return on investments for financial institutions, investors, policy makers, and researchers. The models developed may act as early warning systems to reduce exposure to country risk.
Keywords: Country risk, Debt rescheduling, Panel logit model, Neural network models
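The logit models compared in this report map macro-economic indicators to a rescheduling probability through the logistic function. A minimal sketch follows; the coefficient values and indicator names are made up purely for illustration, not fitted estimates from the report:

```python
import math

def rescheduling_probability(coeffs, intercept, indicators):
    """Logit model: P(debt rescheduling) given macro-economic indicators.

    `coeffs` and `indicators` are parallel lists of floats.
    """
    z = intercept + sum(b * x for b, x in zip(coeffs, indicators))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

# Hypothetical indicators: debt-to-GDP ratio, inflation, reserves-to-imports.
p = rescheduling_probability(
    coeffs=[2.0, 0.8, -1.5], intercept=-1.0,
    indicators=[0.9, 0.3, 0.5],
)
print(round(p, 3))  # a probability strictly between 0 and 1
```

An early warning system would flag an economy when this probability crosses a chosen threshold; the neural network models in the report play the same role with a nonlinear decision surface.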
Artificial Neural Networks in Production Scheduling and Yield Prediction of Semiconductor Wafer Fabrication System
With the development of artificial intelligence, artificial neural networks (ANNs) are widely used in the control, decision-making, and prediction of complex discrete-event manufacturing systems. Wafer fabrication is one of the most complicated and competitive manufacturing phases. Production scheduling and yield prediction are two critical issues in the operation of a semiconductor wafer fabrication system (SWFS). This chapter proposes two fuzzy neural networks, one for the production rescheduling strategy decision and one for die yield prediction. Firstly, a fuzzy neural network (FNN)-based rescheduling decision model is implemented, which can rapidly choose an optimized rescheduling strategy to schedule the semiconductor wafer fabrication lines according to the current system disturbances. The experimental results demonstrate the effectiveness of the proposed FNN-based rescheduling decision mechanism over the alternatives (a back-propagation neural network and multivariate regression). Secondly, a novel fuzzy neural network-based yield prediction model is proposed to improve the prediction accuracy of die yield, in which the impact factors of yield and critical electrical test parameters are considered simultaneously and taken as independent variables. A comparison experiment verifies that the proposed yield prediction method improves on three traditional yield prediction methods with respect to prediction accuracy.
Astraea: Self-balancing Federated Learning for Improving Classification Accuracy of Mobile Deep Learning Applications
Federated learning (FL) is a distributed deep learning method that enables multiple participants, such as mobile phones and IoT devices, to contribute to a neural network model while their private training data remains on local devices. This distributed approach is promising in edge computing systems, which have a large corpus of decentralized data and require high privacy. However, unlike common training datasets, the data distribution of an edge computing system is imbalanced, which introduces bias into model training and causes a decrease in the accuracy of federated learning applications. In this paper, we demonstrate that imbalanced distributed training data causes accuracy degradation in FL. To counter this problem, we build a self-balancing federated learning framework called Astraea, which alleviates the imbalances by 1) global data distribution based data augmentation, and 2) mediator-based multi-client rescheduling. The proposed framework relieves global imbalance through runtime data augmentation and, to mitigate local imbalance, creates a mediator that reschedules the training of clients based on the Kullback-Leibler divergence (KLD) of their data distributions. Compared with FedAvg, the state-of-the-art FL algorithm, Astraea shows +5.59% and +5.89% improvement in top-1 accuracy on the imbalanced EMNIST and imbalanced CINIC-10 datasets, respectively. Meanwhile, the communication traffic of Astraea can be 82% lower than that of FedAvg.
Comment: Published as a conference paper at the IEEE 37th International Conference on Computer Design (ICCD) 201
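The KLD-based rescheduling signal described above compares each client's label distribution with the global one; a more skewed client diverges more and is a stronger candidate for rebalancing. A minimal sketch, with invented four-class distributions for illustration:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete label distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical label distributions over 4 classes.
global_dist = [0.25, 0.25, 0.25, 0.25]  # balanced global distribution
client_a    = [0.70, 0.10, 0.10, 0.10]  # heavily skewed client
client_b    = [0.30, 0.20, 0.25, 0.25]  # nearly balanced client

kld_a = kl_divergence(client_a, global_dist)
kld_b = kl_divergence(client_b, global_dist)
print(kld_a > kld_b)  # the skewed client diverges more → True
```

A mediator in the spirit of Astraea could then group clients so that the combined distribution of each training group stays close (in KLD terms) to the global distribution.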
Reinforcement learning of adaptive online rescheduling timing and computing time allocation
Mathematical optimization methods have been developed for a vast variety of complex problems in the field of process systems engineering (e.g., the scheduling of chemical batch processes). However, the use of these methods in online scheduling is hindered by the stochastic nature of the processes and prohibitively long solution times when optimizing over long time horizons. The following questions are raised: when to trigger a rescheduling, how many computing resources to allocate, what optimization strategy to use, and how far ahead to schedule? We propose an approach in which a reinforcement learning agent is trained to make the first two decisions (i.e., rescheduling timing and computing time allocation). Using neuroevolution of augmenting topologies (NEAT) as the reinforcement learning algorithm, the approach yields, on average, better closed-loop solutions than conventional rescheduling methods on three out of the four studied routing problems. We also reflect on expanding the agent's decision-making to all four decisions. (C) 2020 Elsevier Ltd. All rights reserved. Peer reviewed.
Hierarchical workflow management system for life science applications
In modern laboratories, an increasing number of automated stations and instruments are deployed as standalone automated systems, such as biological high-throughput screening systems and chemical parallel reactors. At the same time, mobile robot transportation solutions are becoming popular with the development of robotic technologies. In this dissertation, a new superordinate control system, called hierarchical workflow management system (HWMS), is presented to manage and handle both automated laboratory systems and logistics systems.
Reinforcement Learning for Scalable Train Timetable Rescheduling with Graph Representation
Train timetable rescheduling (TTR) aims to promptly restore the original operation of trains after unexpected disturbances or disruptions. Currently, this work is still done manually by train dispatchers, for whom it is challenging to maintain performance across various problem instances. To mitigate this issue, this study proposes a reinforcement learning-based approach to TTR, which makes the following contributions compared to existing work. First, we design a simple directed graph to represent the TTR problem, enabling the automatic extraction of informative states through graph neural networks. Second, we reformulate the construction process of TTR's solution, not only decoupling the decision model from the problem size but also ensuring the feasibility of the generated scheme. Third, we design a learning curriculum for our model to handle scenarios with different levels of delay. Finally, a simple local search method is proposed to assist the learned decision model, which can significantly improve solution quality at little additional computation cost, further enhancing the practical value of our method. Extensive experimental results demonstrate the effectiveness of our method. The learned decision model achieves better performance than handcrafted rules and state-of-the-art solvers on various problems with varying degrees of train delay and at different scales.
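The directed-graph representation mentioned above can be sketched with nodes as train events (train, station) and edges as precedence constraints; a topological order over such a graph then yields one feasible sequencing of events. The trains, stations, and constraints below are invented for illustration only:

```python
from collections import defaultdict, deque

# Edges encode precedence: a train's own route order, plus
# track-occupation order between trains at a shared station.
edges = [
    (("T1", "A"), ("T1", "B")),  # T1 runs A -> B
    (("T2", "A"), ("T2", "B")),  # T2 runs A -> B
    (("T1", "A"), ("T2", "A")),  # T1 departs station A before T2
]

graph = defaultdict(list)
indeg = defaultdict(int)
nodes = set()
for u, v in edges:
    graph[u].append(v)
    indeg[v] += 1
    nodes.update([u, v])

# Kahn's algorithm: a topological order is one feasible event sequence.
frontier = deque(sorted(n for n in nodes if indeg[n] == 0))
order = []
while frontier:
    u = frontier.popleft()
    order.append(u)
    for v in graph[u]:
        indeg[v] -= 1
        if indeg[v] == 0:
            frontier.append(v)

print(order[0])  # ('T1', 'A') has no predecessors, so it comes first
```

In the learned approach described above, a graph neural network would read states off such a graph, and the decision model would choose the ordering instead of a fixed topological sort.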