13,210 research outputs found

    Rolling Horizon Workforce Schedule to Assign Workers to Tasks in a Learning/Forgetting Environment

    Get PDF
    The variability in demand across the planning horizon and the presence of heterogeneous workforces, where workers learn and forget at different rates, make building and managing a workforce challenging. When learning and forgetting functions are integrated into workforce scheduling, a worker's previous experience on a task can have a significant impact on productivity. While making assignments over an infinite planning horizon is ideal, the learning/forgetting function significantly increases problem complexity and solution difficulty as the planning horizon lengthens. In this thesis, a multi-period rolling-horizon worker-task assignment framework is developed to overcome the computational challenges associated with longer planning horizons. The non-linear learning/forgetting function is converted into an equivalent linear form (using an existing technique) to further reduce problem complexity. We design experiments to analyze the optimal planning horizon and the factors that affect it, questions that remain unanswered in the literature. After testing the model under different scenarios (varying staffing level, variation in demand, learning rate, forgetting rate, and workforce heterogeneity), we conclude that variation in demand and staffing level are the most significant factors in determining the optimal planning horizon. We also see a significant improvement in performance when comparing our proposed multi-period framework against a myopic model, especially in scenarios with higher workforce heterogeneity, higher variation in demand, and faster forgetting rates.
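    The abstract does not give the model itself, so the sketch below only illustrates the rolling-horizon mechanics it describes: plan over a short look-ahead window each period, commit only the first period, then roll forward. The greedy planner and the learn-by-one / forget-by-decay experience update are hypothetical stand-ins for the thesis's optimization model and learning/forgetting functions.

```python
def plan_window(experience, horizon, forget):
    """Greedily staff each period in the look-ahead window, simulating
    learning (+1 experience per worked period) and forgetting (decay when
    idle). A greedy stand-in for an optimization-based planner."""
    exp = list(experience)
    plan = []
    for demand_t in horizon:
        ranked = sorted(range(len(exp)), key=lambda w: -exp[w])
        chosen = set(ranked[:demand_t])           # most experienced first
        plan.append(sorted(chosen))
        exp = [e + 1.0 if w in chosen else e * forget for w, e in enumerate(exp)]
    return plan

def rolling_horizon_assign(demand, n_workers, window=3, forget=0.5):
    """Plan over a `window`-period look-ahead each period, but commit only
    the plan's first period before rolling the horizon forward."""
    exp = [0.0] * n_workers
    schedule = []
    for t in range(len(demand)):
        committed = plan_window(exp, demand[t:t + window], forget)[0]
        schedule.append(committed)
        exp = [e + 1.0 if w in committed else e * forget for w, e in enumerate(exp)]
    return schedule

plan = rolling_horizon_assign([2, 1, 3, 2], n_workers=4)
```

    Each committed period staffs exactly the demanded number of workers, while the look-ahead lets the planner see upcoming demand before committing.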

    When Does a Newcomer Contribute to a Better Performance? A Multi-Agent Study on Self-Organising Processes of Task Allocation

    Get PDF
    This paper describes how a work group and a newcomer mutually adapt. We study two types of simulated groups that need an extra worker: one because a former employee has left the group, and one because of its workload. For both groups, we test three conditions: newcomers being specialists, newcomers being generalists, and a control condition with no newcomer. We hypothesise that the group that needs an extra worker because of its workload will perform best with a generalist newcomer, while the group that needs an extra worker because a former employee has left will perform better with a specialist newcomer. We study the development of task allocation and performance, with expertise and motivation as process variables. We use two performance indicators: the performance time of the slowest agent, which indicates the speed of the group, and the sum of the performance of all agents, which indicates labour costs. Both are indicative of the potential benefit of the newcomer. Strictly speaking, the results support our hypotheses, although the differences between the groups with generalists and specialists are negligible. What really mattered was the possibility for a newcomer to fit in.
    Keywords: Task Allocation, Group Processes, Psychological Theory, Small Groups, Self-Organisation
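    The two indicators reduce to a maximum and a sum over per-agent performance times; a minimal illustration with hypothetical times:

```python
def group_indicators(agent_times):
    """Speed of the group = performance time of the slowest agent (a
    makespan); labour cost = sum of all agents' performance times."""
    return max(agent_times), sum(agent_times)

speed, cost = group_indicators([4.0, 6.5, 5.0])  # -> (6.5, 15.5)
```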

    Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents

    Full text link
    Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but much faster (e.g. hours vs. days) because they parallelize better. However, many RL problems require directed exploration because their reward functions are sparse or deceptive (i.e. contain local optima), and it is unknown how to encourage such exploration with ES. Here we show that algorithms invented to promote directed exploration in small-scale evolved neural networks via populations of exploring agents, specifically novelty search (NS) and quality diversity (QD) algorithms, can be hybridized with ES to improve its performance on sparse or deceptive deep RL tasks while retaining scalability. Our experiments confirm that the resulting new algorithms, NS-ES and two QD algorithms, NSR-ES and NSRA-ES, avoid local optima encountered by ES and achieve higher performance on Atari games and simulated robots learning to walk around a deceptive trap. This paper thus introduces a family of fast, scalable RL algorithms capable of directed exploration. It also adds this new family of exploration algorithms to the RL toolbox and raises the interesting possibility that analogous algorithms with multiple simultaneous paths of exploration might also combine well with existing RL algorithms outside ES.

    νŒŒλΌλ―Έν„° ν•™μŠ΅ ν†΅ν•œ 데이터 작음 및 간섭극볡 연ꡬ

    Get PDF
    Doctoral dissertation (Ph.D.), Seoul National University, Department of Electrical and Computer Engineering, February 2021. Advisor: 정ꡐ민.
    Data-driven approaches based on neural networks have emerged as a new paradigm for solving problems in computer vision and natural language processing. These approaches achieve better performance than hand-designed heuristic approaches, but the gains rely heavily on a large amount of high-quality labeled data. Accordingly, to train a model well it is important both to collect a large amount of data and to identify and mitigate the factors that degrade its quality. This dissertation addresses two such factors: noise in labeled data, and interference between data samples. Researchers generally collect labeled data through web-based crowdsourcing platforms that aggregate answers from many human annotators. However, annotators' responses may vary significantly due to misconceptions about task instructions, lack of responsibility, and inherent error, so the collected input-target pairs are noisy. To relieve this noise, I propose novel inference algorithms for discrete multiple-choice tasks and for real-valued vector regression tasks. The proposed algorithms model the crowdsourcing system as a graphical model and estimate the true answer of each task and the reliability of each worker by iteratively exchanging two types of messages between tasks and workers.
    For a performance guarantee, the average performance of the algorithms is analyzed under a probabilistic crowd model. The resulting error bounds depend on the number of queries per task and the average reliability of the workers; once the average reliability exceeds a certain level, the average performance of the proposed algorithms converges to that of an oracle estimator that knows the reliability of every worker (the theoretical limit). Extensive experiments on both real-world and synthetic datasets verify that the practical performance of the algorithms is superior to previous state-of-the-art algorithms. Second, when a model learns a sequence of tasks one by one (continual learning), previously learned knowledge may conflict with new knowledge, a well-known phenomenon called "Catastrophic Forgetting" or "Semantic Drift". In this dissertation the phenomenon is called "Interference", since it occurs between two bodies of knowledge drawn from labeled data separated in time. To address this interference, a homeostasis-inspired meta learning architecture (HM) is proposed. The HM automatically controls the intensity of regularization (IoR) by capturing parameters that were important for previous tasks and the current learning direction. By adjusting the IoR, a learner can balance the amount of interference against the degrees of freedom available for its current learning, losing as little previously acquired knowledge as possible. Experimental results on various types of continual learning tasks show that the proposed method notably outperforms conventional methods in terms of average accuracy and amount of interference, and that HM is relatively stable and robust compared to existing synaptic-plasticity-based methods.
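    The dissertation's actual message-passing updates are not given in this record; the sketch below substitutes a simple iterative scheme of the same shape for binary tasks, alternating reliability-weighted answer estimates with agreement-based worker reliabilities (in the spirit of a one-coin Dawid-Skene model). Names and data are invented for illustration.

```python
import math

def aggregate(answers, n_iters=5):
    """answers: dict mapping (task, worker) -> label in {-1, +1}.
    Alternately (i) re-estimate each task's answer by a reliability-weighted
    vote and (ii) re-estimate each worker's reliability from agreement with
    the current answers."""
    tasks = sorted({t for t, _ in answers})
    workers = sorted({w for _, w in answers})
    weight = {w: 1.0 for w in workers}
    labels = {}
    for _ in range(n_iters):
        for t in tasks:
            vote = sum(a * weight[w] for (t2, w), a in answers.items() if t2 == t)
            labels[t] = 1 if vote >= 0 else -1
        for w in workers:
            votes = [(labels[t], a) for (t, w2), a in answers.items() if w2 == w]
            p = sum(l == a for l, a in votes) / len(votes)
            p = min(max(p, 0.01), 0.99)        # keep the log-odds finite
            weight[w] = math.log(p / (1 - p))  # log-odds reliability weight
    return labels, weight

answers = {(0, "ann"): 1, (1, "ann"): -1, (2, "ann"): 1,
           (0, "bob"): 1, (1, "bob"): -1, (2, "bob"): 1,
           (0, "eve"): -1, (1, "eve"): -1, (2, "eve"): 1}
labels, weight = aggregate(answers)
```

    The unreliable worker ("eve", who disagrees with the majority on task 0) ends up with a lower weight, so her future answers count for less, which is the mechanism the oracle estimator idealizes.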
    Interestingly, the IoR generated by HM appears to be proactively controlled within a certain range, which resembles the negative feedback mechanism of homeostasis in synapses.
    Contents: Abstract; List of Tables; List of Figures
    1 Introduction
    2 Reliable multiple-choice iterative algorithm for crowdsourcing systems
      2.1 Setup; 2.2 Algorithm (2.2.1 Task Allocation, 2.2.2 Multiple Iterative Algorithm, 2.2.3 Task Allocation for General Setting); 2.3 Applications; 2.4 Analysis of algorithms (2.4.1 Quality of workers, 2.4.2 Bound on the Average Error Probability, 2.4.3 Proof of Theorem 1, 2.4.4 Proof of Sub-Gaussianity); 2.5 Experiments; 2.6 Related Literature
    3 Reliable Aggregation Method for Vector Regression in Crowdsourcing
      3.1 Preliminaries; 3.2 Inference Algorithm (3.2.1 Task Message, 3.2.2 Worker Message); 3.3 Experimental Results (3.3.1 Real crowdsourcing data); 3.4 Performance Analysis (3.4.1 Dirichlet crowd model, 3.4.2 Error Bound, 3.4.3 Optimality of Oracle Estimator, 3.4.4 Performance Proofs); 3.5 Related Literature
    4 Homeostasis-Inspired Meta Continual Learning
      4.1 Preliminaries (4.1.1 Continual Learning, 4.1.2 Meta Learning); 4.2 Homeostatic Meta-Model; 4.3 Preliminary Experiments and Findings (4.3.1 Block-wise Permutation, 4.3.2 Performance Metrics); 4.4 Experiment (4.4.1 Datasets, 4.4.2 Methods, 4.4.3 Overall Performance); 4.5 Related Literature; 4.6 Discussion
    5 Conclusion
    Abstract (In Korean)
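    As a toy illustration of the intensity-of-regularisation (IoR) mechanism described above: the quadratic anchor penalty is EWC-style, and the set point, gain, and negative-feedback update rule are invented here for illustration, since the record does not specify the meta-model.

```python
def anchor_penalty(params, anchor, importance, ior):
    """Quadratic pull toward the previous task's parameters, weighted
    per-parameter by importance and scaled globally by the IoR."""
    return ior * sum(f * (p - a) ** 2
                     for p, a, f in zip(params, anchor, importance))

def homeostatic_ior(ior, drift, set_point=1.0, gain=0.5):
    """Negative-feedback update: raise the IoR when parameter drift exceeds
    the set point, relax it when drift is small (the homeostasis analogy)."""
    return max(0.0, ior + gain * (drift - set_point))

pen = anchor_penalty([1.0, 2.0], [0.0, 2.0], [1.0, 0.5], ior=2.0)  # -> 2.0
```

    Adding such a penalty to the new task's loss discourages movement of important parameters, while the feedback rule keeps the penalty's strength inside a working range instead of fixing it by hand.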

    Worker scheduling with induced learning in a semi-on-line setting

    Get PDF
    Scheduling is a widely researched area with many interesting subfields. The presented research deals with a maintenance setting in which preventative maintenance and emergency jobs enter the system. Each job has a varying processing time and must be scheduled. Through learning, the operators are able to expand their knowledge, which enables them to accomplish more tasks in a limited time. Two MINLP models are presented, one for preventative maintenance jobs alone and another that also includes emergency jobs. The emergency model is semi-on-line because arrival times are unknown. A corresponding heuristic has also been developed to decrease the computational time of the MINLP models. The models and heuristic were tested in several settings to assess their flexibility. It is demonstrated that including learning greatly improves the efficiency of the workers and of the system.
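    The abstract does not state which learning function the MINLP models use; the sketch below uses a standard Wright-style learning curve as a stand-in to show how induced learning lets an operator finish more jobs within a fixed shift.

```python
import math

def unit_time(t_first, n, rate=0.8):
    """Wright learning curve: each doubling of repetitions multiplies the
    unit processing time by `rate` (an 80% curve by default)."""
    return t_first * n ** math.log(rate, 2)

def jobs_in_shift(t_first, shift_length, rate=0.8):
    """Count how many jobs one operator completes in a shift when induced
    learning shortens each successive job."""
    total, n = 0.0, 0
    while total + unit_time(t_first, n + 1, rate) <= shift_length:
        n += 1
        total += unit_time(t_first, n, rate)
    return n
```

    With a 10-hour first job and a 30-hour shift, the operator fits three jobs (10 + 8 + about 7 hours), where without learning only three 10-hour jobs would exactly exhaust the shift with no slack for a fourth under any demand variation.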

    Allocation of Workers Utilizing Models with Learning, Forgetting, and Various Work Structures

    Get PDF
    Much of the literature on cross-training and worker assignment focuses on simulating production systems under different cross-training methods. Many studies have found that, for specific systems, some methods of allocating workers outperform others in terms of overall productivity and ability to deal with change. This has led researchers to build mathematical programming models aimed at finding optimal levels of cross-training by changing worker allocations. Learning and forgetting curves have been a key means of improving the solutions produced by these optimization models, but learning curves are typically nonlinear, which increases solve times. Because of this, most works have been restricted to modeling small, simple production systems. This thesis studies the expansion of worker allocation models with human learning and forgetting to include variable work structures, allowing the models to address a larger set of problems than previously possible. A worker assignment model with flexible inventory constraints capable of representing different production structures is constructed to demonstrate the expansion. Using a reformulation technique to counteract the increased solve times caused by incorporating learning curves, the production systems modeled in this work are larger than in similar works and closer to the scale of systems seen in industry. Production systems with multiple products and corresponding due dates are modeled to better represent industrial production environments. Investigative tests, including a 2^4 factorial experiment, are included to assess the performance of the model. The output of the optimization model is a schedule of worker assignments over all tasks in the modeled system for the planning horizon.
    Production managers could apply the schedule to their existing lines or run what-if scenarios on line structure to better understand how alternative structures may affect worker training and line productivity over the planning horizon.
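    The abstract does not name the reformulation technique it uses; piecewise-linear approximation is one common device for keeping a nonlinear learning curve inside a mixed-integer linear model, sketched minimally here.

```python
def piecewise_linearize(f, x_min, x_max, n_segments):
    """Breakpoints for a piecewise-linear approximation of a nonlinear
    curve, a standard device for keeping a learning curve inside a
    mixed-integer linear program."""
    xs = [x_min + i * (x_max - x_min) / n_segments
          for i in range(n_segments + 1)]
    return [(x, f(x)) for x in xs]

def interpolate(breakpoints, x):
    """Evaluate the piecewise-linear approximation at x."""
    for (x0, y0), (x1, y1) in zip(breakpoints, breakpoints[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside the linearized range")

bps = piecewise_linearize(lambda x: x * x, 0.0, 4.0, n_segments=4)
```

    In the actual model, the breakpoints become SOS2 or binary-selected segments, so the solver works only with linear constraints while still tracking the curve between breakpoints.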