Demand fulfillment in customer hierarchies with stochastic demand
Supply scarcity, due to demand or supply fluctuations, is a common issue in make-to-stock production systems. To increase profits when customers are heterogeneous, firms need to decide whether to accept a customer order or reject it in anticipation of more profitable orders, and if accepted, which supplies to use in order to fulfill the order. Such issues are addressed by solving demand fulfillment problems. In order to provide a solution, firms commonly divide their customers into different segments, based on their respective profitability. The available supply is first allocated to the customer segments based on their projected demand information. Then, as customer orders materialize, the allocated quotas are consumed. The customer segments commonly have a multilevel hierarchical structure, which reflects the structure of the sales organization. In this thesis, we study the demand fulfillment problem in make-to-stock production systems, considering such customer hierarchies with stochastic demand.
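The allocate-then-consume mechanism described above can be illustrated with a minimal sketch. All segment names, profits, and demand figures below are invented for illustration, and the greedy profit-ranked allocation is a deliberately simple stand-in, not the thesis's actual allocation method.

```python
# Toy sketch of the allocate-then-consume mechanism (hypothetical data).

def allocate(supply, segments):
    """Greedily reserve quota for segments in descending order of unit profit."""
    quotas = {}
    for name, profit, forecast in sorted(segments, key=lambda s: -s[1]):
        quotas[name] = min(forecast, supply)
        supply -= quotas[name]
    return quotas

def consume(quotas, order_segment, quantity):
    """Fill an incoming order strictly from its own segment's quota."""
    filled = min(quantity, quotas[order_segment])
    quotas[order_segment] -= filled
    return filled

segments = [("gold", 10.0, 40), ("silver", 6.0, 50), ("bronze", 3.0, 60)]
quotas = allocate(100, segments)
print(quotas)                          # gold gets 40, silver 50, bronze 10
print(consume(quotas, "silver", 30))   # fills 30 from silver's quota
```

Supply is scarce here (100 units against 150 units of forecast demand), so the lowest-profit segment absorbs the shortage; the whole difficulty of the thesis lies in doing this well when demand is stochastic and the hierarchy is decentralized.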
In the hierarchical setting, the available supply is allocated level by level from top to bottom of the hierarchy by multiple planners on different levels. The planners on higher levels of the hierarchy need to make their allocation decisions based on aggregated information, since transmitting all detailed demand information from the bottom to the top of the hierarchy is not generally feasible. In practice, simplistic rules of thumb are applied to deal with this decentralized problem, which lead to sub-optimal results. We aim to provide more effective approaches that result in near-optimal solutions to this decentralized problem.
We first consider the single-period problem with a single supply replenishment and focus on identifying critical information for good, decentralized allocation decisions. We propose two decentralized allocation methods, namely a stochastic Theil index approximation and a clustering approach, which provide near-optimal results even for large, complicated hierarchies. Both methods transmit aggregated information about profit heterogeneity and demand uncertainty in the hierarchy, which is missing in the current simplistic rules.
Subsequently, we expand our analysis to a multi-period setting, in which periodic supply replenishments are considered and periods are interconnected by inventory or backlog. In each period, multiple orders may arrive from multiple customer segments. We first formalize the centralized problem as a two-stage stochastic dynamic program. Due to the curse of dimensionality, the problem is computationally intractable, so we propose an approximate dynamic programming heuristic. For the decentralized case, we modify our proposed clustering method to fit the multi-period setting, relying on the approximate dynamic programming heuristic. Our results show that the proposed heuristics lead to profits very close to the ex-post optimal solution for both the centralized and decentralized problems.
Finally, we look into the order promising stage and compare different consumption functions, namely partitioned, rule-based nested, and bid price methods. Our results show that nesting leads to performance improvements compared to partitioned consumption.
However, for decentralized problems, the improvement resulting from nesting cannot mitigate the profit loss from considerable mis-allocations made by simplistic rules, except for cases with high demand uncertainty or low profit heterogeneity. Moreover, among the nested consumption functions, the bid price approach, which integrates the allocation and consumption stages, leads to a higher performance than the rule-based consumption methods.
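The contrast between partitioned and rule-based nested consumption can be sketched as follows. The segment names, quotas, and priority ordering are hypothetical, and this standard top-down nesting rule is only one plausible reading of the rule-based nested functions compared in the thesis.

```python
# Partitioned vs. nested quota consumption (hypothetical segments and quotas).

def partitioned(quotas, seg, qty):
    """An order may only consume its own segment's quota."""
    filled = min(qty, quotas[seg])
    quotas[seg] -= filled
    return filled

def nested(quotas, seg, qty, priority):
    """An order consumes its own quota first, then lower-priority quotas."""
    filled = 0
    for s in priority[priority.index(seg):]:
        take = min(qty - filled, quotas[s])
        quotas[s] -= take
        filled += take
        if filled == qty:
            break
    return filled

priority = ["gold", "silver", "bronze"]   # most to least profitable
q = {"gold": 10, "silver": 5, "bronze": 20}
print(nested(q, "gold", 18, priority))    # 10 gold + 5 silver + 3 bronze = 18
```

Under partitioning the same gold order would be capped at 10 units; nesting lets a high-profit order spill into lower-priority quotas, which is exactly why it can partially compensate for an imperfect allocation.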
Altogether, our proposed decentralized methods lead to drastic profit improvements compared to the current simplistic rules for demand fulfillment in customer hierarchies, except for cases with very low shortage or for largely homogeneous customers, where simplistic rules perform similarly well. Applying our advanced methods is especially important when the shortage rate is high or customers are more heterogeneous. Regarding order promising, nesting is more crucial when demand uncertainty is high.
The research presented in this thesis was undertaken as part of the project "Demand fulfillment in customer hierarchies". It was funded by the German Research Foundation (DFG) under grant FL738/2-1.
Early Neanderthal social and behavioural complexity during the Purfleet Interglacial: handaxes in the latest Lower Palaeolithic.
Only a handful of 'flagship' sites from the Purfleet Interglacial (Marine Isotope Stage 9, c. 350–290,000 years ago) have been properly examined, but the archaeological succession at the proposed type-site at Purfleet suggests a period of complexity and transition, with three techno-cultural groups represented in Britain. The first was a simple toolkit lacking handaxes (the Clactonian), and the last a more sophisticated technology presaging the coming Middle Palaeolithic (simple prepared core or proto-Levallois technology). Sandwiched between were Acheulean groups, whose handaxes comprise the great majority of the extant archaeological record of the period; these are the focus of this study. It has previously been suggested that some features of the Acheulean in the Purfleet Interglacial were chronologically restricted, particularly the co-occurrence of ficrons and cleavers. These distinctive forms may have exceeded pure functionality and were perhaps imbued with a deeper social and cultural meaning. This study supports both the previously suggested preference for narrow, pointed morphologies and the chronologically restricted pairing of ficrons and cleavers. By drawing on a wide spatial and temporal range of sites, these patterns could be identified beyond the handful of 'flagship' sites previously studied. Hypertrophic 'giants' have now also been identified as a chronologically restricted form. Greater metrical variability was found than had been anticipated, leading to the creation of two new sub-groups (IA and IB), which are tentatively suggested to represent spatial and perhaps temporal patterning. The picture in the far west of Britain remains unclear, but the possibility of different Acheulean groups operating in the Solent area, and a late survival of the Acheulean, are both suggested. Handaxes with backing and macroscopic asymmetry may represent prehensile or ergonomic considerations not commonly found on handaxes from earlier interglacial periods. It is argued that these forms anticipate similar developments in the Late Middle Palaeolithic in an example of convergent evolution.
AIUCD 2022 - Proceedings
The eleventh edition of the National Conference of the AIUCD (Associazione di Informatica Umanistica) is titled Culture digitali. Intersezioni: filosofia, arti, media (Digital Cultures. Intersections: Philosophy, Arts, Media). The title explicitly calls for methodological and theoretical reflection on the interrelation between digital technologies, information science, philosophical disciplines, the world of the arts, and cultural studies.
Graphical scaffolding for the learning of data wrangling APIs
In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges, characterised by on-the-fly syntax lookup and code example integration, it also presents opportunities. One such opportunity is that tabular data structures are easily visualised. To leverage the inherent visualisability of data wrangling, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, and controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, and indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas and how they fit into a wider research programme on data wrangling instruction.
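A typical instance of the reshaping task the dissertation studies is turning "wide" tabular data into "long" (tidy) format. The sketch below uses plain Python with invented data; in practice students would reach for a library operation such as pandas' `melt` or tidyr's `pivot_longer`.

```python
# Wide-to-long reshaping, the canonical data-wrangling exercise
# (hypothetical student/score data).

wide = [
    {"student": "ana", "maths": 90, "physics": 80},
    {"student": "ben", "maths": 70, "physics": 85},
]

def melt(rows, id_col, value_cols):
    """Emit one long row per (identifier, variable, value) triple."""
    return [
        {id_col: r[id_col], "variable": c, "value": r[c]}
        for r in rows
        for c in value_cols
    ]

long = melt(wide, "student", ["maths", "physics"])
print(long[0])  # {'student': 'ana', 'variable': 'maths', 'value': 90}
```

Because both the input and output are small tables, every step of such a transformation can be drawn, which is precisely the visualisability the dissertation's graphics exploit.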
Network Geometry
Networks are finite metric spaces, with distances defined by the shortest paths between nodes. However, this is not the only form of network geometry: two others are the geometry of latent spaces underlying many networks and the effective geometry induced by dynamical processes in networks. These three approaches to network geometry are intimately related, and all three of them have been found to be exceptionally efficient in discovering fractality, scale invariance, self-similarity and other forms of fundamental symmetries in networks. Network geometry is also of great use in a variety of practical applications, from understanding how the brain works to routing in the Internet. We review the most important theoretical and practical developments dealing with these approaches to network geometry and offer perspectives on future research directions and challenges in this frontier in the study of complexity
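The first notion of network geometry above, a graph as a finite metric space under shortest-path distance, can be made concrete with a few lines of code. The path graph used here is an arbitrary toy example.

```python
from collections import deque

# Shortest-path (hop) distance turns a graph into a finite metric space
# (toy path graph a - b - c - d).

edges = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

def dist(u, v):
    """Breadth-first search yields the shortest-path distance from u to v."""
    seen, frontier = {u: 0}, deque([u])
    while frontier:
        x = frontier.popleft()
        for y in edges[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                frontier.append(y)
    return seen[v]

# The metric axioms hold: symmetry and the triangle inequality.
assert dist("a", "d") == dist("d", "a") == 3
assert dist("a", "d") <= dist("a", "c") + dist("c", "d")
```

The latent-space and effective geometries discussed in the review replace this hop metric with distances in a hidden space or distances induced by dynamics, but the metric-space viewpoint is the common foundation.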
Automating C++ Execution Exploration to Solve the Out-of-thin-air Problem
Modern computers are marvels of engineering: customisable reasoning engines that can be programmed to complete complex mathematical tasks at incredible speed. Decades of engineering have taken computers from room-sized machines to near-invisible devices in all aspects of life. With this engineering has come more complex and ornate design, a substantial leap forward being multiprocessing. Modern processors can execute threads of program logic in parallel, coordinating shared resources like memory and device access. Parallel computation leads to significant scaling of compute power, but yields a substantial complexity cost for both processor designers and programmers. Parallel access to shared memory requires coordination on which thread can use a particular fragment of memory at a given time. Simple mechanisms like locks and mutexes, which ensure only one thread at a time can access memory, give an easy-to-use programming model, but they eschew the benefits of parallel computation. Instead, processors today have complex mechanisms to permit concurrent shared-memory access. These mechanisms prevent simple programmer reasoning and require complex formal descriptions to define: memory models. Early memory model research focused on weak memory behaviours which are observable because of hardware design; over time it has become obvious that not only hardware but also compilers are capable of making new weak behaviours observable. Substantial and rapid success has been achieved formalising the behaviour of these machines: researchers refined new specifications for shared-memory concurrency and used mechanisation to automate validation of their models. As the models were refined and new behaviours of the hardware were discovered, researchers also began working with processor vendors, helping to inform design choices in new processor designs to keep the weak behaviours within some sensible bounds.
Unfortunately, when reasoning about shared memory accesses of highly optimised programming languages like C and C++, deep questions are still left open about how best to describe the behaviour of shared memory accesses in the presence of dependency-removing compiler optimisations. Until very recently it has not been possible to properly specify the behaviours of these programs without forbidding optimisations which are used and observable, or allowing program behaviours which are nonsense and never observable. In this thesis I explore the development of memory models through the lens of tooling: taking at first an industrial approach, and then exploring memory models for highly optimised programming languages. I show that taming the complexity of these models with automated tools aids bug finding even where formal evaluation has not. Further, building tools creates a focus on the computational complexity of the memory model, which in turn can steer development of the model towards simpler designs. We look at three case studies. The first is an industrial hardware model of NVIDIA GPUs which we extend to encompass more hardware features than before. This extension was validated using an automated testing process generating tests of finite size, and then verified against the original memory model in Coq. The second case study is an exploration of the first memory model for an optimised programming language which takes proper account of dependencies. We build a tool to automate execution of this model over a series of tests, and in the process discovered subtleties in the definitions which were unexpected, leading to refinement of the model. In the final case study, we develop a memory model that gives a direct definition for compiler-preserved dependencies. This model is the first that can be integrated with relative ease into the C/C++ programming language standard.
We built this model alongside its own tooling, yielding a fast tool for giving determinations on a large number of litmus tests, a novelty for this sort of memory model. This model fits well with the existing C/C++ specifications, and we are working with the International Standards Organisation to understand how best to fit this model into the standard.
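The idea of automating execution exploration over litmus tests can be sketched in miniature. The toy explorer below enumerates only sequentially consistent interleavings of the classic store-buffering litmus test; it is purely illustrative and is not the thesis's tooling, which handles far richer (weak, dependency-aware) models.

```python
from itertools import permutations

# Store-buffering litmus test, explored exhaustively under sequential
# consistency:
#   Thread 0: x = 1; r0 = y        Thread 1: y = 1; r1 = x

OPS = ["w_x", "r_y", "w_y", "r_x"]

def interleavings():
    """All total orders of the four ops respecting each thread's program order."""
    for perm in permutations(OPS):
        if perm.index("w_x") < perm.index("r_y") and \
           perm.index("w_y") < perm.index("r_x"):
            yield perm

def run(seq):
    mem, regs = {"x": 0, "y": 0}, {}
    for op in seq:
        if op == "w_x":
            mem["x"] = 1
        elif op == "w_y":
            mem["y"] = 1
        elif op == "r_y":
            regs["r0"] = mem["y"]
        elif op == "r_x":
            regs["r1"] = mem["x"]
    return (regs["r0"], regs["r1"])

outcomes = {run(s) for s in interleavings()}
# Under sequential consistency, r0 == r1 == 0 never appears; weak hardware
# (e.g. store buffers) and compiler optimisations can make exactly that
# outcome observable, which is what real memory models must account for.
print(sorted(outcomes))
```

Real litmus-test tools face the same shape of problem at vastly larger scale, which is why the computational complexity of the model matters so much for tooling.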
Development of Evidence Based Factors to Enhance Safety Behaviour in Oil and Gas Industry in the Niger Delta Region of Nigeria
Nigeria's economy has benefited enormously from its oil and gas sector. However, despite the economic impact the sector brings to the nation, there are still safety and health concerns among its employees. Unsafe behaviour, mainly influenced by poor safety culture, escalates the risk of injuries and accidents at workplaces and requires proper management. This study aimed to evaluate safety behaviour and its impact on safety performance among Nigerian oil and gas workers. To achieve this aim, both quantitative and qualitative research methods were adopted. Validated questionnaires used in studies 1 and 2 were distributed via the JISC Survey (Bristol Online Survey) platform and by self-administration. For the first study, 462 frontline employees participated in the survey. Structural equation modelling was performed on the data set, and the analysis found that safety management practices had a significant relationship with safety participation (p 0.109).
A quantitative survey was undertaken to achieve the objectives of the second study. By the end of the survey, 1004 frontline workers drawn from 14 oil and gas companies had taken part. Data were analysed using structural equation modelling, and the findings revealed that management commitment, safety communication, safety motivation, and employee involvement each had a significant relationship (p < 0.005) with both safety compliance and safety participation, except for safety training and safety participation, where no relationship was established.
Qualitative face-to-face interviews were conducted with twelve employees (both management and frontline) drawn from the industry. Thematic analysis of the interview data revealed gaps around the different elements of safety culture in the industry and how they impact the safety performance of employees.
Given the paucity of data within the study area, this work provides up-to-date evidence on the impact of safety culture on worker behaviour in the Nigerian oil and gas industry. In addition, the study is considered timely and relevant. Based on a review of the available literature, it addresses the research gap within the region and advances tangible recommendations around the establishment of safety leading indicators for employees' safety performance in the Nigerian oil and gas industry.
The study concludes that timely strengthening of workplace safety culture and a visible management approach to safety are important for occupational safety improvement in the industry. In addition, there is a need for policy realignment to aid the promotion of safety culture and adherence to safety standards within the industry, as practiced in other countries.
Anytime algorithms for ROBDD symmetry detection and approximation
Reduced Ordered Binary Decision Diagrams (ROBDDs) provide a dense and memory efficient representation of Boolean functions. When ROBDDs are applied in logic synthesis, the problem arises of detecting both classical and generalised symmetries. State-of-the-art in symmetry detection is represented by Mishchenko's algorithm. Mishchenko showed how to detect symmetries in ROBDDs without the need for checking equivalence of all co-factor pairs. This work resulted in a practical algorithm for detecting all classical symmetries in an ROBDD in O(|G|³) set operations where |G| is the number of nodes in the ROBDD. Mishchenko and his colleagues subsequently extended the algorithm to find generalised symmetries. The extended algorithm retains the same asymptotic complexity for each type of generalised symmetry. Both the classical and generalised symmetry detection algorithms are monolithic in the sense that they only return a meaningful answer when they are left to run to completion. In this thesis we present efficient anytime algorithms for detecting both classical and generalised symmetries, that output pairs of symmetric variables until a prescribed time bound is exceeded. These anytime algorithms are complete in that given sufficient time they are guaranteed to find all symmetric pairs. Theoretically these algorithms reside in O(n³+n|G|+|G|³) and O(n³+n²|G|+|G|³) respectively, where n is the number of variables, so that in practice the advantage of anytime generality is not gained at the expense of efficiency. In fact, the anytime approach requires only very modest data structure support and offers unique opportunities for optimisation so the resulting algorithms are very efficient. The thesis continues by considering another class of anytime algorithms for ROBDDs that is motivated by the dearth of work on approximating ROBDDs. The need for approximation arises because many ROBDD operations result in an ROBDD whose size is quadratic in the size of the inputs. 
Furthermore, if ROBDDs are used in abstract interpretation, the running time of the analysis is related not only to the complexity of the individual ROBDD operations but also the number of operations applied. The number of operations is, in turn, constrained by the number of times a Boolean function can be weakened before stability is achieved. This thesis proposes a widening that can be used to both constrain the size of an ROBDD and also ensure that the number of times that it is weakened is bounded by some given constant. The widening can be used to either systematically approximate an ROBDD from above (i.e. derive a weaker function) or below (i.e. infer a stronger function). The thesis also considers how randomised techniques may be deployed to improve the speed of computing an approximation by avoiding potentially expensive ROBDD manipulation
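The cofactor-based symmetry test that underlies the detection algorithms can be illustrated at small scale: f is classically symmetric in x_i and x_j iff the cofactor with x_i = 1, x_j = 0 equals the cofactor with x_i = 0, x_j = 1. The sketch below checks this over truth tables rather than ROBDDs, so it is exponential in n and purely illustrative of the property, not of the thesis's algorithms.

```python
from itertools import product

# Classical symmetry via cofactors, checked over truth tables
# (toy functions; real implementations work on ROBDDs).

def symmetric(f, n, i, j):
    """True iff f(..., x_i=1, x_j=0, ...) == f(..., x_i=0, x_j=1, ...) always."""
    for bits in product([0, 1], repeat=n):
        if bits[i] == 1 and bits[j] == 0:
            swapped = list(bits)
            swapped[i], swapped[j] = 0, 1
            if f(bits) != f(tuple(swapped)):
                return False
    return True

maj = lambda b: int(sum(b) >= 2)              # majority: symmetric in every pair
gt = lambda b: int(b[0] == 1 and b[1] == 0)   # not symmetric in (x0, x1)
print(symmetric(maj, 3, 0, 1))  # True
print(symmetric(gt, 3, 0, 1))   # False
```

An anytime detector in the spirit of the thesis would emit each symmetric pair as soon as it is confirmed, so that partial output is available if the time bound expires before all pairs have been checked.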