
    Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    The approach and organization of the study to develop a high-level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

    A Decomposition Approach for the Multi-Modal, Resource-Constrained, Multi-Project Scheduling Problem with Generalized Precedence and Expediting Resources

    The field of project scheduling has received a great deal of study for many years, with a steady evolution of problem complexity and solution methodologies. As solution methodologies and technologies improve, increasingly complex, real-world problems are addressed, presenting researchers with a continuing challenge to find ever more effective means for approaching project scheduling. This dissertation introduces a project scheduling problem which is applicable across a broad spectrum of real-world situations. The problem is based on the well-known Resource-Constrained Project Scheduling Problem, extended to include multiple modes, generalized precedence, and expediting resources. The problem is further extended to include multiple projects which have generalized precedence, renewable and nonrenewable resources, and expediting resources at the program level. The problem presented has not previously been addressed in the literature, nor is it one to which existing specialized project scheduling methodologies can be directly applied. This dissertation presents a decomposition approach for solving the problem, including algorithms for solving the decomposed subproblems and the master problem. It also describes a methodology for generating instances of the new problem, extending the way existing problem generators describe and construct network structures for this class of problem. The methodologies presented are demonstrated through extensive empirical testing.
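    To make the underlying problem concrete, the following sketch implements a plain serial schedule-generation scheme for the classical single-mode RCPSP (finish-to-start precedence, a single renewable resource). It illustrates only the base problem the dissertation extends, not its multi-mode, multi-project decomposition approach, and the task data are hypothetical.

# Minimal serial schedule-generation scheme for the classical single-mode RCPSP.
# Illustrative only: one renewable resource and finish-to-start precedence;
# not the multi-mode, multi-project decomposition described above.

def serial_sgs(durations, demands, preds, capacity):
    """Schedule each task at the earliest precedence- and resource-feasible time."""
    horizon = sum(durations.values()) + 1
    usage = [0] * horizon                      # resource units in use per period
    start, finish = {}, {}
    scheduled = set()
    while len(scheduled) < len(durations):
        # pick any unscheduled task whose predecessors are all scheduled
        task = next(t for t in durations
                    if t not in scheduled and all(p in scheduled for p in preds[t]))
        t0 = max((finish[p] for p in preds[task]), default=0)
        # delay the start until the resource profile can absorb the task
        while any(usage[t] + demands[task] > capacity
                  for t in range(t0, t0 + durations[task])):
            t0 += 1
        for t in range(t0, t0 + durations[task]):
            usage[t] += demands[task]
        start[task], finish[task] = t0, t0 + durations[task]
        scheduled.add(task)
    return start

if __name__ == "__main__":
    durations = {"A": 3, "B": 2, "C": 4, "D": 2}     # hypothetical task data
    demands   = {"A": 2, "B": 3, "C": 2, "D": 1}     # units of the single resource
    preds     = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
    print(serial_sgs(durations, demands, preds, capacity=4))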

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.

    Theory and Engineering of Scheduling Parallel Jobs

    Scheduling is essential for the efficient utilization of modern parallel computing systems. This thesis investigates four main research areas in scheduling: the interplay and distribution of decision makers, efficient schedule computation, efficient scheduling for the memory hierarchy, and energy efficiency. The main result is a provably fast and efficient scheduling algorithm for malleable jobs. Experiments show the importance and the possibilities of scheduling that takes the memory hierarchy into account.
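    As a point of reference for what a malleable job is, the toy simulation below splits a fixed processor pool evenly across unfinished jobs, capped by each job's maximum useful parallelism, and assumes linear speedup. It only illustrates the malleable-job model, not the provably fast and efficient algorithm developed in the thesis, and all job parameters are hypothetical.

# Toy simulation of malleable jobs under an equal-share policy: at every step the
# P processors are split evenly across unfinished jobs, capped by each job's
# maximum useful parallelism, with speedup assumed linear. Illustrative only;
# this is not the scheduling algorithm from the thesis.

def equal_share_makespan(jobs, processors, dt=0.01):
    """jobs: list of (remaining_work, max_procs). Returns an approximate makespan."""
    remaining = [list(job) for job in jobs]
    t = 0.0
    while remaining:
        share = processors / len(remaining)
        for job in remaining:
            job[0] -= min(share, job[1]) * dt   # progress = allotted processors * dt
        remaining = [job for job in remaining if job[0] > 0]
        t += dt
    return t

if __name__ == "__main__":
    # hypothetical jobs: (work, maximum useful processors)
    print(equal_share_makespan([(10, 4), (6, 2), (3, 8)], processors=8))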

    A Framework for Approximate Optimization of BoT Application Deployment in Hybrid Cloud Environment

    We adopt a systematic approach to investigate the efficiency of near-optimal deployment of large-scale, CPU-intensive Bag-of-Tasks applications running on cloud resources with non-proportional cost-to-performance ratios. Our analytical solutions apply whether the running time of the given application is known or unknown, and they optimize users' utility by choosing the most desirable tradeoff between the makespan and the total incurred expense. We propose a schema that provides a near-optimal deployment of a BoT application with respect to users' preferences. Our approach is to provide the user with a set of Pareto-optimal solutions, from which she may select one of the possible scheduling points based on her internal utility function. Our framework can also cope with uncertainty in the tasks' execution time using two methods. First, an estimation method based on Monte Carlo sampling, called the AA algorithm, is presented; it uses the minimum possible number of samples to predict the average task running time. Second, assuming access to code analyzer, code profiling, or estimation tools, a hybrid method is presented that evaluates the accuracy of each estimation tool over certain time intervals to improve resource allocation decisions. We propose approximate deployment strategies that run on a hybrid cloud. In essence, the proposed strategies first determine either an estimated or an exact optimal schema based on the information provided by the user and environmental parameters. Then, we exploit dynamic methods to assign tasks to resources so as to approach the optimal schema as closely as possible, using two methods: a fast yet simple method based on the First Fit Decreasing algorithm, and a more complex approach based on an approximate solution of the problem transformed into a subset sum problem. Extensive experimental results conducted on a hybrid cloud platform confirm that our framework can deliver a near-optimal solution respecting the user's utility function.
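    The abstract mentions a fast assignment method based on the First Fit Decreasing algorithm. The sketch below shows a generic FFD packing of estimated task runtimes onto identical instances so that no instance exceeds a target makespan; it is a simplification that ignores the paper's cost model, Pareto front, and hybrid public/private resources, and the runtimes and target are hypothetical.

# Generic First Fit Decreasing (FFD) sketch: pack estimated task runtimes onto
# instances so that no instance exceeds a target makespan. A simplified stand-in
# for the paper's deployment strategy; data are hypothetical and cost/Pareto
# considerations are omitted.

def first_fit_decreasing(task_runtimes, target_makespan):
    """Return a list of instances, each holding the task runtimes assigned to it."""
    instances = []            # runtimes assigned to each provisioned instance
    loads = []                # current total runtime per instance
    for runtime in sorted(task_runtimes, reverse=True):
        for i, load in enumerate(loads):
            if load + runtime <= target_makespan:
                instances[i].append(runtime)
                loads[i] += runtime
                break
        else:                 # no existing instance fits: provision a new one
            instances.append([runtime])
            loads.append(runtime)
    return instances

if __name__ == "__main__":
    tasks = [7, 5, 4, 4, 3, 2, 2, 1]                  # hypothetical runtimes
    for vm, assigned in enumerate(first_fit_decreasing(tasks, target_makespan=10)):
        print(f"VM {vm}: {assigned} (load {sum(assigned)})")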

    Scalable processing of aggregate functions for data streams in resource-constrained environments

    The fast evolution of data analytics platforms has resulted in an increasing demand for real-time data stream processing. From Internet of Things applications to the monitoring of telemetry generated in large datacenters, a common demand in currently emerging scenarios is the need to process vast amounts of data with low latencies, generally performing the analysis as close to the data source as possible. Devices and sensors generate streams of data across a diversity of locations and protocols. That data usually reaches a central platform that is used to store and process the streams. Processing can be done in real time, with transformations and enrichment happening on the fly, but it can also happen after data is stored and organized in repositories. In the former case, stream processing technologies are required to operate on the data; in the latter, batch analytics and queries are in common use. Stream processing platforms are required to be malleable and absorb spikes generated by fluctuations in data generation rates. Data is usually produced as time series that have to be aggregated using multiple operators, with sliding windows being one of the most common abstractions used to process data in real time. To satisfy the above-mentioned demands, efficient stream processing techniques that aggregate data with minimal computational cost need to be developed. However, data analytics might require aggregating extensive windows of data. Approximate computing has been a central paradigm in data analytics for decades as a way to improve performance and reduce the resources needed, such as memory, computation time, bandwidth, or energy. In exchange for these improvements, the aggregated results suffer from a level of inaccuracy that in some cases can be predicted and constrained. This doctoral thesis aims to demonstrate that it is possible to have constant-time, memory-efficient aggregation functions with approximate computing mechanisms for constrained environments. In order to achieve this goal, the work has been structured around three research challenges. First, we introduce a runtime to dynamically construct data stream processing topologies based on user-supplied code. These dynamic topologies are built on the fly using a data subscription model defined by the applications that consume data. The subscription-based programming model enables multiple users to deploy their own data-processing services. On top of this runtime, we present the Amortized Monoid Tree Aggregator (AMTA), a general sliding window aggregation framework which seamlessly combines the following features: amortized O(1) time complexity with a worst case of O(log n) between insertions; a window aggregation mechanism and a window slide policy that are both user programmable; enforcement of the window sliding policy with amortized O(1) computational cost for single evictions and support for bulk evictions with cost O(log n); and a local memory requirement of O(log n). The framework can compute aggregations over multiple data dimensions, and has been designed to decouple computation and data storage through the use of distributed key-value stores to keep window elements and partial aggregations. Especially motivated by edge computing scenarios, we contribute the Approximate and Amortized Monoid Tree Aggregator (A2MTA). It is, to our knowledge, the first general-purpose programmable sliding window framework that combines constant-time aggregations with error-bounded approximate computing techniques.
A2MTA uses statistical analysis of the stream data to perform approximate aggregations, providing a critical reduction in the resources needed for massive stream data aggregation and an improvement in performance.
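    For context on amortized-constant-time sliding window aggregation, the sketch below implements the textbook two-stack aggregator over an arbitrary monoid (an associative operation with an identity). It belongs to the same family of techniques as AMTA but is not the tree-based, bulk-eviction, distributed design described above; the sum monoid and window size are chosen purely for illustration.

# Minimal two-stack sliding-window aggregation over a monoid. Amortized O(1) per
# insert/evict, illustrating the class of techniques AMTA belongs to, but NOT the
# tree-based, bulk-eviction, distributed framework described in the abstract.

class TwoStackAggregator:
    def __init__(self, op, identity):
        self.op, self.identity = op, identity
        self.front = []   # (value, running aggregate of the front stack)
        self.back = []    # (value, running aggregate of the back stack)

    def push(self, value):
        agg = self.op(self.back[-1][1], value) if self.back else value
        self.back.append((value, agg))

    def evict(self):
        if not self.front:                      # flip the back stack onto the front
            while self.back:
                value, _ = self.back.pop()
                agg = self.op(value, self.front[-1][1]) if self.front else value
                self.front.append((value, agg))
        self.front.pop()

    def query(self):
        f = self.front[-1][1] if self.front else self.identity
        b = self.back[-1][1] if self.back else self.identity
        return self.op(f, b)

if __name__ == "__main__":
    window, agg = 3, TwoStackAggregator(lambda a, b: a + b, 0)   # sum monoid
    for i, x in enumerate([4, 1, 7, 2, 9, 3]):
        agg.push(x)
        if i >= window:
            agg.evict()
        print(f"after {x}: window sum = {agg.query()}")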

    A Simulation Based Optimization Approach for Stochastic Resource Constrained Project Management with Milestones

    Project managers are challenged to continuously make decisions throughout the development of a project, attempting to minimize the overall project cost while seeking to meet a pre-established deadline. These time-cost tradeoff decisions become even more complex when resource constraints, caused by limited available resources, are added to the equation. The goal of this research is to design a simulation-based optimization approach to solve the resource-constrained project scheduling problem (RCPSP) under uncertainty. Two methods are proposed: the Total Cost Resource Constrained Method (TCRCM) and the Earned Value Resource Constrained Method (EVRCM). The TCRCM seeks to minimize the total project cost, including activity costs and penalty costs due to lateness at project completion, while considering the RCPSP with stochastic activity times and costs in terms of resource alternatives as well as precedence relationships for activities sharing resources. The EVRCM, which is based on earned value management, considers penalty costs not only at project completion but also at several milestones along the execution of the project. Both methods are implemented in two phases. Phase I is implemented prior to the start of the project to determine the optimal resource configuration for the entire project based on the specified performance measure (e.g., total project cost). Phase II is implemented as the project progresses to determine the optimal resource configuration for the remaining activities. The robustness of both methods is evaluated through a set of experiments. Lastly, the methods are integrated into Microsoft Excel.
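    The evaluation step that any such simulation-based method relies on can be sketched as a small Monte Carlo experiment: sample stochastic activity durations, propagate them through the precedence network, and add a lateness penalty beyond the deadline. This is not the TCRCM or EVRCM method itself; the activity network, deadline, and cost rates are hypothetical.

# Monte Carlo sketch of the total-cost evaluation behind simulation-based project
# optimization: sample stochastic durations, compute the project finish via a
# forward pass over precedence, and add a lateness penalty past the deadline.
# Illustrative only; not the TCRCM/EVRCM methods, and all data are hypothetical.

import random

ACTIVITIES = {                      # name: (mean duration, std dev, predecessors)
    "design":  (5.0, 1.0, []),
    "procure": (4.0, 0.8, ["design"]),
    "build":   (8.0, 2.0, ["design"]),
    "test":    (3.0, 0.5, ["procure", "build"]),
}
DEADLINE, PENALTY_PER_DAY, DAILY_COST = 16.0, 500.0, 100.0

def simulate_once():
    finish = {}
    for name, (mu, sigma, preds) in ACTIVITIES.items():   # insertion order is topological
        duration = max(0.0, random.gauss(mu, sigma))
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + duration
    makespan = max(finish.values())
    lateness = max(0.0, makespan - DEADLINE)
    return makespan * DAILY_COST + lateness * PENALTY_PER_DAY

def expected_total_cost(runs=10_000):
    return sum(simulate_once() for _ in range(runs)) / runs

if __name__ == "__main__":
    random.seed(42)
    print(f"estimated expected total cost: {expected_total_cost():.1f}")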

    The other art of computer programming: A visual alternative to communicate computational thinking

    The thesis explores the implications of teaching computer science through visual communication. This study aims to define a framework for using pictures in learning computer science. Visual communication materials for teaching computer science were created and tested with Year 8 students. Alongside a recent commercial and political focus on introducing coding to adolescents, the computer industry appears to have a large shortfall of programmers. Accompanying this shortfall is a rise among adolescents in the preference for visual communication (Brumberger, 2011; Coats, 2006; Oblinger et al., 2005; Prensky, 2001; Tapscott, 1998), while textual communication currently dominates teaching materials in the computing discipline. This study looks at the learning process and draws on the ideas of Gibson, Dewey, and Piaget to consider the role of visual design in teaching programming. According to Piagetian theory, Year 8 is the time a child begins to understand abstract thought. This research investigated, through co-creation and prototyping, how to creatively support cognition within the learning process. Visual communication theories, comprising the fields of graphic and information design, were employed to communicate computer science to approximately 60 junior high school students across eight schools. Literature in a range of visual communication fields is reviewed, along with the psychology of perception and cognition, to help create a prototype lesson plan for the target audience of Year 8 students. The history of computer science is reviewed to illustrate the mental imagery within the discipline and to explore computational thinking concepts. These concepts are ". . . the metaphors and structures that underlie all areas of science and engineering" (Guzdial, 2008). Participants' attitudes toward learning programming through visual communication improved. Quantitative questionnaires were used to gather data on cognition and to measure the effectiveness of the learning process. From the quantitative data, thirteen hypotheses concerning learning programming through pictures were established. Focus groups further triangulated the data gathered in the quantitative stage. Approximately seventy percent of the participants understood seventy percent of the information within the instrumentation. Models of intent to learn programming through pictures were established using structural equation modelling (SEM). The outcomes of the exegesis are a framework for using pictures that demonstrates computational thinking and explains the research.

    Simulation and optimization model for the construction of electrical substations

    Electrical substations are among the most complex construction projects. An electrical substation is an auxiliary station of an electricity generation, transmission, and distribution system where voltage is transformed from high to low, or the reverse, using transformers. Construction of an electrical substation includes civil works and electromechanical works. The scope of the civil works includes the construction of several buildings/components divided into parallel and overlapping working phases that require a variety of resources, are generally quite costly, and consume a considerable amount of time. Therefore, the construction of substations faces complicated time-cost-resource optimization problems. Moreover, the construction industry has become progressively more competitive over the years, hence the need to persistently find ways to enhance construction performance. To address these challenges, this dissertation takes the initial steps and introduces a simulation and optimization model for the execution of civil works for an electrical substation, based on an Excel database file for input data entry. The input data include the bill of quantities, maximum available resources, production rates, unit costs of resources, and indirect cost. The model is built in AnyLogic using the discrete event simulation method. The model is divided into three zones working in parallel with each other. Each zone includes a group of buildings related to the same construction area. Each zone model describes the execution schedule for each building in the zone, the time consumed, the percentage utilization of equipment and manpower crews, the amount of materials consumed, and the total direct and indirect cost. The model is then optimized, mainly to minimize the project duration, using a parameter variation experiment and a genetic algorithm implemented in Java on the AnyLogic platform. The model uses allocated resource parameters as decision variables and available resources as constraints. The model is verified on real case studies in Egypt, and sensitivity analysis studies are incorporated. The model is also validated using a real case study and proves its efficiency by attaining a 10.25% reduction in model time units and a 4.7% reduction in total cost between the simulation and optimization experiments. Also, by comparing the optimization results with the actual data of the case study, the model attains reductions in time and cost of 13.6% and 6.3%, respectively. An analysis to determine the effect of each resource on cost reduction is also presented.
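    As a rough analogue of the parameter variation experiment described above, the sketch below enumerates crew allocations across three parallel zones under a total-availability constraint and picks the allocation that minimizes project duration, using a closed-form duration instead of a discrete event simulation. The quantities, production rates, and crew limits are hypothetical; the actual model is an AnyLogic discrete event simulation optimized with a genetic algorithm, not this exhaustive search.

# Simplified stand-in for a parameter variation experiment: enumerate crew
# allocations per zone under a total-availability constraint and pick the one
# minimizing project duration. Zones run in parallel; a zone's duration shrinks
# with the crews assigned to it. All figures are hypothetical.

from itertools import product

ZONES = {"zone1": 1200.0, "zone2": 900.0, "zone3": 600.0}   # work quantities (units)
RATE_PER_CREW = 10.0      # units per day per crew
MAX_TOTAL_CREWS = 12

def project_duration(allocation):
    """Zones proceed in parallel, so the slowest zone sets the project duration."""
    return max(qty / (RATE_PER_CREW * crews)
               for qty, crews in zip(ZONES.values(), allocation))

def best_allocation():
    best = None
    for alloc in product(range(1, MAX_TOTAL_CREWS + 1), repeat=len(ZONES)):
        if sum(alloc) > MAX_TOTAL_CREWS:
            continue
        duration = project_duration(alloc)
        if best is None or duration < best[1]:
            best = (alloc, duration)
    return best

if __name__ == "__main__":
    alloc, duration = best_allocation()
    print(f"crews per zone: {dict(zip(ZONES, alloc))}, duration: {duration:.1f} days")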