
    Spatial-temporal data modelling and processing for personalised decision support

    The purpose of this research is to model dynamic data without losing any of the temporal relationships, and to predict the likelihood of an outcome as far in advance of its actual occurrence as possible. To this end, a novel computational architecture for personalised (individualised) modelling of spatio-temporal data based on spiking neural network methods (PMeSNNr), with a three-dimensional visualisation of relationships between variables, is proposed. In brief, the architecture transfers spatio-temporal data patterns from a multidimensional input stream into internal patterns in a spiking neural network reservoir. These patterns are then analysed to produce a personalised model for either classification or prediction, depending on the specific needs of the situation. The architecture was constructed in MATLAB as several individual modules linked together to form NeuCube (M1). This methodology has been applied to two real-world case studies: first, to data for the prediction of stroke occurrences on an individual basis; second, to ecological data for aphid pest abundance prediction. The two main objectives when judging the outcomes of the modelling are accurate prediction and achieving it at the earliest possible time point. The implications of these findings are significant for health care management and environmental control. Because the case studies represent vastly different application fields, they reveal more of the potential and usefulness of NeuCube (M1) for modelling data in an integrated manner. This in turn can identify previously unknown (or less understood) interactions, both increasing the reliance that can be placed on the model created and enhancing our understanding of the complexities of the world around us without the need for oversimplification.
Keywords: Personalised modelling; Spiking neural network; Spatial-temporal data modelling; Computational intelligence; Predictive modelling; Stroke risk prediction
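As a rough illustration of the first step the abstract describes, turning a continuous multidimensional input stream into spike trains for a reservoir, a threshold-based temporal-contrast encoder is a common choice in NeuCube-style pipelines. The sketch below is an assumption for illustration, not the thesis's actual implementation:

```python
import numpy as np

def threshold_encode(series, threshold=0.1):
    """Encode a 1-D signal into +1/-1/0 spike events: a positive spike
    when the signal rises by more than the threshold, a negative spike
    when it falls by more than the threshold (temporal-contrast coding)."""
    diffs = np.diff(series)
    spikes = np.zeros_like(diffs, dtype=int)
    spikes[diffs > threshold] = 1
    spikes[diffs < -threshold] = -1
    return spikes

# Hypothetical single input channel from a spatio-temporal stream
signal = np.array([0.0, 0.05, 0.3, 0.35, 0.1, 0.12])
print(threshold_encode(signal).tolist())  # → [0, 1, 0, -1, 0]
```

Each channel of the input stream would be encoded this way and fed into the corresponding input neuron of the reservoir.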


    Unified Concept of Bottleneck

    The term 'bottleneck' has been used extensively in the operations management literature. Management paradigms such as the Theory of Constraints focus on the identification and exploitation of bottlenecks. Yet we show that the term has not been rigorously defined. We provide a classification of the bottleneck definitions available in the literature and discuss several myths associated with the concept. The apparent diversity of definitions raises the question of whether a single bottleneck definition can be as applicable in high-variety job shops as in mass production environments. The key to formulating a unified concept of bottleneck lies in relating it to the shadow price of resources. We propose a universally applicable bottleneck definition based on the concept of the average shadow price, and discuss the procedure for determining bottleneck values in diverse production environments. The Law of Diminishing Returns is shown to be a sufficient but not necessary condition for the equivalence of the average and the marginal shadow price; this equivalence is proved for several environments. Bottleneck identification is the first step in the resource acquisition decisions faced by managers. The definition presented in this paper has the potential not only to reduce ambiguity regarding the meaning of the term but also to open a new window onto the formulation and analysis of a rich set of problems faced by managers.
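The abstract ties bottleneck status to the average shadow price of a resource. A toy sketch of that idea, using a simple serial line and hypothetical numbers rather than the paper's general formulation: the average shadow price of a resource is the gain in system output per extra unit of capacity, averaged over a finite capacity increment, and the bottleneck is the resource with a positive price.

```python
def throughput(caps, rates):
    """Maximum output of a serial line: limited by the slowest resource.
    caps  = available hours per resource
    rates = units producible per hour on each resource"""
    return min(c * r for c, r in zip(caps, rates))

def avg_shadow_price(caps, rates, i, delta):
    """Average shadow price of resource i: output gained per extra unit
    of capacity, averaged over an increment of size delta."""
    boosted = list(caps)
    boosted[i] += delta
    return (throughput(boosted, rates) - throughput(caps, rates)) / delta

caps = [40.0, 60.0]   # hypothetical machine hours available
rates = [2.0, 1.0]    # hypothetical production rates
prices = [avg_shadow_price(caps, rates, i, 10.0) for i in range(2)]
print(prices)  # → [0.0, 1.0]: machine 2 is the bottleneck
```

Extra capacity at the non-bottleneck yields nothing, which is exactly what a zero shadow price expresses.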

    Capacity Planning and Leadtime management

    In this paper we discuss a framework for capacity planning and lead time management in manufacturing companies, with an emphasis on the machine shop. First we show how queueing models can be used to approximate the mean and the variance of manufacturing shop lead times. These quantities often serve as a basis for setting a fixed planned lead time in an MRP-controlled environment. A major drawback of a fixed planned lead time is that it ignores the correlation between actual workloads and the lead times that can be realised under limited capacity flexibility. To overcome this problem, we develop a method that determines the earliest possible completion time of any arriving job without sacrificing the delivery performance of any other job in the shop. This earliest completion time is taken as the delivery date and thereby determines a workload-dependent planned lead time. We compare this capacity planning procedure with a fixed planned lead time approach (as in MRP), with a procedure in which lead times are estimated from the amount of work in the shop, and with a workload-oriented release procedure. Numerical experiments so far show an excellent performance of the capacity planning procedure.
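The queueing approximations the abstract refers to typically take a form like Kingman's G/G/1 formula for the mean time in queue; the sketch below uses that standard textbook approximation as an assumption, not the paper's specific model:

```python
def kingman_wait(arrival_rate, mean_service, ca2, cs2):
    """Kingman (G/G/1) approximation of the mean time in queue:
    Wq ≈ rho/(1-rho) * (ca^2 + cs^2)/2 * E[S],
    where ca2/cs2 are squared coefficients of variation of
    interarrival and service times."""
    rho = arrival_rate * mean_service   # utilisation
    assert rho < 1.0, "shop must be stable"
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * mean_service

def mean_lead_time(arrival_rate, mean_service, ca2, cs2):
    """A fixed planned lead time could be based on
    expected queueing delay plus processing time."""
    return kingman_wait(arrival_rate, mean_service, ca2, cs2) + mean_service

# Exponential arrivals and service (ca2 = cs2 = 1) at 50% utilisation
print(mean_lead_time(0.5, 1.0, 1.0, 1.0))  # → 2.0
```

For exponential arrivals and service the formula is exact (M/M/1); for general distributions it is an approximation, which matches the abstract's phrasing.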

    Clips: a capacity and lead time integrated procedure for scheduling.

    We propose a general procedure to address real-life job shop scheduling problems. The shop typically produces a variety of products, each with its own arrival stream, its own route through the shop and a given customer due date. The procedure first determines the manufacturing lot sizes for each product. The objective is to minimise the expected lead time, and we therefore model the production environment as a queueing network. Given these lead times, release dates are set dynamically. This in turn creates a time window for every manufacturing order in which the various operations have to be sequenced. The sequencing logic is based on an Extended Shifting Bottleneck Procedure. These three major decisions are then incorporated into a four-phase hierarchical operational implementation scheme. A small numerical example illustrates the methodology. The final objective, however, is to develop a procedure that is useful for large, real-life shops; we therefore report on a real-life application.

    Sensitivity of multi-product two-stage economic lotsizing models and their dependency on change-over and product cost ratios

    This study considers the production and inventory management problem of a two-stage semi-process production system. When both production stages are physically connected, materials are forced to flow: the economic lotsize depends on the holding cost of the end-product and the combined change-over cost of both production stages, and this 'flow shop' must produce at the speed of the slowest stage. The benefit of this approach is the low amount of work-in-process inventory. When, on the other hand, the stages are physically disconnected, a stock of intermediates acts as a decoupling point. Typical of the semi-process industry are high change-over costs for the process-oriented first stage, which result in large lotsize differences between the production stages. Using the stock of intermediates as a decoupling point avoids the complexity of synchronising operations but is an additional reason for the intermediate stock position to grow; the disadvantage of this model is the high amount of work-in-process inventory. This paper proposes a 'synchronised planning model' that realises a global optimum instead of combining two locally optimised settings. The mathematical model proves (for a two-stage single-product setting) that the optimal two-stage production frequency corresponds to the single EOQ solution for the first stage. A sensitivity study reveals, within these two-stage lotsizing models, how the economic cost depends on the product and change-over cost ratios. The purpose of this paper is to understand under which conditions the 'joined setup' or the 'two-stage individual EOQ model' remains close to the optimal model. Numerical examples show that the conclusions about the optimal settings remain valid when the model is extended to a two-stage multi-product setting.
The research reveals that two-stage individually optimised EOQ lotsizing should only be used when the end-product stage has a high added value and small change-over costs compared to the first stage. Physically connected operations should be used when the end-product stage has a small added value and low change-over costs, or a high added value and large change-over costs, compared to the first production stage. The paper concludes by suggesting a practical common cycle approach to tackle a two-stage multi-product production and inventory management problem; the common cycle approach brings the benefit of a repetitive and predictable production schedule.
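The building block of the comparison the abstract describes is the classic EOQ trade-off between setup and holding costs. The sketch below, with invented cost parameters and ignoring the WIP and synchronisation effects the paper actually models, contrasts a joined setup (one common lot for both stages) with individually optimised stage lot sizes:

```python
from math import sqrt

def eoq(demand, setup_cost, holding_cost):
    """Classic economic order quantity: sqrt(2*D*S/h)."""
    return sqrt(2 * demand * setup_cost / holding_cost)

def annual_cost(demand, setup_cost, holding_cost, q):
    """Setup cost plus cycle-stock holding cost for lot size q."""
    return demand / q * setup_cost + holding_cost * q / 2

# Hypothetical two-stage item: stage 1 process-oriented (high change-over),
# stage 2 adds most of the product value (high holding cost)
D = 1000.0
s1, h1 = 400.0, 2.0
s2, h2 = 50.0, 10.0

# Joined setup: one common lot sized on the combined parameters
q_joint = eoq(D, s1 + s2, h1 + h2)
cost_joint = annual_cost(D, s1 + s2, h1 + h2, q_joint)

# Individual EOQs: each stage optimises locally (decoupled by stock)
cost_indiv = (annual_cost(D, s1, h1, eoq(D, s1, h1))
              + annual_cost(D, s2, h2, eoq(D, s2, h2)))
```

With these numbers the individually optimised lots are cheaper on setup-plus-cycle-stock alone; the paper's point is that the decoupling stock and synchronisation effects, omitted here, shift that comparison depending on the cost ratios.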

    Moderating effects of cross-cultural dimensions on the relationship between persuasive smartphone application's design and acceptance-loyalty

    Applying persuasive system design across cultures has been a focus of many researchers, as the global medium of communication has become centred on the Smartphone via applications (apps). This is due to the vast proliferation of the Smartphone and users' personal attachment to this device in various cultures, which has led designers to search for the best ways to target users in specific regions of the world. The basic purpose of this study was to determine the relevance of cross-cultural factors to persuasive technologies and to the acceptance and loyalty of Smartphone apps. This was achieved by examining the moderating effects of Hofstede's six cross-cultural dimensions on the relationship between Oinas-Kukkonen and Harjumaa's Persuasive System Design (PSD) and acceptance-loyalty. By evaluating elements of persuasive systems design and cross-cultural dimensions, from the user's perspective, against a globally popular application like WhatsApp, an instrument was devised to investigate the cross-cultural adoption and continued use of Smartphone apps. Using this instrument, surveys were conducted to identify the influencing factors that have motivated users from Malaysia, the Netherlands, Germany and the Kingdom of Saudi Arabia to adopt and continue using this application on a daily basis. These surveys, which included responses from 488 participants, further investigated whether one cross-cultural dimension has stronger moderating effects per country. The findings indicate an agreement among WhatsApp users from all four countries about their reasons for adopting and using this app, namely: social influence (93.7 percent), reliability (83.4 percent), dialogue support via feedback (76.4 percent), ease of use (90.5 percent) and small cost (87.7 percent). The results offer a new perspective: the gap among cultures is narrowing, and persuasive design strategies are relevant to cultures across the globe.
This study can aid the research community in investing efforts into enhancing the persuasive design framework for Smartphone apps.
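Statistically, a moderating effect like the ones the study tests corresponds to an interaction term in a regression: the cultural dimension changes the slope of the design-to-loyalty relationship. The sketch below uses entirely synthetic data with invented coefficients, not the study's survey results, to show the mechanics:

```python
import numpy as np

# Synthetic data: 488 respondents (matching the survey's sample size)
rng = np.random.default_rng(0)
n = 488
design = rng.uniform(1, 5, n)     # perceived PSD quality (Likert-like)
culture = rng.uniform(0, 100, n)  # a Hofstede dimension score (0-100)
loyalty = (0.5 * design + 0.01 * culture
           + 0.005 * design * culture      # the moderating effect
           + rng.normal(0, 0.1, n))        # noise

# OLS with an interaction term: loyalty ~ design + culture + design*culture
X = np.column_stack([np.ones(n), design, culture, design * culture])
beta, *_ = np.linalg.lstsq(X, loyalty, rcond=None)
# beta[3] estimates the moderating (interaction) coefficient, ~0.005 here
```

A nonzero interaction coefficient is the regression signature of moderation: the effect of persuasive design on loyalty depends on the cultural dimension's level.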