3,281 research outputs found

    Value Stream Mapping and Simulation Modelling for Healthcare Transactional Process Improvement

    Get PDF
    Lean management philosophy originated in Japan with the Toyota Production System. Its core idea is to identify and eliminate waste. The concept of end-to-end value allows organisations to achieve competitive advantage by delivering the best-quality products and services at minimum operational cost. Today there is much to be gained by applying lean to services and transactional processes. Lean facilitators face challenges when trying to transform an organisation into a lean enterprise: what is possible in production systems is not as easy in the service and transactional sectors. Among the challenges for the service sector are complex and mixed value streams, the fact that information and people are processed instead of parts, and the central role of human interaction.

    Governance for sustainability: learning from VSM practice

    Get PDF
    Purpose – While there is some agreement on the usefulness of systems and complexity approaches for tackling the sustainability challenges facing organisations and governments in the twenty-first century, it is less clear how such approaches can inspire new ways of governance for sustainability. The purpose of this paper is to progress ongoing research using the Viable System Model (VSM) as a meta-language to facilitate long-term sustainability in businesses, communities and societies, using the "Methodology to support self-transformation", by focusing on ways of learning about governance for sustainability. Design/methodology/approach – It summarises core self-governance challenges for long-term sustainability, and the organisational capabilities required to face them, in the "Framework for Assessing Sustainable Governance". This tool is then used to analyse capabilities for sustainable governance in three real situations where the mentioned Methodology inspired bottom-up processes of self-organisation. The transformations decided on by each organisation are analysed, in terms of capabilities for sustainable governance, using the suggested Framework. Findings – Core technical lessons learned from using the Framework are discussed, including the usefulness of a unified language and tool when studying governance for sustainability across case-study organisations of differing types and scales. Research limitations/implications – As with other exploratory research, further development and testing of the proposed tools is needed to improve their reliability and robustness. Practical implications – A final conclusion suggests that the proposed tools offer a useful heuristic path for learning about governance for sustainability from a VSM perspective; the learning from each organisational self-transformation is insightful for policy and strategy design and evaluation, in particular the possibility of comparing situations across organisations of different scales and types. Originality/value – There is little coherence in the governance literature, and governance for sustainability is an emerging field. This exploratory research is valuable because it presents an effective tool for learning about governance for sustainability, based on the "Methodology for Self-Transformation", and offers reflections on applications of the methodology and the tool that help clarify what governance for sustainability means in practice for organisations of different scales and types.

    Estimating Infection Sources in Networks Using Partial Timestamps

    Full text link
    We study the problem of identifying infection sources in a network based on the network topology and a subset of infection timestamps. In the case of a single infection source in a tree network, we derive the maximum likelihood estimator of the source and the unknown diffusion parameters. We then introduce a new heuristic involving an optimization over a parametrized family of Gromov matrices to develop a single-source estimation algorithm for general graphs. Compared with the breadth-first search tree heuristic commonly adopted in the literature, simulations demonstrate that our approach achieves better estimation accuracy than several other benchmark algorithms, even though these require more information such as the diffusion parameters. We next develop a multiple-sources estimation algorithm for general graphs, which first partitions the graph into source candidate clusters and then applies our single-source estimation algorithm to each cluster. We show that if the graph is a tree, then each source candidate cluster contains at least one source. Simulations using synthetic and real networks, and experiments using real-world data, suggest that our proposed algorithms estimate the true infection source(s) to within a small number of hops with only a small portion of the infection timestamps observed. Comment: 15 pages, 15 figures, accepted by IEEE Transactions on Information Forensics and Security.
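    The single-source idea above can be sketched with a least-squares stand-in for the maximum-likelihood estimator: under a constant spreading rate, the best source candidate is the node whose hop distances most closely explain the observed partial timestamps. Everything below (the toy tree, the unit spreading rate, the squared-error fit) is an illustrative assumption, not the paper's estimator.

```python
from collections import deque

def bfs_dists(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def estimate_source(adj, timestamps):
    """Pick the node whose hop distances best fit the observed infection
    times under unit spreading speed: fit a common start offset t0 by
    least squares and score each candidate by the residual error."""
    best, best_cost = None, float("inf")
    for s in adj:
        d = bfs_dists(adj, s)
        resid = [t - d[v] for v, t in timestamps.items()]
        t0 = sum(resid) / len(resid)
        cost = sum((r - t0) ** 2 for r in resid)
        if cost < best_cost:
            best, best_cost = s, cost
    return best

# Toy path graph 0-1-2-3-4 infected from node 2; timestamps seen at 3 nodes.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
partial = {0: 2.0, 1: 1.0, 4: 2.0}
print(estimate_source(adj, partial))  # → 2 (fits the partial timestamps exactly)
```

On general graphs the paper replaces the BFS-tree distances used here with an optimization over Gromov matrices; this sketch only conveys the fitting idea.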

    A compiler extension for parallelizing arrays automatically on the cell heterogeneous processor

    Get PDF
    This paper describes the approaches taken to extend an array programming language compiler using a Virtual SIMD Machine (VSM) model for parallelising array operations on the Cell Broadband Engine heterogeneous machine. This development is part of ongoing work at the University of Glasgow on array compilers that benefit applications in many areas, such as graphics, multimedia, image processing and scientific computation. Our extended compiler, which is built upon the VSM interface, eases the parallelisation process by allowing automatic parallelisation without the need for any annotations or process directives. The preliminary results demonstrate significant improvements, especially on data-intensive applications.

    Therblig-embedded value stream mapping method for lean energy machining

    Get PDF
    To improve energy efficiency, extensive studies have focused on optimising the cutting parameters in the machining process. In practice, non-cutting activities (NCA) occur frequently during machining, so optimising NCA without changing the cutting parameters is a promising way to save energy. However, existing methods find it difficult to accurately determine and reduce the energy wastes (EW) in NCA. To fill this gap, a novel Therblig-embedded Value Stream Mapping (TVSM) method is proposed to improve energy transparency and to clearly show and reduce the EW in NCA. The Future-State Map (FSM) of TVSM is built by minimising non-cutting activities and Therbligs. By implementing the FSM, time and energy efficiencies can be improved without decreasing machining quality, consistent with the goal of lean energy machining. The method is validated by a machining case study: the results show that total energy is reduced by 7.65%, the time efficiency of the value-added activities is improved by 8.12%, and the energy efficiencies of value-added activities and Therbligs are raised by 4.95% and 1.58%, respectively. This approach can be applied to reduce the EW of NCA and to support designers in planning high-energy-efficiency machining processes.
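    The efficiency gains quoted above can be made concrete with a back-of-the-envelope calculation: treating energy efficiency as value-added (cutting) energy over total energy shows why trimming non-cutting activities raises efficiency without touching the cutting parameters. The figures below are invented for illustration and are not from the case study.

```python
# Illustrative (assumed) energy figures for one machining cycle:
e_cut = 60.0   # kJ consumed by value-added cutting activities
e_nca = 40.0   # kJ consumed by non-cutting activities (where the EW lives)

def energy_efficiency(e_value_added, e_other):
    """Share of total energy that goes into value-added work."""
    return e_value_added / (e_value_added + e_other)

before = energy_efficiency(e_cut, e_nca)         # 60/100 = 0.600
after = energy_efficiency(e_cut, e_nca * 0.75)   # trim NCA energy by 25% -> 60/90
print(round(before, 3), round(after, 3))  # → 0.6 0.667
```

Cutting energy is untouched in both scenarios; only the non-cutting share shrinks, which is exactly the lever the TVSM method targets.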

    Waste reduction in production processes through simulation and VSM

    Get PDF
    Corporate managers often face the need to choose the optimal configuration of production processes to reduce waste. Research has shown that simulation is an effective tool among those conceived to support managers' decisions. Nevertheless, the use of simulation at the company level remains limited due to the complexity of the design phase. In this context, the Value Stream Map (VSM), a tool of the Lean philosophy, is exploited here as a link between the strategic needs of management and the operational aspects of the simulation process, in order to approach sustainability issues. The presented approach is divided into two main parts: first, a set of criteria for expanding the VSM is identified in order to increase the level of detail of the represented processes; then, the data categories required as inputs and outputs of each sub-process model are defined, including environmental indicators. Specifically, an extended version of the classical VSM (X-VSM), conceived to support process simulation, is proposed: the X-VSM is used to guide the design of the simulation so that management decisions, in terms of waste reduction, can be easily evaluated. The proposal was validated on a production process of a large multinational manufacturing company.
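    A minimal sketch of the idea, not the authors' X-VSM: each sub-process on the map carries the simulation inputs and outputs the abstract describes (cycle time, waiting time, an environmental indicator), and a management scenario is evaluated by recomputing lead time and the value-added ratio. Step names and all figures are assumed.

```python
# Each map entry: (name, cycle_time_s, wait_time_s, energy_kJ) -- assumed values.
steps = [
    ("cutting",  30.0, 10.0, 120.0),
    ("welding",  45.0, 25.0, 200.0),
    ("painting", 20.0, 40.0,  80.0),
]

def evaluate(stream):
    """Roll a value stream up into the indicators a manager would compare:
    lead time, value-added ratio, and total energy (environmental)."""
    cycle = sum(s[1] for s in stream)
    wait = sum(s[2] for s in stream)
    energy = sum(s[3] for s in stream)
    return {"lead_time": cycle + wait,
            "value_added_ratio": cycle / (cycle + wait),
            "energy": energy}

base = evaluate(steps)
# Scenario: halve painting's queue time, as a manager might test before
# committing to a layout change on the shop floor.
scenario = [(n, c, w * (0.5 if n == "painting" else 1.0), e)
            for n, c, w, e in steps]
print(evaluate(scenario)["lead_time"] < base["lead_time"])  # → True
```

A full X-VSM-driven simulation would replace `evaluate` with a discrete-event model per sub-process; the point here is only that the map's data categories are what the simulation consumes and produces.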

    Improving productivity of road surfacing operations using value stream mapping and discrete event simulation

    Get PDF
    The highways infrastructure is one of the most valuable assets owned by the public sector. The success of national and local economies, as well as the quality of life of the general public, depends on its efficient operation. Ensuring smooth traffic operations requires maintenance and improvements of the highest standard. This research investigates the integration of Discrete Event Simulation (DES) and Value Stream Mapping (VSM) to enhance the productivity of road surfacing operations by achieving higher production rates and minimum road closure times. The research approach involved primary data collected from direct observation, interviews, archival records and productivity databases. On this basis, process maps and value stream maps were developed, which were subsequently used to produce discrete event simulation models for the exploration of different optimisation scenarios.
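    A toy discrete-event model in the spirit of the DES described above, with assumed arrival gaps and lay times rather than the study's data: trucks deliver asphalt to a single paver, and the road closure lasts until the last load is laid, so waiting at the paver shows up directly as closure time.

```python
import heapq

def closure_time(arrival_gaps, lay_time):
    """Discrete-event sketch: truck arrivals go on a time-ordered event
    queue; the paver processes loads in order, idling or queueing as the
    timings dictate. Returns the time the last load finishes (closure end)."""
    events = []  # min-heap of (arrival_time, load_id)
    t = 0.0
    for i, gap in enumerate(arrival_gaps):
        t += gap
        heapq.heappush(events, (t, i))
    paver_free = 0.0
    while events:
        arrive, _ = heapq.heappop(events)
        start = max(arrive, paver_free)  # load waits if the paver is busy
        paver_free = start + lay_time
    return paver_free

# Assumed figures: four loads 10 min apart, 12 min to lay each load.
base = closure_time([10, 10, 10, 10], lay_time=12)      # queue builds up
# Scenario a planner might test: a faster paver keeps pace with deliveries.
improved = closure_time([10, 10, 10, 10], lay_time=10)
print(base, improved)  # → 58.0 50.0
```

This is the kind of what-if the abstract's DES models answer at realistic scale; here the value-stream waste (loads queueing at the paver) translates directly into longer closure.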

    Sparse Matrix-based Random Projection for Classification

    Full text link
    As a typical dimensionality reduction technique, random projection can be implemented simply as a linear projection, while maintaining the pairwise distances of high-dimensional data with high probability. Since this technique is mainly exploited for classification tasks, this paper studies the construction of the random matrix from the viewpoint of feature selection rather than of traditional distance preservation. This yields a somewhat surprising theoretical result: the sparse random matrix with exactly one nonzero element per column can provide better feature selection performance than denser matrices if the projection dimension is sufficiently large (namely, not much smaller than the number of feature elements); otherwise, it performs comparably to them. For random projection, this result implies considerable improvements in both complexity and performance, which are widely confirmed by classification experiments on both synthetic and real data.
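    The matrix construction the result concerns, exactly one nonzero (±1) entry per column, can be sketched as follows; the projection then costs one add per feature. The dimensions and seed below are arbitrary, and this is a generic sketch of the sparsest construction, not the paper's experimental code.

```python
import random

def sparse_projection_matrix(k, d, seed=0):
    """A k x d matrix with exactly one nonzero entry (+1 or -1) per column,
    stored implicitly: column j holds sign signs[j] in row rows[j]."""
    rng = random.Random(seed)
    rows = [rng.randrange(k) for _ in range(d)]
    signs = [rng.choice((1.0, -1.0)) for _ in range(d)]
    return rows, signs

def project(x, k, rows, signs):
    """Project a d-vector down to k dimensions in O(d) time: each feature
    contributes to exactly one output coordinate."""
    y = [0.0] * k
    for j, xj in enumerate(x):
        y[rows[j]] += signs[j] * xj
    return y

rows, signs = sparse_projection_matrix(k=4, d=10, seed=1)
x = [float(j) for j in range(10)]
y = project(x, 4, rows, signs)
print(len(y))  # → 4
```

Because each column has a single nonzero, projecting n samples costs O(nd) rather than the O(nkd) of a dense matrix; the paper's result says this sparsest choice loses nothing for feature selection once k is large enough.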