
    A flow shop operations scheduling algorithm for reducing execution time in the n-jobs, m-machines problem

    In multi-stage job problems, simple priority dispatching rules such as shortest processing time (SPT) and earliest due date (EDD) can be used to obtain solutions of minimum total processing time, but they may sometimes fail to give sequences close to optimal. Johnson's algorithm is especially popular among the analytical approaches used for solving the n-jobs, 2-machines sequencing problem. The algorithm presented in this paper is based on converting an m-machine problem into a 2-machine problem. Based on testing and comparison with other relevant methods, the proposed algorithm is offered as a competitive alternative for practical application when solving n-jobs, m-machines problems.
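
    As a concrete illustration of the kind of conversion the abstract describes, here is a minimal Python sketch of Johnson's rule for the 2-machine case together with a CDS-style reduction of an m-machine problem to a family of 2-machine surrogates. The CDS framing and the sample data are assumptions for illustration; the paper's exact conversion may differ.

```python
# Johnson's rule for the n-jobs, 2-machines flow shop, plus a CDS-style
# reduction of an m-machine problem to m-1 two-machine surrogates.
# Sketch only; the paper's exact conversion may differ.

def johnson_sequence(times):
    """times: list of (t1, t2) per job; return job indices in Johnson order."""
    front = sorted((i for i, (a, b) in enumerate(times) if a <= b),
                   key=lambda i: times[i][0])               # small t1 first
    back = sorted((i for i, (a, b) in enumerate(times) if a > b),
                  key=lambda i: times[i][1], reverse=True)  # small t2 last
    return front + back

def makespan(seq, p):
    """p[j][k] is the processing time of job j on machine k (flow shop)."""
    done = [0.0] * len(p[0])          # completion time per machine
    for j in seq:
        for k in range(len(done)):
            start = max(done[k], done[k - 1] if k else 0.0)
            done[k] = start + p[j][k]
    return done[-1]

def cds(p):
    """Try m-1 two-machine surrogate problems; keep the best sequence."""
    m = len(p[0])
    best_seq, best_ms = None, float("inf")
    for k in range(1, m):
        surrogate = [(sum(job[:k]), sum(job[m - k:])) for job in p]
        seq = johnson_sequence(surrogate)
        ms = makespan(seq, p)
        if ms < best_ms:
            best_seq, best_ms = seq, ms
    return best_seq

# Illustrative data: 4 jobs on 3 machines
p = [[5, 2, 4], [3, 6, 2], [6, 4, 5], [2, 3, 6]]
seq = cds(p)
print(seq, makespan(seq, p))
```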

    Methods for measuring transmission parameters of data networks

    The objective of this bachelor's thesis was to study and describe known methods for testing the quality of transmission parameters of data networks. The work is based on the RFC standards, which are the basic recommendations for internet networks. Based on a study of RFC 2544, I tested web applications for measuring the parameters of data networks, as well as software utilities for the Linux operating system. Drawing on the findings from this testing, I designed a web application that measures the basic parameters of data networks: the transmission stream from the server to the user, the transmission stream from the user to the server, delay, and stability. The application is built with the server-side language PHP, a MySQL database, and client-side JavaScript. It was also tested on smartphones and tablets, and it can display the performed measurements and present the results as graphs.
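
    For illustration, a minimal Python sketch of the two basic measurements the thesis describes (downstream throughput and delay) follows. The test URL is a hypothetical placeholder, and the thesis's actual implementation uses PHP, MySQL, and JavaScript rather than this code.

```python
# Downstream throughput (timing a transfer of known size) and delay
# (round-trip time of small requests). TEST_URL is a hypothetical
# placeholder, not the thesis's actual test server.
import time
import urllib.request

TEST_URL = "http://example.com/testfile.bin"  # hypothetical payload URL

def download_throughput(url):
    """Return downstream throughput in Mbit/s for one timed transfer."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        nbytes = len(resp.read())
    elapsed = time.perf_counter() - start
    return nbytes * 8 / elapsed / 1e6

def round_trip_time(url, samples=5):
    """Approximate delay as the mean duration of small HEAD requests."""
    durations = []
    for _ in range(samples):
        req = urllib.request.Request(url, method="HEAD")
        start = time.perf_counter()
        urllib.request.urlopen(req).close()
        durations.append(time.perf_counter() - start)
    return sum(durations) / len(durations)

print(f"{download_throughput(TEST_URL):.2f} Mbit/s, "
      f"{round_trip_time(TEST_URL) * 1000:.1f} ms mean RTT")
```

    Stability, the fourth parameter the thesis names, could be estimated along the same lines as the spread (for example, the standard deviation) of repeated RTT samples.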

    Supplier evaluation and selection: a fuzzy novel multi-criteria group decision-making approach

    Supplier evaluation and selection is a subject widely explored through many kinds of approaches and multi-criteria decision methods, and more recently also through group decision-making ones. This paper addresses these problems by proposing an easy-to-use two-phase supplier selection decision model that uses a scientific approach and incorporates performance criteria in screening and selecting potential suppliers before the final optimal supplier selection. The first phase of the model determines the performance of the suppliers on both quantitative and qualitative criteria and the relative importance weights of the criteria. Fuzzy set theory is used to deal with the imprecision and vagueness involved in the subjective judgment of both the qualitative data of the decision matrix and the relative importance weights of the criteria. In the second phase, the suppliers are screened using their efficiencies and an agreed threshold, and the optimal supplier for cooperation is then selected from the reduced set of potential suppliers. To illustrate the applicability of the proposed model and validate it, a case study of a beverage producing company located in Ghana, in Sub-Saharan Africa, is presented. The results of the study can provide valuable clues and guidelines to decision-makers and analysts in pre-contract negotiations. The proposed model will assist practicing managers to effectively reduce their supply base and efficiently select the optimal supplier for cooperation. Implications of the study for theory and practice and future research directions are also outlined.
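
    As a rough illustration of the two-phase idea, the following Python sketch scores suppliers on fuzzy-weighted criteria using triangular fuzzy numbers, screens them against an agreed threshold, and selects the best survivor. The ratings, weights, and threshold are invented for illustration; the paper's actual aggregation and efficiency measure may differ.

```python
# Two-phase fuzzy supplier selection sketch: fuzzy weighted scoring,
# threshold screening, then selection. All numbers are illustrative.

def tfn_mul(a, b):
    """Multiply two triangular fuzzy numbers (l, m, u) componentwise."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def defuzzify(t):
    """Centroid of a triangular fuzzy number."""
    return sum(t) / 3

# Phase 1: fuzzy ratings per supplier per criterion, and fuzzy weights.
weights = [(0.2, 0.3, 0.4), (0.5, 0.6, 0.7)]   # e.g. cost, quality
ratings = {
    "S1": [(5, 7, 9), (3, 5, 7)],
    "S2": [(3, 5, 7), (7, 9, 10)],
    "S3": [(1, 3, 5), (1, 3, 5)],
}

scores = {}
for s, rs in ratings.items():
    total = (0.0, 0.0, 0.0)
    for w, r in zip(weights, rs):
        total = tfn_add(total, tfn_mul(w, r))
    scores[s] = defuzzify(total)

# Phase 2: screen with an agreed threshold, then pick the best survivor.
THRESHOLD = 4.0
shortlist = {s: v for s, v in scores.items() if v >= THRESHOLD}
best = max(shortlist, key=shortlist.get)
print(scores, "->", best)
```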

    Simulation-Based Calibration Checking for Bayesian Computation: The Choice of Test Quantities Shapes Sensitivity

    Simulation-based calibration checking (SBC) is a practical method to validate computationally derived posterior distributions or their approximations. In this paper, we introduce a new variant of SBC that alleviates several known problems. Our variant allows the user, in principle, to detect any possible issue with the posterior, whereas previously reported implementations could never detect large classes of problems, including when the posterior is equal to the prior. This is made possible by including additional data-dependent test quantities when running SBC. We argue and demonstrate that the joint likelihood of the data is an especially useful test quantity. Some other types of test quantities and their theoretical and practical benefits are also investigated. We support our recommendations with numerical case studies on a multivariate normal example and with theoretical analysis of SBC, thereby providing a more complete understanding of the underlying statistical mechanisms. On the theoretical side, we also bring attention to a relatively common mistake in the literature and clarify the difference between SBC and checks based on the data-averaged posterior. The SBC variant introduced in this paper is implemented in the SBC R package.
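
    To make the mechanism concrete, here is a minimal Python sketch of SBC for a conjugate normal-normal model that collects ranks both for the parameter itself and for the data-dependent joint log-likelihood test quantity the paper recommends. This is an illustration under simplifying assumptions, not the SBC R package's implementation.

```python
# One SBC loop: draw theta from the prior, simulate data, draw from the
# (here exact, conjugate) posterior, and record the rank of each test
# quantity at the prior draw among its posterior-draw values. Under a
# correct posterior these ranks are uniform.
import numpy as np

rng = np.random.default_rng(0)
N, SIGMA, M = 10, 1.0, 999        # data size, known sd, posterior draws

def log_lik(theta, y):
    return -0.5 * np.sum((y - theta) ** 2) / SIGMA**2

def sbc_ranks(n_sims=1000):
    ranks_theta, ranks_ll = [], []
    for _ in range(n_sims):
        theta = rng.normal(0, 1)                 # prior draw
        y = rng.normal(theta, SIGMA, size=N)     # simulated data
        # exact conjugate posterior of a N(0, 1) prior with normal data
        prec = 1 + N / SIGMA**2
        mu_post = (N * y.mean() / SIGMA**2) / prec
        draws = rng.normal(mu_post, prec**-0.5, size=M)
        # rank of the prior draw's quantity among posterior-draw quantities
        ranks_theta.append(np.sum(draws < theta))
        ll = np.array([log_lik(d, y) for d in draws])
        ranks_ll.append(np.sum(ll < log_lik(theta, y)))
    return np.array(ranks_theta), np.array(ranks_ll)

rt, rl = sbc_ranks()
# Both histograms should be roughly flat if the posterior is correct; the
# joint log-likelihood ranks catch problems the theta ranks can miss.
print(np.histogram(rt, bins=10, range=(0, M))[0])
print(np.histogram(rl, bins=10, range=(0, M))[0])
```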

    Manufacturing technology of composite materials: principles of modification of polymer composite materials technology based on polytetrafluoroethylene

    The results of investigations into the technological formation of new wear-resistant polymer composites based on polytetrafluoroethylene (PTFE) filled with disperse synthetic and natural compounds are presented. The efficiency of using PTFE composites reinforced with carbon fibers depends on many factors that influence the significant improvement of physicomechanical characteristics. The results of this research indicate that interfacial and surface phenomena at the polymer-solid interface, together with the composition, play a decisive role in the properties of PTFE composites. Fillers hinder the relative movement of the PTFE molecules past one another and in this way reduce creep or deformation of parts, reducing the wear rate of parts used in dynamic applications as well as the coefficient of thermal expansion. The necessary structural parameters of such polymer composites are ensured by the operating regimes of the process equipment.

    Ecosystem-inspired enterprise modelling framework for collaborative and networked manufacturing systems

    Rapid changes in the open manufacturing environment are imminent due to the increase in customer demand, global competition, and digital fusion. This has exponentially increased both complexity and uncertainty in the manufacturing landscape, creating serious challenges for competitive enterprises. For enterprises to remain competitive, analysing manufacturing activities and designing systems that address emergent needs in a timely and efficient manner is understood to be crucial. However, existing analysis and design approaches adopt a narrow diagnostic focus on either managerial or engineering aspects and neglect the holistic, complex behaviour of enterprises in a collaborative manufacturing network (CMN). It has been suggested that reflecting on ecosystem theory may bring a better understanding of how to analyse the CMN. The research presented in this paper draws on a theoretical discussion aimed at demonstrating an approach that facilitates those analysis and design tasks. This approach was then operationalised using enterprise modelling (EM) techniques in a newly developed framework that enhances systematic analysis, design, and business-IT alignment. This research perspective is expected to open a new field of investigation.

    Clinical prediction models for mortality in patients with covid-19: external validation and individual participant data meta-analysis

    OBJECTIVE: To externally validate various prognostic models and scoring rules for predicting short term mortality in patients admitted to hospital for covid-19. DESIGN: Two stage individual participant data meta-analysis. SETTING: Secondary and tertiary care. PARTICIPANTS: 46 914 patients across 18 countries, admitted to a hospital with polymerase chain reaction confirmed covid-19 from November 2019 to April 2021. DATA SOURCES: Multiple (clustered) cohorts in Brazil, Belgium, China, Czech Republic, Egypt, France, Iran, Israel, Italy, Mexico, Netherlands, Portugal, Russia, Saudi Arabia, Spain, Sweden, United Kingdom, and United States, previously identified by a living systematic review of covid-19 prediction models published in The BMJ, and through PROSPERO, reference checking, and expert knowledge. MODEL SELECTION AND ELIGIBILITY CRITERIA: Prognostic models identified by the living systematic review and through contacting experts. Models were excluded a priori if they had a high risk of bias in the participant domain of PROBAST (prediction model study risk of bias assessment tool) or if their applicability was deemed poor. METHODS: Eight prognostic models with diverse predictors were identified and validated. A two stage individual participant data meta-analysis was performed of the estimated model concordance (C) statistic, calibration slope, calibration-in-the-large, and observed to expected ratio (O:E) across the included clusters. MAIN OUTCOME MEASURES: 30 day mortality or in-hospital mortality. RESULTS: Datasets included 27 clusters from 18 different countries and contained data on 46 914 patients. The pooled estimates ranged from 0.67 to 0.80 (C statistic), 0.22 to 1.22 (calibration slope), and 0.18 to 2.59 (O:E ratio) and were prone to substantial between study heterogeneity. The 4C Mortality Score by Knight et al (pooled C statistic 0.80, 95% confidence interval 0.75 to 0.84, 95% prediction interval 0.72 to 0.86) and the clinical model by Wang et al (0.77, 0.73 to 0.80, 0.63 to 0.87) had the highest discriminative ability. On average, 29% fewer deaths were observed than predicted by the 4C Mortality Score (pooled O:E 0.71, 95% confidence interval 0.45 to 1.11, 95% prediction interval 0.21 to 2.39), 35% fewer than predicted by the Wang clinical model (0.65, 0.52 to 0.82, 0.23 to 1.89), and 4% fewer than predicted by Xie et al's model (0.96, 0.59 to 1.55, 0.21 to 4.28). CONCLUSION: The prognostic value of the included models varied greatly between the data sources. Although the Knight 4C Mortality Score and Wang clinical model appeared most promising, recalibration (intercept and slope updates) is needed before implementation in routine care.
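
    For readers unfamiliar with the reported measures, the following Python sketch computes a C statistic, a calibration slope, and an O:E ratio on toy data. The data are synthetic, and scikit-learn is used here purely for convenience; this is not the study's code.

```python
# Discrimination (C statistic), calibration slope (logistic regression
# of the outcome on the linear predictor), and observed:expected ratio,
# computed on synthetic, deliberately miscalibrated predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
p_hat = rng.uniform(0.05, 0.6, size=500)   # a model's predicted risks
y = rng.binomial(1, 0.7 * p_hat)           # outcomes: ~30% fewer events

c_stat = roc_auc_score(y, p_hat)           # C statistic (discrimination)

lp = np.log(p_hat / (1 - p_hat)).reshape(-1, 1)   # linear predictor
model = LogisticRegression(C=1e9)          # huge C ~= unpenalized fit
slope = model.fit(lp, y).coef_[0, 0]       # calibration slope

oe = y.sum() / p_hat.sum()                 # observed / expected events

print(f"C = {c_stat:.2f}, slope = {slope:.2f}, O:E = {oe:.2f}")
```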

    Web application for determining the location of IP nodes

    This thesis deals with geolocation in the internet. It describes the available geolocation approaches and focuses mainly on passive geolocation methods. Passive geolocation relies on location databases, which are covered both in the theoretical part and in the practical part of the thesis. In the practical part, a comprehensive system for geolocation in the internet environment is built, using both paid and free geolocation databases as well as the WHOIS database. The data from the paid databases are processed and the accuracy of the databases is evaluated.
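
    As a minimal illustration of the passive approach, the Python sketch below queries a free HTTP geolocation database and scores its accuracy as the great-circle distance from a known reference location. The endpoint is modelled on free services such as ip-api.com, and its exact URL and response fields should be treated as assumptions.

```python
# Passive geolocation sketch: look up an IP in a geolocation database
# and evaluate accuracy against a known true position.
import json
import math
import urllib.request

def geolocate(ip):
    """Query a free geolocation database over HTTP; return (lat, lon)."""
    with urllib.request.urlopen(f"http://ip-api.com/json/{ip}") as resp:
        data = json.load(resp)          # assumed response fields
    return data.get("lat"), data.get("lon")

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    la1, lo1, la2, lo2 = map(math.radians, (*a, *b))
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# Accuracy of a database answer = distance to a known reference position.
estimate = geolocate("8.8.8.8")
print(estimate, haversine_km(estimate, (50.08, 14.42)))  # e.g. vs Prague
```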