
    Forecasting using the T-method

    The T-method is a technique developed by Genichi Taguchi to calculate an overall prediction based on the signal-to-noise ratio without the use of Gram-Schmidt orthogonalization. The Taguchi Methods, also known as robust design principles, are used to determine the optimal levels of control factors by planning and conducting experiments and evaluating their results. The primary goal of robust design is to minimize variance in the presence of noise factors to achieve a robust process. The T-method is one of the techniques that evolved from the Taguchi Methods. This thesis illustrates the use of the T-method and outlines its steps using four forecasting case studies from various areas, each with a univariate response. The methodology used to forecast in each case study is explained and the results obtained are demonstrated. In addition, a basic comparison with the Mahalanobis-Taguchi system is provided. --Abstract, page iii
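    The core computation can be sketched compactly. The following is a minimal illustration of the usual presentation of the T-method (a proportionality coefficient β and an SN ratio η per item, followed by an SN-weighted integrated estimate), assuming the signal data have already been centred on the unit-space item means; the data and variable names are invented for illustration and are not taken from the thesis.

```python
def t_method_predict(X, M, x_new):
    """X: signal-member feature rows, already centred on the unit-space
    item means; M: centred outputs; x_new: centred features to forecast."""
    n = len(X)
    r = sum(m * m for m in M)                       # effective divider
    beta, eta = [], []
    for j in range(len(x_new)):
        xj = [row[j] for row in X]
        L = sum(m * x for m, x in zip(M, xj))       # linear form
        beta.append(L / r)                          # proportionality coefficient
        s_beta = L * L / r                          # variation due to the slope
        ve = (sum(x * x for x in xj) - s_beta) / (n - 1)  # error variance
        # SN ratio: the weight of item j in the integrated estimate
        eta.append((s_beta - ve) / (r * ve) if s_beta > ve else 0.0)
    # SN-weighted average of the per-item estimates x_j / beta_j
    return sum(e * (x / b) for e, b, x in zip(eta, beta, x_new)) / sum(eta)

# Toy signal data: two items, three members; outputs centred on a
# unit-space output mean of 15 (all values made up).
X = [[-2.1, -4.5], [1.9, 3.5], [4.1, 8.5]]
M = [-10.0, 10.0, 20.0]
forecast = t_method_predict(X, M, [3.0, 6.0]) + 15.0   # roughly 30
```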

    OPTIMIZATION OF TENOXICAM LOADED NIOSOMES USING QUADRATIC DESIGN

    Objective: The objective of the present study was to obtain an optimized formula of Tenoxicam (TNX) niosomes using a quadratic design. Methods: TNX niosomes were prepared by the organic solvent injection method, and all vehicles were evaluated for their entrapment efficiency (EE%) and particle size (nm). Results: EE% was found to be between 77.88% and 89.98%. Entrapment efficiency was significantly affected by the applied processing variables, such as the concentrations of Span 60 and cholesterol. The mean vesicle size of the drug-loaded niosomes of the different batches ranged from 79 to 190 nm. The vesicle size of the drug-loaded niosomal batches was found to decrease as the concentration of Span 60 increased. The effects of all the tested independent variables had p-values < 0.05. Conclusion: The quadratic design succeeded in optimizing the formulation ingredients with respect to the EE% and particle size of Tenoxicam niosomes. Finally, the optimization process provided a formula with the optimum levels of the factors.
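    As a toy illustration of what a quadratic design buys, the sketch below fits y = a + bx + cx² exactly through three design points (a single factor level versus response) and locates the stationary level of the factor. The factor levels and responses are invented for illustration only and are not the study's data.

```python
def fit_quadratic(p1, p2, p3):
    """Exact quadratic y = a + b*x + c*x**2 through three (x, y) points,
    built from Newton divided differences."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d1 = (y2 - y1) / (x2 - x1)           # first divided difference
    d2 = (y3 - y2) / (x3 - x2)
    c = (d2 - d1) / (x3 - x1)            # second divided difference
    b = d1 - c * (x1 + x2)
    a = y1 - b * x1 - c * x1 * x1
    return a, b, c

# Hypothetical design points: factor level vs. measured response (EE%).
a, b, c = fit_quadratic((1.0, 80.0), (2.0, 90.0), (3.0, 86.0))
x_opt = -b / (2.0 * c)                   # stationary point (maximum if c < 0)
y_opt = a + b * x_opt + c * x_opt ** 2   # predicted optimum response
```

A real response-surface design would fit the quadratic by least squares over more runs and factors, but the optimum is located the same way: at the stationary point of the fitted surface.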

    Skolem Functions for Factored Formulas

    Given a propositional formula F(x,y), a Skolem function for x is a function \Psi(y) such that substituting \Psi(y) for x in F gives a formula semantically equivalent to \exists x\, F. Automatically generating Skolem functions is of significant interest in several applications, including certified QBF solving, finding strategies of players in games, synthesising circuits and bit-vector programs from specifications, disjunctive decomposition of sequential circuits, etc. In many such applications, F is given as a conjunction of factors, each of which depends on a small subset of variables. Existing algorithms for Skolem function generation ignore any such factored form and treat F as a monolithic function. This presents scalability hurdles in medium to large problem instances. In this paper, we argue that exploiting the factored form of F can give significant performance improvements in practice when computing Skolem functions. We present a new CEGAR-style algorithm for generating Skolem functions from factored propositional formulas. In contrast to earlier work, our algorithm neither requires a proof of QBF satisfiability nor uses composition of monolithic conjunctions of factors. We show experimentally that our algorithm generates smaller Skolem functions and outperforms state-of-the-art approaches on several large benchmarks. Comment: Full version of FMCAD 2015 conference publication
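    The definition can be checked mechanically on a tiny example. Below, F is our own toy formula (x XOR y, written as a conjunction of two factors) and Ψ(y) = ¬y is a candidate Skolem function; neither is from the paper. Exhaustive enumeration over y confirms that F(Ψ(y), y) is equivalent to ∃x F(x, y).

```python
# Toy formula in factored form: F(x, y) = (x or y) and (not x or not y),
# i.e. x XOR y. Formula and candidate function are illustrative only.
def F(x, y):
    factor1 = x or y              # each factor depends on few variables
    factor2 = (not x) or (not y)
    return factor1 and factor2

def psi(y):                       # candidate Skolem function for x
    return not y

# Psi is a Skolem function iff F(psi(y), y) <-> exists x. F(x, y), for all y.
for y in (False, True):
    exists_x = F(False, y) or F(True, y)
    assert F(psi(y), y) == exists_x
```

For larger formulas this naive enumeration is exponential in the number of universal variables, which is exactly why algorithmic generation (and exploiting the factored form) matters.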

    Left ventricular dysfunction by strain echocardiography in thalassemia patients: a pilot study

    Background: To evaluate myocardial function and its correlation with serum ferritin and the number of transfusions in beta-thalassemia major patients by using standard echocardiography and left ventricular strain imaging. Methods: This was a cross-sectional exploratory study of 56 beta-thalassemia patients conducted at a tertiary-care center in India between September 2016 and August 2017. Patients aged less than 18 years, diagnosed with thalassemia major, recipients of >20 units of blood transfusions, and with normal left ventricular (LV) function on 2D echocardiography were included in the study. Severity of iron overload was determined using serum ferritin levels, and LV strain imaging parameters were evaluated using the strain values of 17 LV segments. Results: A total of 56 beta-thalassemia patients were included in the study. Of these, 29 (51.8%) were boys and 27 (48.2%) were girls, with a mean age of 7.8±1.84 years. The average serum ferritin level was 4089.83 ng/ml. Strain values of the basal lateral wall of the left ventricle were significantly more abnormal in patients who received more (>80) transfusions than in those who received fewer transfusions (p=0.025 and p=0.045, respectively). Patients with serum ferritin >6000 ng/ml had impaired strain (p=0.03). Conclusions: Conventional echocardiographic parameters and left ventricular ejection fraction (LVEF) do not provide adequate information about LV dysfunction. Systolic strain imaging of the LV indicated the presence of early LV systolic dysfunction in patients who received a greater number of blood transfusions and in patients with higher serum ferritin levels.

    HRotatE: Hybrid Relational Rotation Embedding for Knowledge Graph

    A Knowledge Graph (KG) represents real-world information in the form of triplets (head, relation, and tail). However, most KGs are generated manually or semi-automatically, which leaves an enormous amount of information missing from a KG. The goal of a Knowledge-Graph Completion task is to predict missing links in a given Knowledge Graph. Various approaches exist to predict a missing link in a KG; the most prominent are based on tensor factorization and Knowledge-Graph embeddings, such as RotatE and SimplE. The RotatE model depicts each relation as a rotation from the source entity (head) to the target entity (tail) in a complex vector space. In RotatE, the head and tail entities are derived from one embedding-generation class, resulting in a relatively low prediction score. SimplE is primarily based on Canonical Polyadic (CP) decomposition. SimplE enhances the CP approach by adding the inverse relation, where the head embedding and tail embedding are taken from different embedding-generation classes but remain dependent on each other. However, SimplE is not able to predict composition patterns very well. This paper presents a new, hybridized variant (HRotatE) of the existing RotatE approach. Essentially, HRotatE is hybridized from RotatE and SimplE. We have used the principle of inverse embedding (from the SimplE model) in a bid to improve the prediction scores of HRotatE. Hence, our results have proven to be better than those of the native RotatE. Also, HRotatE outperforms several state-of-the-art models on different datasets. In conclusion, our proposed approach (HRotatE) is relatively efficient: it utilizes half the number of training steps required by RotatE while generating approximately the same results.
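    The rotation idea underlying RotatE can be stated concretely: each relation is a vector of unit-modulus complex numbers, and a triple is scored by how close the element-wise product h ∘ r lands to t. The sketch below uses made-up two-dimensional embeddings purely to illustrate the scoring function; it is not trained HRotatE output.

```python
import cmath

def rotate_score(head, rel_phases, tail):
    """head, tail: lists of complex numbers; rel_phases: rotation angles.
    RotatE models t ~ h * r element-wise, with |r_i| = 1, and scores a
    triple by the negative distance between h*r and t (higher = better)."""
    rel = [cmath.exp(1j * p) for p in rel_phases]   # unit-modulus rotations
    return -sum(abs(h * r - t) for h, r, t in zip(head, rel, tail))

# Made-up 2-dimensional embeddings for one (head, relation, tail) triple.
h = [1 + 0j, 0 + 1j]
phases = [cmath.pi / 2, cmath.pi]     # rotate by 90 and 180 degrees
t_true = [0 + 1j, 0 - 1j]             # exactly h rotated by the phases
t_false = [1 + 0j, 0 + 1j]            # an unrelated entity

score_true = rotate_score(h, phases, t_true)    # ~0: a perfect match
score_false = rotate_score(h, phases, t_false)  # clearly more negative
```

In training, the phases and entity vectors are learned so that observed triples score higher than corrupted ones; the hybridization in HRotatE changes how the head and tail embeddings are generated, not this scoring rule.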

    ENTERPRISE ANALYTICS: METHOD AND APPARATUS FOR SDA PACKET DEBUGGING, FLOW VISIBILITY AND MONITORING

    The present techniques perform software defined access (SDA) packet visibility, monitoring, and analytics without adding performance overhead. An Application Specific Integrated Circuit (ASIC) may be used to perform these processes at line rate without involvement from a central processing unit.

    Impact of spatial distribution on the sensory properties of multiphase 3D-printed food configurations

    The rise of 3D-printing technology is opening up new possibilities for arranging two or more sensorily distinct phases in a specific manner, and thus potentially creating new sensory experiences. Particularly interesting is the spatial configuration of multiple phases for adjusting flavor and texture perception without changing the overall composition, as such configuration would represent a step towards individualization. In the present study, different 3D configurations of two rheologically and texturally very distinct phases were investigated as to their effect on mechanical properties and sensory perception. Chocolate and cream cheese masses were arranged three-dimensionally (cube-in-cube; layered) by additive manufacturing and characterized by measuring penetration resistance as well as by hedonic, descriptive, and temporal dominance of sensation (TDS) methodologies. By comparing samples with identical phase ratios, three characteristic texture profiles could be generated. How much the samples were liked depended significantly on perceived mouthfeel/texture and product hardness. The mouthfeel was in turn determined by the 3D configuration of the phases. TDS characterization showed either two or three dominance areas of one of the phases, depending on whether chocolate or cream cheese was perceived initially. While the dominance time of chocolate increased with increasing chocolate fraction in samples with chocolate as the external phase, the dominance time of cream cheese in samples with cream cheese as the external phase hardly changed with increasing phase fraction. This was mainly attributed to the very different rheological phase properties of cream cheese and chocolate. 
Based on the TDS evolution at the later stages of consumption, which is rather independent of the initial configuration, the renewal of the relevant interface in the oral cavity was mainly determined by the mixing kinetics of the two phases, and secondarily by which phase was perceived to be dominant before a phase-dominance change took place. This study shows that in defining the 3D configuration of phases with differing rheological properties, there is considerable potential for adjusting the sensory properties. This is a step towards broader coverage of consumer needs through 3D product design without the need for formulation adjustments.