185 research outputs found

    Optimization of cholesterol removal, growth and fermentation patterns of Lactobacillus acidophilus ATCC 4962 in the presence of mannitol, fructo-oligosaccharide and inulin: a response surface methodology approach

    Get PDF
    Aims: To optimize cholesterol removal by Lactobacillus acidophilus ATCC 4962 in the presence of prebiotics, and to study the growth and fermentation patterns of the prebiotics. Methods and Results: Lactobacillus acidophilus ATCC 4962 was screened in the presence of six prebiotics, namely sorbitol, mannitol, maltodextrin, hi-amylose maize, fructo-oligosaccharide (FOS) and inulin, in order to determine the combination giving the highest level of cholesterol removal. The first-order model showed that the combination of inoculum size, mannitol, FOS and inulin was best for removal of cholesterol. The second-order polynomial regression model estimated the optimum condition of the factors for cholesterol removal by L. acidophilus ATCC 4962 to be 2.64% w/v inoculum size, 4.13% w/v mannitol, 3.29% w/v FOS and 5.81% w/v inulin. Analyses of growth, mean doubling time and short-chain fatty acid (SCFA) production using quadratic models indicated that cholesterol removal and the production of SCFA were growth-associated. Conclusions: Optimum cholesterol removal was obtained from the fermentation of L. acidophilus ATCC 4962 in the presence of mannitol, FOS and inulin. Cholesterol removal and the production of SCFA appeared to be growth-associated and highly influenced by the prebiotics. Significance and Impact of the Study: Response surface methodology proved reliable in developing the model, optimizing the factors and analysing interaction effects. The results provide a better understanding of the interactions between the probiotic and prebiotics for the removal of cholesterol.
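    Purely as an illustration of the response-surface workflow this abstract describes (fit a second-order polynomial model over designed factor levels, then locate its optimum), here is a minimal sketch in Python; the simulated data, factor ranges and the "true" optimum used to generate a response are assumptions for illustration, not the paper's measurements.

```python
# Minimal sketch of a response-surface optimization: fit a second-order
# (quadratic + interaction) polynomial and find the predicted optimum.
# All data below are simulated placeholders, NOT the paper's experiments;
# the four factors follow the abstract: inoculum, mannitol, FOS, inulin (% w/v).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical design points over assumed factor ranges (% w/v)
X = rng.uniform(low=[1, 2, 2, 3], high=[4, 6, 5, 8], size=(30, 4))

# Fake response: a quadratic surface with an interior optimum plus noise
true_opt = np.array([2.6, 4.1, 3.3, 5.8])
y = 40 - ((X - true_opt) ** 2).sum(axis=1) + rng.normal(0, 0.5, size=30)

# Second-order polynomial regression model (the "second-order model")
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Locate the factor settings that maximise predicted cholesterol removal
def neg_removal(x):
    return -model.predict(poly.transform(x.reshape(1, -1)))[0]

res = minimize(neg_removal, x0=X.mean(axis=0),
               bounds=[(1, 4), (2, 6), (2, 5), (3, 8)])
print("estimated optimum (% w/v):", res.x.round(2))
```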

    Experimental designs for multiple-level responses, with application to a large-scale educational intervention

    Full text link
    Educational research often studies subjects that are in naturally clustered groups of classrooms or schools. When designing a randomized experiment to evaluate an intervention directed at teachers, but with effects on teachers and their students, the power or anticipated variance for the treatment effect needs to be examined at both levels. If the treatment is applied to clusters, power is usually reduced. At the same time, a cluster design decreases the probability of contamination, and contamination can also reduce power to detect a treatment effect. Designs that are optimal at one level may be inefficient for estimating the treatment effect at another level. In this paper we study the efficiency of three designs and their ability to detect a treatment effect: randomize schools to treatment, randomize teachers within schools to treatment, and completely randomize teachers to treatment. The three designs are compared for both the teacher and student level within the mixed model framework, and a simulation study is conducted to compare expected treatment variances for the three designs with various levels of correlation within and between clusters. We present a computer program that study designers can use to explore the anticipated variances of treatment effects under proposed experimental designs and settings. Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/), http://dx.doi.org/10.1214/08-AOAS216, by the Institute of Mathematical Statistics (http://www.imstat.org).
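    A rough sense of the comparison can be had from a small Monte Carlo sketch like the one below, which contrasts the spread of a difference-in-means treatment-effect estimate at the teacher level under the three randomizations; the variance components and the numbers of schools and teachers are illustrative assumptions, not the paper's settings, and no mixed-model estimation is attempted.

```python
# Monte Carlo sketch: anticipated variance of a difference-in-means estimate
# under three randomizations (schools, teachers within schools, all teachers).
# Sizes and variance components are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_schools, teachers_per_school = 20, 6
sigma_school, sigma_teacher = 1.0, 1.0   # assumed between/within-cluster SDs
n_teachers = n_schools * teachers_per_school
n_reps = 2000

def estimator_variance(design):
    effects = []
    for _ in range(n_reps):
        school_eff = rng.normal(0, sigma_school, n_schools)
        y = (np.repeat(school_eff, teachers_per_school)
             + rng.normal(0, sigma_teacher, n_teachers))
        if design == "school":            # randomize whole schools
            z = np.repeat(rng.permutation(n_schools) < n_schools // 2,
                          teachers_per_school)
        elif design == "within":          # randomize teachers within each school
            z = np.concatenate([rng.permutation(teachers_per_school)
                                < teachers_per_school // 2
                                for _ in range(n_schools)])
        else:                             # completely randomize teachers
            z = rng.permutation(n_teachers) < n_teachers // 2
        # Null effect: the spread of the estimate is the estimator's variance
        effects.append(y[z].mean() - y[~z].mean())
    return np.var(effects)

for d in ["school", "within", "complete"]:
    print(d, round(estimator_variance(d), 4))
```

    Under these assumed variance components, the school-level randomization shows the largest estimator variance, which mirrors the power loss from cluster randomization discussed in the abstract.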

    Agnostic notes on regression adjustments to experimental data: Reexamining Freedman's critique

    Full text link
    Freedman [Adv. in Appl. Math. 40 (2008) 180-193; Ann. Appl. Stat. 2 (2008) 176-196] critiqued ordinary least squares regression adjustment of estimated treatment effects in randomized experiments, using Neyman's model for randomization inference. Contrary to conventional wisdom, he argued that adjustment can lead to worsened asymptotic precision, invalid measures of precision, and small-sample bias. This paper shows that in sufficiently large samples, those problems are either minor or easily fixed. OLS adjustment cannot hurt asymptotic precision when a full set of treatment-covariate interactions is included. Asymptotically valid confidence intervals can be constructed with the Huber-White sandwich standard error estimator. Checks on the asymptotic approximations are illustrated with data from Angrist, Lang, and Oreopoulos's [Am. Econ. J.: Appl. Econ. 1:1 (2009) 136-163] evaluation of strategies to improve college students' achievement. The strongest reasons to support Freedman's preference for unadjusted estimates are transparency and the dangers of specification search. Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/), http://dx.doi.org/10.1214/12-AOAS583, by the Institute of Mathematical Statistics (http://www.imstat.org).
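    The adjusted estimator the abstract refers to, OLS with a full set of treatment-by-covariate interactions (covariates centered) and a Huber-White sandwich standard error, can be sketched roughly as below; the data are simulated with an assumed single covariate, not the Angrist-Lang-Oreopoulos data.

```python
# Sketch of interacted OLS adjustment with sandwich standard errors.
# Simulated data for illustration; one covariate x, randomized treatment z.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)                        # pre-treatment covariate
z = rng.binomial(1, 0.5, size=n)              # randomized treatment indicator
y = 1.0 + 0.5 * z + 0.8 * x + 0.3 * z * x + rng.normal(size=n)

xc = x - x.mean()                             # center covariates before interacting
X = sm.add_constant(np.column_stack([z, xc, z * xc]))
fit = sm.OLS(y, X).fit(cov_type="HC2")        # Huber-White (sandwich) standard errors

# With centered covariates, the coefficient on z estimates the average effect
print("ATE estimate:", fit.params[1].round(3),
      "robust SE:", fit.bse[1].round(3))
```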

    Proving the performance of a new revenue management system

    Get PDF
    Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data that they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality. A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to that of an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent reliance on fixing some model as the universe for the testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least have no consensus on their validity. How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model is the cause of the performance, and to compare against an (incumbent) process rather than against an alternate model. In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and the analysis of the results are less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, and the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results. Keywords: revenue management, airlines, sandbox testing, econometric analysis.
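    For intuition only, the sketch below mimics the shape of the arm-versus-arm comparison such a live test enables, with simulated revenue figures rather than Iberia data; the per-flight test deliberately ignores the week-level assignment (clustering), which a proper analysis, such as the companion econometric paper, would have to respect.

```python
# Sketch of an adjacent-weeks "live test" comparison with simulated numbers.
# Flights alternate between the new and incumbent control systems by week,
# and average revenue per flight is compared across the two arms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_weeks, flights_per_week = 36, 40            # roughly 9 months of weekly switching

records = []
for week in range(n_weeks):
    arm = "new" if week % 2 == 0 else "incumbent"   # alternate system by week
    lift = 0.03 if arm == "new" else 0.0            # assumed 3% lift, illustrative only
    revenue = rng.normal(10_000 * (1 + lift), 1_500, size=flights_per_week)
    records.append((arm, revenue))

new = np.concatenate([r for a, r in records if a == "new"])
old = np.concatenate([r for a, r in records if a == "incumbent"])

# Welch test on per-flight revenue (ignores week-level clustering; sketch only)
t, p = stats.ttest_ind(new, old, equal_var=False)
print(f"mean lift: {new.mean() / old.mean() - 1:.2%}, p-value: {p:.3f}")
```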