29 research outputs found

    Application of Mass Spectroscopy in Pharmaceutical and Biomedical Analysis

    Mass spectrometry (MS) is a powerful analytical tool with many applications in the pharmaceutical and biomedical fields. Increases in instrument sensitivity and resolution have opened new dimensions in the analysis of pharmaceuticals and of complex metabolites of biological systems. Compared with other techniques, mass spectrometry is the only technique for direct molecular weight determination, from which the molecular formula can be predicted. It is based on converting the sample into ions, with or without fragmentation, which are then identified by their mass-to-charge ratios (m/z). Mass spectrometry provides rich elemental information, an important asset when interpreting the components of complex mixtures, and is thus an important tool for structure elucidation of unknown compounds. Mass spectrometry also supports quantitative elemental analysis: the intensity of a mass spectral signal is directly proportional to the abundance of the corresponding element. It is also a noninvasive tool that permits in vivo studies in humans, and recent research has explored further applications of mass spectrometers in the biomedical field. MS is also used as a sensitive detector for chromatographic techniques, as in LC–MS, GC–MS, and LC–MS/MS. These hyphenated technological developments have significantly improved the technique's applicability in pharmaceutical and biomedical analyses.
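The m/z-based identification described above can be illustrated with a toy peak matcher. Everything here (reference masses, tolerance, ion names) is a hypothetical example for illustration, not from the paper:

```python
# Toy illustration of identifying ions by mass-to-charge ratio (m/z).
# Reference masses and tolerance below are hypothetical examples.

def mz(mass: float, charge: int) -> float:
    """Mass-to-charge ratio of an ion."""
    return mass / charge

def match_peak(observed_mz: float, references: dict, tol: float = 0.01):
    """Names of reference ions whose m/z lies within tol of the observed peak."""
    return [name for name, (mass, charge) in references.items()
            if abs(mz(mass, charge) - observed_mz) <= tol]

# Hypothetical reference table: name -> (monoisotopic mass, charge)
refs = {
    "caffeine [M+H]+": (195.0877, 1),
    "glucose [M+H]+": (181.0707, 1),
}

matches = match_peak(195.09, refs, tol=0.01)  # matches the caffeine ion
```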

    Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States

    Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this period showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
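Probabilistic accuracy comparisons like the ones described above are commonly made with interval-based scores. Below is a minimal sketch of the interval score for one central (1 − alpha) prediction interval; it is illustrative only, not the Forecast Hub's actual scoring code:

```python
# Interval score for a central (1 - alpha) prediction interval [lower, upper].
# Lower scores are better: the score penalizes wide intervals (vagueness) and
# observations that fall outside the interval (miscalibration).

def interval_score(lower, upper, observed, alpha):
    score = upper - lower                            # sharpness penalty
    if observed < lower:
        score += (2.0 / alpha) * (lower - observed)  # missed low
    if observed > upper:
        score += (2.0 / alpha) * (observed - upper)  # missed high
    return score

# A sharp, well-calibrated interval beats both a vague one and a missed one.
sharp = interval_score(90, 110, 100, alpha=0.2)    # covered, sharp: 20.0
vague = interval_score(0, 300, 100, alpha=0.2)     # covered, vague: 300.0
missed = interval_score(120, 140, 100, alpha=0.2)  # missed low: 220.0
```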

    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses an instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements. In our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
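The calibration idea recommended above can be sketched as fitting a single conversion factor from OD to particle count using a serial dilution of particles of known concentration. All numbers below are made-up illustrations, not data from the study:

```python
# Sketch of OD-to-count calibration via serial dilution of microspheres of
# known concentration. Dilution values below are hypothetical.

def fit_conversion(od_readings, particle_counts):
    """Least-squares slope through the origin: particles per OD unit."""
    num = sum(od * n for od, n in zip(od_readings, particle_counts))
    den = sum(od * od for od in od_readings)
    return num / den

# Hypothetical two-fold serial dilution of silica microspheres
counts = [8e8, 4e8, 2e8, 1e8]           # known particles per mL
ods = [0.80, 0.40, 0.20, 0.10]          # idealized linear OD response

factor = fit_conversion(ods, counts)    # particles per OD unit
estimated_count = 0.35 * factor         # cell-count estimate for OD = 0.35
```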

    Nations within a nation: variations in epidemiological transition across the states of India, 1990–2016 in the Global Burden of Disease Study

    India is home to 18% of the world's population, and many states of India have populations similar to those of large countries. Action to effectively improve population health in India requires the availability of reliable and comprehensive state-level estimates of disease burden and risk factors over time. Such comprehensive estimates have not been available so far for all major diseases and risk factors. Thus, we aimed to estimate the disease burden and risk factors in every state of India as part of the Global Burden of Disease (GBD) Study 2016.

    Optimization for online platforms

    Thesis: Ph.D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, February 2021. Cataloged from the official PDF of the thesis. Includes bibliographical references (pages 179-189). In the last decade, there has been a surge in online platforms providing a wide variety of services. These platforms face an array of challenges that can be mitigated with appropriate modeling and the use of optimization tools. In this thesis, we examine, model, and provide solutions to some of the key challenges. First, we focus on the problem of intelligent SMS routing faced by several online platforms today. In a dynamically changing environment, platforms need to carefully choose SMS aggregators to achieve a high rate of delivered text messages at a low cost. To model this problem, we consider a novel variant of the multi-armed bandit (MAB) problem, MAB with cost subsidy, which models many real-life applications where the learning agent must pay to select an arm and is concerned with optimizing both cumulative costs and rewards. We show that naive generalizations of existing MAB algorithms like Upper Confidence Bound and Thompson Sampling do not perform well for the SMS routing problem. For an instance with K arms and time horizon T, we then establish a fundamental lower bound of Ω(K¹/³T²/³) on the performance of any online learning algorithm for this problem, highlighting its hardness in comparison to the classical MAB problem. We also present a simple variant of explore-then-commit and establish near-optimal regret bounds for this algorithm. Lastly, we perform numerical simulations to understand the behavior of a suite of algorithms on various instances and offer a practical guide to choosing among them.
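The explore-then-commit idea for the cost-subsidy setting can be sketched as follows. This is a simplified stand-in for the thesis's algorithm, not its exact procedure, and the aggregator costs and delivery rates are hypothetical:

```python
import random

# Illustrative explore-then-commit for a bandit with a cost subsidy:
# explore each arm equally, then commit to the cheapest arm whose estimated
# reward is within a subsidy factor of the best estimate.

def explore_then_commit(reward_fn, costs, explore_rounds, subsidy=0.9):
    k = len(costs)
    totals = [0.0] * k
    # Exploration phase: pull each arm explore_rounds times.
    for arm in range(k):
        for _ in range(explore_rounds):
            totals[arm] += reward_fn(arm)
    means = [t / explore_rounds for t in totals]
    # Commit phase: among arms with estimated reward >= subsidy * best,
    # choose the cheapest (the cost-subsidy criterion).
    best = max(means)
    feasible = [a for a in range(k) if means[a] >= subsidy * best]
    return min(feasible, key=lambda a: costs[a])

random.seed(0)
# Hypothetical SMS aggregators: arm 0 is cheap with a slightly lower
# delivery rate, arm 1 is expensive with a higher rate.
delivery_rates = [0.88, 0.92]
chosen = explore_then_commit(
    lambda a: 1.0 if random.random() < delivery_rates[a] else 0.0,
    costs=[1.0, 3.0], explore_rounds=200)
```

With a high enough subsidy tolerance, the cheap arm is typically selected even though its reward is slightly lower, which is exactly the cost-reward trade-off the model captures.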
Second, we focus on the problem of making real-time personalized recommendations, which are now needed in nearly every online setting, from media platforms to e-commerce to social networks. While the challenge of estimating user preferences has garnered significant attention, the operational problem of using those preferences to construct personalized offer sets for users remains a challenge, particularly in modern settings with a massive number of items and a millisecond response-time requirement. Thus motivated, we propose an algorithm for personalized offer set optimization that runs in time sub-linear in the number of items while enjoying a uniform performance guarantee. Our algorithm works for an extremely general class of problems and models of user choice that includes the mixed multinomial logit model as a special case. Our algorithm can be entirely data-driven, and empirical evaluation on a massive content discovery dataset shows that our implementation runs fast and outperforms existing fast heuristics. Third, we study the problem of modeling the purchase of multiple items (in online and offline settings) and using such a model to display optimized recommendations, which can lead to significantly higher revenues compared with capturing the purchase of only a single product per transaction. We present a parsimonious multi-purchase family of choice models called the BundleMVL-K family and develop a binary-search-based iterative strategy that efficiently computes optimized recommendations for this model. We establish the hardness of computing optimal recommendation sets and characterize structural properties of the optimal solution. The efficacy of our modeling and optimization techniques compared with competing solutions is shown using several real-world datasets on multiple metrics such as model fit, expected revenue gains, and run-time reductions.
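The offer-set problem described above can be made concrete with the plain multinomial logit (MNL) model, a special case of the mixed-MNL setting the thesis handles. The utilities and revenues below are made up, and the brute-force search shown is exactly the kind of enumeration that becomes infeasible at scale:

```python
import math
from itertools import chain, combinations

# Plain MNL choice model: from offer set S, item i is chosen with probability
# exp(u_i) / (1 + sum over j in S of exp(u_j)); the "1" is the no-purchase
# option. Utilities and revenues below are illustrative.

def choice_probs(offer_set, utilities):
    weights = {i: math.exp(utilities[i]) for i in offer_set}
    denom = 1.0 + sum(weights.values())          # 1.0 = no-purchase weight
    return {i: w / denom for i, w in weights.items()}

def expected_revenue(offer_set, utilities, revenues):
    return sum(revenues[i] * p
               for i, p in choice_probs(offer_set, utilities).items())

utilities = {"a": 1.0, "b": 0.5, "c": -0.5}
revenues = {"a": 3.0, "b": 5.0, "c": 8.0}

# Brute-force search over all nonempty offer sets: feasible only for tiny
# catalogs, which is why sub-linear-time methods matter at scale.
items = list(utilities)
all_sets = chain.from_iterable(combinations(items, r)
                               for r in range(1, len(items) + 1))
best_set = max((frozenset(s) for s in all_sets),
               key=lambda s: expected_revenue(s, utilities, revenues))
```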
Fourth, we study the problem of A-B testing for online platforms. Unlike traditional offline A-B testing, online platforms face unique challenges such as sequential allocation of users into treatment groups, a large number of user covariates to balance, and a limited number of users available for each experiment, making randomization inefficient. We consider the problem of optimally allocating test subjects to either treatment group with a view to maximizing the precision of our estimate of the treatment effect. Our main contribution is a tractable algorithm for this problem in the online setting, where subjects must be assigned as they arrive and covariates are drawn from an elliptical distribution with finite second moment. We further characterize the gain in precision afforded by optimized allocations relative to randomized allocations and show that this gain grows large as the number of covariates grows.
by Deeksha Sinha.
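The sequential-allocation problem above can be illustrated with a simple greedy covariate-balancing heuristic. This is in the spirit of the problem, not the thesis's optimized allocation rule, and the covariate distribution used is a made-up example:

```python
import random

# Greedy sequential assignment: put each arriving subject into whichever
# group keeps the running covariate sums most balanced. (Illustrative
# heuristic only; the thesis derives an optimized allocation rule.)

def assign(subject, sum_treat, sum_control):
    """Return 'T' or 'C' for a subject given running covariate sums."""
    def imbalance(st, sc):
        return sum((a - b) ** 2 for a, b in zip(st, sc))
    as_treat = [a + x for a, x in zip(sum_treat, subject)]
    as_control = [a + x for a, x in zip(sum_control, subject)]
    if imbalance(as_treat, sum_control) <= imbalance(sum_treat, as_control):
        return "T"
    return "C"

random.seed(1)
sum_t, sum_c = [0.0, 0.0], [0.0, 0.0]
groups = []
for _ in range(100):
    x = [random.gauss(0, 1), random.gauss(0, 1)]  # hypothetical covariates
    g = assign(x, sum_t, sum_c)
    if g == "T":
        sum_t = [a + b for a, b in zip(sum_t, x)]
    else:
        sum_c = [a + b for a, b in zip(sum_c, x)]
    groups.append(g)
```

Compared with pure randomization, this kind of rule keeps the two groups' covariate totals close, which is the source of the precision gain the abstract describes.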