29 research outputs found

    An Image-Based High-Content Screening Assay for Compounds Targeting Intracellular Leishmania donovani Amastigotes in Human Macrophages

    Leishmaniasis is a tropical disease threatening 350 million people in endemic regions. The available drugs for treatment are inadequate, with limitations such as serious side effects, parasite resistance, and high cost. Driven by this need for new drugs, we developed a high-content, high-throughput image-based screening assay targeting the intracellular amastigote stage of different species of Leishmania in infected human macrophages. The in vitro infection protocol was adapted to a 384-well-plate format, enabling acquisition of a large number of readouts by automated confocal microscopy. The reading method was based on DNA staining and required the development of a customized algorithm to analyze the images, which enabled the use of non-modified parasites. The automated analysis generated parameters used to quantify compound activity, including the infection ratio and the number of intracellular amastigote parasites, and yielded cytotoxicity information based on the number of host cells. Comparison of this assay with one that used the promastigote form to screen 26,500 compounds showed that 50% of the hits selected against the intracellular amastigote were not selected in the promastigote screening. These data corroborate the idea that the intracellular amastigote form of the parasite is the most appropriate form to use in primary screening assays for Leishmania.
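The per-well readouts described above (infection ratio, amastigote count, and host-cell count as a cytotoxicity proxy) can be illustrated with a minimal sketch. The image segmentation itself is the paper's customized algorithm, so this assumes per-cell parasite counts are already extracted; all names are illustrative, not from the assay.

```python
def well_readouts(parasites_per_cell):
    """Given one entry per detected host cell (the number of
    intracellular amastigotes found in that cell), compute the
    per-well readouts used to quantify compound activity."""
    n_cells = len(parasites_per_cell)  # host-cell count: cytotoxicity proxy
    infected = [c for c in parasites_per_cell if c > 0]
    infection_ratio = len(infected) / n_cells if n_cells else 0.0
    total_amastigotes = sum(parasites_per_cell)
    return {
        "host_cells": n_cells,
        "infection_ratio": infection_ratio,
        "total_amastigotes": total_amastigotes,
    }

# toy well: 5 macrophages, 3 of them infected
readouts = well_readouts([0, 4, 2, 0, 7])
```

A compound's activity would then be judged by how far these readouts drop relative to untreated control wells, while a falling host-cell count flags cytotoxicity.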

    Can AI help in crowdsourcing? A theory-based model for idea screening in crowdsourcing contests

    Crowdsourcing generates thousands of ideas. Selecting the best ideas is costly because of the limited number, objectivity, and attention of experts. Using a dataset of 21 crowdsourcing contests comprising 4191 ideas, we test how AI can assist experts in screening ideas. The authors have three major findings. First, while even the best previously published theory-based models cannot mimic human experts in choosing the best ideas, a simple model using LASSO can efficiently screen out ideas considered bad by experts. In an additional 22nd hold-out contest with internal and external experts, the simple model does better than external experts in predicting the ideas selected by internal experts. Second, the authors develop an Idea Screening Efficiency curve that trades off the False Negative Rate against the total ideas screened. Managers can choose the desired point on this curve given their loss function. The best model specification can screen out 44% of ideas while sacrificing only 14% of good ideas. Alternatively, for those unwilling to lose any winners, a novel two-step approach screens out 21% of ideas without sacrificing a single 1st-place winner. Third, a new predictor, Word Atypicality, is simple and efficient in screening. Theoretically, this predictor screens out atypical ideas and keeps inclusive and rich ideas.
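The Idea Screening Efficiency curve described in this abstract can be sketched in a few lines: for each score threshold, compute the fraction of ideas screened out and the False Negative Rate (share of expert-approved ideas wrongly discarded). This is a minimal illustration assuming each idea has a model score and a binary expert label; the function and data are hypothetical, not the authors' code.

```python
def screening_efficiency_curve(scores, is_good):
    """For each candidate threshold, compute (threshold,
    fraction of ideas screened out, False Negative Rate):
    ideas scoring below the threshold are discarded."""
    points = []
    n_good = sum(is_good)
    for t in sorted(set(scores)):
        screened = [s < t for s in scores]
        fn = sum(1 for s, g in zip(screened, is_good) if s and g)
        points.append((t, sum(screened) / len(scores), fn / n_good))
    return points

# toy data: higher score = better idea according to the model
scores  = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1, 0.05]
is_good = [1,   1,   0,   1,   0,   0,   0,   0]
curve = screening_efficiency_curve(scores, is_good)
```

A manager would then pick the point on the curve whose False Negative Rate is acceptable under their loss function, as the abstract suggests.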

    Can AI help in crowdsourcing? testing alternate algorithms for idea screening in crowdsourcing contests

    Crowdsourcing, while a boon to ideation, generates thousands of ideas. Screening these ideas to select a few winners is a major challenge because of the limited number, expertise, objectivity, and attention of judges. This paper compares original and extended versions of three recently published theory-based algorithms from marketing to evaluate ideas in crowdsourcing contests: Word Colocation, Content Atypicality, and Inspiration Redundancy. Each algorithm suggests predictors of winning ideas. The authors extend these predictors using two methods for searching parsimonious predictors: the least absolute shrinkage and selection operator (LASSO) and K-sparse Exhaustive Search, for K ≤ 5. The authors test the algorithms in-sample and out-of-sample on 21 different real-world crowdsourcing contests conducted for large firms. The standard provided by management is "drop the worst 25% of ideas without sacrificing more than 15% of good ideas," as ranked by experts. Results are the following. First, of the three original algorithms, Inspiration Redundancy performs best out-of-sample, but fails to meet the 15% threshold. Second, for two of the three algorithms, the extended versions outperform the original. In particular, Topic Overlap Atypicality, a new measure, emerges as the most robust predictor. Third, when the best versions of the algorithms are used, all three contribute to the important out-of-sample prediction accuracy. Fourth, using extended versions of all three algorithms, we are able to meet Hyve's threshold.
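The K-sparse Exhaustive Search mentioned above (K ≤ 5) can be sketched with numpy: enumerate every predictor subset of size at most K, fit ordinary least squares on each, and keep the subset with the lowest in-sample error. This is a generic sketch of the technique under those assumptions; the function and variable names are illustrative, not the authors' implementation.

```python
import itertools
import numpy as np

def k_sparse_exhaustive_search(X, y, k_max=5):
    """Try every subset of at most k_max columns of X, fit least
    squares on each, and return the subset with the smallest
    residual sum of squares (RSS)."""
    n, p = X.shape
    best_rss, best_subset = np.inf, ()
    for k in range(1, k_max + 1):
        for subset in itertools.combinations(range(p), k):
            Xs = X[:, subset]
            coef, rss, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            err = rss[0] if rss.size else np.sum((y - Xs @ coef) ** 2)
            if err < best_rss:
                best_rss, best_subset = err, subset
    return best_subset, best_rss

# toy example: the outcome depends only on predictors 0 and 2
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2]
subset, rss = k_sparse_exhaustive_search(X, y, k_max=2)
```

Exhaustive search is only feasible because K is capped at 5: the number of subsets grows combinatorially in K, which is why LASSO serves as the scalable alternative.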