Nanoscale integration of single cell biologics discovery processes using optofluidic manipulation and monitoring.
The new and rapid advancement in the complexity of biologics drug discovery has been driven by a deeper understanding of biological systems combined with innovative new therapeutic modalities, paving the way to breakthrough therapies for previously intractable diseases. These exciting times in biomedical innovation require the development of novel technologies to facilitate the sophisticated, multifaceted, high-paced workflows necessary to support modern large-molecule drug discovery. A high-level aspiration is a true integration of "lab-on-a-chip" methods: vastly miniaturized cellular and biochemical experiments could transform the speed, cost, and success of multiple workstreams in biologics development. Several microscale bioprocess technologies have been established that incrementally address these needs, yet each is inflexibly designed for a very specific process, thus limiting integrated, holistic application. A more fully integrated nanoscale approach that incorporates manipulation, culture, analytics, and traceable digital record keeping of thousands of single cells in a relevant nanoenvironment would be a transformative technology capable of keeping pace with today's rapid and complex drug discovery demands. The recent advent of optical manipulation of cells using light-induced electrokinetics with micro- and nanoscale cell culture is poised to revolutionize both fundamental and applied biological research. In this review, we summarize the current state of the art for optical manipulation techniques and discuss emerging biological applications of this technology. In particular, we focus on promising prospects for drug discovery workflows, including antibody discovery, bioassay development, antibody engineering, and cell line development, which are enabled by the automation and industrialization of an integrated optoelectronic single-cell manipulation and culture platform.
Continued development of such platforms will be well positioned to overcome many of the challenges currently associated with fragmented, low-throughput bioprocess workflows in biopharma and life science research.
High-throughput Binding Affinity Calculations at Extreme Scales
Resistance to chemotherapy and molecularly targeted therapies is a major factor in limiting the effectiveness of cancer treatment. In many cases, resistance can be linked to genetic changes in target proteins, either pre-existing or evolutionarily selected during treatment. Key to overcoming this challenge is an understanding of the molecular determinants of drug binding. Using multi-stage pipelines of molecular simulations, we can gain insights into the binding free energy and the residence time of a ligand, which can inform both stratified and personalized treatment regimes and drug development. To support the scalable, adaptive and automated calculation of the binding free energy on high-performance computing resources, we introduce the High-throughput Binding Affinity Calculator (HTBAC). HTBAC uses a building-block approach in order to attain both workflow flexibility and performance. We demonstrate close to perfect weak scaling to hundreds of concurrent multi-stage binding affinity calculation pipelines. This permits a rapid time-to-solution that is essentially invariant of the calculation protocol, the size of candidate ligands and the number of ensemble simulations. As such, HTBAC advances the state of the art of binding affinity calculations and protocols.
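The execution pattern the abstract describes, sequential stages inside each pipeline with many independent pipelines running concurrently, can be sketched in plain Python. This is a minimal illustration, not HTBAC's actual API: the stage names and the use of a thread pool are assumptions, standing in for HTBAC's building blocks and HPC middleware.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stage function: a real protocol would run minimization,
# equilibration, production MD and free-energy analysis here.
def run_stage(ligand, stage):
    return f"{ligand}:{stage}"

# Stages within one pipeline run sequentially...
def pipeline(ligand, stages=("minimize", "equilibrate", "simulate", "analyze")):
    return ligand, [run_stage(ligand, s) for s in stages]

ligands = [f"ligand-{i:03d}" for i in range(100)]

# ...while independent per-ligand pipelines run concurrently, mirroring
# the weak scaling across hundreds of pipelines reported above.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(pipeline, ligands))

print(len(results))  # 100
```

Because each pipeline is independent, time-to-solution stays roughly flat as pipelines are added, provided enough workers are available; this is the weak-scaling property the paper emphasizes.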
A practical, bioinformatic workflow system for large data sets generated by next-generation sequencing
Transcriptomics (at the level of single cells, tissues and/or whole organisms) underpins many fields of biomedical science, from understanding basic cellular function in model organisms, to the elucidation of the biological events that govern the development and progression of human diseases, and the exploration of the mechanisms of survival, drug-resistance and virulence of pathogens. Next-generation sequencing (NGS) technologies are contributing to a massive expansion of transcriptomics in all fields and are reducing the cost, time and performance barriers presented by conventional approaches. However, bioinformatic tools for the analysis of the sequence data sets produced by these technologies can be daunting to researchers with limited or no expertise in bioinformatics. Here, we constructed a semi-automated, bioinformatic workflow system and critically evaluated it for the analysis and annotation of large-scale sequence data sets generated by NGS. We demonstrated its utility for the exploration of differences in the transcriptomes among various stages and both sexes of an economically important parasitic worm (Oesophagostomum dentatum) as well as the prediction and prioritization of essential molecules (including GTPases, protein kinases and phosphatases) as novel drug target candidates. This workflow system provides a practical tool for the assembly, annotation and analysis of NGS data sets, including for researchers with limited bioinformatics expertise. The custom-written Perl, Python and Unix shell computer scripts used can be readily modified or adapted to suit many different applications. This system is now utilized routinely for the analysis of data sets from pathogens of major socio-economic importance and can, in principle, be applied to transcriptomics data sets from any organism.
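A semi-automated workflow of the kind described, chaining assembly, annotation and analysis tools in a fixed order, can be sketched with a small driver script. The tool names and flags below are placeholders for illustration, not the authors' actual Perl, Python or shell scripts.

```python
import shlex
import subprocess

# Placeholder commands: real steps would invoke an assembler, an
# annotation tool and downstream analysis scripts.
STEPS = [
    "assemble --reads {reads} --out {out}/assembly.fa",
    "annotate --in {out}/assembly.fa --out {out}/annot.gff",
    "analyze --in {out}/annot.gff --out {out}/report.tsv",
]

def run_workflow(reads, out, runner=subprocess.run):
    """Run each step in order; check=True aborts the workflow on failure."""
    for template in STEPS:
        cmd = shlex.split(template.format(reads=reads, out=out))
        runner(cmd, check=True)
```

The `runner` parameter makes the driver testable without the external tools installed, one simple way such a system can be adapted to different applications.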
B2B Infrastructures in the Process of Drug Discovery and Healthcare
In this paper we describe a demonstration of an innovative B2B infrastructure that can be used to support collaborations in the pharmaceutical industry toward drug discovery goals. Based on experience gained in a wide range of collaborative projects in the areas of grid technology, semantics and data management, we outline future work and new topics in B2B infrastructures that arise when considering the use of patient records in the process of drug discovery and in healthcare applications.
2100 AI: Reflections on the mechanisation of scientific discovery
The pace of research is nowadays extremely intensive, with datasets and publications being released at an unprecedented rate. In this context, data science, artificial intelligence, machine learning and big data analytics are providing researchers with new automatic techniques that not only help them manage this flow of information but can also automatically identify interesting patterns and insights in this vast sea of information. However, the emergence of mechanised scientific discovery is likely to dramatically change the way we do science, introducing and amplifying serious societal implications for the role of researchers themselves, which need to be analysed thoroughly.
The benefits of in silico modeling to identify possible small-molecule drugs and their off-target interactions
Accepted for publication in a future issue of Future Medicinal Chemistry. The research into the use of small molecules as drugs continues to be a key driver in the development of molecular databases, computer-aided drug design software and collaborative platforms. The evolution of computational approaches is driven by the essential criteria that a drug molecule has to fulfill, from affinity to targets to minimal side effects, while having adequate absorption, distribution, metabolism and excretion (ADME) properties. A combination of ligand- and structure-based drug development approaches is already used to obtain consensus predictions of small-molecule activities and their off-target interactions. Further integration of these methods into easy-to-use workflows informed by systems biology could realize the full potential of available data in drug discovery and reduce the attrition of drug candidates.
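The consensus idea mentioned above, combining ligand- and structure-based predictions, can be illustrated with a simple rank-averaging scheme. This is a generic sketch, not a method from the paper: the example scores and the equal weighting of the two methods are assumptions.

```python
def rank_normalize(scores):
    """Map raw scores to ranks in [0, 1]; rank 0 = best (lowest) score."""
    order = sorted(scores, key=scores.get)
    n = len(order) - 1 or 1
    return {mol: i / n for i, mol in enumerate(order)}

def consensus(ligand_scores, structure_scores):
    """Average rank-normalized scores from two methods (equal weights)."""
    lr = rank_normalize(ligand_scores)
    sr = rank_normalize(structure_scores)
    return {m: (lr[m] + sr[m]) / 2 for m in lr}

# Hypothetical scores; more negative = stronger predicted binding.
ligand_based = {"A": -7.2, "B": -6.1, "C": -8.0}     # e.g. QSAR predictions
structure_based = {"A": -9.5, "B": -8.8, "C": -7.9}  # e.g. docking scores

ranked = sorted(consensus(ligand_based, structure_based).items(),
                key=lambda kv: kv[1])
print(ranked[0][0])  # prints "A": best consensus candidate
```

Rank normalization sidesteps the incompatible units of the two scoring functions; real pipelines may instead weight methods by their validated accuracy on the target class.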