Formation of oligopeptides in high yield under simple programmable conditions
Many high-yielding reactions for forming peptide bonds have been developed, but these are complex, requiring activated amino-acid precursors and heterogeneous supports. Herein we demonstrate the programmable one-pot dehydration–hydration condensation of amino acids, forming oligopeptide chains in around 50% yield. A digital recursive reactor system was developed to investigate this process, performing these reactions with control over parameters such as temperature, number of cycles, cycle duration, initial monomer concentration and initial pH. Glycine oligopeptides up to 20 amino acids long were formed with very high monomer-to-oligomer conversion, and the majority of these products comprised three or more amino acid residues. Having established the formation of glycine homo-oligopeptides, we then demonstrated the co-condensation of glycine with eight other amino acids (Ala, Asp, Glu, His, Lys, Pro, Thr and Val), incorporating a range of side-chain functionality.
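The cycling schedule described in the abstract can be pictured as a small parameter set expanded into an alternating plan of dehydration and hydration steps. The sketch below is purely illustrative: the class, function and field names, and the example numbers, are assumptions for the sake of the example and do not come from the paper.

```python
from dataclasses import dataclass

# Hypothetical parameter names -- a minimal sketch of the kind of recursive
# cycling schedule such a reactor system controls; none of these identifiers
# are taken from the paper itself.
@dataclass
class CycleParams:
    temperature_c: float   # reaction temperature (placeholder value below)
    n_cycles: int          # number of dehydration-hydration cycles
    cycle_hours: float     # duration of one full cycle
    monomer_conc_m: float  # initial amino-acid monomer concentration (M)
    initial_ph: float      # initial pH of the reaction mixture

def build_schedule(p: CycleParams) -> list[tuple[str, float]]:
    """Expand the parameters into an alternating dehydration/hydration plan."""
    plan = []
    for _ in range(p.n_cycles):
        plan.append(("dehydrate", p.cycle_hours / 2))  # drive condensation
        plan.append(("rehydrate", p.cycle_hours / 2))  # redissolve products
    return plan

params = CycleParams(temperature_c=130.0, n_cycles=4,
                     cycle_hours=2.0, monomer_conc_m=1.0, initial_ph=3.0)
schedule = build_schedule(params)
print(len(schedule))  # 4 cycles -> 8 alternating steps
```

A schedule of this shape is what makes the process "programmable": the same loop can sweep any of the listed parameters across runs.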
Extending fragment-based free energy calculations with library Monte Carlo simulation: Annealing in interaction space
Pre-calculated libraries of molecular fragment configurations have previously been used as a basis both for equilibrium sampling (via "library-based Monte Carlo") and for obtaining absolute free energies using a polymer-growth formalism. Here, we combine the two approaches to extend the size of systems for which free energies can be calculated. We study a series of all-atom poly-alanine systems in a simple dielectric "solvent" and find that precise free energies can be obtained rapidly. For instance, for 12 residues, less than an hour of single-processor time is required. The combined approach is formally equivalent to the "annealed importance sampling" algorithm; instead of annealing by decreasing temperature, however, interactions among fragments are gradually added as the molecule is "grown." We discuss implications for future binding affinity calculations in which a ligand is grown into a binding site.
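The annealed-importance-sampling connection can be illustrated on a toy problem: instead of molecular fragments, a single Gaussian degree of freedom whose "interaction" energy is switched on gradually by an annealing parameter lambda. The toy energies and all names below are illustrative choices, not the paper's actual system; with U0 = x²/2 and Uint = x²/2 the free-energy difference is known analytically (½ ln 2), so the sketch can be checked against it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference potential U0 (exactly samplable) and an "interaction" Uint
# that is gradually added as lambda goes from 0 to 1 -- annealing in
# interaction space rather than in temperature.
U0 = lambda x: 0.5 * x**2
Uint = lambda x: 0.5 * x**2

n_particles, n_steps = 5000, 50
lambdas = np.linspace(0.0, 1.0, n_steps + 1)

x = rng.standard_normal(n_particles)   # exact samples from exp(-U0)
logw = np.zeros(n_particles)

for lam_prev, lam in zip(lambdas[:-1], lambdas[1:]):
    # Importance-weight update for switching on one slice of the interaction.
    logw += -(lam - lam_prev) * Uint(x)
    # One Metropolis sweep at the new lambda keeps particles equilibrated.
    prop = x + 0.5 * rng.standard_normal(n_particles)
    dU = (U0(prop) + lam * Uint(prop)) - (U0(x) + lam * Uint(x))
    accept = rng.random(n_particles) < np.exp(-dU)
    x = np.where(accept, prop, x)

# Free-energy difference estimate (in units of kT): -ln of mean weight.
dF = -np.log(np.mean(np.exp(logw)))
print(dF)  # analytic answer is 0.5 * ln 2, about 0.3466
```

In the fragment-growth setting, the lambda schedule corresponds to gradually adding inter-fragment interactions as the molecule is grown, with the same weight bookkeeping.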
Networking chemical robots for reaction multitasking
The development of the internet of things has led to an explosion in the number of networked devices capable of control and computing. However, whilst commonplace in remote sensing, these approaches have not impacted chemistry, owing to the difficulty of developing systems flexible enough for experimental data collection. Herein we present a simple and affordable (<$500) chemistry-capable robot, built with a standard set of hardware and software protocols, that can be networked to coordinate many chemical experiments in real time. We demonstrate how two internet-connected robots can collaboratively carry out multiple processes, exploring a set of azo-coupling reactions in a fraction of the time needed for a single robot, as well as encoding and decoding information into a network of oscillating reactions. The system can also be used to assess the reproducibility of chemical reactions and to discover new reaction outcomes using game playing to explore a chemical space.
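One way to picture how networked robots split a reaction space is a shared work queue from which each robot pulls the next experiment. The sketch below is an assumption-laden stand-in, not the paper's actual networking protocol: the reagent names are placeholders and the "reaction" is a string operation, but the coverage guarantee (every combination is run exactly once, by whichever robot is free) is the point being illustrated.

```python
import itertools
import queue
import threading

# Hypothetical sketch: two "robot" workers drain a shared queue of
# experiments, mimicking how networked robots can divide a chemical space.
# Reagent names are placeholders, not the paper's azo-coupling set.
diazonium_salts = ["D1", "D2", "D3"]
couplers = ["C1", "C2", "C3", "C4"]

jobs = queue.Queue()
for combo in itertools.product(diazonium_salts, couplers):
    jobs.put(combo)

results = {}
lock = threading.Lock()

def robot(name: str) -> None:
    while True:
        try:
            salt, coupler = jobs.get_nowait()
        except queue.Empty:
            return  # no experiments left for this robot
        outcome = f"{salt}+{coupler}"   # stand-in for running the reaction
        with lock:
            results[(salt, coupler)] = (name, outcome)

workers = [threading.Thread(target=robot, args=(n,))
           for n in ("robot-A", "robot-B")]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(results))  # all 12 combinations covered, split across the robots
```

A real deployment would replace the in-process queue with a network transport, but the division-of-labour logic is the same.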
The future of laboratory medicine - A 2014 perspective.
Predicting the future is a difficult task. Not surprisingly, there are many predictions and assumptions that have proved to be wrong. This review surveys the many predictions, beginning in 1887, about the future of laboratory medicine and its sub-specialties such as clinical chemistry and molecular pathology. It provides a commentary on the accuracy of the predictions and offers opinions on emerging technologies, economic factors and social developments that may play a role in shaping the future of laboratory medicine.
Exploring Information Technologies to Support Shotgun Proteomics
Shotgun proteomics refers to the direct analysis of complex protein mixtures to create a profile of the proteins present in the cell. These profiles can be used to study the underlying biological basis for cancer development. Closely studying the profiles as the cancer proliferates reveals the molecular interactions in the cell, providing clues to researchers on potential drug targets to treat the disease. A little more than a decade old, shotgun proteomics is a relatively new form of discovery, one that is data intensive and requires complex data analysis. Early studies indicated a gap between the ability to analyze biological samples with a mass spectrometer and the information systems available to process and analyze this data. This thesis reflects on an automated proteomic information system at the University of Colorado Central Analytical Facility, where investigators are using cutting-edge proteomic techniques to analyze melanoma cell lines responsible for skin cancer in patients. The paper will provide insight into key design processes in the development of an Oracle relational database and automation system to support high-throughput shotgun proteomics in the facility. It will also discuss significant contributions, technologies, software, a data standard, and leaders in the field developing solutions and products in proteomics.