85 research outputs found
Polymeric Nanocapsule from Silica Nanoparticle@Cross-linked Polymer Nanoparticles via One-Pot Approach
A facile strategy was developed here to prepare cross-linked polymeric nanocapsules (CP nanocapsules) with silica nanoparticles as templates. The silica nanoparticle@cross-linked polymer nanoparticles were prepared in one pot by encapsulating the silica nanoparticles through surface-initiated atom transfer radical polymerization of hydroxyethyl acrylate from the initiator-modified silica nanoparticles, with N,N′-methylenebisacrylamide as a cross-linker. After the silica nanoparticle templates were etched away with hydrofluoric acid, CP nanocapsules with a particle size of about 100 nm were obtained. The strategy was confirmed by Fourier transform infrared spectroscopy, thermogravimetric analysis, and transmission electron microscopy.
Self-healing materials for soft-matter machines and electronics
The emergence of soft machines and electronics creates new opportunities to engineer robotic systems that are mechanically compliant, deformable, and safe for physical interaction with the human body. Progress, however, depends on new classes of soft multifunctional materials that can operate outside of a hard exterior and withstand the same real-world conditions that human skin and other soft biological materials are typically subjected to. As with their natural counterparts, these materials must be capable of self-repair and healing when damaged to maintain the longevity of the host system and prevent sudden or permanent failure. Here, we provide a perspective on current trends and future opportunities in self-healing soft systems that enhance the durability, mechanical robustness, and longevity of soft-matter machines and electronics.
IMPECCABLE: Integrated Modeling PipelinE for COVID Cure by Assessing Better LEads
The drug discovery process currently employed in the pharmaceutical industry typically requires about 10 years and $2–3 billion to deliver one new drug. This is both too expensive and too slow, especially in emergencies like the COVID-19 pandemic. In silico methodologies need to be improved both to select better lead compounds, so as to improve the efficiency of later stages in the drug discovery protocol, and to identify those lead compounds more quickly. No known methodological approach can deliver this combination of higher quality and speed. Here, we describe an Integrated Modeling PipEline for COVID Cure by Assessing Better LEads (IMPECCABLE) that employs multiple methodological innovations to overcome this fundamental limitation. We also describe the computational framework that we have developed to support these innovations at scale, and characterize the performance of this framework in terms of throughput, peak performance, and scientific results. We show that individual workflow components deliver 100× to 1000× improvements over traditional methods, and that the integration of methods, supported by scalable infrastructure, speeds up drug discovery by orders of magnitude. IMPECCABLE has screened ∼10^11 ligands and has been used to discover a promising drug candidate. These capabilities have been used by the US DOE National Virtual Biotechnology Laboratory and the EU Centre of Excellence in Computational Biomedicine.
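As a rough illustration of the funnel-style integration described above, the Python sketch below chains a cheap ML surrogate, physics-based docking, and MD refinement so that each successively more expensive method sees only a small surviving fraction of the library. The function names and retention fractions are hypothetical placeholders for this example, not the actual IMPECCABLE code or API.

# Hypothetical multi-stage screening funnel; surrogate_score, dock, and
# md_refine are illustrative callables supplied by the user, not IMPECCABLE's.
def screen(ligands, surrogate_score, dock, md_refine,
           keep_surrogate=0.01, keep_dock=0.001):
    """Successively narrow a ligand library, cheapest method first."""
    # Stage 1: a fast ML surrogate ranks the full library.
    ranked = sorted(ligands, key=surrogate_score, reverse=True)
    shortlist = ranked[:max(1, int(len(ranked) * keep_surrogate))]

    # Stage 2: docking is run only on the surviving fraction.
    docked = sorted(shortlist, key=dock, reverse=True)
    finalists = docked[:max(1, int(len(docked) * keep_dock))]

    # Stage 3: expensive MD-based refinement for the few finalists.
    return [(ligand, md_refine(ligand)) for ligand in finalists]

The point of this arrangement is that the per-ligand cost rises by orders of magnitude at each stage, so the overall throughput is set by the cheapest stage while accuracy is set by the most expensive one.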
JARVIS-Leaderboard: a large scale benchmark of materials design methods
A lack of rigorous reproducibility and validation is a significant hurdle for scientific development across many fields. Materials science, in particular, encompasses a variety of experimental and theoretical approaches that require careful benchmarking. Leaderboard efforts have been developed previously to mitigate these issues. However, a comprehensive comparison and benchmarking effort on an integrated platform that spans multiple data modalities and covers both perfect and defect materials data is still lacking. This work introduces JARVIS-Leaderboard, an open-source and community-driven platform that facilitates benchmarking and enhances reproducibility. The platform allows users to set up benchmarks with custom tasks and enables contributions in the form of dataset, code, and metadata submissions. We cover the following materials design categories: Artificial Intelligence (AI), Electronic Structure (ES), Force-fields (FF), Quantum Computation (QC), and Experiments (EXP). For AI, we cover several types of input data, including atomic structures, atomistic images, spectra, and text. For ES, we consider multiple ES approaches, software packages, pseudopotentials, materials, and properties, comparing results to experiment. For FF, we compare multiple approaches for material property predictions. For QC, we benchmark Hamiltonian simulations using various quantum algorithms and circuits. Finally, for experiments, we use an inter-laboratory approach to establish benchmarks. There are 1281 contributions to 274 benchmarks using 152 methods with more than 8 million data points, and the leaderboard is continuously expanding. The JARVIS-Leaderboard is available at https://pages.nist.gov/jarvis_leaderboard
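For illustration only, the sketch below shows one way a leaderboard-style benchmark score could be computed: a contributed prediction file is compared against a reference file by mean absolute error. The CSV layout and column names ("id", "target", "prediction") are assumptions made for this example and may differ from the actual JARVIS-Leaderboard submission format.

# Minimal sketch of scoring a contribution against a reference benchmark.
# Assumed CSV columns: id,target (reference) and id,prediction (contribution).
import csv

def mean_absolute_error(reference_csv, prediction_csv):
    # Load reference values and contributed predictions keyed by entry id.
    with open(reference_csv) as f:
        reference = {row["id"]: float(row["target"]) for row in csv.DictReader(f)}
    with open(prediction_csv) as f:
        prediction = {row["id"]: float(row["prediction"]) for row in csv.DictReader(f)}
    # Score only the entries present in both files.
    common = reference.keys() & prediction.keys()
    return sum(abs(reference[k] - prediction[k]) for k in common) / len(common)

Keying both files by entry id, rather than by row order, keeps the score well defined even when a contribution covers only a subset of the benchmark entries.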
Large Scale Benchmark of Materials Design Methods
A lack of rigorous reproducibility and validation is a major hurdle for scientific development across many fields. Materials science in particular encompasses a variety of experimental and theoretical approaches that require careful benchmarking. Leaderboard efforts have been developed previously to mitigate these issues. However, a comprehensive comparison and benchmarking effort on an integrated platform that spans multiple data modalities and covers both perfect and defect materials data is still lacking. This work introduces JARVIS-Leaderboard, an open-source and community-driven platform that facilitates benchmarking and enhances reproducibility. The platform allows users to set up benchmarks with custom tasks and enables contributions in the form of dataset, code, and metadata submissions. We cover the following materials design categories: Artificial Intelligence (AI), Electronic Structure (ES), Force-fields (FF), Quantum Computation (QC), and Experiments (EXP). For AI, we cover several types of input data, including atomic structures, atomistic images, spectra, and text. For ES, we consider multiple ES approaches, software packages, pseudopotentials, materials, and properties, comparing results to experiment. For FF, we compare multiple approaches for material property predictions. For QC, we benchmark Hamiltonian simulations using various quantum algorithms and circuits. Finally, for experiments, we use an inter-laboratory approach to establish benchmarks. There are 1281 contributions to 274 benchmarks using 152 methods with more than 8 million data points, and the leaderboard is continuously expanding. The JARVIS-Leaderboard is available at https://pages.nist.gov/jarvis_leaderboard
- …