410 research outputs found
Reply Brief for Petitioners, Gonzalez v. Google, 143 S.Ct. 1191 (2023) (No. 21-1333)
QUESTION PRESENTED: Section 230(c)(1) of the Communications Decency Act immunizes an "interactive computer service" (such as YouTube, Google, Facebook and Twitter) for "publish[ing] ... information provided by another" "information content provider" (such as someone who posts a video on YouTube or a statement on Facebook). This is the most recent of three court of appeals' decisions regarding whether section 230(c)(1) immunizes an interactive computer service when it makes targeted recommendations of information provided by such another party. Five court of appeals judges have concluded that section 230(c)(1) creates such immunity. Three court of appeals judges have rejected such immunity. One appellate judge has concluded only that circuit precedent precludes liability for such recommendations.
The question presented is: Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?
Petition for a Writ of Certiorari, Gonzalez v. Google, 143 S.Ct. 1191 (2023) (No. 21-1333)
QUESTION PRESENTED: Section 230(c)(1) of the Communications Decency Act immunizes an "interactive computer service" (such as YouTube, Google, Facebook and Twitter) for "publish[ing] ... information provided by another" "information content provider" (such as someone who posts a video on YouTube or a statement on Facebook). This is the most recent of three court of appeals' decisions regarding whether section 230(c)(1) immunizes an interactive computer service when it makes targeted recommendations of information provided by such another party. Five court of appeals judges have concluded that section 230(c)(1) creates such immunity. Three court of appeals judges have rejected such immunity. One appellate judge has concluded only that circuit precedent precludes liability for such recommendations.
The question presented is: Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?
Hole transfer equilibrium in rigidly linked bichromophoric molecules
Two bichromophoric molecules consisting of anthracene and diphenylpolyene moieties linked by two fused norbornyl bridges undergo photoionization upon ultraviolet (UV) pulsed laser irradiation. The simultaneous observation of the cation radicals of both anthracene and polyene groups points to a rapid (nanosecond or faster) intramolecular hole transfer equilibrium between the two chromophores. The existence of an equilibrium is supported by the results of one- and two-laser transient absorption and electrochemical experiments. Equilibrium constants (293 K) were determined by both transient absorption and cyclic voltammetry measurements and, within experimental error, were independent of the method used. For A-sp-VB, which contains anthracene and vinyldiphenylbutadiene chromophores, Keq = 4.0 ± 2 (transient absorption) and 3.2 ± 2 (electrochemical), favoring the anthracene cation radical. For A-sp-VS, containing anthracene and vinylstilbene groups, Keq = 70 ± 30 (transient absorption) and 105 ± 50 (electrochemical), favoring the anthracene cation radical. Peer reviewed: Yes. NRC publication: Yes.
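The agreement between the transient absorption and electrochemical values can be checked with the standard thermodynamic link between an equilibrium constant and a redox potential difference, Keq = exp(FΔE°/RT). A minimal sketch (the relation is textbook electrochemistry, not taken from this paper; the numeric inputs are the abstract's Keq values):

```python
import math

# Standard thermodynamic link between a hole-transfer equilibrium and the
# difference in oxidation potentials of the two chromophores:
#   DeltaG = -F * DeltaE   and   Keq = exp(-DeltaG / (R * T))
R = 8.314      # gas constant, J mol^-1 K^-1
F = 96485.0    # Faraday constant, C mol^-1
T = 293.0      # temperature used in the abstract, K

def potential_gap_from_keq(keq: float, temp: float = T) -> float:
    """Oxidation-potential difference (V) implied by an equilibrium constant."""
    return (R * temp / F) * math.log(keq)

def keq_from_potential_gap(delta_e: float, temp: float = T) -> float:
    """Equilibrium constant implied by a potential difference (V)."""
    return math.exp(F * delta_e / (R * temp))

# Keq = 105 (the electrochemical value for A-sp-VS) corresponds to a gap of
# roughly 0.12 V between the chromophore oxidation potentials.
gap = potential_gap_from_keq(105.0)
```

At 293 K each factor of ten in Keq corresponds to only about 58 mV, so the difference between Keq ≈ 70 and Keq ≈ 105 sits comfortably inside the quoted error bars.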
Interpreting linear support vector machine models with heat map molecule coloring
Background: Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desirable property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have shown convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity.
Results: We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach helps determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor.
Conclusions: In combination with heat map coloring, linear support vector machine models can help guide the modification of a compound in later stages of drug discovery. In particular, substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered complementary to structure-based modeling approaches. As such, it helps to obtain a better understanding of the binding mode of an inhibitor.
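The core of such a coloring scheme can be sketched without any cheminformatics toolkit: given a linear model's weights over fingerprint features and a mapping from each feature to the atoms that generated it, an atom's heat value is the sum of the weights of the features it participates in. A minimal sketch (the toy molecule, feature names, and weights are all invented for illustration; the paper's pipeline uses real molecular fingerprints and trained SVM weights):

```python
from collections import defaultdict

def atom_scores(feature_atoms, weights):
    """Distribute linear-model weights onto atoms.

    feature_atoms: dict feature -> set of atom indices that generated it
    weights:       dict feature -> learned weight of the linear model
    Returns dict atom index -> summed weight (the raw heat map value).
    """
    scores = defaultdict(float)
    for feat, atoms in feature_atoms.items():
        w = weights.get(feat, 0.0)
        for a in atoms:
            scores[a] += w
    return dict(scores)

def normalize(scores):
    """Scale scores into [-1, 1] for mapping onto a diverging color scale."""
    m = max(abs(v) for v in scores.values()) or 1.0
    return {a: v / m for a, v in scores.items()}

# Invented example: three fingerprint features over a five-atom molecule.
feature_atoms = {"f1": {0, 1}, "f2": {1, 2, 3}, "f3": {4}}
weights = {"f1": 0.8, "f2": -0.5, "f3": 0.2}   # invented model weights
heat = normalize(atom_scores(feature_atoms, weights))
```

Atoms covered by positively weighted features end up with positive heat (activity-promoting), while atoms covered only by negatively weighted features come out negative, which is exactly the contrast the coloring visualizes.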
Self-organizing ontology of biochemically relevant small molecules
Background: The advent of high-throughput experimentation in biochemistry has led to the generation of vast amounts of chemical data, necessitating the development of novel analysis, characterization, and cataloguing techniques and tools. Recently, a movement to publicly release such data has advanced biochemical structure-activity relationship research while posing new challenges, the biggest being the curation, annotation, and classification of this information to facilitate useful biochemical pattern analysis. Unfortunately, the human resources currently employed by the organizations supporting these efforts (e.g. ChEBI) are expanding linearly, while new useful scientific information is being released in a seemingly exponential fashion. Compounding this, currently existing chemical classification and annotation systems are not amenable to automated classification, formal and transparent axiomatization of chemical class definitions, facile class redefinition, or novel class integration, further limiting chemical ontology growth by requiring human involvement in curation. Clearly, there is a need for the automation of this process, especially for novel chemical entities of biological interest.
Results: To address this, we present a formal framework based on Semantic Web technologies for the automatic design of a chemical ontology which can be used for automated classification of novel entities. We demonstrate the automatic self-assembly of a structure-based chemical ontology based on 60 MeSH and 40 ChEBI chemical classes. This ontology is then used to classify 200 compounds with an accuracy of 92.7%. We extend these structure-based classes with molecular feature information and demonstrate the utility of our framework for classification of functionally relevant chemicals. Finally, we discuss an iterative approach that we envision for future biochemical ontology development.
Conclusions: We conclude that the proposed methodology can ease the burden of chemical data annotators and dramatically increase their productivity. We anticipate that the use of formal logic in our proposed framework will make chemical classification criteria more transparent to humans and machines alike and will thus facilitate predictive and integrative bioactivity model development.
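The classification step such a framework automates can be illustrated with plain set logic: a structure-based class is axiomatized as a set of required substructures, and a compound belongs to every class whose axiom its own substructure set satisfies, so subsumption between classes falls out of set inclusion. A minimal sketch (class names and substructure labels are invented stand-ins; the actual framework expresses these axioms in OWL and delegates the reasoning to a description-logic reasoner):

```python
def classify(compound_features, class_axioms):
    """Return every class whose required substructures are all present.

    compound_features: set of substructure labels found in the compound
    class_axioms: dict class name -> set of required substructure labels
    """
    return {name for name, required in class_axioms.items()
            if required <= compound_features}

# Invented toy ontology: a subclass simply requires a superset of
# substructures, so "amino acid" is automatically subsumed by both
# "carboxylic acid" and "amine".
axioms = {
    "carboxylic acid": {"C(=O)OH"},
    "amine": {"N"},
    "amino acid": {"C(=O)OH", "N"},
}
glycine_features = {"C(=O)OH", "N", "CH2"}
matches = classify(glycine_features, axioms)  # all three classes
```

Because class membership is computed rather than hand-assigned, redefining a class or adding a new one only means editing its axiom, which is the maintainability argument the abstract makes.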
Predicting Phospholipidosis Using Machine Learning
Phospholipidosis is an adverse effect caused by numerous cationic amphiphilic drugs and can affect many cell types. It is characterized by the excess accumulation of phospholipids and is most reliably identified by electron microscopy of cells revealing the presence of lamellar inclusion bodies. The development of phospholipidosis can delay the drug development process, and the importance of computational approaches to the problem has been well documented. Previous work on predictive methods for phospholipidosis showed that state-of-the-art machine learning methods produced the best results. Here we extend this work by looking at a larger data set mined from the literature. We find that circular fingerprints lead to better models than either E-Dragon descriptors or a combination of the two. We also observe very similar performance in general between Random Forest and Support Vector Machine models.
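The circular fingerprint descriptors the study favors can be sketched as Morgan-style iterative neighborhood hashing: each atom starts from an element invariant and repeatedly absorbs a hash of its neighbors' invariants, and the identifiers accumulated over all iterations form the fingerprint. A minimal stdlib-only sketch over an adjacency-list "molecule" (real implementations such as ECFP also fold bond orders, charges, and other atom invariants into the hash):

```python
import zlib

def circular_fingerprint(atoms, bonds, radius=2):
    """Morgan-style circular fingerprint as a set of hashed identifiers.

    atoms: list of element symbols, e.g. ["C", "C", "O"]
    bonds: list of (i, j) atom-index pairs
    """
    nbrs = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        nbrs[i].append(j)
        nbrs[j].append(i)

    # Initial invariant: a stable hash of the element symbol (zlib.crc32 is
    # deterministic across runs, unlike Python's built-in str hashing).
    inv = [zlib.crc32(a.encode()) for a in atoms]
    fp = set(inv)
    for _ in range(radius):
        # New invariant = hash of own invariant plus *sorted* neighbor
        # invariants, so the result is independent of atom numbering.
        inv = [zlib.crc32(repr((inv[i], sorted(inv[j] for j in nbrs[i]))).encode())
               for i in range(len(atoms))]
        fp.update(inv)
    return fp

# The same C-C-O skeleton written with two different atom numberings
# produces the same fingerprint.
fp1 = circular_fingerprint(["C", "C", "O"], [(0, 1), (1, 2)])
fp2 = circular_fingerprint(["O", "C", "C"], [(0, 1), (1, 2)])
```

Each identifier in the returned set corresponds to one circular environment of radius 0 to `radius` around some atom; hashing these into a fixed-length bit vector gives the feature representation fed to the Random Forest or SVM.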
Efficient Reconstruction of Metabolic Pathways by Bidirectional Chemical Search
One of the main challenges in systems biology is the establishment of the metabolome: a catalogue of the metabolites and biochemical reactions present in a specific organism. Current knowledge of biochemical pathways, as stored in public databases such as KEGG, is based on carefully curated genomic evidence for the presence of specific metabolites and enzymes that activate particular biochemical reactions. In this paper, we present an efficient method, based on bidirectional chemical search, to build a substantial portion of the artificial chemistry defined by the metabolites and biochemical reactions in a given metabolic pathway. Computational results on the pathways stored in KEGG reveal novel biochemical pathways.
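The bidirectional strategy can be sketched as two breadth-first searches over the reaction network: one expanding forward from the source metabolite and one backward from the target, stopping when the frontiers meet, so each search only has to reach half the path depth. A minimal sketch on a toy network (the metabolite names and one-step reactions are invented for illustration; the paper works over reactions mined from KEGG):

```python
from collections import deque

def bidirectional_search(edges, source, target):
    """Shortest metabolite path via two alternating BFS frontiers.

    edges: dict metabolite -> set of products reachable by one reaction
    Returns the list of metabolites on a shortest path, or None.
    """
    if source == target:
        return [source]
    # Reverse graph for the backward search from the target.
    back = {}
    for u, vs in edges.items():
        for v in vs:
            back.setdefault(v, set()).add(u)

    parents_f, parents_b = {source: None}, {target: None}
    qf, qb = deque([source]), deque([target])

    def expand(queue, parents, other, graph):
        for _ in range(len(queue)):          # expand one full BFS layer
            u = queue.popleft()
            for v in graph.get(u, ()):
                if v not in parents:
                    parents[v] = u
                    if v in other:           # the two frontiers met
                        return v
                    queue.append(v)
        return None

    while qf and qb:
        meet = expand(qf, parents_f, parents_b, edges)
        if meet is None:
            meet = expand(qb, parents_b, parents_f, back)
        if meet is not None:
            left, right = [], []
            u = meet
            while u is not None:             # walk back to the source
                left.append(u); u = parents_f[u]
            u = parents_b[meet]
            while u is not None:             # walk forward to the target
                right.append(u); u = parents_b[u]
            return left[::-1] + right
    return None

# Invented toy pathway fragment.
net = {"glucose": {"g6p"}, "g6p": {"f6p"}, "f6p": {"fbp"}, "fbp": {"pyruvate"}}
path = bidirectional_search(net, "glucose", "pyruvate")
```

With branching factor b and path length d, the two half-searches explore on the order of b^(d/2) nodes each instead of b^d, which is what makes reconstructing longer pathways tractable.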
Evolutionarily Conserved Substrate Substructures for Automated Annotation of Enzyme Superfamilies
The evolution of enzymes affects how well a species can adapt to new environmental conditions. During enzyme evolution, certain aspects of molecular function are conserved while other aspects can vary. Aspects of function that are more difficult to change or that need to be reused in multiple contexts are often conserved, while those that vary may indicate functions that are more easily changed or that are no longer required. In analogy to the study of conservation patterns in enzyme sequences and structures, we have examined the patterns of conservation and variation in enzyme function by analyzing graph isomorphisms among enzyme substrates of a large number of enzyme superfamilies. This systematic analysis of substrate substructures establishes the conservation patterns that typify individual superfamilies. Specifically, we determined the chemical substructures that are conserved among all known substrates of a superfamily and the substructures that are reacting in these substrates and then examined the relationship between the two. Across the 42 superfamilies that were analyzed, substantial variation was found in how much of the conserved substructure is reacting, suggesting that superfamilies may not be easily grouped into discrete and separable categories. Instead, our results suggest that many superfamilies may need to be treated individually for analyses of evolution, function prediction, and guiding enzyme engineering strategies. Annotating superfamilies with these conserved and reacting substructure patterns provides information that is orthogonal to information provided by studies of conservation in superfamily sequences and structures, thereby improving the precision with which we can predict the functions of enzymes of unknown function and direct studies in enzyme engineering. 
Because the method is automated, it is suitable for large-scale characterization and comparison of the fundamental functional capabilities of both characterized and uncharacterized enzyme superfamilies.
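At its core the analysis compares two sets per superfamily: the conserved substructure is what all known substrates share, and the quantity of interest is how much of that conserved portion overlaps the reacting substructure. A minimal set-based sketch (the fragment labels are invented stand-ins for the chemical fragments the actual analysis derives from graph isomorphism comparisons of substrate structures):

```python
def conserved_substructure(substrates):
    """Fragments common to every substrate of a superfamily."""
    return set.intersection(*substrates)

def reacting_fraction(conserved, reacting):
    """Share of the conserved substructure that participates in the reaction."""
    return len(conserved & reacting) / len(conserved) if conserved else 0.0

# Invented superfamily: three substrates sharing a phosphate and a ribose
# fragment, of which only the phosphate takes part in the reaction.
substrates = [
    {"phosphate", "ribose", "adenine"},
    {"phosphate", "ribose", "guanine"},
    {"phosphate", "ribose", "methyl"},
]
conserved = conserved_substructure(substrates)
frac = reacting_fraction(conserved, {"phosphate"})
```

The abstract's observation that superfamilies vary widely in this fraction corresponds to `frac` ranging from near 0 (conserved scaffold is mostly a binding handle) to 1 (everything conserved also reacts).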
- …