Enhancing Ligand Pose Sampling for Molecular Docking
Deep learning promises to dramatically improve scoring functions for
molecular docking, leading to substantial advances in binding pose prediction
and virtual screening. To train scoring functions, and to perform molecular
docking, one must generate a set of candidate ligand binding poses.
Unfortunately, the sampling protocols currently used to generate candidate
poses frequently fail to produce any poses close to the correct, experimentally
determined pose, unless information about the correct pose is provided. This
limits the accuracy of learned scoring functions and molecular docking. Here,
we describe two improved protocols for pose sampling: GLOW (auGmented sampLing
with sOftened vdW potential) and a novel technique named IVES (IteratiVe
Ensemble Sampling). Our benchmarking results demonstrate the effectiveness of
our methods in improving the likelihood of sampling accurate poses, especially
for binding pockets whose shape changes substantially when different ligands
bind. This improvement is observed across both experimentally determined and
AlphaFold-generated protein structures. Additionally, we present datasets of
candidate ligand poses generated using our methods for each of around 5,000
protein-ligand cross-docking pairs, for training and testing scoring functions.
To benefit the research community, we provide these cross-docking datasets and
an open-source Python implementation of GLOW and IVES at
https://github.com/drorlab/GLOW_IVES.
Comment: Published at the Machine Learning for Structural Biology Workshop, NeurIPS 202
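GLOW's acronym points at its central idea: softening the van der Waals term so that candidate poses with mild steric clashes are not discarded outright during sampling. As a loose illustration only (not the authors' implementation, whose exact functional form lives in the linked repository), the sketch below shows one common way to soften a Lennard-Jones potential by capping the effective interatomic distance; the parameter values are placeholders.

```python
# Illustrative soft-core Lennard-Jones potential (a sketch of what "softened
# vdW" can mean, not the GLOW code): the raw distance r is replaced by an
# effective distance that never falls below alpha * sigma, so clash energies
# stay finite and near-clashing poses can survive sampling for later refinement.
import numpy as np

def softened_lennard_jones(r, epsilon=0.2, sigma=3.5, alpha=0.5):
    """Lennard-Jones energy evaluated at a softened effective distance."""
    r_eff = (r ** 6 + (alpha * sigma) ** 6) ** (1.0 / 6.0)
    s6 = (sigma / r_eff) ** 6
    return 4.0 * epsilon * (s6 ** 2 - s6)

# Even at r -> 0 the energy is capped rather than diverging.
for r in (0.5, 2.0, 3.5, 5.0):
    print(r, round(softened_lennard_jones(r), 3))
```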
Characterization of the Distortion-Perception Tradeoff for Finite Channels with Arbitrary Metrics
Whenever inspected by humans, reconstructed signals should not be
distinguished from real ones. Typically, such a high perceptual quality comes
at the price of high reconstruction error, and vice versa. We study this
distortion-perception (DP) tradeoff over finite-alphabet channels, for the
Wasserstein-1 distance induced by a general metric as the perception index,
and an arbitrary distortion matrix. Under this setting, we show that computing
the DP function and the optimal reconstructions is equivalent to solving a set
of linear programming problems. We provide a structural characterization of the
DP tradeoff, where the DP function is piecewise linear in the perception index.
We further derive a closed-form expression for the case of binary sources.
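Since the abstract reduces the DP function to linear programs, a minimal sketch of that reduction may help. The formulation below, with the estimator kernel T[y, x̂] and a transport plan Π coupling the reconstruction law to the source law as variables, is one natural way to write a single point of the DP curve as an LP; the helper name dp_point, the choice of identical source and reconstruction alphabets, and the toy binary channel are illustrative assumptions, not the paper's construction.

```python
# Sketch: one point of the distortion-perception function for a finite channel,
# posed as a single LP. Minimize E[d(X, Xhat)] over estimators T[y, xhat],
# subject to a Wasserstein-1 budget on the reconstruction law, enforced via a
# transport plan Pi[xhat, x] between p_Xhat and p_X.
import numpy as np
from scipy.optimize import linprog

def dp_point(p_x, p_y_given_x, distortion, metric, P):
    n, m = len(p_x), p_y_given_x.shape[1]        # |X| = |Xhat|, |Y|
    p_xy = p_x[:, None] * p_y_given_x            # joint p(x, y)
    p_y = p_xy.sum(axis=0)
    nT, nvar = m * n, m * n + n * n
    iT = lambda y, xh: y * n + xh                # index of T[y, xhat]
    iPi = lambda xh, x: nT + xh * n + x          # index of Pi[xhat, x]

    # Objective: sum_{y,xhat} (sum_x p(x,y) d(x,xhat)) * T[y,xhat].
    c = np.zeros(nvar)
    for y in range(m):
        for xh in range(n):
            c[iT(y, xh)] = np.dot(p_xy[:, y], distortion[:, xh])

    A_eq, b_eq = [], []
    for y in range(m):                           # each row of T is a distribution
        row = np.zeros(nvar); row[[iT(y, xh) for xh in range(n)]] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    for xh in range(n):                          # Pi's first marginal = p_Xhat
        row = np.zeros(nvar)
        for y in range(m): row[iT(y, xh)] -= p_y[y]
        for x in range(n): row[iPi(xh, x)] += 1.0
        A_eq.append(row); b_eq.append(0.0)
    for x in range(n):                           # Pi's second marginal = p_X
        row = np.zeros(nvar); row[[iPi(xh, x) for xh in range(n)]] = 1.0
        A_eq.append(row); b_eq.append(p_x[x])

    a_ub = np.zeros(nvar)                        # <Pi, metric> <= P
    for xh in range(n):
        for x in range(n): a_ub[iPi(xh, x)] = metric[xh, x]

    res = linprog(c, A_ub=a_ub[None, :], b_ub=[P],
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

# Toy binary example: biased source, symmetric channel, Hamming distortion/metric.
p_x = np.array([0.7, 0.3])
chan = np.array([[0.8, 0.2], [0.2, 0.8]])
d = 1.0 - np.eye(2)
print([round(dp_point(p_x, chan, d, d, P), 4) for P in (0.0, 0.05, 0.2)])
```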
Perceptual Kalman Filters: Online State Estimation under a Perfect Perceptual-Quality Constraint
Many practical settings call for the reconstruction of temporal signals from
corrupted or missing data. Classic examples include decoding, tracking, signal
enhancement and denoising. Since the reconstructed signals are ultimately
viewed by humans, it is desirable to achieve reconstructions that are pleasing
to human perception. Mathematically, perfect perceptual-quality is achieved
when the distribution of restored signals is the same as that of natural
signals, a requirement which has been heavily researched in static estimation
settings (i.e. when a whole signal is processed at once). Here, we study the
problem of optimal causal filtering under a perfect perceptual-quality
constraint, which is a task of a fundamentally different nature. Specifically, we
analyze a Gaussian Markov signal observed through a linear noisy
transformation. In the absence of perceptual constraints, the Kalman filter is
known to be optimal in the MSE sense for this setting. Here, we show that
adding the perfect perceptual quality constraint (i.e. the requirement of
temporal consistency), introduces a fundamental dilemma whereby the filter may
have to "knowingly" ignore new information revealed by the observations in
order to conform to its past decisions. This often comes at the cost of a
significant increase in the MSE (beyond that encountered in static settings).
Our analysis goes beyond the classic innovation process of the Kalman filter,
and introduces the novel concept of an unutilized information process. Using
this tool, we present a recursive formula for perceptual filters, and
demonstrate the qualitative effects of perfect perceptual-quality estimation on
a video reconstruction problem.
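For context, the unconstrained baseline the abstract refers to is the standard Kalman filter for a Gauss-Markov signal observed in Gaussian noise. The sketch below implements only that baseline in the scalar case; the paper's perceptual filter and its unutilized-information process are not reproduced here, and all model parameters are illustrative.

```python
# Sketch of the MSE-optimal (unconstrained) baseline: a scalar Kalman filter for
# x_t = a x_{t-1} + w_t observed as y_t = c x_t + v_t. The perceptual filter in
# the paper additionally constrains the law of the reconstruction to match the
# law of x_t; that constraint is not implemented here.
import numpy as np

def kalman_filter(y, a, c, q, r, x0_mean=0.0, x0_var=1.0):
    """Causal MMSE estimates of x_t given y_1..y_t."""
    x_hat, p = x0_mean, x0_var
    estimates = []
    for yt in y:
        x_pred, p_pred = a * x_hat, a * a * p + q        # predict
        k = p_pred * c / (c * c * p_pred + r)            # Kalman gain
        x_hat = x_pred + k * (yt - c * x_pred)           # correct with innovation
        p = (1.0 - k * c) * p_pred
        estimates.append(x_hat)
    return np.array(estimates)

# Toy run on a simulated trajectory.
rng = np.random.default_rng(0)
a, c, q, r, T = 0.95, 1.0, 0.1, 0.5, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = c * x + rng.normal(scale=np.sqrt(r), size=T)
print(np.mean((kalman_filter(y, a, c, q, r) - x) ** 2))   # empirical MSE
```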
Assessing the number of ancestral alternatively spliced exons in the human genome
BACKGROUND: It is estimated that between 35% and 74% of all human genes undergo alternative splicing. However, as a gene that undergoes alternative splicing can have between one and dozens of alternative exons, the number of alternatively spliced genes is, by itself, not informative enough. An additional parameter, which has not been addressed so far, is therefore the number of human exons that undergo alternative splicing. We have previously described an accurate machine-learning method for detecting conserved alternatively spliced exons without using ESTs; it relies on specific features of the exon and its genomic vicinity that distinguish alternatively spliced exons from constitutive ones.
RESULTS: In this study we use the above-described approach to calculate that 7.2% (± 1.1%) of all human exons that are conserved in mouse are alternatively spliced in both species.
CONCLUSION: This number is the first estimate of the extent of ancestral alternatively spliced exons in the human genome.
Surface Reflectance Recognition and Real-World Illumination Statistics
Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting, it could mimic the appearance of a matte ping-pong ball. Yet, humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
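The pipeline the abstract describes (compute image statistics that reflect the regularity of real-world illumination, then learn their relationship to reflectance) can be sketched generically. The wavelet features below (per-subband log-variance and kurtosis) and the SVM classifier are illustrative stand-ins, not the thesis' exact feature set or learning method, and the training images are random placeholders.

```python
# Illustrative reflectance-classification pipeline: wavelet-domain statistics of
# a surface image feed a learned classifier. Feature choices and classifier are
# assumptions for the sake of the sketch.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_statistics(image, wavelet="db2", levels=3):
    """Per-subband log-variance and kurtosis of wavelet detail coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                  # skip the approximation band
        for band in detail:                    # horizontal, vertical, diagonal
            c = band.ravel()
            var = c.var()
            kurt = np.mean((c - c.mean()) ** 4) / (var ** 2 + 1e-12)
            feats.extend([np.log(var + 1e-12), kurt])
    return np.array(feats)

# Train on labelled example images (random stand-ins here), then classify.
rng = np.random.default_rng(0)
train_images = [rng.random((64, 64)) for _ in range(20)]
train_labels = ["matte", "shiny"] * 10
X = np.stack([wavelet_statistics(im) for im in train_images])
clf = SVC(kernel="rbf").fit(X, train_labels)
print(clf.predict(wavelet_statistics(rng.random((64, 64)))[None, :]))
```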
Surface reflectance recognition and real-world illumination statistics
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003. Includes bibliographical references (p. 141-150). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. By Ron O. Dror.
Recognition of Surface Reflectance Properties from a Single Image under Unknown Real-World Illumination
This paper describes a machine vision system that classifies reflectance properties of surfaces such as metal, plastic, or paper, under unknown real-world illumination. We demonstrate the performance of our algorithm for surfaces of arbitrary geometry. Reflectance estimation under arbitrary omnidirectional illumination proves highly underconstrained. Our reflectance estimation algorithm succeeds by learning relationships between surface reflectance and certain statistics computed from an observed image, which depend on statistical regularities in the spatial structure of real-world illumination. Although the algorithm assumes known geometry, its statistical nature makes it robust to inaccurate geometry estimates.
Surface Reflectance Estimation and Natural Illumination Statistics
Humans recognize optical reflectance properties of surfaces such as metal, plastic, or paper from a single image without knowledge of illumination. We develop a machine vision system to perform similar recognition tasks automatically. Reflectance estimation under unknown, arbitrary illumination proves highly underconstrained due to the variety of potential illumination distributions and surface reflectance properties. We have found that the spatial structure of real-world illumination possesses some of the statistical regularities observed in the natural image statistics literature. A human or computer vision system may be able to exploit this prior information to determine the most likely surface reflectance given an observed image. We develop an algorithm for reflectance classification under unknown real-world illumination, which learns relationships between surface reflectance and certain features (statistics) computed from a single observed image. We also develop an automatic feature selection method.
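The abstract mentions an automatic feature selection method without describing it. One generic way such a step could work is greedy forward selection driven by cross-validated accuracy, sketched below purely as an illustration; the data and the SVC classifier are stand-ins, not the paper's method.

```python
# Illustrative greedy forward feature selection: repeatedly add the candidate
# statistic that most improves cross-validated accuracy, stopping when no
# feature helps. This is a generic sketch, not the paper's selection procedure.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def forward_select(X, y, max_features=5):
    selected, remaining, best_score = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(SVC(), X[:, selected + [f]], y, cv=3).mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:
            break                              # no remaining feature improves accuracy
        selected.append(f_best)
        remaining.remove(f_best)
        best_score = scores[f_best]
    return selected

# Example on random stand-in data: 20 samples, 8 candidate statistics.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 8))
y = np.tile([0, 1], 10)
print(forward_select(X, y))
```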