Optimizing traffic signs and lights visibility for the teleoperation of autonomous vehicles through ROI compression
Autonomous vehicles are a promising solution to traffic congestion, air
pollution, accidents, and wasted time and resources. However, remote driver
intervention may be necessary for extreme situations to ensure safe roadside
parking or complete remote takeover. In such cases, high-quality real-time
video streaming is crucial for practical remote driving. In a preliminary
study, we already presented a region of interest (ROI) HEVC data compression
where the image was segmented into two categories of ROI and background,
allocating more bandwidth to the ROI and thereby improving the visibility of the
classes that are essential for driving, while transmitting the background at
lower quality. However, shifting bandwidth to the large ROI portion of the
image does not substantially improve the quality of traffic signs and lights.
In this work, we categorize each region as background, weak ROI, or strong ROI.
The simulation-based approach uses a photo-realistic driving
strong ROI. The simulation-based approach uses a photo-realistic driving
scenario database created with the Cognata self-driving car simulation
platform. We use semantic segmentation to categorize the compression quality of
a Coding Tree Unit (CTU) according to the classes of its pixels. A background
CTU contains only sky, tree, vegetation, or building classes. The classes
essential for remote driving include roads, road marks, cars, pedestrians, and,
most importantly, traffic signs and traffic lights. We apply
thresholds to decide if the number of pixels in a CTU of a particular category
is enough to declare it as belonging to the strong or weak ROI. Then, we
allocate the bandwidth according to the CTU categories. Our results show that
the perceptual quality of traffic signs, especially textual signs and traffic
lights, improves significantly, by up to 5.5 dB, compared to the
background-and-foreground-only partition, while the weak ROI classes at least
retain their original quality.
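To make the CTU categorization and bandwidth allocation concrete, here is a minimal Python sketch of the thresholding step described in the abstract. The class groupings, threshold values, QP offsets, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative class groupings (assumed; the paper's exact taxonomy may differ).
STRONG_CLASSES = {"traffic_sign", "traffic_light"}
WEAK_CLASSES = {"road", "road_mark", "car", "pedestrian"}
# Anything else (sky, tree, vegetation, building, ...) counts as background.

# Assumed per-category pixel thresholds for a 64x64 CTU, and QP offsets
# (a negative offset gives the CTU more bits, i.e. higher quality).
STRONG_THRESHOLD = 64
WEAK_THRESHOLD = 512
QP_OFFSET = {"strong": -8, "weak": -2, "background": +6}

def categorize_ctu(labels: np.ndarray) -> str:
    """Assign a CTU to 'strong', 'weak', or 'background' based on the
    per-pixel semantic-segmentation labels inside it."""
    classes, counts = np.unique(labels, return_counts=True)
    hist = dict(zip(classes.tolist(), counts.tolist()))
    if any(hist.get(c, 0) >= STRONG_THRESHOLD for c in STRONG_CLASSES):
        return "strong"
    if any(hist.get(c, 0) >= WEAK_THRESHOLD for c in WEAK_CLASSES):
        return "weak"
    return "background"

def qp_offset_for_ctu(labels: np.ndarray) -> int:
    """Translate the CTU category into a quantization-parameter offset
    that an HEVC encoder's rate control could apply to that CTU."""
    return QP_OFFSET[categorize_ctu(labels)]
```

A real encoder would feed such offsets into its per-CTU rate control; the sketch only shows the categorization logic.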
Enhancing Ligand Pose Sampling for Molecular Docking
Deep learning promises to dramatically improve scoring functions for
molecular docking, leading to substantial advances in binding pose prediction
and virtual screening. To train scoring functions, and to perform molecular
docking itself, one must generate a set of candidate ligand binding poses.
Unfortunately, the sampling protocols currently used to generate candidate
poses frequently fail to produce any poses close to the correct, experimentally
determined pose, unless information about the correct pose is provided. This
limits the accuracy of learned scoring functions and molecular docking. Here,
we describe two improved protocols for pose sampling: GLOW (auGmented sampLing
with sOftened vdW potential) and a novel technique named IVES (IteratiVe
Ensemble Sampling). Our benchmarking results demonstrate the effectiveness of
our methods in improving the likelihood of sampling accurate poses, especially
for binding pockets whose shape changes substantially when different ligands
bind. This improvement is observed across both experimentally determined and
AlphaFold-generated protein structures. Additionally, we present datasets of
candidate ligand poses generated using our methods for each of around 5,000
protein-ligand cross-docking pairs, for training and testing scoring functions.
To benefit the research community, we provide these cross-docking datasets and
an open-source Python implementation of GLOW and IVES at
https://github.com/drorlab/GLOW_IVES.
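To illustrate the idea behind a softened van der Waals potential such as the one GLOW's name refers to, the sketch below compares a standard 12-6 Lennard-Jones term with a soft-core variant that caps short-range repulsion, so mildly clashing candidate poses are not discarded outright during sampling. The functional form and parameter values are generic assumptions, not GLOW's actual scheme.

```python
import numpy as np

def lennard_jones(r, epsilon=0.2, sigma=3.5):
    """Standard 12-6 Lennard-Jones interaction (kcal/mol and Angstrom units assumed)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def soft_core_lj(r, epsilon=0.2, sigma=3.5, alpha=0.5):
    """Soft-core variant: the r**6 term in the denominator is shifted by
    alpha * sigma**6, which bounds the repulsion as r -> 0. This is the
    generic soft-core form used in free-energy calculations, chosen here
    only to illustrate 'softening'; GLOW's exact potential may differ."""
    s = sigma ** 6 / (alpha * sigma ** 6 + r ** 6)
    return 4.0 * epsilon * (s ** 2 - s)

# At a clashing distance the hard potential explodes while the soft one stays finite.
for r in (2.0, 3.0, 4.0):
    print(f"r = {r:.1f} A   hard: {lennard_jones(r):10.2f}   soft: {soft_core_lj(r):8.2f}")
```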
A Note on the Non-Existence of Functors
We consider several cases of non-existence theorems for functors. For
example, there are no nontrivial functors from the category of sets (or the
category of groups, or of vector spaces) to any small category; see 2.3.
Another kind of non-existence concerns (co-)augmented functors. For example,
every augmented functor from groups to abelian groups is trivial, i.e., it has
a trivial augmentation map. Every surjective co-augmented functor from groups
to perfect groups or to free groups is also trivial.
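For orientation, the following recalls one common convention for (co-)augmented functors; this is an assumption about terminology, and the paper's own definitions may be phrased differently.

```latex
% (Co-)augmented functors, under one common convention (assumed here):
% an augmented functor is a pair (F, \varepsilon), a co-augmented functor
% a pair (F, \eta), where the natural transformations go
\[
  \varepsilon : F \Rightarrow \mathrm{Id}, \qquad
  \varepsilon_G : F(G) \to G \quad \text{(augmentation)},
\]
\[
  \eta : \mathrm{Id} \Rightarrow F, \qquad
  \eta_G : G \to F(G) \quad \text{(co-augmentation)}.
\]
% ``Trivial augmentation'' then means that each \varepsilon_G is the trivial
% homomorphism, sending all of F(G) to the identity element of G.
```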
Recognition of Surface Reflectance Properties from a Single Image under Unknown Real-World Illumination
This paper describes a machine vision system that classifies the reflectance properties of surfaces such as metal, plastic, or paper under unknown real-world illumination. We demonstrate the performance of our algorithm on surfaces of arbitrary geometry. Reflectance estimation under arbitrary omnidirectional illumination proves highly underconstrained. Our reflectance estimation algorithm succeeds by learning relationships between surface reflectance and certain statistics computed from an observed image, which depend on statistical regularities in the spatial structure of real-world illumination. Although the algorithm assumes known geometry, its statistical nature makes it robust to inaccurate geometry estimates.
Surface Reflectance Estimation and Natural Illumination Statistics
Humans recognize optical reflectance properties of surfaces such as metal, plastic, or paper from a single image without knowledge of the illumination. We develop a machine vision system to perform similar recognition tasks automatically. Reflectance estimation under unknown, arbitrary illumination proves highly underconstrained due to the variety of potential illumination distributions and surface reflectance properties. We have found that the spatial structure of real-world illumination possesses some of the statistical regularities observed in the natural image statistics literature. A human or computer vision system may be able to exploit this prior information to determine the most likely surface reflectance given an observed image. We develop an algorithm for reflectance classification under unknown real-world illumination, which learns relationships between surface reflectance and certain features (statistics) computed from a single observed image. We also develop an automatic feature selection method.
How do Humans Determine Reflectance Properties under Unknown Illumination?
Under normal viewing conditions, humans find it easy to distinguish between objects made out of different materials such as plastic, metal, or paper. Untextured materials such as these have different surface reflectance properties, including lightness and gloss. With single isolated images and unknown illumination conditions, the task of estimating surface reflectance is highly underconstrained, because many combinations of reflectance and illumination are consistent with a given image. In order to work out how humans estimate surface reflectance properties, we asked subjects to match the appearance of isolated spheres taken out of their original contexts. We found that subjects were able to perform the task accurately and reliably without contextual information to specify the illumination. The spheres were rendered under a variety of artificial illuminations, such as a single point light source, and a number of photographically captured real-world illuminations from both indoor and outdoor scenes. Subjects performed more accurately for stimuli viewed under real-world patterns of illumination than under artificial illuminations, suggesting that subjects use stored assumptions about the regularities of real-world illuminations to solve the ill-posed problem.
Surface Reflectance Recognition and Real-World Illumination Statistics
Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting, it could mimic the appearance of a matte ping-pong ball. Yet, humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
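The recognition pipeline described in these abstracts reduces to computing a handful of image statistics and learning a mapping from those statistics to reflectance classes. The Python sketch below substitutes a crude finite-difference high-pass band for the wavelet-domain statistics and uses an off-the-shelf classifier; the feature set, labels, and training data are placeholders rather than those used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def image_statistics(img: np.ndarray) -> np.ndarray:
    """A small, illustrative feature vector for a grayscale image:
    intensity moments and percentiles plus variance and kurtosis of a
    finite-difference high-pass band (a crude stand-in for wavelet
    subband statistics)."""
    img = img.astype(np.float64)
    hp = np.diff(img, axis=1).ravel()  # horizontal high-pass band
    centered = hp - hp.mean()
    kurtosis = (centered ** 4).mean() / (centered.var() ** 2 + 1e-12)
    return np.array([
        img.mean(), img.std(),
        np.percentile(img, 10), np.percentile(img, 90),
        hp.var(), kurtosis,
    ])

def train_reflectance_classifier(images, labels):
    """Fit a classifier mapping image statistics to reflectance classes
    such as 'metal', 'plastic', or 'paper' (hypothetical labels)."""
    X = np.stack([image_statistics(im) for im in images])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```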
Gi- and Gs-coupled GPCRs show different modes of G-protein binding.
More than two decades ago, the activation mechanism for the membrane-bound photoreceptor and prototypical G protein-coupled receptor (GPCR) rhodopsin was uncovered. Upon light-induced changes in ligand-receptor interaction, movement of specific transmembrane helices within the receptor opens a crevice at the cytoplasmic surface, allowing for coupling of heterotrimeric guanine nucleotide-binding proteins (G proteins). The general features of this activation mechanism are conserved across the GPCR superfamily. Nevertheless, GPCRs are selective for distinct G-protein family members, and the mechanism of this selectivity remains elusive. Structures of GPCRs in complex with the stimulatory G protein, Gs, and an accessory nanobody to stabilize the complex have been reported, providing information on the intermolecular interactions. However, to reveal the structural selectivity filters, it will be necessary to determine GPCR-G protein structures involving other G-protein subtypes. In addition, it is important to obtain structures in the absence of a nanobody that may influence the structure. Here, we present a model for a rhodopsin-G protein complex derived from intermolecular distance constraints between the activated receptor and the inhibitory G protein, Gi, using electron paramagnetic resonance spectroscopy and spin-labeling methodologies. Molecular dynamics simulations demonstrated the overall stability of the modeled complex. In the rhodopsin-Gi complex, Gi engages rhodopsin in a manner distinct from previous GPCR-Gs structures, providing insight into specificity determinants.
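One way to picture how spin-label distance constraints can drive complex modeling is to score candidate receptor/G-protein arrangements by how badly they violate the measured inter-label distances, as in the Python sketch below. The site pairs, distances, and tolerances are placeholders, not the paper's EPR measurements.

```python
import numpy as np

# Hypothetical EPR/DEER-style constraints between spin-labeled sites:
# (receptor_site_index, g_protein_site_index, measured_distance_A, tolerance_A).
# These numbers are placeholders, not measurements from the paper.
CONSTRAINTS = [
    (0, 0, 32.5, 3.0),
    (1, 1, 41.0, 3.5),
    (2, 2, 28.0, 3.0),
]

def constraint_score(receptor_sites: np.ndarray, gprotein_sites: np.ndarray) -> float:
    """Sum of squared violations of the measured inter-label distances for one
    candidate arrangement of the receptor and the G protein. Coordinates are
    (n_sites, 3) arrays in Angstroms; lower scores mean better agreement, and
    zero means every model distance lies within tolerance of its measurement."""
    score = 0.0
    for i, j, d_meas, tol in CONSTRAINTS:
        d_model = np.linalg.norm(receptor_sites[i] - gprotein_sites[j])
        violation = max(0.0, abs(d_model - d_meas) - tol)
        score += violation ** 2
    return score

# Example with random stand-in coordinates for the labeled sites.
rng = np.random.default_rng(0)
print(constraint_score(rng.normal(scale=10.0, size=(3, 3)),
                       rng.normal(scale=10.0, size=(3, 3))))
```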