Computational structure‐based drug design: Predicting target flexibility
The role of molecular modeling in drug design has experienced a significant revamp in the last decade. The increase in computational resources and molecular models, along with software developments, is finally introducing a competitive advantage into the early phases of drug discovery. Medium and small companies with a strong focus on computational chemistry are being created, some of which have introduced important leads into drug design pipelines. An important source of this success is the extraordinary development of faster and more efficient techniques for describing flexibility in three‐dimensional structural molecular modeling. At different levels, from docking techniques to atomistic molecular dynamics, conformational sampling between receptor and drug yields improved predictions, such as screening enrichment and the discovery of transient cavities. In this review article we perform an extensive analysis of these modeling techniques, dividing them into high and low throughput, and emphasizing their application to drug design studies. We conclude the review with a section describing our Monte Carlo method, PELE, recently highlighted as an outstanding advance in an international blind competition and in industrial benchmarks.
We acknowledge the BSC-CRG-IRB Joint Research Program in Computational Biology. This work was supported by a grant from the Spanish Government, CTQ2016-79138-R. J.I. acknowledges support from SVP-2014-068797, awarded by the Spanish Government.
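The review centers on conformational sampling between receptor and drug. As a minimal sketch of the Metropolis Monte Carlo acceptance step that underlies sampling methods of this kind (the toy energy surface, move size, and temperature factor below are illustrative placeholders, not PELE's actual protocol):

```python
import math
import random

def metropolis_step(energy_fn, state, propose_fn, kT=0.6):
    """One Metropolis Monte Carlo step: propose a perturbed conformation
    and accept it with probability min(1, exp(-dE/kT))."""
    candidate = propose_fn(state)
    dE = energy_fn(candidate) - energy_fn(state)
    if dE <= 0 or random.random() < math.exp(-dE / kT):
        return candidate, True   # move accepted
    return state, False          # move rejected

# Toy 1-D "conformation": a single coordinate on a double-well energy surface.
energy = lambda x: (x**2 - 1.0) ** 2            # hypothetical energy function
perturb = lambda x: x + random.gauss(0.0, 0.3)  # random local move

state = 0.0
for _ in range(10_000):
    state, _ = metropolis_step(energy, state, perturb)
```

Real protein-ligand sampling replaces the scalar coordinate with full atomic coordinates and the toy energy with a force-field evaluation, but the accept/reject logic is the same.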
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell_2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
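As a concrete instance of the forward-backward proximal splitting scheme mentioned in (iii), applied to the sparsity ($\ell_1$) prior, here is a minimal sketch of the iteration $x^{k+1} = \mathrm{prox}_{\gamma \lambda \|\cdot\|_1}(x^k - \gamma \nabla f(x^k))$ for $\min_x \tfrac{1}{2}\|Ax - y\|^2 + \lambda \|x\|_1$; the step size rule and problem dimensions are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient (forward) step on the smooth term, then a prox (backward)
    step on the l1 regularizer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                         # forward step
        x = soft_threshold(x - step * grad, step * lam)  # backward step
    return x

# Toy sparse recovery: 40 noisy measurements of a 100-dim, 5-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = forward_backward(A, y, lam=0.1)
```

The chapter's partial-smoothness results describe exactly when such iterates identify the correct low-dimensional model (here, the support of the sparse signal) after finitely many steps.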
Integrated smoothed location model and data reduction approaches for multi variables classification
The smoothed location model is a classification rule that handles mixtures of continuous and binary variables simultaneously. It discriminates between groups in parametric form using the conditional distribution of the continuous variables given each pattern of the binary variables. To conduct a practical classification analysis, the objects must first be sorted into the cells of a multinomial table generated from the binary variables; the parameters in each cell are then estimated from the sorted objects. In many situations, however, the estimated parameters are poor when the number of binary variables is large relative to the sample size: many binary variables create many empty multinomial cells, leading to a severe sparsity problem and exceedingly poor performance of the constructed rule. In the worst case, the rule cannot be constructed at all. To overcome these shortcomings, this study proposes new strategies to extract adequate variables that contribute to the optimal performance of the rule. Combinations of two extraction techniques are introduced, namely 2PCA and PCA+MCA, with new cut-points for the eigenvalue and the total variance explained, to determine adequate extracted variables that lead to a minimal misclassification rate. The outcomes from these extraction techniques are used to construct smoothed location models, producing two new classification approaches called 2PCALM and 2DLM. Numerical evidence from simulation studies demonstrates no significant difference in misclassification rate between the extraction techniques on normal and non-normal data. Nevertheless, both proposed approaches are slightly affected by non-normal data and severely affected by highly overlapping groups. Investigations on several real data sets show that the two approaches are competitive with, and better than, other existing classification methods. Overall, both proposed approaches can be considered improvements to the location model and alternatives to other classification methods, particularly for handling mixed variables with a large number of binary variables.
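To make the cell-sorting step concrete, here is a minimal sketch of the basic (unsmoothed) location model fit the abstract describes: group training objects by class label and binary pattern, then estimate per-cell parameters. The function name and the choice of estimating only cell probabilities and cell means are illustrative; the thesis's smoothing and 2PCA/PCA+MCA reduction steps are omitted:

```python
import numpy as np

def fit_location_model(X_cont, X_bin, labels):
    """Sort objects into multinomial cells (one per group x binary pattern)
    and estimate the cell probability and the mean of the continuous
    variables within each occupied cell."""
    params = {}
    for g in np.unique(labels):
        idx_g = labels == g
        n_g = idx_g.sum()
        for pattern in {tuple(row) for row in X_bin[idx_g]}:
            in_cell = idx_g & (X_bin == pattern).all(axis=1)
            params[(g, pattern)] = (
                in_cell.sum() / n_g,           # cell probability
                X_cont[in_cell].mean(axis=0),  # cell mean of continuous vars
            )
    return params
```

Classification then assigns a new object to the group maximizing the cell probability times the conditional Gaussian density of its continuous variables. The sparsity problem the abstract highlights is visible here: with b binary variables there are 2^b cells per group, so many cells receive no training objects and their parameters cannot be estimated.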
Proceedings of the Fifth NASA/NSF/DOD Workshop on Aerospace Computational Control
The Fifth Annual Workshop on Aerospace Computational Control was one in a series of workshops sponsored by NASA, NSF, and the DOD. These workshops address computational issues in the analysis, design, and testing of flexible multibody control systems for aerospace applications, bringing together users, researchers, and developers of computational tools for aerospace systems (spacecraft, space robotics, aerospace transportation vehicles, etc.) to exchange ideas on the state of the art in computational tools and techniques.
Augmentation is AUtO-Net: Augmentation-Driven Contrastive Multiview Learning for Medical Image Segmentation
Deep learning segmentation algorithms that learn complex organ and tissue patterns and extract essential regions of interest from noisy backgrounds have achieved impressive results in Medical Image Computing (MIC), improving visual support for medical image diagnosis. This thesis focuses on retinal blood vessel segmentation, providing an extensive literature review of deep learning-based medical image segmentation approaches and comparing their methodologies and empirical performance. The work also examines two significant limitations of current state-of-the-art methods: data size constraints and the dependency on high computational resources. To address these problems, it proposes a novel, efficient, and simple multiview learning framework that contrastively learns invariant vessel feature representations by comparing multiple augmented views produced by various transformations, overcoming data shortage and improving generalisation ability. Moreover, the hybrid network architecture integrates an attention mechanism into a convolutional neural network to further capture complex, continuous, curvilinear vessel structures. Validated on the CHASE-DB1 dataset, the proposed method attains the highest F1 score of 83.46% and the highest Intersection over Union (IoU) score of 71.62% with a UNet structure, surpassing existing benchmark UNet-based methods by 1.95% and 2.8%, respectively. Together, these metrics indicate that the model detects vessels accurately, with locations closely coinciding with the ground truth. Moreover, the proposed approach can be trained within 30 minutes using less than 3 GB of GPU RAM, characteristics that support efficient implementation in real-world applications and deployments.
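As a minimal sketch of an augmentation-driven contrastive objective of the kind the thesis describes (an NT-Xent-style loss over two augmented views; the batch size, embedding dimension, and temperature below are placeholder values, and the thesis's actual encoder and loss details may differ):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views of the same batch:
    embeddings of the two views of each image are pulled together,
    while all other pairs in the batch are pushed apart."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.T / temperature                         # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))          # drop self-similarity
    # The positive for row i is its other augmented view: i+n (mod 2n).
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: z1, z2 would be encoder outputs for two augmentations of one batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

Training on pairs of transformed views in this way encourages the encoder to produce vessel representations invariant to the applied augmentations, which is how such frameworks compensate for small labelled datasets.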
Politics by Other Means: Economic Expertise, Power, and Global Development Finance Reform
This dissertation investigates how economic expertise influences development governance by examining how state economists establish methods for decision-making in global development finance. It contributes to debates over expert power by taking a science studies approach to address two problems in existing theories and accounts of experts. First, social reformers, heterodox planning theorists, and development critics from both the left and the right treat rationality and politics asymmetrically. When experts fail, politics has triumphed. When experts succeed, the credit goes to rationality, not politics. Second, within this asymmetrical approach, investigations and explanations of expert power neglect a principal conduit of expert influence: their methods. This dissertation turns the focus to economists’ efforts to establish their methods as governing rationales and the effects these methods engender. Doing so allows us to approach particular forms of state rationality such as neoliberalism or managerialism not as processes of depoliticization, of intellectual rationality prevailing over political interests and values, but as explicit political accomplishments with both the power to bring about political effects and the susceptibility to being challenged.
State economists’ efforts to establish three paradigmatic development economic methods in particular—governance indicators, growth diagnostics, and randomized controlled trials—and these methods’ effects on power relations, decision-making, and the distribution of resources were assessed using an embedded case study design of their use for decision-making in administering a new development finance fund, the United States Millennium Challenge Account. A mixed methods approach using interviews, documents, and various datasets found that economists could not realize the power of their intellectual rationality without exercising power thought to be the reserve of politicos. Economists had to employ various strategies of power both to gain autonomy from bureaucratic authorities and overcome opposition from expert groups holding alternative rationalities. This involved enrolling bystanders and opponents in their entrepreneurial efforts to establish methods. The more opposition economists faced, the more power they had to exercise and allies they had to enroll. Once enrollment was successful, economists’ status was elevated and their methods became indispensable to particular decision-making processes. These new ways of making decisions introduced different biases that elevated economists’ concerns, objectives, and ways of knowing. They also impacted the distribution of development finance in ways that exacerbated inequality in at least the short to medium term.
This dissertation’s focus on economists’ political work and methods has implications for planning practice because it opens up new political possibilities. Rather than treating state expertise and public participation as antagonistic, zero-sum confrontations, planners can pursue democratic values by both “opening up the state” and “getting inside” methods. If orthodox economists had to overcome opposition from groups of opposing experts with competing rationalities, then other experts can likewise use political strategies to establish their methods as governing rationales. Even in situations where this is not possible or desirable, understanding methods’ political effects can instigate reflective practice and possible change.