38 research outputs found

    Two for One: Diffusion Models and Force Fields for Coarse-Grained Molecular Dynamics

    Coarse-grained (CG) molecular dynamics enables the study of biological processes at temporal and spatial scales that would be intractable at an atomistic resolution. However, accurately learning a CG force field remains a challenge. In this work, we leverage connections between score-based generative models, force fields and molecular dynamics to learn a CG force field without requiring any force inputs during training. Specifically, we train a diffusion generative model on protein structures from molecular dynamics simulations, and we show that its score function approximates a force field that can directly be used to simulate CG molecular dynamics. While our approach has a vastly simplified training setup compared to previous work, we demonstrate that it leads to improved performance across several small- to medium-sized protein simulations, reproducing the CG equilibrium distribution and preserving dynamics of all-atom simulations such as protein folding events.
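The connection the abstract describes can be sketched in a few lines: for a Boltzmann density p(x) proportional to exp(-U(x)), the score d/dx log p(x) equals the force -U'(x), so a learned score can drive overdamped Langevin dynamics directly. The sketch below is illustrative only: it replaces the trained diffusion model with the analytic score of a one-dimensional Gaussian, and all step sizes are assumptions.

```python
import math
import random
import statistics

def score(x):
    # Hypothetical stand-in for a trained diffusion model's score network.
    # For p(x) ~ exp(-U(x)), the score equals -U'(x), i.e. the force.
    return -x  # corresponds to U(x) = x**2 / 2, a standard Gaussian

def langevin_simulate(x0, steps=20000, eps=0.01, seed=0):
    """Overdamped Langevin dynamics driven by the score used as a CG force."""
    rng = random.Random(seed)
    x, traj = x0, []
    for _ in range(steps):
        x = x + eps * score(x) + math.sqrt(2 * eps) * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

traj = langevin_simulate(3.0)
samples = traj[2000:]  # discard burn-in; equilibrium should be close to N(0, 1)
```

After burn-in, the sample mean and standard deviation should approach the equilibrium values 0 and 1, which is the sense in which simulating with the score reproduces the equilibrium distribution.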

    A study of the optimal allocation of tolerances and clearances in planar linkage mechanisms

    PhD Thesis. The work falls into two separate parts, involving respectively kinematic and dynamic aspects of planar linkage mechanisms. The first and major part, reported in Part I, concerns the development of a procedure for optimal allocation of tolerances and clearances in planar linkage mechanisms. The theory developed takes into account the sensitivity of the mechanism output to small deviations in the parameter dimensions and the cost-tolerance relationships for the parameters. A procedure is then derived from the theory and incorporated into a computer program to allocate tolerances to linear dimensions and angles, and clearances to the joints in the mechanism. To demonstrate the applicability of the method to a wide range of planar linkage mechanisms, a number of examples are given which include 4-, 6-, 8- and 10-bar linkages. Part II describes the investigation of possible methods for maintaining contact in the joints of a planar four-bar mechanism by means of mass redistribution, the aim being to reduce or eliminate vibration due to impact in joints with clearance. An optimization routine is used with constraints upon the magnitude of the joint forces and the rate at which those forces change direction, based on a 'no-clearance' analysis. The method was applied to several examples with little success due to inherent limitations of the analysis method used. Khartoum Polytechnic.
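The trade-off the first part addresses, cheaper loose tolerances versus output accuracy, can be illustrated with a toy closed form: minimise total cost sum(k_i / t_i) subject to a linear worst-case output-error budget sum(S_i * t_i) <= T. The reciprocal cost-tolerance law, the linear sensitivity model, and the numbers below are illustrative assumptions, not the thesis's procedure; the closed form follows from a Lagrange multiplier.

```python
import math

def allocate_tolerances(sensitivities, cost_coeffs, budget):
    """Minimise sum(k_i/t_i) s.t. sum(S_i*t_i) = budget.
    Lagrange conditions give t_i = budget * sqrt(k_i/S_i) / sum_j sqrt(S_j*k_j)."""
    norm = sum(math.sqrt(S * k) for S, k in zip(sensitivities, cost_coeffs))
    return [budget * math.sqrt(k / S) / norm
            for S, k in zip(sensitivities, cost_coeffs)]

S = [2.0, 0.5, 1.0]   # assumed output sensitivities to each dimension
k = [1.0, 4.0, 1.0]   # assumed cost coefficients (cost_i = k_i / t_i)
t = allocate_tolerances(S, k, budget=0.1)
```

The allocation uses the whole error budget exactly and, as intuition suggests, gives the widest tolerance to the dimension that is cheap to loosen and to which the output is least sensitive.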

    Temporal Markov Decision Problems : Formalization and Resolution

    This thesis addresses the question of planning under uncertainty within a time-dependent, changing environment. The original motivation for this work came from the problem of building an autonomous agent able to coordinate with its uncertain environment, this environment being composed of other agents communicating their intentions, or of non-controllable processes for which some discrete-event model is available. We investigate several approaches for modeling continuous time-dependency in the framework of Markov Decision Processes (MDPs), leading us to a definition of Temporal Markov Decision Problems. Our approach then focuses on two separate paradigms. First, we investigate time-dependent problems as implicit-event processes and describe them through the formalism of Time-dependent MDPs (TMDPs). We extend the existing results concerning optimality equations and present a new Value Iteration algorithm based on piecewise polynomial function representations in order to solve a more general class of TMDPs. This paves the way to a more general discussion on parametric actions in MDPs with hybrid state and action spaces and continuous time. Second, we investigate the option of separately modeling the concurrent contributions of exogenous events. This explicit-event modeling approach leads to the use of Generalized Semi-Markov Decision Processes (GSMDPs). We establish a link between the general framework of Discrete Event System Specification (DEVS) and the formalism of GSMDPs, allowing us to build sound discrete-event compatible simulators. We then introduce a simulation-based Policy Iteration approach for explicit-event Temporal Markov Decision Problems. This algorithmic contribution brings together results from simulation theory, forward search in MDPs, and statistical learning theory.
    The implicit-event approach was tested on a specific version of the Mars rover planning problem and on a drone patrol mission planning problem, while the explicit-event approach was evaluated on a subway network control problem.
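The implicit-event idea, a value function indexed by both state and time, can be illustrated by crude backward induction over discretised epochs (the thesis works with piecewise polynomial representations over continuous time; the two-state problem and hand-picked time-dependent rewards below are purely illustrative assumptions):

```python
def tmdp_value_iteration(n_states, horizon, transition, reward):
    """Backward induction: V[t][s] = max_a [ R(s,a,t) + sum_s' P(s'|s,a) V[t+1][s'] ]."""
    V = [[0.0] * n_states for _ in range(horizon + 1)]
    policy = [[0] * n_states for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for s in range(n_states):
            best, best_a = float("-inf"), 0
            for a in (0, 1):
                q = reward(s, a, t) + sum(p * V[t + 1][s2]
                                          for s2, p in transition(s, a))
                if q > best:
                    best, best_a = q, a
            V[t][s], policy[t][s] = best, best_a
    return V, policy

# Illustrative 2-state problem: action 1 ("act") pays more early in the horizon,
# which is exactly the time-dependency that makes V a function of (state, time).
def transition(s, a):
    return [(1 - s, 0.8), (s, 0.2)] if a == 1 else [(s, 1.0)]

def reward(s, a, t):
    return (2.0 if t < 3 else 0.5) if a == 1 else 0.1

V, policy = tmdp_value_iteration(2, horizon=6, transition=transition, reward=reward)
```

Here V[0][s] exceeds V[3][s] because earlier epochs still have the high-reward window ahead of them; a continuous-time solver represents the same dependence with piecewise functions instead of a table.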

    Rethinking Consistency Management in Real-time Collaborative Editing Systems

    Networked computer systems offer much to support collaborative editing of shared documents among users. The goal of real-time collaborative editing systems (RTCES) is to increase concurrent access to shared documents by allowing multiple users to contribute to and/or track changes to them; yet existing systems either limit concurrent access through exclusive locking or employ concurrency control algorithms such as operational transformation (OT) to enable it. Unfortunately, such OT-based schemes are costly with respect to communication and computation. Further, existing systems are often specialized in their functionality and require users to adopt new, unfamiliar software to enable collaboration. This research discusses our work in improving consistency management in RTCES. We have developed a set of deadlock-free, multi-granular dynamic locking algorithms and data structures that maximize concurrent access to shared documents while minimizing communication cost. These algorithms provide a high level of service for concurrent access to the shared document and integrate merge-based or OT-based consistency maintenance policies locally among a subset of the users within a subsection of the document, thus reducing the communication costs of maintaining consistency. Additionally, we have developed client-server and P2P implementations of our hierarchical document management algorithms. Simulation results indicate that our approach achieves significant communication and computation cost savings. We have also developed a hierarchical reduction algorithm that can minimize the space required by RTCES, and this algorithm may be pipelined through our document tree. Further, we have developed an architecture that allows a heterogeneous set of client editing software to connect with a heterogeneous set of server document repositories via Web services.
    This architecture supports our algorithms and does not require client or server technologies to be modified; thus it is able to accommodate existing, favored editing and repository tools. Finally, we have developed a prototype benchmark system of our architecture that is responsive to users’ actions and minimizes communication costs.
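The multi-granular idea can be sketched in miniature: model the document as a tree of sections, and let a lock request conflict with an existing lock on the same node, any ancestor, or any descendant, so that users editing disjoint subsections never block each other. This is only an illustrative toy, not the dissertation's deadlock-free algorithms.

```python
class SectionNode:
    """A node in a document tree supporting exclusive multi-granular locks."""

    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        self.owner = None  # user currently holding an exclusive lock, if any
        if parent:
            parent.children.append(self)

    def _ancestor_locked(self):
        node = self
        while node:  # includes self
            if node.owner is not None:
                return True
            node = node.parent
        return False

    def _descendant_locked(self):
        return any(c.owner is not None or c._descendant_locked()
                   for c in self.children)

    def try_lock(self, user):
        """Grant the lock only if neither this node, an ancestor, nor a
        descendant is already held; otherwise refuse without blocking."""
        if self._ancestor_locked() or self._descendant_locked():
            return False
        self.owner = user
        return True

    def unlock(self, user):
        if self.owner == user:
            self.owner = None

doc = SectionNode("doc")
ch1 = SectionNode("ch1", doc)
ch2 = SectionNode("ch2", doc)
sec11 = SectionNode("sec1.1", ch1)
```

With this rule, a lock on sec1.1 still allows a concurrent lock on ch2, while coarse locks on doc or ch1 are refused, which is the sense in which finer granularity maximizes concurrent access.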

    Fifth Conference on Artificial Intelligence for Space Applications

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration.

    Fourth Conference on Artificial Intelligence for Space Applications

    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.

    Beyond the Formalism Debate: Expert Reasoning, Fuzzy Logic, and Complex Statutes

    Formalists and antiformalists continue to debate the utility of using legislative history and current social values to interpret statutes. Lost in the debate, however, is a clear model of how judges actually make decisions. Rather than focusing on complex problems presented by actual judicial decisions, formalists and antiformalists concentrate on stylized examples of simple statutes. In this Article, Professors Adams and Farber construct a more functional model of judicial decisionmaking by focusing on complex problems. They use cognitive psychological research on expert reasoning and techniques from an emerging area in the field of artificial intelligence, fuzzy logic, to construct their model. To probe the complex interactions between judicial interpretation, the business and legal communities, and the legislature, the authors apply their model to two important bankruptcy cases written by prominent formalist judges. Professors Adams and Farber demonstrate how cognitive psychology and fuzzy logic can reveal the reasoning processes that both formalist and antiformalist judges use to interpret complex statutes. To apply formalist rules, judges need to recognize the aspects of a case that trigger relevant rules. Cognitive psychologists have researched expert reasoning using this type of diagnostic process. Once the judge identifies the appropriate rules, she will often find they point in conflicting directions. Fuzzy logic provides a model of how to analyze such conflicts. Next, Professors Adams and Farber consider how these models of judicial decisionmaking inform efforts to improve statutory interpretation of complex statutes. They reason that expert decisionmaking builds on pattern recognition skills and fuzzy maps, both the result of intensive repeated experience.
    The authors explain that cases involving complex statutory interpretation frequently involve competing considerations, and that the implicit understandings of field insiders tend to be entrenched and difficult to displace. Consequently, Professors Adams and Farber argue that judges in specialty courts, such as the Bankruptcy Courts, are probably in a better position than generalist appellate judges to interpret complex statutes. Generalist judges should approach complex statutory issues with a strong degree of deference to the local culture of the field. Professors Adams and Farber conclude the Article with speculation on how fuzzy logic could be used in a more quantitative way to model legal problems. They note that computer modeling may ultimately provide insight into the subtle process of judicial practical reasoning, moving away from the false dichotomy often drawn between formalist and antiformalist approaches to practical judicial decisionmaking.
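The fuzzy-logic idea the Article draws on, that rules apply to a case in degrees rather than all-or-nothing and that conflicts are resolved by weighing those degrees, can be sketched in a few lines. The rules, membership functions, and case facts below are wholly hypothetical illustrations, not taken from the Article or from the bankruptcy cases it analyzes.

```python
def fuzzy_resolve(rules, case):
    """rules: list of (membership_fn, conclusion_score) pairs.
    Each rule applies to the case to a degree in [0, 1]; the outcome is the
    membership-weighted average of the rules' conclusions."""
    weighted = [(fn(case), score) for fn, score in rules]
    total = sum(w for w, _ in weighted)
    return sum(w * s for w, s in weighted) / total if total else 0.0

# Hypothetical conflicting rules: +1.0 favours the debtor, -1.0 the creditor.
rules = [
    (lambda c: min(1.0, c["debt_ratio"]), +1.0),       # pro-debtor rule
    (lambda c: 1.0 if c["bad_faith"] else 0.0, -1.0),  # pro-creditor rule
]
case = {"debt_ratio": 0.6, "bad_faith": True}
verdict = fuzzy_resolve(rules, case)
```

Both rules fire, to degrees 0.6 and 1.0, and the blended outcome leans toward the creditor without either rule being treated as an absolute trigger, which is the conflict-analysis role the authors assign to fuzzy logic.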

    Process software simulation model of Lean-Kanban Approach

    Software process simulation is important for reducing errors, helping analysis of risks, and improving software quality. In recent years, the Lean-Kanban approach has been widely applied in software practice, including software development and maintenance. The Lean-Kanban approach limits the Work-In-Progress (WIP), which is the number of items that are worked on by the team at any given time. It has been demonstrated that such an approach can help to improve software maintenance and development processes in industrial environments. The goal of the simulation model itself is to increase understanding and to support decisions for planning such projects. Considering the threats to validity of the study, the accuracy and reliability of the simulation model could be shown, and the simulation model implementation allows for deriving hypotheses on the impact of distributions on parameters such as throughput. In this thesis, we describe our simulation studies, which show that the Lean-Kanban approach can indeed help to reduce the average time needed to complete maintenance or development issues. The simulation model can simulate existing maintenance and development processes that do not use a WIP limit, as well as processes that adopt one. We performed case studies using real data collected from different projects. The results confirm that the WIP-limited process advocated by the Lean-Kanban approach could be useful for increasing the efficiency of software maintenance and development, as reported in previous industrial practices.
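The core effect simulated in the thesis can be shown with a much cruder toy model (an assumption for illustration, not the thesis's simulator): a team with fixed capacity shares effort across all in-progress items, so limiting WIP finishes items sooner on average even though total throughput is unchanged.

```python
def simulate(n_items, work_per_item, capacity, wip_limit, dt=0.01):
    """Time-sliced processor-sharing model: team capacity is split equally
    among up to wip_limit in-progress items; returns average completion time."""
    remaining = [work_per_item] * n_items
    done_at = [None] * n_items
    t = 0.0
    while any(d is None for d in done_at):
        in_progress = [i for i, d in enumerate(done_at) if d is None][:wip_limit]
        share = capacity * dt / len(in_progress)
        for i in in_progress:
            remaining[i] -= share
            if remaining[i] <= 0:
                done_at[i] = t + dt
        t += dt
    return sum(done_at) / n_items

avg_unlimited = simulate(10, work_per_item=5.0, capacity=1.0, wip_limit=10)
avg_wip2 = simulate(10, work_per_item=5.0, capacity=1.0, wip_limit=2)
```

With no WIP limit, all ten items crawl along together and every one of them completes at the very end; with a WIP limit of two, items complete in a steady stream, so the average completion time drops sharply while the last item still finishes at the same time.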
