Two polymorphisms facilitate differences in plasticity between two chicken major histocompatibility complex class I proteins
Major histocompatibility complex class I molecules (MHC I) present peptides to cytotoxic T-cells at the surface of almost all nucleated cells. The function of MHC I molecules is to select high-affinity peptides from a large intracellular pool, and they are assisted in this process by co-factor molecules, notably tapasin. In contrast to mammals, MHC-homozygous chickens express a single MHC I gene locus, termed BF2, which is hypothesised to have co-evolved with the highly polymorphic tapasin within stable haplotypes. The BF2 molecules of the B15 and B19 haplotypes have recently been shown to differ in their interactions with tapasin and in their peptide selection properties. This study investigated whether these observations might be explained by differences in the protein plasticity that is encoded into the MHC I structure by primary sequence polymorphisms. Furthermore, we aimed to demonstrate the utility of a complementary modelling approach to the understanding of complex experimental data. Combining mechanistic molecular dynamics simulations with the primary-sequence-based technique of statistical coupling analysis, we show how two of the eight polymorphisms between BF2*15:01 and BF2*19:01 facilitate differences in plasticity. We show that BF2*15:01 is intrinsically more plastic than BF2*19:01, exploring more conformations in the absence of peptide. We identify a protein sector of contiguous residues connecting the membrane-bound α3 domain and the heavy chain peptide binding site. This sector contains two of the eight polymorphic residues: one is residue 22 in the peptide binding domain, and the other, residue 220, is in the α3 domain, a putative tapasin binding site. These observations correspond with the experimentally observed functional differences of these molecules and suggest a mechanism by which modulation of MHC I plasticity by tapasin catalyses peptide selection allosterically
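Statistical coupling analysis infers networks of functionally linked residues from co-variation between positions in a multiple sequence alignment. The sketch below is a deliberately simplified toy of that idea (consensus/non-consensus co-variation between columns), not the published SCA statistics; the alignment and function names are invented for illustration:

```python
import numpy as np

def coupling_matrix(alignment):
    """Toy positional co-variation matrix for a protein alignment.

    For each column, build a binary profile (does each sequence carry
    the column's consensus residue?), then correlate profiles between
    positions. Fully conserved columns get zero coupling.
    """
    seqs = np.array([list(s) for s in alignment])
    n_seq, n_pos = seqs.shape
    profiles = np.zeros((n_pos, n_seq))
    for j in range(n_pos):
        col = seqs[:, j]
        residues, counts = np.unique(col, return_counts=True)
        consensus = residues[np.argmax(counts)]
        profiles[j] = (col == consensus).astype(float)
    c = np.zeros((n_pos, n_pos))
    for i in range(n_pos):
        for j in range(n_pos):
            if profiles[i].std() > 0 and profiles[j].std() > 0:
                c[i, j] = np.corrcoef(profiles[i], profiles[j])[0, 1]
    return c

# Invented 4-sequence toy alignment: positions 1 and 2 co-vary
# perfectly (C<->G tracks D<->H), positions 0 and 3 are conserved.
alignment = ["ACDEF", "ACDEY", "AGHEF", "AGHEY"]
c = coupling_matrix(alignment)
print(round(abs(c[1, 2]), 2))  # → 1.0
```

A real sector analysis additionally weights by positional conservation and extracts coupled groups from the matrix's spectral structure.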
In Silico Vaccine Design for Multidrug-Resistant Staphylococcus Aureus Clumping Factor A (ClfA)
Staphylococcus aureus, a facultatively anaerobic multidrug-resistant bacterium, can cause a range of illnesses, from minor skin infections (pimples, boils, impetigo, folliculitis, cellulitis, carbuncles, scalded skin syndrome, and abscesses) to life-threatening diseases such as meningitis, pneumonia, bacteremia, sepsis, osteomyelitis, endocarditis, and toxic shock syndrome. Pathogenic strains often promote infection by producing virulence factors and expressing cell-surface proteins that bind and inactivate antibodies. The emergence of antibiotic-resistant strains such as methicillin-resistant S. aureus (MRSA) is a worldwide problem in clinical medicine. Despite immense research and development, little progress has been made towards an epitope-based vaccine, and to date there is no approved vaccine for S. aureus. This study aims to analyze and predict the possibility of designing a vaccine that could make humans immune to S. aureus. The surface protein ClfA, an adhesin that is often essential for infection and is highly antigenic among the virulence factors of S. aureus, was retrieved from a protein database, and in silico tools were used to predict T-cell epitopes with NetCTL 1.2 and B-cell epitopes with BepiPred from the IEDB (Immune Epitope Database). Further, MHC class I and class II binding peptides were predicted using TepiTool from the IEDB analysis resource. The peptide KPNTDSNAL was identified as the most promising combined B-cell and T-cell epitope, and its interaction with the HLA molecule was verified by computational docking. However, this in silico designed epitope-based peptide vaccine against S. aureus needs to be validated by in vitro and in vivo experiments
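The screening stage of such a pipeline amounts to enumerating candidate peptide windows along the antigen sequence and ranking them by predicted score. The sketch below illustrates only that workflow shape; the additive propensity table is a made-up stand-in for trained predictors such as NetCTL or BepiPred, and the sequence fragment is invented:

```python
# Made-up per-residue scores; real pipelines use trained models
# (NetCTL, BepiPred, TepiTool), not a lookup table like this.
PROPENSITY = {aa: i * 0.1 for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}

def candidate_epitopes(sequence, k=9, top=3):
    """Slide a k-residue window along the sequence and rank windows
    by a simple additive score (a stand-in for a real predictor)."""
    windows = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    scored = sorted(((sum(PROPENSITY[aa] for aa in w), w) for w in windows),
                    reverse=True)
    return [w for _, w in scored[:top]]

# Invented fragment containing the reported epitope KPNTDSNAL.
print(candidate_epitopes("MKKPNTDSNALWQR", top=3))
```

MHC class I epitopes are typically 8–11 residues (hence the 9-mer default); a real workflow would then filter candidates by predicted MHC binding affinity and population allele coverage before docking.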
Towards Distributed Petascale Computing
In this chapter we argue that studying such multi-scale, multi-science systems gives rise to inherently hybrid models containing many different algorithms, best serviced by different types of computing environments (ranging from massively parallel computers, via large-scale special-purpose machines, to clusters of PCs) whose total integrated computing capacity can easily reach the PFlop/s scale. Such hybrid models, in combination with the by now inherently distributed nature of the data on which the models 'feed', suggest a distributed computing model, in which parts of the multi-scale, multi-science model are executed on the most suitable computing environment and/or the computations are carried out close to the required data (i.e. the computations are brought to the data instead of the other way around). We present an estimate of the compute requirements for simulating the Galaxy as a typical example of a multi-scale, multi-physics application requiring distributed Petaflop/s computational power.
Comment: To appear in D. Bader (Ed.), Petascale Computing: Algorithms and Applications, Chapman & Hall / CRC Press, Taylor and Francis Group
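The placement policy the chapter advocates (match each sub-model to a suitable environment, preferring to bring the computation to the data) can be sketched in a few lines. Every resource name, site label, and field below is invented for illustration:

```python
# Invented inventory of heterogeneous resources at different sites.
RESOURCES = {
    "mpp":     {"kind": "massively-parallel", "site": "A"},
    "grape":   {"kind": "special-purpose",    "site": "B"},
    "cluster": {"kind": "pc-cluster",         "site": "C"},
}

def place(submodels):
    """Assign each sub-model to a resource of the kind it needs,
    preferring one co-located with the sub-model's input data."""
    plan = {}
    for name, need in submodels.items():
        fits = [r for r, p in RESOURCES.items() if p["kind"] == need["kind"]]
        # "Bring the computation to the data" where possible.
        local = [r for r in fits if RESOURCES[r]["site"] == need["data_site"]]
        plan[name] = (local or fits)[0]
    return plan

plan = place({
    "stellar-dynamics": {"kind": "special-purpose",    "data_site": "B"},
    "gas-dynamics":     {"kind": "massively-parallel", "data_site": "C"},
})
print(plan)  # → {'stellar-dynamics': 'grape', 'gas-dynamics': 'mpp'}
```

A production scheduler would of course also weigh queue times, network bandwidth, and data-transfer cost rather than a simple co-location preference.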
Persons Versus Brains: Biological Intelligence in Human Organisms
I go deep into the biology of the human organism to argue that the psychological features and functions of persons are realized by cellular and molecular parallel distributed processing networks dispersed throughout the whole body. Persons supervene on the computational processes of nervous, endocrine, immune, and genetic networks. Persons do not go with brains
A multiphysics and multiscale software environment for modeling astrophysical systems
We present MUSE, a software framework for combining existing computational
tools for different astrophysical domains into a single multiphysics,
multiscale application. MUSE facilitates the coupling of existing codes written
in different languages by providing inter-language tools and by specifying an
interface between each module and the framework that represents a balance
between generality and computational efficiency. This approach allows
scientists to use combinations of codes to solve highly coupled problems
without the need to write new codes for other domains or significantly alter
their existing codes. MUSE currently incorporates the domains of stellar
dynamics, stellar evolution and stellar hydrodynamics for studying generalized
stellar systems. We have now reached a "Noah's Ark" milestone, with (at least)
two available numerical solvers for each domain. MUSE can treat multi-scale and
multi-physics systems in which the time- and size-scales are well separated,
like simulating the evolution of planetary systems, small stellar associations,
dense stellar clusters, galaxies and galactic nuclei.
In this paper we describe three examples calculated using MUSE: the merger of
two galaxies, the merger of two evolving stars, and a hybrid N-body simulation.
In addition, we demonstrate an implementation of MUSE on a distributed computer
which may also include special-purpose hardware, such as GRAPEs or GPUs, to
accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li
Comment: 24 pages, to appear in New Astronomy; source code available at http://muse.li
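The module/framework split described above can be illustrated with a minimal coupling loop: every domain solver sits behind the same narrow interface, and a manager advances all of them to a shared time while each keeps its own internal step size. The classes below are invented stand-ins, not MUSE's actual interface:

```python
class Solver:
    """Stand-in domain solver behind a MUSE-like narrow interface."""
    def __init__(self, name, dt):
        self.name, self.dt, self.t = name, dt, 0.0

    def evolve(self, t_end):
        # Take internal steps of size dt without overshooting t_end.
        while self.t + self.dt <= t_end + 1e-12:
            self.t += self.dt
        return self.t

def advance(solvers, t_end, t_sync):
    """Advance all solvers to t_end, synchronising every t_sync.

    At each synchronisation point the framework could exchange data
    between modules (e.g. updated stellar masses feeding dynamics).
    """
    t = 0.0
    while t < t_end:
        t = min(t + t_sync, t_end)
        for s in solvers:
            s.evolve(t)  # each module uses its own internal step size
    return [s.t for s in solvers]

pair = [Solver("dynamics", 0.125), Solver("evolution", 0.25)]
print(advance(pair, t_end=1.0, t_sync=0.5))  # → [1.0, 1.0]
```

This pattern works precisely when, as the abstract notes, the time- and size-scales of the coupled domains are well separated, so a coarse synchronisation interval suffices.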
A grid computing framework for commercial simulation packages
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
An increased need for collaborative research among different organizations, together with continuing advances in communication technology and computer hardware, has facilitated the development of distributed systems that can provide users non-trivial access to geographically dispersed computing resources (processors, storage, applications, data, instruments, etc.) that are administered in multiple computer domains. The term grid computing, or grids, is popularly used to refer to such distributed systems. A broader definition of grid computing includes the use of computing resources within an organization for running organization-specific applications. This research is in the context of using grid computing within an enterprise to maximize the use of available hardware and software resources for processing enterprise applications. Large-scale scientific simulations have traditionally been the primary beneficiary of grid computing. The application of this technology to simulation in industry has, however, been negligible. This research investigates how grid technology can be effectively exploited by simulation practitioners using Windows-based, commercially available simulation packages to model and simulate systems in industry. These packages are commonly referred to as Commercial Off-The-Shelf (COTS) Simulation Packages (CSPs). The study identifies several higher-level grid services that could potentially be used to support the practice of simulation in industry. It proposes a grid computing framework to investigate these services in the context of CSP-based simulations. This framework is called the CSP-Grid Computing (CSP-GC) Framework. Each identified higher-level grid service in this framework is referred to as a CSP-specific service.
A total of six case studies are presented to experimentally evaluate how grid computing technologies can be used together with unmodified simulation packages to support some of the CSP-specific services. The contribution of this thesis is the CSP-GC framework, which identifies how simulation practice in industry may benefit from the use of grid technology. A further contribution is the identification of specific grid computing software (grid middleware) that could be used together with existing CSPs to provide grid support. With its focus on end-users and end-user tools, it is intended that this research will encourage wider adoption of grid computing in the workplace and that simulation users will derive benefit from using this technology
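One of the simplest grid services for unmodified simulation packages is farming independent replications of an experiment across workers. The sketch below shows only that pattern: a thread pool stands in for grid middleware, and a toy sampling function stands in for a CSP model run (all names invented):

```python
from concurrent.futures import ThreadPoolExecutor
import random

def run_replication(seed):
    """One independent replication of a toy simulation, keyed by its
    random seed; a real setup would launch an unmodified CSP here."""
    rng = random.Random(seed)
    samples = [rng.random() for _ in range(1000)]
    return sum(samples) / len(samples)  # the replication's output statistic

def experiment(n_replications, workers=4):
    # Workers stand in for grid nodes; replications are independent,
    # so they distribute with no changes to the "package" itself.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_replication, range(n_replications)))
    return sum(results) / len(results)  # aggregate across the "grid"

print(round(experiment(8), 1))  # mean of uniform(0,1) samples, ≈ 0.5
```

Because each replication is seeded independently and returns only a summary statistic, this embarrassingly parallel shape is exactly what desktop-grid middleware exploits.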