Revisiting the Training of Logic Models of Protein Signaling Networks with a Formal Approach based on Answer Set Programming
A fundamental question in systems biology is the construction of mathematical
models and their training to data. Logic formalisms have become very popular
for modeling signaling networks because their simplicity allows us to model large systems
encompassing hundreds of proteins. An approach to train (Boolean) logic models
to high-throughput phospho-proteomics data was recently introduced and solved
using optimization heuristics based on stochastic methods. Here we demonstrate
how this problem can be solved using Answer Set Programming (ASP), a
declarative problem solving paradigm, in which a problem is encoded as a
logical program such that its answer sets represent solutions to the problem.
ASP offers significant improvements over heuristic methods in terms of
efficiency and scalability: it guarantees global optimality of solutions and
provides the complete set of solutions. We illustrate the application of ASP
with in silico cases based on realistic networks and data.
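The global-optimality and complete-solution-set guarantees mentioned above can be illustrated with a toy sketch (this is not the authors' ASP encoding; all rules, data, and names here are hypothetical): exhaustively scoring candidate Boolean rules for a single readout node against binary data, and returning every rule that attains the minimal fitting error.

```python
# Hypothetical toy example of training a Boolean logic model to data by
# exhaustive search. An ASP solver achieves the same guarantees (global
# optimum, all optimal models) declaratively and at much larger scale.

# Candidate Boolean rules for one readout node, given inputs a and b.
CANDIDATE_RULES = {
    "a AND b": lambda a, b: a and b,
    "a OR b":  lambda a, b: a or b,
    "a":       lambda a, b: a,
    "b":       lambda a, b: b,
}

# Toy phospho-proteomics-style data: (a, b) -> observed readout.
DATA = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def fit_error(rule):
    """Number of data points the rule fails to reproduce."""
    return sum(int(rule(a, b)) != y for (a, b), y in DATA.items())

def best_models():
    """Return ALL rules with minimal error (the complete optimal set)."""
    errors = {name: fit_error(f) for name, f in CANDIDATE_RULES.items()}
    best = min(errors.values())
    return sorted(name for name, e in errors.items() if e == best)
```

For this toy data set only the AND rule reproduces every measurement, so the complete optimal set contains a single model; with noisier data several rules can tie, which is exactly the situation where enumerating all optima matters.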
Trusted Computing and Secure Virtualization in Cloud Computing
Large-scale deployment and use of cloud computing in industry
is accompanied, and at the same time hampered, by concerns regarding the protection of
data handled by cloud computing providers. One of the consequences of moving
data processing and storage off company premises is that organizations have
less control over their infrastructure. As a result, cloud service (CS) clients
must trust that the CS provider is able to protect their data and
infrastructure from both external and internal attacks. Currently however, such
trust can only rely on organizational processes declared by the CS
provider and cannot be remotely verified and validated by an external party.
Enabling the CS client to verify the integrity of the host where the
virtual machine instance will run, as well as to ensure that the virtual
machine image has not been tampered with, are some steps towards building
trust in the CS provider. Having the tools to perform such
verifications prior to the launch of the VM instance allows the CS
clients to decide at runtime whether certain data should be stored on, or
calculations performed on, the VM instance offered by the CS provider.
This thesis combines three components -- trusted computing, virtualization technology
and cloud computing platforms -- to address issues of trust and
security in public cloud computing environments. Of the three components,
virtualization technology has had the longest evolution and is a cornerstone
for the realization of cloud computing. Trusted computing is a recent
industry initiative that aims to implement the root of trust in a hardware
component, the trusted platform module. The initiative has been formalized
in a set of specifications and is currently at version 1.2. Cloud computing
platforms pool virtualized computing, storage and network resources in
order to serve a large number of customers, using a multi-tenant
multiplexing model to offer on-demand self-service over broad network access.
Open source cloud computing platforms are, similar to trusted computing, a
fairly recent technology in active development.
The issue of trust in public cloud environments is addressed
by examining the state of the art within cloud computing security and
subsequently addressing the issues of establishing trust in the launch of a
generic virtual machine in a public cloud environment. As a result, the thesis
proposes a trusted launch protocol that allows CS clients
to verify and ensure the integrity of the VM instance at launch time, as
well as the integrity of the host where the VM instance is launched. The protocol
relies on the use of Trusted Platform Module (TPM) for key generation and data protection.
The TPM also plays an essential part in the integrity attestation of the
VM instance host. Along with a theoretical, platform-agnostic protocol,
the thesis also describes a detailed implementation design of the protocol
using the OpenStack cloud computing platform.
In order to verify the implementability of the proposed protocol, a prototype
implementation has been built using a distributed deployment of OpenStack.
While the protocol covers only the trusted launch procedure using generic
virtual machine images, it presents a step aimed to contribute towards
the creation of a secure and trusted public cloud computing environment.
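The core idea of the trusted launch protocol described above, verifying host and VM-image integrity against known-good measurements before launch, can be sketched minimally as follows. The measurement values and function names are hypothetical; real TPM attestation additionally involves PCR registers and quotes signed by the TPM's attestation key, which this sketch does not model.

```python
import hashlib

# Hypothetical known-good measurements held by the CS client.
KNOWN_GOOD = {
    "host_config": hashlib.sha256(b"trusted-hypervisor-v1").hexdigest(),
    "vm_image":    hashlib.sha256(b"generic-vm-image").hexdigest(),
}

def attest(component: str, reported_bytes: bytes) -> bool:
    """Client-side check: does the reported measurement match the
    known-good value for this component?"""
    return hashlib.sha256(reported_bytes).hexdigest() == KNOWN_GOOD[component]

def trusted_launch(host_bytes: bytes, image_bytes: bytes) -> bool:
    """Proceed with the launch only if both the host configuration and
    the VM image verify against known-good measurements."""
    return attest("host_config", host_bytes) and attest("vm_image", image_bytes)
```

The design point the sketch captures is that verification happens on the client side, before any data reaches the VM, rather than relying on organizational processes declared by the provider.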
Eigenvector Centrality Distribution for Characterization of Protein Allosteric Pathways
Determining the principal energy pathways for allosteric communication in
biomolecules, that occur as a result of thermal motion, remains challenging due
to the intrinsic complexity of the systems involved. Graph theory provides an
approach for making sense of such complexity, where allosteric proteins can be
represented as networks of amino acids. In this work, we establish the
eigenvector centrality metric in terms of the mutual information, as a means of
elucidating the allosteric mechanism that regulates the enzymatic activity of
proteins. Moreover, we propose a strategy to characterize the range of the
physical interactions that underlie the allosteric process. In particular, the
well-known enzyme, imidazole glycerol phosphate synthase (IGPS), is utilized to
test the proposed methodology. The eigenvector centrality measurement
successfully describes the allosteric pathways of IGPS and allows us to
pinpoint key amino acids in terms of their relevance in the momentum transfer process.
The resulting insight can be utilized for refining the control of IGPS
activity, widening the scope for its engineering. Furthermore, we propose a new
centrality metric quantifying the relevance of the surroundings of each
residue. In addition, the proposed technique is validated against experimental
solution NMR measurements yielding fully consistent results. Overall, the
methodologies proposed in the present work constitute a powerful and
cost-effective strategy to gain insight into the allosteric mechanism of proteins.
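As an illustration of the centrality computation (a sketch over an assumed toy network, not the authors' pipeline), eigenvector centrality can be obtained by power iteration on a symmetric weight matrix whose entries stand in for mutual-information values between residues:

```python
import numpy as np

# Hypothetical 4-residue network; entry (i, j) stands in for the mutual
# information between residues i and j.
MI = np.array([
    [0.0, 0.8, 0.1, 0.0],
    [0.8, 0.0, 0.7, 0.2],
    [0.1, 0.7, 0.0, 0.6],
    [0.0, 0.2, 0.6, 0.0],
])

def eigenvector_centrality(A, iters=200):
    """Power iteration: the centrality vector is the principal
    eigenvector of the (nonnegative, symmetric) weight matrix A."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

c = eigenvector_centrality(MI)
# Residue 1 carries the strongest total coupling, so it scores highest,
# making it a candidate "key" residue in this toy network.
```

By Perron-Frobenius, the principal eigenvector of such a connected nonnegative matrix is strictly positive, so every residue receives a meaningful score and the ranking is well defined.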
Signed Network Modeling Based on Structural Balance Theory
The modeling of networks, specifically generative models, has been shown to
provide a plethora of information about the underlying network structures, as
well as many other benefits behind their construction. Recently there has been
a considerable increase in interest for the better understanding and modeling
of networks, but the vast majority of this work has been for unsigned networks.
However, many networks can have positive and negative links (or signed
networks), especially in online social media, and they inherently have
properties not found in unsigned networks due to the added complexity.
Specifically, the positive to negative link ratio and the distribution of
signed triangles in the networks are properties that are unique to signed
networks and would need to be explicitly modeled. This is because their
underlying dynamics are not random, but controlled by social theories, such as
Structural Balance Theory, which loosely states that users in social networks
will prefer triadic relations that involve less tension. Therefore, we propose
a model based on Structural Balance Theory and the unsigned Transitive Chung-Lu
model for the modeling of signed networks. Our model introduces two parameters
that are able to help maintain the positive link ratio and proportion of
balanced triangles. Empirical experiments on three real-world signed networks
demonstrate the importance of designing models specific to signed networks
based on social theories to obtain better performance in maintaining signed
network properties while generating synthetic networks.
Comment: CIKM 2018: https://dl.acm.org/citation.cfm?id=327174
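The two signed-network properties the model targets, the positive link ratio and the proportion of balanced triangles, can be made concrete with a small sketch. This is inspired by, and does not reproduce, the proposed Chung-Lu-based model; the graph, ratio, and function names are hypothetical.

```python
import random
from itertools import combinations

def assign_signs(edges, pos_ratio, seed=0):
    """Assign +1/-1 to each edge so that roughly pos_ratio of the
    links are positive (a stand-in for the model's ratio parameter)."""
    rng = random.Random(seed)
    return {e: (1 if rng.random() < pos_ratio else -1) for e in edges}

def balanced_fraction(nodes, signs):
    """Fraction of triangles that are balanced: by Structural Balance
    Theory, a triangle is balanced when the product of its three
    edge signs is +1 (less social tension)."""
    balanced = total = 0
    for u, v, w in combinations(nodes, 3):
        tri = [(u, v), (u, w), (v, w)]
        if all(t in signs for t in tri):
            total += 1
            balanced += int(signs[tri[0]] * signs[tri[1]] * signs[tri[2]] == 1)
    return balanced / total if total else 0.0

edges = list(combinations(range(5), 2))      # complete graph K5
signs = assign_signs(edges, pos_ratio=0.8)
frac = balanced_fraction(range(5), signs)    # value in [0, 1]
```

A generative model tuned to real signed networks would aim to keep both the realized positive ratio and this balanced fraction close to the values observed in the input network, rather than leaving them to chance as the independent sign assignment above does.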