Two essays in computational optimization: computing the Clar number in fullerene graphs and distributing the errors in iterative interior point methods
Fullerenes are cage-like hollow carbon molecules of pseudospherical
symmetry whose graphs consist solely of pentagonal and hexagonal faces.
They have attracted the interest of chemists and mathematicians alike,
owing to widespread applications in fields including electronic and
optical engineering, medical science and biotechnology. A fullerene
molecule Γ_n of n atoms has a multiplicity of isomers which grows as
N_iso ∼ O(n^9); for instance, Γ_180 has 79,538,751 isomers. The Fries
and Clar numbers are stability predictors of a fullerene molecule. These
numbers can be computed by solving a (possibly NP-hard) combinatorial
optimization problem. We propose several ILP formulations of this
problem, each yielding a solution algorithm that provides the exact
values of the Fries and Clar numbers, and we compare the performance of
the algorithms derived from the proposed formulations. One of these
algorithms is used to find the Clar isomers, i.e., those for which the
Clar number is maximum among all isomers of a given size. We repeated
this computational experiment for all sizes up to 204 atoms; in the
course of the study a total of 2,649,413,774 isomers were analyzed.
The second essay concerns the development of an iterative primal-dual
infeasible path-following (PDIPF) interior point (IP) algorithm for the
separable convex quadratic minimum cost network flow problem. In each
iteration of the PDIPF algorithm, the main computational effort is
solving the underlying Newton search direction system. We concentrated
on solving the corresponding linear system iteratively and inexactly. We
assumed that all the involved linear systems can be solved inexactly
and, to this end, focused on different approaches for distributing the
error generated by iterative linear solvers such that convergence of the
PDIPF algorithm is guaranteed. As a result, we established theoretical
foundations that open the path to further interesting practical
investigation.
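The inexact-solve idea can be sketched with a toy example (an illustration only, not the thesis's algorithm): solve a small symmetric positive definite system by conjugate gradient, stopping once the relative residual falls below a prescribed tolerance, so the error left in the computed direction is explicitly controlled.

```python
import math

def cg(A, b, rel_tol=1e-2, max_iter=50):
    """Conjugate gradient on a dense SPD system, stopped inexactly:
    iteration ends once ||b - Ax|| <= rel_tol * ||b||."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x (x = 0 initially)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    nb = math.sqrt(sum(bi * bi for bi in b))
    for _ in range(max_iter):
        if math.sqrt(rs) <= rel_tol * nb:
            break
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x, math.sqrt(rs) / nb  # solution and relative residual

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x, rel_res = cg(A, b, rel_tol=1e-8)
print(x, rel_res)  # x ≈ [1/11, 7/11], residual below tolerance
```

In an inexact interior point method one would loosen `rel_tol` in early iterations and tighten it as the duality gap shrinks, which is one simple way of distributing the solver error across iterations.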
Brownian Motion of Decaying Particles: Transition Probability, Computer Simulation, and First-Passage Times
Recent developments in the measurement of radioactive gases in passive diffusion motivate the analysis of Brownian motion of decaying particles, a subject that has received little previous attention. This paper reports the derivation and solution of equations comparable to the Fokker-Planck and Langevin equations for one-dimensional diffusion and decay of unstable particles. In marked contrast to the case of stable particles, the two equations are not equivalent, but provide different information regarding the same stochastic process. The differences arise because Brownian motion with particle decay is not a continuous process. The discontinuity is readily apparent in the computer-simulated trajectories of the Langevin equation that incorporate both a Wiener process for displacement fluctuations and a Bernoulli process for random decay. This paper also reports the derivation of the mean time of first passage of the decaying particle to absorbing boundaries. Here, too, particle decay can lead to an outcome markedly different from that for stable particles. In particular, the first-passage time of the decaying particle is always finite, whereas the time for a stable particle to reach a single absorbing boundary is theoretically infinite due to the heavy tail of the inverse Gaussian density. The methodology developed in this paper should prove useful in the investigation of radioactive gases, aerosols of radioactive atoms, dust particles to which radioactive ions adhere, as well as diffusing gases and liquids of unstable molecules.
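The Langevin picture described above can be sketched in a few lines (an illustrative simulation with made-up parameters, not the paper's code): each time step combines a Wiener increment for the displacement with a Bernoulli trial for decay, and a trajectory ends either at an absorbing boundary or when the particle decays.

```python
import math, random

def first_event_time(x0=0.0, D=0.5, lam=1.0, L=1.0, dt=1e-3, t_max=100.0):
    """Simulate one decaying Brownian particle; return (time, cause)."""
    x, t = x0, 0.0
    sigma = math.sqrt(2.0 * D * dt)          # Wiener increment scale
    while t < t_max:
        if random.random() < lam * dt:       # Bernoulli decay trial
            return t, "decayed"
        x += sigma * random.gauss(0.0, 1.0)  # Wiener displacement
        t += dt
        if abs(x) >= L:                      # absorbing boundaries at ±L
            return t, "absorbed"
    return t_max, "survived"

random.seed(42)
results = [first_event_time() for _ in range(300)]
mean_time = sum(t for t, _ in results) / len(results)
decayed = sum(1 for _, c in results if c == "decayed")
print(mean_time, decayed)
```

Because decay bounds each trajectory's lifetime, the estimated mean event time remains finite even if one boundary were pushed to infinity, illustrating the always-finite first-passage time of the decaying particle.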
Computing the forcing spectrum of outerplanar graphs in polynomial time
The forcing number of a graph with a perfect matching is the minimum
number of edges in a perfect matching whose endpoints need to be
deleted, such that the remaining graph has only a single perfect
matching. This number is of great interest in theoretical chemistry,
since it conveys information about the structural properties of several
interesting molecules. On the other hand, in bipartite graphs the
forcing number corresponds to the famous feedback vertex set problem in
digraphs.
Determining the complexity of finding the smallest forcing number of a given
planar graph is still a wide open and important question in this area,
originally proposed by Afshani, Hatami, and Mahmoodian in 2004. We take a first
step towards the resolution of this question by providing an algorithm that
determines the set of all possible forcing numbers of an outerplanar graph in
polynomial time. This is the first polynomial-time algorithm concerning this
problem for a class of graphs of comparable or greater generality.
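For intuition, the forcing spectrum can be computed by brute force on tiny graphs (an exponential-time illustration, unrelated to the paper's polynomial-time algorithm): enumerate all perfect matchings, then for each matching find the smallest subset of its edges that lies in no other perfect matching.

```python
from itertools import combinations

def perfect_matchings(n, edges):
    """Yield all perfect matchings of a graph on vertices 0..n-1."""
    def extend(unmatched, acc):
        if not unmatched:
            yield frozenset(acc)
            return
        v = min(unmatched)
        for a, b in edges:
            if v in (a, b):
                w = b if a == v else a
                if w != v and w in unmatched:
                    yield from extend(unmatched - {v, w},
                                      acc | {(min(v, w), max(v, w))})
    yield from extend(frozenset(range(n)), frozenset())

def forcing_number(m, all_pms):
    """Smallest k such that some k-subset of m lies in no other matching."""
    for k in range(len(m) + 1):
        for s in combinations(sorted(m), k):
            if sum(1 for p in all_pms if set(s) <= p) == 1:
                return k
    return len(m)

def forcing_spectrum(n, edges):
    pms = list(perfect_matchings(n, edges))
    return {forcing_number(m, pms) for m in pms}

# The 6-cycle has two perfect matchings; a single edge pins down either one.
c6 = [(i, (i + 1) % 6) for i in range(6)]
print(forcing_spectrum(6, c6))  # {1}
```

A graph with a unique perfect matching, such as a path on four vertices, has forcing spectrum {0}, since the empty set already forces its matching.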
Stochastic methods and models for neuronal activity and motor proteins
The main topic of this thesis is to specialize mathematical methods and to construct stochastic models for describing and predicting biological dynamics such as neuronal firing and acto-myosin sliding.
Although the theme on which we focused the research is apparently twofold, it forms a strictly unitary context, as the methods and tools of investigation adapt naturally to both issues. Models describing the stochastic evolution of neuronal activity or of the acto-myosin dynamics are governed by stochastic differential equations whose solutions are diffusion processes and Gauss-Markov processes. Of fundamental importance in the study of these phenomena is the determination of the density of the first passage times through one or two time-dependent boundaries. For this purpose the development and use of numerical solution algorithms, and the comparison of the results with those obtained by simulation algorithms, are essential. A particular type of Gauss-Markov process (the time-inhomogeneous Ornstein-Uhlenbeck process) and its first passage time through suitable boundaries are fundamental for modeling phenomena subject to additional (external) time-dependent forces.
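The kind of computation involved can be sketched as follows (illustrative parameters, not a model fitted to data): the first-passage time of an Ornstein-Uhlenbeck process through a time-dependent boundary is estimated by Euler-Maruyama simulation.

```python
import math, random

def ou_first_passage(theta=1.0, sigma=1.0, x0=0.0,
                     boundary=lambda t: 1.0 + 0.05 * t,
                     dt=0.01, t_max=50.0):
    """Euler-Maruyama path of dX = -theta * X dt + sigma dW; returns the
    first time X reaches the time-dependent boundary, or None."""
    x, t = x0, 0.0
    noise = sigma * math.sqrt(dt)
    while t < t_max:
        x += -theta * x * dt + noise * random.gauss(0.0, 1.0)
        t += dt
        if x >= boundary(t):
            return t
    return None

random.seed(7)
fpts = [ou_first_passage() for _ in range(200)]
crossed = [t for t in fpts if t is not None]
mean_fpt = sum(crossed) / max(len(crossed), 1)
print(len(crossed), mean_fpt)
```

Comparing such simulated first-passage samples with numerically computed densities is exactly the kind of cross-check the thesis describes.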
Mathematical modelling of corneal epithelium maintenance and recovery
The cornea is the clear shield that covers the eye, estimated to contribute approximately two thirds of the eye’s optical power. Therefore, cornea protection is highly important and is achieved by maintaining its outermost layer, the corneal epithelium. Corneal
epithelium maintenance depends on a peripheral population of stem cells, the limbal stem
cells, that continuously replenish the basal epithelium layer through generating transient
amplifying cells (TAC). TACs move from the periphery to the centre, undergoing several
rounds of cell division, before terminal differentiation (TD). TD cells lose contact with
the basal layer and move up through the epithelium until shed from the surface. A better
understanding of corneal epithelium maintenance and recovery processes is crucial and
may contribute to better treatments.
We develop and analyse mathematical models to investigate these underlying complex biological processes with the aim of a better understanding and possibly prediction
of what cannot currently be obtained from laboratories and clinical trials. The proliferation events are modelled by a stochastic mathematical model, based on an analogy
to chemical reactions. Using this model we aim to: (i) clarify the main factors
involved in the maintaining process; (ii) determine the constraints placed on the proliferation process for healthy maintenance; (iii) investigate robustness via noise analysis.
We then consider the spatial migration of TACs from the corneal epithelium periphery
to the centre. In the absence of precise biological data on the exact mechanism governing centripetal migration of TACs, we consider two simple prototype models: (i)
TACs simply migrate randomly, i.e. a diffusion type process; (ii) random migration with
a centrally-directed bias, i.e. a diffusion-advection type process. We first use a first passage time approach to understand the constraints on the parameters for TAC cells to reach
the corneal epithelial centre, and then examine prototypical models for the
mechanisms that account for TAC redistribution from the periphery to the centre. The behaviour of the models is investigated under wound healing scenarios, exploring whether
different healing is observed according to the model and the position of the wound.
Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016508/01; Scottish Funding Council
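The chemical-reaction analogy for proliferation can be sketched with a minimal Gillespie simulation (hypothetical species and rates, for illustration only): limbal stem cells S produce TACs T, which divide, terminally differentiate into D, and are eventually shed.

```python
import random

def gillespie(s=20, t_count=0, d=0, k_gen=1.0, k_div=0.4,
              k_diff=0.5, k_shed=0.3, t_end=20.0):
    """Stochastic simulation of S -> S+T, T -> 2T, T -> D, D -> 0,
    with a fixed stem-cell pool of size s."""
    t = 0.0
    history = [(t, t_count, d)]
    while t < t_end:
        rates = [k_gen * s, k_div * t_count, k_diff * t_count, k_shed * d]
        total = sum(rates)
        if total == 0:
            break
        t += random.expovariate(total)          # exponential waiting time
        r = random.random() * total             # pick which reaction fires
        if r < rates[0]:
            t_count += 1                        # stem cell generates a TAC
        elif r < rates[0] + rates[1]:
            t_count += 1                        # TAC division
        elif r < rates[0] + rates[1] + rates[2]:
            t_count -= 1; d += 1                # terminal differentiation
        else:
            d -= 1                              # shedding from the surface
        history.append((t, t_count, d))
    return history

random.seed(3)
hist = gillespie()
print(len(hist), hist[-1])
```

Running many such realizations and examining the fluctuations of T and D around their deterministic balance is one way to carry out the noise analysis mentioned above.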
Developing reliable anomaly detection system for critical hosts: a proactive defense paradigm
Current host-based anomaly detection systems have limited accuracy and incur
high processing costs. This is due to the need for processing massive audit data
of the critical host(s) while detecting complex zero-day attacks which can leave
minor, stealthy and dispersed artefacts. In this research study, this observation
is validated using existing datasets and state-of-the-art algorithms related to the
construction of the features of a host's audit data, such as the popular semantic-based
extraction and decision engines, including Support Vector Machines, Extreme
Learning Machines and Hidden Markov Models. There is a challenging
trade-off between achieving accuracy with a minimum processing cost and processing
massive amounts of audit data that can include complex attacks. Also,
there is a lack of a realistic experimental dataset that reflects the normal and
abnormal activities of current real-world computers.
This thesis investigates the development of new methodologies for host-based
anomaly detection systems with the specific aims of improving accuracy at a minimum
processing cost while considering challenges such as complex attacks which,
in some cases, can only be visible via a quantified computing resource, for example,
the execution times of programs, the processing of massive amounts of audit data,
the unavailability of a realistic experimental dataset and the automatic minimization
of the false positive rate while dealing with the dynamics of normal activities.
This study provides three original and significant contributions to this field of
research which represent a marked advance in its body of knowledge.
The first major contribution is the generation and release of a realistic intrusion
detection systems dataset as well as the development of a metric based on fuzzy
qualitative modeling for embedding the possible quality of realism in a dataset's
design process and assessing this quality in existing or future datasets.
The second key contribution is constructing and evaluating the hidden host
features to identify the trivial differences between the normal and abnormal artefacts
of hosts' activities at a minimum processing cost. Linux-centric features include
the frequencies and ranges, frequency-domain representations and Gaussian
interpretations of system call identifiers with execution times while, for Windows,
a count of the distinct core Dynamic Linked Library calls is identified as a hidden
host feature.
The final key contribution is the development of two new anomaly-based statistical
decision engines for capitalizing on the potential of some of the suggested
hidden features and reliably detecting anomalies. The first engine, which has
a forensic module, is based on stochastic theories including hierarchical hidden
Markov models, and the second is modelled using Gaussian mixture modelling and
correntropy. The results demonstrate that the proposed host features and engines
are capable of meeting the identified challenges.
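A minimal version of frequency-based host features with a Gaussian decision rule might look as follows (a toy sketch with fabricated traces, not the thesis's engines): each trace becomes a vector of system-call-identifier frequencies, per-feature means and deviations are fitted on normal traces, and a trace whose squared z-score sum is large is flagged as anomalous.

```python
from statistics import mean, pstdev

VOCAB = [1, 2, 3, 9]          # system-call identifiers under observation
EPS = 1e-6                    # guards features with zero variance

def freq_vector(trace):
    """Count how often each identifier occurs in a trace."""
    return [trace.count(i) for i in VOCAB]

def fit(normal_traces):
    """Per-feature mean and population std dev over normal traces."""
    vecs = [freq_vector(t) for t in normal_traces]
    cols = list(zip(*vecs))
    return [mean(c) for c in cols], [pstdev(c) for c in cols]

def score(trace, mu, sd):
    """Sum of squared z-scores: larger means less like normal activity."""
    v = freq_vector(trace)
    return sum(((x - m) / (s + EPS)) ** 2 for x, m, s in zip(v, mu, sd))

normal = [[1, 1, 2, 3], [1, 2, 2, 3], [1, 2, 3, 3]]
mu, sd = fit(normal)
print(score([1, 2, 3, 1], mu, sd))   # in-distribution: small score
print(score([9, 9, 9, 9], mu, sd))   # unseen identifier: huge score
```

A threshold on this score would then separate normal from anomalous activity; the thesis's engines refine this basic idea with mixture models and correntropy rather than a single Gaussian per feature.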
A hierarchical adaptive model for robust short-term visual tracking
Visual tracking is a topic in computer vision with applications in many emerging as well as established technological areas, such as robotics, video surveillance, human-computer interaction, autonomous vehicles, and sport analytics. The main question of visual tracking is how to design an algorithm (visual tracker) that determines the state of one or more objects in a stream of images by accounting for their sequential nature. In this doctoral thesis we address two important topics in single-target short-term visual tracking. The first topic is related to the construction of an object appearance model for visual tracking. The modeling and updating of the appearance model is crucial for successful tracking. We introduce a hierarchical appearance model which structures object appearance in multiple layers. The bottom layer contains the most specific information and each higher layer models the appearance information in a more general way. The hierarchical relations are also reflected in the update process, where the higher layers guide the lower layers in their update while the lower layers provide a source for adaptation to higher layers if their information is reliable. The benefits of hierarchical appearance models are demonstrated with two implementations, primarily designed to tackle tracking of non-rigid and articulated objects that present a challenge for many existing trackers. The first example of an appearance model combines local and global visual information in a coupled-layer appearance model. The bottom layer contains a part-based appearance description that is able to adapt to the geometrical deformations of non-rigid targets and the top layer is a multi-modal global object appearance model that guides the model during object appearance changes. The experimental evaluation shows that the proposed coupled-layer appearance model excels in robustness despite the fact that it uses relatively simple appearance descriptors.
Our evaluation also exposed several weaknesses that were reflected in a decreased accuracy. Our second presented appearance model extends the hierarchy by introducing the third layer and a concept of template anchors. The first two layers are conceptually similar to the original two-layer appearance model, while the third layer is a memory system that is composed of static templates that provide a strong spatial cue when one of the templates is matched to the image reliably, thus assisting in quick recovery of the entire appearance model. In the experimental evaluation we show that this addition indeed improves the accuracy, as well as the overall performance of a tracker.
The second question that we address is the performance evaluation of single-target short-term visual tracking algorithms. In contrast to the dominant trend of the past decades, we claim that visual tracking is a complex process and that the performance of visual trackers cannot be reduced to a single performance measure, nor should it be described by an arbitrary set of measures where the relationship between measures is not well understood. In our research we investigate performance measures that are traditionally used in the performance evaluation of single-target short-term visual trackers, through theoretical and empirical analysis, and show that some of them measure the same aspect of tracking performance. Based on our analysis we propose a pair of weakly correlated measures of the accuracy and robustness of a tracker, propose a visualization of the results, and analyse the entire methodology using theoretical trackers that exhibit extreme tracking behaviours. This is followed by an extension of the methodology to the ranking of multiple trackers, where we also take into account the potentially stochastic nature of visual trackers and test the statistical significance of performance differences. To support the proposed evaluation methodology we have developed an open-source software tool that implements the methodology and a simple communication protocol that enables a straightforward integration of trackers. The proposed evaluation methodology and the evaluation system have been adopted by several Visual Object Tracking (VOT) challenges.
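The accuracy/robustness pair can be illustrated with a simplified computation (a sketch in the spirit of the VOT methodology, not its exact protocol): accuracy is the mean intersection-over-union over frames where the tracker overlaps the ground truth, and robustness is counted through the frames where the overlap drops to zero.

```python
def iou(a, b):
    """Intersection over union of axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def accuracy_robustness(predicted, ground_truth):
    overlaps = [iou(p, g) for p, g in zip(predicted, ground_truth)]
    failures = sum(1 for o in overlaps if o == 0.0)   # robustness: failures
    valid = [o for o in overlaps if o > 0.0]
    accuracy = sum(valid) / len(valid) if valid else 0.0
    return accuracy, failures

gt = [(0, 0, 2, 2)] * 3
pred = [(0, 0, 2, 2), (1, 0, 2, 2), (5, 5, 2, 2)]    # exact, partial, lost
print(accuracy_robustness(pred, gt))                 # ((1 + 1/3) / 2, 1)
```

The two quantities are only weakly correlated: a tracker can follow the target loosely for every frame (high robustness, low accuracy) or fit it tightly until it fails (high accuracy, low robustness), which is why the methodology reports them as a pair.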