110 research outputs found
Investigations in the semi-strong product of graphs and bootstrap percolation
The semi-strong product of graphs G and H is a way of forming a new graph from G and H. The vertex set of the semi-strong product is the Cartesian product of the vertex sets of G and H, V(G) × V(H). The edges of the semi-strong product are determined as follows: (g1,h1)(g2,h2) is an edge of the product whenever either g1g2 is an edge of G and h1h2 is an edge of H, or g1 = g2 and h1h2 is an edge of H.
A natural line of investigation is to determine properties of the semi-strong product in terms of the corresponding properties of its factors. We investigate distance, independence, matching, and domination in the semi-strong product.
Bootstrap percolation is a process defined on a graph. We begin with an initial set of infected vertices. In each subsequent round, uninfected vertices become infected if they are adjacent to at least r infected vertices; once infected, vertices remain infected. The parameter r is called the percolation threshold. When G is finite, the infection either stops, leaving only a proper subset of V(G) infected, or all of V(G) becomes infected. If all of V(G) eventually becomes infected, then we say that the infection percolates and we call the initial set of infected vertices a percolating set.
The cardinality of a minimum percolating set of G with percolation threshold r is denoted m(G,r). We determine m(G,r) for certain Kneser graphs and bipartite Kneser graphs.
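The round-by-round process described above is easy to simulate. The sketch below checks whether a given initial set percolates; the function name and the representation of the graph as a dict mapping each vertex to its set of neighbours are illustrative choices, not notation from the abstract.

```python
from collections import deque

def percolates(neighbors, infected, r):
    """r-neighbour bootstrap percolation: starting from `infected`,
    repeatedly infect any vertex with at least r infected neighbours.
    Returns True iff the whole vertex set eventually becomes infected."""
    infected = set(infected)
    # Number of infected neighbours of each currently uninfected vertex.
    count = {v: len(neighbors[v] & infected)
             for v in neighbors if v not in infected}
    queue = deque(v for v, c in count.items() if c >= r)
    while queue:
        v = queue.popleft()
        if v in infected:
            continue
        infected.add(v)
        for u in neighbors[v]:
            if u not in infected:
                count[u] += 1
                if count[u] == r:   # u just crossed the threshold
                    queue.append(u)
    return len(infected) == len(neighbors)
```

For example, on the 4-cycle with r = 2, two opposite vertices percolate while two adjacent ones do not.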
Minimum Target Sets in Non-Progressive Threshold Models: When Timing Matters
Let G be a graph, which represents a social network, and suppose each node v has a threshold value τ(v). Consider an initial configuration where each node is either positive or negative. In each discrete time step, a node v becomes/remains positive if at least τ(v) of its neighbors are positive, and negative otherwise. A node set S is a Target Set (TS) whenever the following holds: if S is fully positive initially, all nodes in the graph eventually become positive. We focus on a generalization of TS, called Timed TS (TTS), where it is permitted to assign a positive state to a node at any step of the process, rather than just at the beginning.
We provide graph structures for which the minimum TTS is significantly
smaller than the minimum TS, indicating that timing is an essential aspect of
successful target selection strategies. Furthermore, we prove tight bounds on
the minimum size of a TTS in terms of the number of nodes and maximum degree
when the thresholds are assigned based on the majority rule.
We show that the problem of determining the minimum size of a TTS is NP-hard
and provide an Integer Linear Programming formulation and a greedy algorithm.
We evaluate the performance of our algorithm by conducting experiments on
various synthetic and real-world networks. We also present a linear-time exact
algorithm for trees.
Comment: Accepted at ECAI-23 (26th European Conference on Artificial Intelligence).
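The non-progressive dynamics in this abstract can be sketched directly. The helper below is a minimal simulation under assumed conventions: updates are synchronous, and a node targeted at a given round is set positive at the end of that round and then evolves normally afterwards; the function and variable names are illustrative.

```python
def run_threshold(neighbors, threshold, positive, schedule, steps):
    """Synchronous non-progressive threshold dynamics.

    `positive` is the set of initially positive nodes.  `schedule` maps a
    round number to the nodes targeted (forced positive) at that round --
    the "timed" seeding of a TTS.  Returns the positive set after `steps`
    rounds.
    """
    positive = set(positive)
    for t in range(steps):
        # Every node re-evaluates simultaneously; a positive node with too
        # few positive neighbours turns negative again (non-progressive).
        positive = {v for v in neighbors
                    if len(neighbors[v] & positive) >= threshold[v]}
        positive |= schedule.get(t, set())
    return positive
```

On a path with all thresholds 1, a single positive node oscillates and never takes over, but re-seeding the same node one round later stabilises the spread, which is the kind of effect that makes timing matter.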
Local treewidth of random and noisy graphs with applications to stopping contagion in networks
We study the notion of local treewidth in sparse random graphs: the maximum treewidth over all k-vertex subgraphs of an n-vertex graph. When k is not too large, we give nearly tight bounds for this local treewidth parameter; we also derive tight bounds for the local treewidth of noisy trees, trees where every non-edge is added independently with small probability. We apply our upper bounds on the local treewidth to obtain fixed parameter tractable algorithms (on random graphs and noisy trees) for edge-removal problems centered around containing a contagious process evolving over a network. In these problems, our main parameter of study is k, the number of "infected" vertices in the network. For a certain range of parameters the running time of our algorithms on n-vertex graphs is 2^{o(k)} poly(n), improving upon the performance of the best-known algorithms designed for worst-case instances of these edge deletion problems.
Exploring the barriers to compassion for postpartum mothers and their experiences during the COVID-19 pandemic
This portfolio thesis comprises three parts:

Part One: Systematic Literature Review
The systematic literature review explored the psychological experiences of postpartum mothers during the COVID-19 pandemic. A systematic search of five electronic databases found twelve qualitative papers that met the inclusion criteria. The National Institute for Health and Care Excellence (NICE) Quality Appraisal Checklist for Qualitative Studies was used to evaluate the quality of the studies, whilst Thomas and Harden's (2008) Thematic Synthesis was used to configure the findings across the studies. Four superordinate themes were identified: relationships, psychological strengths, mental health difficulties, and emotional responses. The findings revealed a range of positive and negative psychological experiences, with some postpartum mothers experiencing psychological growth and others emotional distress. Clinical implications and key areas for future research are discussed.

Part Two: Empirical Paper
The empirical paper explored the fears, blocks and resistances (FBRs) to the flows of compassion in first-time mothers. Nine women attended online semi-structured interviews with the researcher, which were analysed using Reflexive Thematic Analysis (Braun & Clarke, 2019). The study found three themes, with subthemes, that encompassed the FBRs mothers experienced: 'Super Mum': the Unobtainable Ideal; the Exchange of Distress and Compassion; and Going Through it Alone. The FBRs identified within each theme are summarised, and a theme map illustrates the relationships between themes and how these maintain FBRs for first-time mothers. Clinical implications of the research and areas for future research are discussed.

Part Three: Appendices
Appendices relevant to the systematic literature review and empirical paper, including a reflective statement, an epistemological statement, and all relevant documentation.
Local Treewidth of Random and Noisy Graphs with Applications to Stopping Contagion in Networks
We study the notion of local treewidth in sparse random graphs: the maximum treewidth over all k-vertex subgraphs of an n-vertex graph. When k is not too large, we give nearly tight bounds for this local treewidth parameter; we also derive nearly tight bounds for the local treewidth of noisy trees, trees where every non-edge is added independently with small probability. We apply our upper bounds on the local treewidth to obtain fixed parameter tractable algorithms (on random graphs and noisy trees) for edge-removal problems centered around containing a contagious process evolving over a network. In these problems, our main parameter of study is k, the number of initially "infected" vertices in the network. For the random graph models we consider and a certain range of parameters the running time of our algorithms on n-vertex graphs is 2^{o(k)} poly(n), improving upon the 2^{Θ(k)} poly(n) performance of the best-known algorithms designed for worst-case instances of these edge deletion problems.
In Memoriam, Solomon Marcus
This book commemorates the fifth anniversary of Solomon Marcus's death with a selection of articles in mathematics, theoretical computer science, and physics written by authors who work in Marcus's research fields, some of whom have been influenced by his results and/or have collaborated with him.
Information dissemination via random walks
Information dissemination is a fundamental task in distributed computing:
How to deliver a piece of information from a node of a network to some or all other nodes?
In the face of large and still growing modern networks, it is imperative that dissemination algorithms are decentralised and can operate under unreliable conditions.
In the past decades, randomised rumour spreading algorithms
have addressed these challenges.
In these algorithms, a message is initially placed at a source node of a network, and, at regular intervals, each node contacts a randomly selected neighbour.
A message may be transmitted in one or both directions during each of these communications, depending on the exact protocol.
The main measure of performance for these algorithms is their broadcast time, which is the time until a message originating from a source node is disseminated to all nodes of the network.
Apart from being extremely simple and robust to failures, randomised rumour spreading achieves theoretically optimal broadcast time in many common network topologies.
In this thesis, we propose an agent-based information dissemination algorithm, called Visit-Exchange.
In our protocol, a number of agents perform independent random walks in the network.
An agent becomes informed when it visits a node that has a message, and later informs all future nodes it visits.
Visit-Exchange shares many of the properties of randomised rumour spreading, namely, it is very simple and uses the same amount of communication in a unit of time.
Moreover, the protocol can be used as a simple model of non-recoverable epidemic processes.
We investigate the broadcast time of Visit-Exchange on a variety of network topologies, and compare it to traditional rumour spreading.
On dense regular networks we show that the two types of protocols are equivalent, which means that in this setting the vast literature on randomised rumour spreading applies in our model as well.
Since many networks of interest, including real-world ones, are very sparse, we also study agent-based broadcast for sparse networks.
Our results include almost optimal or optimal bounds for sparse regular graphs, expanders, random regular graphs, balanced trees and grids.
We establish that depending on the network topology, Visit-Exchange may be either slower or faster than traditional rumour spreading.
In particular, in graphs consisting of hubs that are not well connected, broadcast using agents can be significantly faster.
Our conclusion is that a combined broadcasting protocol that simultaneously uses both traditional rumour spreading and agent-based dissemination can be fast on a larger range of topologies than each of its components separately.
Gates Cambridge Trust, St John's College Benefactors' Scholarship
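The Visit-Exchange protocol described above can be sketched in a few lines: agents start at uniformly random nodes, take independent random-walk steps, pick up the message when they stand on or step onto an informed node, and deposit it at every node they visit afterwards. This is a minimal simulation sketch; the function name, synchronous rounds, and uniform initial agent placement are modelling assumptions.

```python
import random

def visit_exchange(neighbors, source, num_agents, rounds, seed=0):
    """Agent-based broadcast via independent random walks.
    Returns the set of informed nodes after the given number of rounds."""
    rng = random.Random(seed)
    nodes = list(neighbors)
    positions = [rng.choice(nodes) for _ in range(num_agents)]
    informed_nodes = {source}
    informed_agent = [p in informed_nodes for p in positions]
    for _ in range(rounds):
        for i in range(num_agents):
            v = rng.choice(sorted(neighbors[positions[i]]))  # one walk step
            positions[i] = v
            if informed_agent[i]:
                informed_nodes.add(v)      # informed agent informs the node
            elif v in informed_nodes:
                informed_agent[i] = True   # agent picks up the message
    return informed_nodes
```

On the two-node path, any single agent is guaranteed to inform both nodes within three rounds, whichever endpoint it starts from.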
Minimum lethal sets in grids and tori under 3-neighbour bootstrap percolation
Let r be any non-negative integer and let G be any undirected graph in which a subset of vertices is initially infected. We consider the following process: at every step, each non-infected vertex with at least r infected neighbours becomes infected, and an infected vertex never becomes non-infected. The problem consists in determining the minimum size m(G,r) of an initially infected vertex set that eventually infects the whole graph G. Note that m(G,1) = 1 for any connected graph G. This problem is closely related to cellular automata, to percolation problems and to the Game of Life studied by John Conway. The case when G is the n × n grid and r = 2 is well known and appears in many puzzle books, in particular due to the elegant proof that shows that m(G, 2) = n for all n. We study the cases of square grids and tori when r ∈ {3, 4}. For the n × n grid with r = 3 we show that our lower bound is attained for every even n, and that for odd n the minimum exceeds the lower bound by at most 1; both cases occur, with the lower bound attained when n ≡ 5 (mod 6) or n = 2^k − 1 for any k, and the lower bound plus 1 attained when n ∈ {9, 13}. Finally, for all n, we give the exact expression of the minimum lethal set of the torus when r ∈ {3, 4}.
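The classical r = 2 fact cited above, that the n main-diagonal cells percolate the n × n grid, can be checked with a direct simulation. This is a sketch for illustration; the function name and the coordinate representation of grid cells are assumptions.

```python
def grid_percolates(n, r, infected):
    """r-neighbour bootstrap percolation on the n x n grid graph.
    Returns True iff the initially infected cells infect the whole grid."""
    infected = set(infected)
    changed = True
    while changed:
        changed = False
        for x in range(n):
            for y in range(n):
                if (x, y) in infected:
                    continue
                nbrs = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
                live = sum((a, b) in infected for (a, b) in nbrs
                           if 0 <= a < n and 0 <= b < n)
                if live >= r:
                    infected.add((x, y))
                    changed = True
    return len(infected) == n * n
```

The diagonal of n cells percolates for r = 2, while any n − 1 cells cannot (by the well-known perimeter argument), as the simulation confirms on small grids.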