An O(n^2 log^2 n) Time Algorithm for Minmax Regret Minsum Sink on Path Networks
We model evacuation in emergency situations by dynamic flow in a network. We want to minimize the aggregate evacuation time to an evacuation center (called a sink) on a path network with uniform edge capacities. The evacuees are initially located at the vertices, but their precise numbers are unknown and are given only by upper and lower bounds. Under this assumption, we compute a sink location that minimizes the maximum "regret." We present the first sub-cubic time algorithm in n to solve this problem, where n is the number of vertices. Although we cast our problem as evacuation, our result is exact if the "evacuees" are a fluid-like continuous material, and it is a good approximation for discrete evacuees.
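As a hedged illustration of the regret framework only (not the paper's sub-cubic algorithm), the sketch below brute-forces a minmax regret sink on a small path, assuming unit-speed travel, ignoring congestion, and assuming a worst-case scenario always puts each vertex weight at one of its two bounds; all names are illustrative.

```python
from itertools import product

def minsum_cost(weights, pos, x):
    # aggregate evacuation time to sink x under a simplified model:
    # each unit of weight travels at unit speed, congestion ignored
    return sum(w * abs(pos[i] - pos[x]) for i, w in enumerate(weights))

def minmax_regret_sink(pos, lo, hi):
    # pos: vertex coordinates on the path; lo/hi: weight bounds per vertex
    n = len(pos)
    # enumerate the 2^n "extreme" scenarios (every weight at a bound)
    scenarios = list(product(*zip(lo, hi)))
    best_x, best_regret = None, float("inf")
    for x in range(n):
        # regret of x: worst gap to the scenario-optimal sink
        regret = max(
            minsum_cost(s, pos, x) - min(minsum_cost(s, pos, y) for y in range(n))
            for s in scenarios
        )
        if regret < best_regret:
            best_x, best_regret = x, regret
    return best_x, best_regret
```

With no uncertainty (lo equal to hi) the optimal sink has regret 0; the point of the paper is to avoid this exponential enumeration entirely.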
On the capacity provisioning on dynamic networks
In this thesis, we consider the development of algorithms suitable for designing evacuation
procedures in sparse or remote communities. The works extend sink location
problems on dynamic networks, which are motivated by real-life disaster events such as
the Tohoku tsunami in Japan, the Australian wildfires and many more. The available algorithms in this context locate the sinks (safe havens) under the assumption
that evacuation on foot is possible, which is reasonable when immediate evacuation
is needed in urban settings. However, for remote communities, emergency vehicles may
need to be dispatched or situated strategically for an efficient evacuation process. With
this assumption removed, our problem transforms into the task of allocating capacities on
the edges of dynamic networks given a budget capacity c. We first consider this
problem on a dynamic path network of n vertices with the objective of minimizing the
completion time (minmax criterion), given that the position of the sink is known. This leads
to an O(n log n + n log(c/ε)) time algorithm, where ε is a refinement or precision parameter for an
additional binary search in the worst-case scenario. Next, we extend the problem to star
topologies. The case where the sink is located at the center of the star network follows
the same approach as the path network. However, when the sink is located on a leaf node,
the problem becomes more complicated when the number of links (edges) exceeds three.
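The log(c/ε) term in the running time comes from a binary search carried only to precision ε. The step can be sketched generically, assuming a monotone feasibility test (in the thesis the real test would check whether some capacity allocation within budget c achieves a candidate completion time; the predicate below is an assumed stand-in):

```python
def binary_search_to_precision(feasible, lo, hi, eps):
    # halve the interval [lo, hi] until its width is at most eps;
    # assumes feasible() is monotone: False ... False True ... True
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            hi = mid  # mid works; the threshold is at or below mid
        else:
            lo = mid  # mid fails; the threshold is above mid
    return hi  # a feasible value within eps of the true threshold
```

The loop runs about log2((hi - lo)/eps) times, which is where the O(log(c/ε)) factor in the stated bound comes from.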
The second phase of this thesis focuses on allocating capacities on the edges of dynamic
path networks with the objective of minimizing the total evacuation time (minsum criterion),
given the position of the sink and the budget (fixed) capacity. In general, minsum problems
are more difficult than minmax problems in the context of sink location problems. Due to
the few combinatorial properties discovered, together with the possibility of the objective
function configuration changing in the course of the optimization process, we consider the development of a numerical procedure that involves the use of sequential quadratic programming
(SQP). The sequential quadratic programming employed allows the specification of arbitrary initial capacities and also helps in monitoring the changing configuration of the
objective function. We propose to consider these problems on more complex topologies
such as trees and general graphs in future.
NSERC Discovery Grants program.
University of Lethbridge Graduate Research Award.
Alberta Innovates Award
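A minimal sketch of the SQP approach described above, with SciPy's SLSQP solver standing in for the thesis's numerical procedure. The objective here is a toy congestion-style stand-in, not the actual piecewise minsum evacuation objective, and all loads and the budget value are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def toy_total_time(caps, loads):
    # stand-in objective: each edge contributes load/capacity;
    # the real minsum objective is piecewise and its configuration
    # can change during the optimization
    return float(np.sum(loads / caps))

loads = np.array([4.0, 2.0, 1.0])          # illustrative per-edge loads
c = 6.0                                    # budget capacity to distribute
x0 = np.full(loads.size, c / loads.size)   # arbitrary initial capacities

res = minimize(
    toy_total_time, x0, args=(loads,), method="SLSQP",
    bounds=[(1e-6, c)] * loads.size,       # capacities stay positive
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - c}],
)
```

For this toy objective the optimum allocates capacity proportionally to the square root of each edge's load, so heavier edges receive strictly more capacity; the equality constraint keeps the allocation exactly on budget.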
Sublinear Computation Paradigm
This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data.” That project ran in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear, sublinear, and constant time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, which consists of a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; Part V presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
Scheduling Models with Additional Features: Synchronization, Pliability and Resiliency
In this thesis we study three new extensions of scheduling models with both practical and theoretical relevance, namely synchronization, pliability and resiliency. Synchronization has previously been studied for flow shop scheduling and we now apply the concept to open shop models for the first time. Here, as opposed to the traditional models, operations that are processed together all have to be started at the same time. Operations that are completed are not removed from the machines until the longest operation in their group is finished.
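Under this synchronization rule a schedule decomposes into rounds, and the makespan is simply the sum of the longest operation in each round. A hedged one-function sketch (the round assignment is taken as given; finding a good assignment is the hard part studied in the thesis):

```python
def synchronous_makespan(rounds):
    # rounds: list of rounds, each a list of operation lengths that
    # start together; a round lasts as long as its longest operation,
    # since finished operations stay on their machines until then
    return sum(max(ops) for ops in rounds)
```

For example, the rounds [[3, 1], [1, 3]] give makespan 6 even though each machine's total load is only 4; the gap is idle time forced by synchronization.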
Pliability is a new approach to model flexibility in flow shops and open shops. In scheduling with pliability, parts of the processing load of the jobs can be re-distributed between the machines in order to achieve better schedules. This is applicable, for example, if the machines represent cross-trained workers.
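Hedged intuition for the fully pliable case: if processing load may be redistributed freely across m machines, a classical lower bound on the makespan is the larger of the average machine load and the largest single job (assuming, purely as an illustration, that each job's total load is fixed but its split across machines is free, and that a job occupies only one machine at a time):

```python
def pliable_makespan_lower_bound(job_loads, m):
    # balance the total load across m machines, but no job can
    # finish before its own total processing requirement
    return max(sum(job_loads) / m, max(job_loads))
```

This bound is only a sketch of why full pliability helps; the thesis's restricted variants of pliability need not attain it.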
Resiliency is a new measure for the quality of a given solution if the input data are uncertain. A resilient solution remains better than some given bound, even if the original input data are changed. The more we can perturb the input data without the solution losing too much quality, the more resilient the solution is.
We also consider the assignment problem, as it is the traditional combinatorial optimization problem underlying many scheduling problems. Particularly, we study a version of the assignment problem with a special cost structure derived from the synchronous open shop model and obtain new structural and complexity results. Furthermore we study resiliency for the assignment problem.
The main focus of this thesis is the study of structural properties, algorithm development and complexity. For synchronous open shop we show that for a fixed number of machines the makespan can be minimized in polynomial time. All other traditional scheduling objectives are at least as hard to optimize as in the traditional open shop model.
As a starting point for research on pliability, we focus on the most general case of the model as well as two relevant special cases. We deliver a fairly complete complexity study for all three versions of the model.
Finally, for resiliency, we investigate two different questions: "how to compute the resiliency of a given solution?" and "how to find a most resilient solution?". We focus on the assignment problem and on single machine scheduling to minimize the total sum of completion times, and present a number of positive results for both questions. The main goal is to make a case that the concept deserves further study.
LIPIcs, Volume 261, ICALP 2023, Complete Volume