Research on the L(2,1)-Labeling Problem from Graph-Theoretic and Graph-Algorithmic Approaches
The L(2,1)-labeling problem has been extensively studied on many graph classes. In this thesis, we also study the problem on some particular classes of graphs.
In Chapter 2 we present a new general approach to deriving upper bounds on L(2,1)-labeling numbers and apply that approach to derive bounds for the four standard graph products.
In Chapter 3 we study the L(2,1)-labeling number of the composition of n graphs.
In Chapter 4 we consider the Cartesian sum of graphs and derive both lower and upper bounds for its L(2,1)-labeling number. We use two different approaches to derive the upper bounds, and both improve previously known bounds. We also present new approximation algorithms for the L(2,1)-labeling problem on Cartesian sum graphs.
In Chapter 5, we characterize d-disk graphs for d > 1, and give the first upper bounds on the L(2,1)-labeling number for this class of graphs.
In Chapter 6, we compute upper bounds for the L(2,1)-labeling number of total graphs of K_{1,n}-free graphs.
In Chapter 7, we study the four standard products of graphs using the adjacency matrix analysis approach.
In Chapter 8, we determine the exact value of the L(2,1)-labeling number for a particular class of Mycielski graphs. We also provide both lower and upper bounds for the L(2,1)-labeling number of an arbitrary Mycielski graph.
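For readers unfamiliar with the problem: an L(2,1)-labeling assigns nonnegative integers to vertices so that adjacent vertices get labels differing by at least 2 and vertices at distance two get distinct labels; the L(2,1)-labeling number is the smallest achievable maximum label. A minimal greedy sketch in Python illustrates the constraints (this is a baseline, not the bounding techniques of the thesis, and greedy is generally not optimal):

```python
from itertools import count

def l21_labeling(adj):
    """Greedy L(2,1)-labeling: adjacent vertices must differ by >= 2,
    vertices at distance two must receive distinct labels."""
    labels = {}
    for v in adj:
        forbidden = set()
        for u in adj[v]:                  # neighbours: ban label and label +/- 1
            if u in labels:
                forbidden.update({labels[u] - 1, labels[u], labels[u] + 1})
            for w in adj[u]:              # distance-2 vertices: ban exact label
                if w != v and w in labels:
                    forbidden.add(labels[w])
        labels[v] = next(k for k in count() if k not in forbidden)
    return labels

# a 4-cycle; greedy respects both constraints but may overshoot the optimum
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(l21_labeling(cycle))
```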
Certified Computation from Unreliable Datasets
A wide range of learning tasks require human input in labeling massive data.
The collected data, however, are usually of low quality and contain inaccuracies and
errors. As a result, modern science and business face the problem of learning
from unreliable datasets.
In this work, we provide a generic approach based on
\textit{verification} of only a few records of the dataset to guarantee
high-quality learning outcomes for various optimization objectives. Our method
identifies small sets of critical records and verifies their validity. We show
that many problems need only a small number of verifications to
ensure that the output of the computation is within a bounded factor of the truth. For any given instance, we provide an
\textit{instance optimal} solution that verifies the minimum possible number of
records to approximately certify correctness. Then using this instance optimal
formulation of the problem we prove our main result: "every function that
satisfies some Lipschitz continuity condition can be certified with a small
number of verifications". We show that the required Lipschitz continuity
condition is satisfied even by some NP-complete problems, which illustrates the
generality and importance of this theorem.
In case this certification step fails, an invalid record will be identified.
Removing these records and repeating until success guarantees that the result
will be accurate and will depend only on the verified records. Surprisingly, as
we show, for several computation tasks more efficient methods are possible.
These methods always guarantee that the produced result is not affected by the
invalid records, since any invalid record that affects the output will be
detected and verified.
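The verify-and-repeat idea can be illustrated on a toy task: certifying the maximum of an unreliable dataset, where only the records that actually determine the output ever need verification. The `is_valid` callback below is a hypothetical stand-in for the costly human verification step described in the abstract:

```python
def certified_max(records, is_valid):
    """Certify the maximum of an unreliable dataset by verifying only
    the record that determines the output; on failure, discard the
    invalid record and repeat (a sketch of the verify-and-repeat idea)."""
    data = list(records)
    verifications = 0
    while data:
        candidate = max(data)
        verifications += 1
        if is_valid(candidate):      # output depends only on verified records
            return candidate, verifications
        data.remove(candidate)       # invalid record found: remove and retry
    raise ValueError("no valid records")

# toy run: values of 100 or more are corrupted entries
vals = [3, 250, 17, 999, 42]
result, cost = certified_max(vals, lambda x: x < 100)
print(result, cost)   # verifies 999 and 250 (invalid), then certifies 42
```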
Algorithms for Fundamental Problems in Computer Networks.
Traditional studies of algorithms consider the sequential setting, where the whole input is fed into a single device that computes the solution. Today, networks such as the Internet contain a vast amount of information. The overhead of aggregating all of this information into a single device is too expensive, so a distributed approach is often preferable. In this thesis, we aim to develop efficient algorithms, in both sequential and distributed settings, for the following fundamental graph problems that arise in networks.
Graph coloring is a basic symmetry-breaking problem in distributed computing: each node is to be assigned a color such that adjacent nodes receive different colors. Both the efficiency of an algorithm and the quality of the coloring it produces are important measures. One of our main contributions is providing tools for obtaining colorings of good quality whose existence is non-trivial. We also consider other optimization problems in the distributed setting. For example, we investigate efficient methods for identifying the connectivity as well as the bottleneck edges in a distributed network. Our approximation algorithm is almost tight, in the sense that its running time matches the known lower bound up to a poly-logarithmic factor. As another example, we model how task allocation can be done in ant colonies when the ants may have different capabilities for different tasks.
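For context, the classical sequential baseline is greedy coloring, which uses at most Δ + 1 colors on a graph of maximum degree Δ. A minimal sketch (this is the textbook baseline, not the distributed algorithms studied in the thesis):

```python
def greedy_coloring(adj):
    """Sequential greedy coloring: each node takes the smallest color
    not already used by a colored neighbour, so at most max_degree + 1
    colors are ever needed."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# a triangle needs three colors, and greedy finds them
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(greedy_coloring(triangle))   # {0: 0, 1: 1, 2: 2}
```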
Matching problems are among the classic combinatorial optimization problems. We study weighted matching problems in the sequential setting. We give a new scaling algorithm for finding a maximum weight perfect matching in general graphs, which improves the long-standing algorithm of Gabow and Tarjan (1991) and matches the running time of the best weighted bipartite perfect matching algorithm (Gabow and Tarjan, 1989). Furthermore, for the maximum weight matching problem in bipartite graphs, we give a scaling algorithm whose running time is faster than that of Gabow and Tarjan's weighted bipartite perfect matching algorithm.

PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113540/1/hsinhao_1.pd
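As background for the matching results, here is a sketch of Kuhn's classical augmenting-path algorithm for unweighted bipartite maximum matching, a textbook baseline that is far simpler than the weighted scaling algorithms the thesis improves:

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm: repeatedly try to match each
    left vertex, rerouting previously matched partners when possible.
    adj[u] lists the right-side neighbours of left vertex u."""
    match_right = [-1] * n_right          # match_right[v] = left partner of v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be rerouted
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = sum(try_augment(u, set()) for u in range(len(adj)))
    return size, match_right

# left vertices 0..2, right vertices 0..2: a perfect matching exists
adj = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(adj, 3))   # (3, [1, 0, 2])
```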
Using numerical plant models and phenotypic correlation space to design achievable ideotypes
Numerical plant models can predict the effect on plant performance of trait
modifications resulting from genetic variation, by simulating
physiological processes and their interaction with the environment.
Optimization methods complement those models to design ideotypes, i.e. ideal
values of a set of plant traits resulting in optimal adaptation for given
combinations of environment and management, mainly through the maximization of
a performance criterion (e.g. yield, light interception). As use of simulation
models gains momentum in plant breeding, numerical experiments must be
carefully engineered to provide accurate and attainable results, rooting them
in biological reality. Here, we propose a multi-objective optimization
formulation that includes a metric of performance, returned by the numerical
model, and a metric of feasibility, accounting for correlations between traits
based on field observations. We applied this approach to two contrasting
models: a process-based crop model of sunflower and a functional-structural
plant model of apple trees. In both cases, the method successfully
characterized key plant traits and identified a continuum of optimal solutions,
ranging from the most feasible to the most efficient. The present study thus
provides successful proof of concept for this enhanced modeling approach, which
identified paths for desirable trait modification, including direction and
intensity.

Comment: 25 pages, 5 figures, 2017, Plant, Cell and Environment
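The performance/feasibility trade-off can be pictured as a Pareto front over candidate ideotypes. A minimal sketch with invented objective values (not the paper's models or field data):

```python
def pareto_front(points):
    """Return the non-dominated points when both objectives are
    maximised: a point stays if no other point is at least as good
    on both objectives (and different)."""
    front = []
    for p in points:
        if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points):
            front.append(p)
    return front

# (performance, feasibility) pairs for hypothetical candidate ideotypes;
# the front spans solutions from most efficient to most feasible
candidates = [(0.9, 0.2), (0.6, 0.6), (0.3, 0.9), (0.5, 0.5)]
print(pareto_front(candidates))   # [(0.9, 0.2), (0.6, 0.6), (0.3, 0.9)]
```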
Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large scale sensor networks, addressing four
fundamental issues: connectivity, capacity, clocks and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. The connectivity graph of an ad-hoc network
is modeled as a random graph and the critical range for asymptotic connectivity
is determined, as well as the critical number of neighbors that a node needs to
connect to. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
Temporal information is important both for the applications of sensor
networks and for their operation. We present fundamental bounds on the
synchronizability of clocks in networks, and also present and analyze
algorithms for clock synchronization. Finally, we turn to the gathering of
relevant information, which sensor networks are designed to do. One needs to
study optimal strategies for in-network aggregation of data, in order to
reliably compute a composite function of sensor measurements, as well as the
complexity of doing so. We address the issue of how such computation can be
performed efficiently in a sensor network and the algorithms for doing so, for
some classes of functions.

Comment: 10 pages, 3 figures. Submitted to the Proceedings of the IEEE
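The critical-range question can be probed numerically with a random geometric graph: n nodes dropped uniformly in the unit square, connected when within range r. The threshold radius used below, r(n) = sqrt(log n / (pi n)), follows the well-known Gupta-Kumar scaling for asymptotic connectivity; this toy simulation is an illustration under that assumption, not the paper's analysis:

```python
import math
import random

def is_connected_rgg(n, r, seed=0):
    """Drop n nodes uniformly in the unit square, connect pairs within
    range r, and test connectivity with a depth-first search."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[j for j in range(n)
            if j != i and math.dist(pts[i], pts[j]) <= r] for i in range(n)]
    seen, stack = {0}, [0]
    while stack:
        for j in adj[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

n = 200
r_crit = math.sqrt(math.log(n) / (math.pi * n))
# well above the critical radius connectivity is likely; well below, unlikely
print(is_connected_rgg(n, 2 * r_crit), is_connected_rgg(n, 0.2 * r_crit))
```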