Efficient Subgraph Similarity Search on Large Probabilistic Graph Databases
Many studies have sought efficient solutions for subgraph similarity search over certain (deterministic) graphs, owing to its wide application in many fields, including bioinformatics, social network analysis, and Resource Description Framework (RDF) data management. All of these works assume that the underlying data are certain. In reality, however, graphs are often noisy and uncertain due to various factors, such as errors in data extraction, inconsistencies in data integration, and privacy-preserving transformations. In this paper, we therefore study subgraph similarity search on large probabilistic graph databases. Unlike previous works, which assume that the edges of an uncertain graph occur independently of each other, we study uncertain graphs whose edge occurrences are correlated. We formally prove that subgraph similarity search over probabilistic graphs is #P-complete; we therefore employ a filter-and-verify framework to speed up the search. In the filtering phase, we develop tight lower and upper bounds on the subgraph similarity probability based on a probabilistic matrix index, PMI. The PMI is composed of discriminative subgraph features associated with tight lower and upper bounds on the subgraph isomorphism probability. Based on the PMI, we can prune a large number of probabilistic graphs and maximize the pruning capability. In the verification phase, we develop an efficient sampling algorithm to validate the remaining candidates. The efficiency of our proposed solutions has been verified through extensive experiments. Comment: VLDB201
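A minimal sketch of the filter-and-verify idea described above. For simplicity it treats the query as an edge set, assumes independent edge probabilities (simpler than the correlated model studied in the paper), and takes precomputed per-graph probability bounds as given, standing in for the PMI; all names are illustrative, not the paper's implementation.

```python
import random

def containment_prob(edge_probs, query_edges, n_samples=20000, seed=0):
    """Monte Carlo estimate of the probability that every query edge
    exists in a sampled possible world of the uncertain graph
    (edges assumed independent in this sketch)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        world = {e for e, p in edge_probs.items() if rng.random() < p}
        hits += query_edges <= world
    return hits / n_samples

def filter_and_verify(db, query_edges, threshold, bounds):
    """db: graph_id -> {edge: probability}.
    bounds: graph_id -> (lower, upper) probability bounds,
    playing the role of the PMI in the filtering phase."""
    answers = []
    for gid, edge_probs in db.items():
        lo, hi = bounds[gid]
        if hi < threshold:
            continue            # pruned without any sampling
        if lo >= threshold or containment_prob(edge_probs, query_edges) >= threshold:
            answers.append(gid)  # accepted by bound, or verified by sampling
    return answers
```

Graphs whose upper bound falls below the threshold are discarded outright, and only the ambiguous remainder pays the sampling cost, which is the point of the framework.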
Regret Models and Preprocessing Techniques for Combinatorial Optimization under Uncertainty
Ph.D. (Doctor of Philosophy) thesis
Oracle-Based Robust Optimization via Online Learning
Robust optimization is a common framework for optimization under uncertainty when the problem parameters are not known exactly, but are known to belong to some given uncertainty set. In the robust optimization framework, the problem solved is a min-max problem, where a solution is judged by its performance on the worst possible realization of the parameters. In many cases, a straightforward solution of the robust optimization problem of a certain type requires solving an optimization problem of a more complicated type, which in some cases is even NP-hard. For example, solving a robust conic quadratic program, such as those arising in robust SVM under ellipsoidal uncertainty, leads in general to a semidefinite program. In this paper we develop a method for approximately solving a robust optimization problem using tools from online convex optimization, where in every stage a standard (non-robust) optimization program is solved. Our algorithms find an approximate robust solution using a number of calls to an oracle for the original (non-robust) problem that is inversely proportional to the square of the target accuracy.
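A toy sketch of the oracle-based scheme on a finite uncertainty set: an adversary keeps multiplicative weights over the scenarios, each round the non-robust oracle best-responds to the weighted-average cost vector, and the averaged responses approximate the min-max solution. The instance (linear costs over the probability simplex) and all names are illustrative assumptions, not the paper's general method.

```python
import math

def robust_via_mw(scenarios, T=2000, eta=0.05):
    """Approximately solve min_x max_i c_i . x over the probability simplex,
    calling only a linear (non-robust) oracle each round."""
    m, n = len(scenarios), len(scenarios[0])
    log_w = [0.0] * m          # adversary's log-weights over scenarios
    x_avg = [0.0] * n
    for _ in range(T):
        # adversary's mixed cost vector under the current weights
        mx = max(log_w)
        w = [math.exp(lw - mx) for lw in log_w]
        s = sum(w)
        c_bar = [sum(w[i] * scenarios[i][j] for i in range(m)) / s
                 for j in range(n)]
        # oracle: a linear objective over the simplex is minimized at a vertex
        j_star = min(range(n), key=lambda j: c_bar[j])
        x = [1.0 if j == j_star else 0.0 for j in range(n)]
        # adversary gains the realized cost under each scenario
        for i in range(m):
            log_w[i] += eta * sum(scenarios[i][j] * x[j] for j in range(n))
        for j in range(n):
            x_avg[j] += x[j] / T
    return x_avg
```

On the two-scenario instance c_1 = (1, 0), c_2 = (0, 1), the oracle alternates between the two vertices and the averaged solution approaches (0.5, 0.5), whose worst-case cost 0.5 is the min-max value.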
Fair Robust Assignment Using Redundancy
We study the consideration of fairness in redundant assignment for multi-agent task allocation. It has recently been shown that redundant assignment of agents to tasks provides robustness to uncertainty in task performance. However, the question of how to fairly assign these redundant resources across tasks remains unaddressed. In this paper, we present a novel problem formulation for fair redundant task allocation, in which we cast it as the optimization of worst-case task costs. Solving this problem optimally is NP-hard. Therefore, we exploit properties of supermodularity to propose a polynomial-time, near-optimal solution. Our algorithm provides a solution set that is α times larger than the optimal set size in order to guarantee a solution cost at least as good as the optimal target cost. We derive the sub-optimality bound on this cardinality relaxation, α. Additionally, we demonstrate that our algorithm performs near-optimally without the cardinality relaxation. We show the algorithm in simulations of redundant assignments of robots to goal nodes on transport networks with uncertain travel times. Empirically, our algorithm outperforms benchmarks, scales to large problems, and provides improvements in both fairness and average utility.

We gratefully acknowledge the support from ARL Grant DCIST CRA W911NF-17-2-0181, NSF Grant CNS-1521617, ARO Grant W911NF-13-1-0350, ONR Grants N00014-20-1-2822 and N00014-20-S-B001, and Qualcomm Research. The first author acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1845298.
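A minimal sketch of greedy redundant assignment under uncertainty, in the spirit of the worst-case-cost objective above: each task's cost in a scenario is the minimum over its assigned agents, and redundancy is added greedily to shrink the worst task's expected cost. The sampled-cost model and all names are illustrative assumptions, not the paper's algorithm or its guarantees.

```python
import math
from statistics import mean

def expected_task_cost(samples, agents):
    """Expected cost of a task served by a set of redundant agents:
    in each sampled scenario only the best (minimum-cost) agent counts."""
    if not agents:
        return math.inf
    n_scenarios = len(next(iter(samples.values())))
    return mean(min(samples[a][k] for a in agents) for k in range(n_scenarios))

def greedy_redundant_assignment(candidates, samples, extra):
    """candidates: task -> list of candidate agents; samples: agent -> sampled costs.
    Phase 1 gives each task its best single agent; phase 2 greedily adds `extra`
    redundant agents, each time shrinking the worst (max) expected task cost."""
    assign, used = {}, set()
    for t, cands in candidates.items():
        best = min(cands, key=lambda a: expected_task_cost(samples, {a}))
        assign[t] = {best}
        used.add(best)
    for _ in range(extra):
        cur_worst = max(expected_task_cost(samples, s) for s in assign.values())
        best_choice, best_worst = None, cur_worst
        for t, cands in candidates.items():
            for a in cands:
                if a in used:
                    continue
                trial = {u: (assign[u] | {a} if u == t else assign[u])
                         for u in assign}
                worst = max(expected_task_cost(samples, s) for s in trial.values())
                if worst < best_worst:
                    best_choice, best_worst = (t, a), worst
        if best_choice is None:
            break                # no remaining pair improves the worst task
        t, a = best_choice
        assign[t].add(a)
        used.add(a)
    return assign
```

Because the scenario-wise minimum makes each task's cost shrink with diminishing returns as agents are added, this kind of greedy step is exactly where supermodularity arguments give near-optimality bounds.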
Best matching processes in distributed systems
The growing complexity and dynamic behavior of modern manufacturing and service industries along with competitive and globalized markets have gradually transformed traditional centralized systems into distributed networks of e- (electronic) Systems. Emerging examples include e-Factories, virtual enterprises, smart farms, automated warehouses, and intelligent transportation systems. These (and similar) distributed systems, regardless of context and application, have a property in common: They all involve certain types of interactions (collaborative, competitive, or both) among their distributed individuals—from clusters of passive sensors and machines to complex networks of computers, intelligent robots, humans, and enterprises. Having this common property, such systems may encounter common challenges in terms of suboptimal interactions and thus poor performance, caused by potential mismatch between individuals. For example, mismatched subassembly parts, vehicles—routes, suppliers—retailers, employees—departments, and products—automated guided vehicles—storage locations may lead to low-quality products, congested roads, unstable supply networks, conflicts, and low service level, respectively. This research refers to this problem as best matching, and investigates it as a major design principle of CCT, the Collaborative Control Theory.
The original contribution of this research is to elaborate on the fundamentals of best matching in distributed and collaborative systems, by providing general frameworks for (1) Systematic analysis, inclusive taxonomy, analogical and structural comparison between different matching processes; (2) Specification and formulation of problems, and development of algorithms and protocols for best matching; (3) Validation of the models, algorithms, and protocols through extensive numerical experiments and case studies. The first goal is addressed by investigating matching problems in distributed production, manufacturing, supply, and service systems based on a recently developed reference model, the PRISM Taxonomy of Best Matching. Following the second goal, the identified problems are then formulated as mixed-integer programs. Due to the computational complexity of matching problems, various optimization algorithms are developed for solving different problem instances, including modified genetic algorithms, tabu search, and neighbourhood search heuristics. The dynamic and collaborative/competitive behaviors of matching processes in distributed settings are also formulated and examined through various collaboration, best matching, and task administration protocols. In line with the third goal, four case studies are conducted on various manufacturing, supply, and service systems to highlight the impact of best matching on their operational performance, including service level, utilization, stability, and cost-effectiveness, and validate the computational merits of the developed solution methodologies.
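One classic protocol for matching distributed individuals with two-sided preferences (e.g. suppliers and retailers) is deferred acceptance. A minimal sketch, with entirely hypothetical participant names, illustrating the kind of decentralized matching process the taxonomy above classifies:

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Deferred-acceptance (Gale-Shapley) one-to-one matching.
    proposer_prefs / reviewer_prefs: participant -> preference-ordered list.
    Returns reviewer -> matched proposer."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)          # proposers not yet matched
    next_idx = {p: 0 for p in proposer_prefs}
    engaged = {}                         # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]  # best reviewer not yet tried
        next_idx[p] += 1
        if r not in engaged:
            engaged[r] = p               # reviewer tentatively accepts
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])      # reviewer trades up; old match freed
            engaged[r] = p
        else:
            free.append(p)               # rejected; p will try the next reviewer
    return engaged
```

The resulting matching is stable: no supplier-retailer pair would both prefer each other over their assigned partners.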
A Portfolio Theory of Route Choice
Although many individual route choice models have been proposed that incorporate travel time variability as a decision factor, they are typically still deterministic in the sense that the optimal strategy requires choosing one particular route that maximizes utility. In contrast, this study introduces an individual route choice model in which choosing a portfolio of routes, instead of a single route, is the best strategy for a rational traveler who cares about both journey time and lateness when facing stochastic network conditions. The model is then tested with GPS data collected in metropolitan Minneapolis-St. Paul, Minnesota. Our data suggest strong correlation among link speeds when analyzing morning commute trips. There is no single dominant route (defined here as a route with the shortest travel time over a 15-day period) in 18% of cases when link travel times are correlated. This paper demonstrates that choosing a portfolio of routes can be the rational choice of a traveler who wants to optimize route decisions under variability. Keywords: transportation planning, route choice, travel behavior, link performance
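The portfolio analogy can be sketched in Markowitz mean-variance terms: split trips between two routes and trade expected travel time against variance, where correlated routes diversify less. The two-route setting, the specific numbers, and the mean-variance objective are illustrative assumptions, not the paper's exact model.

```python
def portfolio_cost(w, mu, cov, risk_aversion=1.0):
    """Mean-variance cost of using route 0 a fraction w of the time and
    route 1 the rest: expected time plus a variance penalty."""
    weights = [w, 1.0 - w]
    mean_t = sum(weights[i] * mu[i] for i in range(2))
    var_t = sum(weights[i] * weights[j] * cov[i][j]
                for i in range(2) for j in range(2))
    return mean_t + risk_aversion * var_t

def best_mix(mu, cov, step=0.01):
    """Grid search for the route-usage split minimizing mean-variance cost."""
    grid = [k * step for k in range(int(1 / step) + 1)]
    return min(grid, key=lambda w: portfolio_cost(w, mu, cov))
```

With a fast-but-variable route (mean 30 min, variance 25) weakly correlated with a slower, steadier one (mean 32 min, variance 9), the optimal split is interior: a mix of the two routes beats committing to either one, which is the paper's point that no single route need dominate.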
Revisiting the Evolution and Application of Assignment Problem: A Brief Overview
The assignment problem (AP) is an incredibly challenging problem that can model many real-life situations. This paper provides a limited review of recent developments that have appeared in the literature: the meaning of the assignment problem, its solution techniques, and a survey of research studies on the different types of assignment problems arising in present-day real-life situations, in order to capture the variations among assignment techniques. Keywords: assignment problem, quadratic assignment, vehicle routing, exact algorithm, bound, heuristic
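For concreteness, the classical linear AP assigns n workers to n tasks, one each, minimizing total cost. A minimal exact sketch by enumeration on a toy instance (practical exact solvers instead use the Hungarian algorithm, which runs in O(n^3)):

```python
from itertools import permutations

def solve_assignment(cost):
    """Exact solution of a small square assignment problem by enumerating
    all n! worker-to-task permutations; feasible only for tiny n."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):  # perm[i] = task given to worker i
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost
```

The factorial blow-up of this enumeration is exactly why the bounds, exact algorithms, and heuristics surveyed above matter, especially for the NP-hard quadratic variant.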
OVERCOMING THE CHALLENGES OF FORMAL ORGANIZATIONAL STRUCTURE: INDIVIDUALS’ DESIRE FOR REDUCING THEIR WORKFLOW DEPENDENCIES
In a field social network study of 141 employees in an international organization, I examined individuals’ future desires to either collaborate more intensely with existing network partners or seek out new partners based on the latent value of these social ties – the potential social capital that will be generated from strengthening or building a tie in terms of reducing their formal workflow dependencies on others. Employees tended to desire more intense collaboration with a constraining existing tie (i.e., a bottleneck in their existing workflow network) when they trusted the person, suggesting they believed that the partner would provide high-quality work inputs in a reliable manner once a stronger relationship was built, thus increasing the tie’s latent relational value. Building new ties was more likely to happen when it would reduce one’s workflow dependencies by detouring around the bottlenecking person and closing disadvantageous structural holes, suggesting those new potential ties had greater latent structural value as they allow the focal individual to reach out to other workers further upstream in the workflow network. When comparing the intentions to use both approaches, the bypassing, structural approach was more prevalent than the tie strengthening approach for reducing workflow dependencies, in spite of the inherent additional costs of searching and building a new tie. The study illustrates how informal networks are used intentionally to ameliorate the deficiencies of the formal organizational workflow network and suggests the relative prominence of the latent structural value of ties as compared to their relational value.