High performance simplex solver
The dual simplex method is frequently the most efficient technique for solving linear programming
(LP) problems. This thesis describes an efficient implementation of the sequential dual
simplex method and the design and development of two parallel dual simplex solvers.
In serial, many advanced techniques for the (dual) simplex method are implemented, including
sparse LU factorization, hyper-sparse linear system solution techniques, efficient approaches
to updating LU factors and sophisticated dual simplex pivoting rules. These techniques, some
of which are novel, lead to serial performance comparable with that of the best public-domain
dual simplex solver, providing a solid foundation for the simplex parallelization.
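The hyper-sparsity idea can be illustrated with a small sketch: when the right-hand side of a triangular solve is sparse, a symbolic pass first predicts the nonzero pattern of the solution, so the numerical forward substitution touches only those entries rather than all n rows. The function below is an illustrative Python sketch for a unit lower-triangular factor stored column-wise; the names and data layout are assumptions for the example, not code from the thesis.

```python
def hyper_sparse_lower_solve(L_cols, b_idx, b_val):
    """Solve L x = b for unit lower-triangular L, touching only entries
    reachable from the nonzeros of b.

    L_cols[j] = list of (i, L[i, j]) with i > j (strictly below-diagonal
    entries of column j); b is given as parallel index/value lists.
    """
    # Symbolic phase: graph search from the nonzeros of b predicts
    # the nonzero pattern of x.
    visited, stack, pattern = set(), list(b_idx), []
    while stack:
        j = stack.pop()
        if j in visited:
            continue
        visited.add(j)
        pattern.append(j)
        for i, _ in L_cols.get(j, []):
            if i not in visited:
                stack.append(i)
    # For a lower-triangular matrix, increasing index is a valid
    # elimination order for the predicted pattern.
    pattern.sort()
    # Numerical phase: forward substitution over the pattern only.
    x = dict(zip(b_idx, b_val))
    for j in pattern:
        xj = x.get(j, 0.0)
        if xj == 0.0:
            continue
        for i, lij in L_cols.get(j, []):
            x[i] = x.get(i, 0.0) - lij * xj
    return x
```

The payoff is that the cost is proportional to the number of entries actually involved in the solution, not to the dimension of the basis, which is what makes hyper-sparse LPs cheap to iterate on.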
During the implementation of the sequential dual simplex solver, the study of classic LU
factor update techniques leads to the development of three novel update variants. One of them
is comparable to the most efficient established approach but is much simpler to implement,
and the other two are especially useful for one of the parallel simplex solvers. In
addition, the study of the dual simplex pivoting rules identifies, and motivates further
investigation of, how hyper-sparsity may be promoted.
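For context on what such update schemes do, the simplest established approach, the classic product-form update, can be sketched in a few lines: after a basis change the new basis is B' = B·E, where E ("eta matrix") is the identity with one column replaced, so systems with B' are solved by a solve with B followed by a cheap eta step. This is a hedged illustration of the classical idea only, not one of the thesis's novel variants, which update the LU factors themselves.

```python
import numpy as np

def eta_ftran(w, p, b):
    """Solve E x = b, where E is the identity with column p replaced by w
    (the 'eta matrix' of the classic product-form update)."""
    x = b.copy()
    xp = b[p] / w[p]   # pivotal entry first
    x -= w * xp        # remove column p's contribution from the other rows
    x[p] = xp
    return x

# After replacing basis column p with entering column a_q, the new basis is
# B_new = B @ E with E's column p equal to w = B^{-1} a_q, so
# B_new^{-1} b = E^{-1} (B^{-1} b): one extra eta step per update.
B = np.array([[2.0, 0.0],
              [0.0, 3.0]])
a_q = np.array([1.0, 1.0])
p = 1
w = np.linalg.solve(B, a_q)
x = eta_ftran(w, p, np.linalg.solve(B, np.array([4.0, 5.0])))

B_new = B.copy()
B_new[:, p] = a_q
assert np.allclose(B_new @ x, [4.0, 5.0])  # agrees with the updated basis
```

Each simplex iteration appends one more eta step, which is why periodic refactorization, or the LU-update variants the thesis studies, is needed to keep solves fast.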
In parallel, two high performance simplex solvers are designed and developed. One approach,
based on a lesser-known dual pivoting rule called suboptimization, exploits parallelism across
multiple iterations (PAMI). The other, based on the regular dual pivoting rule, exploits purely
single iteration parallelism (SIP). The performance of PAMI is comparable to a world-leading
commercial simplex solver. SIP is frequently complementary to PAMI, achieving speedup
on instances where PAMI results in slowdown.
Optimal Transport for Domain Adaptation
Domain adaptation from one data space (or domain) to another is one of the
most challenging tasks of modern data analytics. If the adaptation is done
correctly, models built on a specific data space become more robust when
confronted with data depicting the same semantic concepts (the classes), but
observed by another observation system with its own specificities. Among the
many strategies proposed to adapt one domain to another, finding a common
representation has shown excellent properties: by finding a common
representation for both domains, a single classifier can be effective in both
and use labelled samples from the source domain to predict the unlabelled
samples of the target domain. In this paper, we propose a regularized
unsupervised optimal transportation model to perform the alignment of the
representations in the source and target domains. We learn a transportation
plan matching both PDFs, which constrains labelled samples of the same class in the
source domain to remain close during transport. In this way, we exploit at the same
time the few labelled samples in the source domain and the unlabelled distributions
observed in both domains.
both domains. Experiments in toy and challenging real visual adaptation
examples show the interest of the method, which consistently outperforms state-of-the-art
approaches.
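The core alignment step, regularized optimal transport between the two empirical distributions followed by a barycentric mapping of the source samples, can be sketched as below. This is a plain entropic (Sinkhorn) formulation; the paper's model additionally regularizes the plan using the source class labels, which is omitted here, and all names and parameter values are illustrative.

```python
import numpy as np

def sinkhorn_plan(Xs, Xt, reg=0.05, n_iter=500):
    """Entropic-regularized OT plan between uniform empirical distributions
    on source samples Xs and target samples Xt (plain Sinkhorn iterations)."""
    ns, nt = len(Xs), len(Xt)
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
    K = np.exp(-C / reg)                                  # Gibbs kernel
    a = np.full(ns, 1.0 / ns)                             # uniform source weights
    b = np.full(nt, 1.0 / nt)                             # uniform target weights
    v = np.ones(nt)
    for _ in range(n_iter):                               # alternate scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                    # transport plan

def barycentric_map(G, Xt):
    """Map each source sample to the barycentre of the target mass it sends."""
    return (G / G.sum(axis=1, keepdims=True)) @ Xt
```

After the mapping, a classifier trained on the transported (still labelled) source samples operates directly in the target representation, which is the adaptation strategy the abstract describes.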