359 research outputs found
Linear and Integer Vector Programming
This work treats the most interesting aspects, from both a theoretical and an algorithmic point of view, of linear and integer vector programming problems, providing in both cases a good number of previously unpublished results and procedures of great utility. Also noteworthy is the work of bibliographic review and compilation, presented in an elegant, homogeneous, and coherent manner through a careful reworking by the author. The content of the thesis is as follows.
Chapter 1, "Foundations of Vector Programming: The Linear Case", defines the vector programming problem and reviews its properties in as general a setting as possible, paying special attention to the linear case.
Chapter 2, "Characterizations of Efficient Faces", addresses this important and complex question of linear vector programming, contributing a good number of original efficiency tests, both for arbitrary faces and points and for faces incident to a non-degenerate or a degenerate vertex, respectively. Moreover, since the problem of deriving efficiency tests for faces is closely tied to the mechanism used to describe them, the novel study of this topic through the notion of the maximal descriptor of a face, and the characterizations obtained for the set of optimal solutions of a scalar linear program, deserve special mention.
Chapter 3, "Selected Topics in Linear Vector Programming", treats some of the most relevant questions related to the linear model, such as complete efficiency analysis, duality, the identification of redundant objectives, and the optimization of a linear function over the efficient region, making valuable contributions on all of these topics.
Chapter 4 is devoted exclusively to methods for generating efficient solutions. After a careful analysis of the more traditional aspects of this problem, among them the computation of an initial efficient vertex and the determination of the sets of efficient vertices and edges, a new algorithmic classification of methods for generating maximal efficient faces is presented, composed of four mutually exclusive categories; the so-called "local descent" class is a design unprecedented in the literature that offers numerous advantages. For each of these classes, a detailed study of properties is carried out, and new algorithms for generating efficient solutions, based on the efficiency tests obtained in Chapter 2, are proposed.
Chapter 5, "Integer Linear Vector Programming", rigorously addresses this important (owing to its great applicability to real-world problems) and difficult problem. After studying its most relevant properties and analyzing its relationships with its linear and convex relaxations, specific methods for generating the set of efficient integer solutions are presented.
The assignment problem in distributed computing
This dissertation focuses on the problem of assigning the modules of a program to the processors in a distributed system with the goal of minimizing the overall cost of running the program. The cost depends on the execution times of the modules on the processors and on the cost of communication between modules. This module allocation problem arises in a variety of situations where one is interested in making optimum use of available computer resources. The general module allocation problem is intractable; however, it becomes polynomially solvable when the communication graph is restricted. In this dissertation, we restrict our attention to k-trees.
As the first problem, we study parametric module allocation on partial k-trees. We allow the costs, both execution and communication, to vary linearly as functions of a real parameter t. We show that if the number of processors is fixed, the sequence of optimum assignments that are obtained, as t varies from zero to infinity, can be constructed in polynomial time. As an auxiliary result, we develop a linear-time algorithm to find a separator in a k-tree. We discuss the implications of our results for parametric versions of the weighted vertex cover, independent set, and 0-1 quadratic programming problems on partial k-trees.
Next, we consider two variants of the assignment problem. The first problem is to find a minimum-cost assignment when one of the processors has a limited memory. The second is to find an assignment that minimizes the maximum processor load. We present exact dynamic programming algorithms for both problems, which lead to approximation schemes for the case where the communication graph is a partial k-tree. Faster algorithms are presented for trees with uniform costs. In contrast to these results, we show that, for arbitrary graphs, no fully polynomial time approximation schemes exist unless P = NP. Both dynamic programming algorithms have been implemented. The implementation details and our experimental results are presented.
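The tree case conveys the flavor of these dynamic programs. Below is a minimal sketch, in Python, of minimum-cost module assignment when the communication graph is a tree (a partial 1-tree); the module names and cost tables are hypothetical illustrations, not data from the dissertation. Each table entry dp[v][p] holds the cheapest cost of the subtree rooted at module v given that v runs on processor p.

```python
# Hedged sketch: minimum-cost module assignment on a tree-shaped
# communication graph. All names and costs below are illustrative.

def tree_assignment_cost(tree, root, exec_cost, comm_cost, n_procs):
    """tree: dict mapping a module to its list of child modules.
    exec_cost[v][p]: cost of running module v on processor p.
    comm_cost[(u, v)]: cost paid iff u and v run on different processors."""
    def solve(v):
        # Solve children first, then combine their tables bottom-up.
        child_tables = [solve(c) for c in tree.get(v, [])]
        table = []
        for p in range(n_procs):
            total = exec_cost[v][p]
            for c, ct in zip(tree.get(v, []), child_tables):
                # Child c may stay on p (no communication cost) or move.
                total += min(ct[q] + (0 if q == p else comm_cost[(v, c)])
                             for q in range(n_procs))
            table.append(total)
        return table
    return min(solve(root))

# Tiny example: three modules in a path a - b - c, two processors.
exec_cost = {"a": [1, 4], "b": [5, 1], "c": [1, 4]}
comm_cost = {("a", "b"): 10, ("b", "c"): 10}
tree = {"a": ["b"], "b": ["c"]}
print(tree_assignment_cost(tree, "a", exec_cost, comm_cost, 2))
```

With high communication costs, as here, the cheapest solution keeps all modules on one processor; lowering comm_cost makes splitting attractive.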
User-Oriented Methodology and Techniques of Decision Analysis and Support
This volume contains 26 papers selected from Workshop presentations. The book is divided into two sections: the first is devoted to the methodology of decision analysis and support and related theoretical developments, and the second reports on the development of tools -- algorithms, software packages -- for decision support, as well as on their applications. Several major contributions -- on constructing user interfaces, on organizing intelligent DSS, and on modifying theory and tools in response to user needs -- are included in this volume.
Techniques in Active and Generic Software Libraries
Reusing code from software libraries can reduce the time and effort to construct software
systems and also enable the development of larger systems. However, the benefits
that come from the use of software libraries may not be realized due to limitations in
the way that traditional software libraries are constructed. Libraries come equipped with application programming interfaces (APIs) that help enforce the correct use of the abstractions in those libraries. Writing new components and adapting existing ones to conform to library APIs may require substantial amounts of "glue" code that potentially affects the software's efficiency, robustness, and ease of maintenance. If, as a result, the idea of reusing functionality from a software library is rejected, no benefits of reuse will be realized.
This dissertation explores and develops techniques that support the construction
of software libraries with abstraction layers that do not impede efficiency. In many
situations, glue code can be expected to have very low (or zero) performance overhead.
In particular, we describe advances in the design and development of active libraries -- software libraries that take an active role in the compilation of the user's code.
Common to the presented techniques is that they may "break" a library API (in a
controlled manner) to adapt the functionality of the library for a particular use case.
The concrete contributions of this dissertation are: a library API that supports iterator selection in the Standard Template Library, allowing generic algorithms to find the most suitable traversal through a container and yielding (in one case) a 30-fold improvement in performance; the development of techniques, idioms, and best practices for concepts and concept maps in C++, allowing the construction of algorithms for one domain entirely in terms of formalisms from a second domain; the construction of generic algorithms for algorithmic differentiation, implemented as an active library in Spad, the language of the Open Axiom computer algebra system, allowing algorithmic differentiation to be applied to the appropriate mathematical object and not just to concrete data types; and the description of a static analysis framework for the generic programming notion of local specialization within Spad, allowing more sophisticated (value-based) control over algorithm selection and specialization in categories and domains.
We will find that active libraries simultaneously increase the expressivity of the
underlying language and the performance of software using those libraries
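The iterator-selection idea can be conveyed by analogy. The sketch below, in Python rather than C++, selects a traversal strategy from the capabilities of the sequence it is handed, much as STL algorithms dispatch on iterator categories; the function name and data are illustrative assumptions, not the dissertation's API.

```python
# Hedged analogy of iterator selection: a generic "nth element"
# routine uses O(1) random access when the argument supports it and
# falls back to linear traversal otherwise. In the STL this decision
# is made at compile time via iterator-category tag dispatch.

from collections.abc import Sequence

def nth(items, n):
    if isinstance(items, Sequence):   # random access available: one jump
        return items[n]
    it = iter(items)                  # forward-only: take n + 1 steps
    for _ in range(n):
        next(it)
    return next(it)

print(nth([10, 20, 30], 2))                    # list: random-access path
print(nth((x * 10 for x in range(1, 4)), 2))   # generator: traversal path
```

Both calls return the same answer; only the traversal strategy differs, which is exactly the property iterator selection exploits for performance.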
A general state-based temporal pattern recognition
Time-series and state-sequences are ubiquitous patterns in temporal logic and are widely used to represent temporal data in data mining. Generally speaking, there are three known choices for the time primitive: points, intervals, or both points and intervals. In this thesis, a formal characterization of time-series and state-sequences is presented for both complete and incomplete situations, where a state-sequence is defined as a list of sequential data validated on the corresponding time-series. In addition, subsequence matching is addressed to associate state-sequences, where both non-temporal aspects and rich temporal aspects, including temporal order, temporal duration, and temporal gaps, must be taken into account.
Firstly, based on typed point-based time-elements, a formal characterization of time-series and state-sequences is introduced for both complete and incomplete situations, where a state-sequence is defined as a list of sequential data validated on the corresponding time-series. A time-series is formalized as a tetrad (T, R, Tdur, Tgap), denoting, respectively: the temporal order of the time-elements; the temporal relationship between time-elements; the temporal duration of each time-element; and the temporal gap between each adjacent pair of time-elements.
Secondly, benefiting from this formal characterization, a general similarity measurement (GSM) that takes into account both non-temporal and rich temporal information, including temporal order as well as temporal duration and temporal gap, is introduced for subsequence matching. This measurement is general enough to subsume most of the popular existing measurements as special cases. In particular, a new conception of temporal common subsequence is proposed, and a new LCS-based algorithm named Optimal Temporal Common Subsequence (OTCS), which takes rich temporal information into account, is designed. The experimental results on 6 benchmark datasets demonstrate the effectiveness and robustness of GSM and its new special case OTCS. Compared with binary-value distance measurements, GSM can distinguish between the distances caused by different states in the same operation; compared with real-penalty distance measurements, it can filter out the noise that may push the similarity to abnormal levels.
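To make the LCS-based idea concrete, here is a hedged sketch of a temporal common-subsequence computation: two timestamped states match only when their state labels agree and their durations differ by at most a tolerance. The matching rule, names, and data below are illustrative assumptions, not the exact OTCS definition from the thesis.

```python
# Hedged sketch of an LCS variant that respects one piece of rich
# temporal information (duration). States match only if labels agree
# and durations are within dur_tol of each other.

def temporal_lcs(seq_a, seq_b, dur_tol=1.0):
    """seq_a, seq_b: lists of (state, duration) pairs.
    Returns the length of the longest common subsequence under the
    state-equal, duration-close matching rule."""
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            (sa, da), (sb, db) = seq_a[i - 1], seq_b[j - 1]
            if sa == sb and abs(da - db) <= dur_tol:
                dp[i][j] = dp[i - 1][j - 1] + 1   # temporal match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

a = [("walk", 2.0), ("run", 5.0), ("rest", 3.0)]
b = [("walk", 2.5), ("jump", 1.0), ("rest", 3.2)]
print(temporal_lcs(a, b))  # "walk" and "rest" match; "run"/"jump" do not
```

A binary-value measurement would score only label equality; threading duration through the match condition is what lets a temporal measurement penalize states of the same label with very different extents.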
Finally, two case studies are investigated for temporal pattern recognition: basketball zone-defence detection and video copy detection.
In the case of basketball zone-defence detection, a computational technique for detecting zone-defence patterns from basketball videos is introduced: the Laplacian Matrix-based algorithm is extended to take into account the effects of zoom and of a single defender's translation in zone-defence graph matching, and a set of character-angle based features is proposed to describe the zone-defence graph. The experimental results show that the approach is useful in helping the coach of the defensive side check whether the players are keeping to the correct zone-defence strategy, as well as in detecting the strategy of the opponent side. It can describe the structural relationship between defender-lines in basketball zone-defence, and it performs robustly in both simulation and real-life applications, especially when disturbances exist.
In the case of video copy detection, a framework for subsequence matching is introduced: a hybrid similarity framework addressing both non-temporal and temporal relationships between state-sequences, represented by bipartite graphs. The experimental results on real-life video databases demonstrate that the proposed similarity framework is robust to the alignment of state-sequences with different numbers and values of states, and to various reorderings, including inversion and crossover.
Principal Component Analysis
This book aims to make researchers, scientists, and engineers aware of the benefits of Principal Component Analysis (PCA) in data analysis. In this book, the reader will find applications of PCA in fields such as image processing, biometrics, face recognition, and speech processing. It also covers the core concepts and the state-of-the-art methods in data analysis and feature extraction.
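The core computation behind PCA can be sketched briefly: the first principal component of a data set is the dominant eigenvector of its covariance matrix. The pure-Python sketch below finds it by power iteration on a tiny illustrative 2-D data set; the function name and data are assumptions for illustration, not an example from the book.

```python
# Hedged sketch of PCA's core step: center the data, form the sample
# covariance matrix, and extract its dominant eigenvector (the first
# principal component) by power iteration.

def first_principal_component(data, iters=200):
    n = len(data)
    dims = len(data[0])
    means = [sum(row[d] for row in data) / n for d in range(dims)]
    centered = [[row[d] - means[d] for d in range(dims)] for row in data]
    # Sample covariance matrix C = X^T X / (n - 1) on centered data X.
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1)
            for j in range(dims)] for i in range(dims)]
    v = [1.0] * dims                       # power-iteration start vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(dims)) for i in range(dims)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]          # repeatedly apply C, renormalize
    return v

# Points spread roughly along the line y = x, so the first component
# points close to the diagonal direction.
pc = first_principal_component([(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9)])
print(pc)
```

Real applications of the kind the book surveys (images, speech) use optimized linear algebra rather than explicit loops, but the computation is the same.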
Methods in Computational Biology
Modern biology is rapidly becoming a study of large sets of data. Understanding these data sets is a major challenge for most life sciences, including the medical, environmental, and bioprocess fields. Computational biology approaches are essential for leveraging this ongoing revolution in omics data. A primary goal of this Special Issue, entitled “Methods in Computational Biology”, is the communication of computational biology methods, which can extract biological design principles from complex data sets, described in enough detail to permit the reproduction of the results. This issue brings together interdisciplinary researchers such as biologists, computer scientists, engineers, and mathematicians to advance biological systems analysis. The Special Issue contains the following sections:
• Reviews of Computational Methods
• Computational Analysis of Biological Dynamics: From Molecular to Cellular to Tissue/Consortia Levels
• The Interface of Biotic and Abiotic Processes
• Processing of Large Data Sets for Enhanced Analysis
• Parameter Optimization and Measurement