19 research outputs found

A generalization of convexity


    On the Reconstruction Problem in Graph Theory

The thesis consists of three chapters. The first chapter introduces the basic notions of graph theory and defines the vertex-reconstruction and edge-reconstruction problems. The second and third chapters are devoted to the edge-reconstruction of bi-degreed graphs and bipartite graphs respectively.

A bi-degreed graph G is a graph with exactly two degrees d > δ. By elementary arguments we can assume d = δ + 1 and that there are at least two vertices of degree δ. Call a vertex of degree d a "big" vertex and one of degree δ a "small" vertex. Define a "symmetric" path of length p, S_p, to be one with both ends small vertices and all internal vertices big; define an "asymmetric" path of length p, A_p, to be one with one end a small vertex and all other vertices big. If s(G) is the minimum distance between two small vertices in G, we can show that s(G) is "independent" of G (i.e. it is edge-reconstructible), and that G has at most one nonisomorphic edge-reconstruction H. From this the concept of a "forced move", posed by Dr. Swart, follows naturally. Using the principle of forced moves (and sometimes also "forced edges", likewise posed by Dr. Swart), it is easy to derive several interesting properties: for example, G is edge-reconstructible if s(G) is even or if two S_{s(G)}'s intersect at an internal vertex. Write s for s(G). When s is odd, consider the concept of an s-n-chain, namely n S_s's following end to end. We can show first that s-3-chains and then that s-2-chains cannot exist; hence the S_s's are disjoint. Think of the S_s's as "lines" in some geometry. Define two more "distance" functions s1 and s2, where s1 represents the distance from a point to a line and s2 the distance between two "skew" lines. With the aid of the forced-move principle again, we can at last prove that every bi-degreed graph with at least four edges is edge-reconstructible.

A bipartite graph G is a graph whose vertex set V is the disjoint union of two sets V1 and V2 such that every edge joins V1 and V2. By elementary reduction we can assume G to be connected. We define special chains inductively so that a chain starts at a vertex of minimum degree and always proceeds to a neighbour of minimum degree. Special chains are the main tool for proving edge-reconstructibility. Since G is finite, special chains must "terminate" somewhere, and there are three types of termination. Let the conditions A's state that the degree sequence of a special chain is edge-reconstructible, the conditions B's that the number of special chains is edge-reconstructible (together with some more general variations), and the conditions P's that the "last vertices" of two special chains are not adjacent. We prove inductively, in an interlocked way, that all of the A's, B's and P's hold. (This is a big task.) The conditions P's can then be used to prove G's edge-reconstructibility for all three types of termination. We conclude that every bipartite graph with at least four edges is edge-reconstructible.
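As a concrete handle on the quantity s(G), here is a minimal Python sketch that computes the minimum distance between two small (minimum-degree) vertices by breadth-first search. The adjacency-list representation, function names, and the sample graph are our own illustration, not the thesis's notation.

```python
from collections import deque

def s_of_G(adj):
    """s(G): minimum distance between two vertices of minimum degree
    ("small" vertices) in a bi-degreed graph given as an adjacency list."""
    delta = min(len(ns) for ns in adj.values())
    small = {v for v, ns in adj.items() if len(ns) == delta}
    best = None
    for src in small:
        dist = {src: 0}
        q = deque([src])
        while q:                          # plain BFS from each small vertex
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    if w in small:
                        best = dist[w] if best is None else min(best, dist[w])
                    q.append(w)
    return best

# A 6-cycle 0-1-2-3-4-5-0 with chord 1-4: degrees are 2 and 3, so d = delta + 1.
G = {0: [1, 5], 1: [0, 2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 5, 1], 5: [4, 0]}
print(s_of_G(G))  # 1: the small vertices 0 and 5 are adjacent
```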

    COMPATIBILITY OF EXTENSIONS OF A COMBINATORIAL GEOMETRY

Two extensions of a geometry are compatible with each other if they have a common extension. If the given extensions are elementary, their compatibility can be described intrinsically in terms of their corresponding linear subclasses. A certain adjointness relation between an extension of a geometry and the geometry itself is also discussed. Any extension of a geometry G by a geometry F determines, and is determined by, a unique quotient bundle on G indexed by F. As a study of the compatibility among given quotients of a geometry, we look at the possibility of completing to F-bundles a family of quotients indexed by a set I of flats of F. If the indexing geometry F is free and if the set I is a Boolean subalgebra or a sublattice of F, then for any family Q(I) of quotients of a geometry G there is a canonical construction which determines its completability and at the same time produces the extremal completion if Q(I) is a partial bundle. Geometries studied in this dissertation are furnished with the weak order. Almost invariably, the Higgs lift construction, in a somewhat generalized sense, constitutes a convenient and indispensable tool in the various extremal constructions.
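For orientation, the two standard constructions the abstract invokes can be stated compactly; the following are the textbook versions (Crapo's correspondence for elementary extensions and the Higgs lift), not the generalized forms developed in the dissertation.

```latex
% An elementary (single-element) extension of a geometry G corresponds to a
% modular cut of flats: an up-closed family
%   F \in \mathcal{M},\ F \subseteq F' \implies F' \in \mathcal{M},
% closed under meets of modular pairs. The hyperplanes of a modular cut
% form a linear subclass in Tutte's sense, which is the intrinsic datum
% the abstract's compatibility criterion refers to.
%
% Higgs lift: for a quotient N \twoheadrightarrow M on the same ground set
% with r(N) - r(M) = k, the i-th Higgs lift M_i interpolates between
% M_0 = M and M_k = N, with rank function
\[
  r_{M_i}(X) \;=\; \min\bigl\{\, r_N(X),\; r_M(X) + i \,\bigr\}.
\]
```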

    Parallel programming using functional languages

It has been argued for many years that functional programs are well suited to parallel evaluation. This thesis investigates this claim from a programming perspective; that is, it investigates parallel programming using functional languages. The approach taken has been to determine the minimum amount of programming necessary to write efficient parallel programs, without the aid of clever compile-time analyses. It is argued that parallel evaluation should be expressed explicitly, by the programmer, in programs. To achieve this, a lazy functional language is extended with parallel and sequential combinators.

The mathematical nature of functional languages means that programs can be formally derived by program transformation. To date, most work on program derivation has concerned sequential programs. In this thesis Squigol, an increasingly popular functional calculus for program derivation, has been used to derive three parallel algorithms. It is shown that some aspects of Squigol are suitable for parallel program derivation, while other aspects are specifically orientated towards sequential algorithm derivation.

In order to write efficient parallel programs, parallelism must be controlled: storage usage, the number of tasks and the minimum size of tasks must all be limited. In particular, over-eager evaluation or generating excessive numbers of tasks can consume too much storage, and tasks can be too small to be worth evaluating in parallel. Several programming techniques for parallelism control were tried and compared with a run-time system heuristic for parallelism control. It was discovered that the best control was effected by a combination of run-time system and programmer control of parallelism, as sketched below.

One of the problems with parallel programming using functional languages is that non-deterministic algorithms cannot be expressed. A bag (multiset) data type is proposed to allow a limited form of non-determinism to be expressed. Bags can be given a non-deterministic parallel implementation; however, provided the operations used to combine bag elements are associative and commutative, the result of bag operations will be deterministic. The onus is on the programmer to prove this, but usually this is not difficult. Also, bags' insensitivity to ordering means that more transformations are directly applicable than if, say, lists were used instead.

It is necessary to be able to reason about and measure the performance of parallel programs; for example, algorithms which seem intuitively to be good parallel ones sometimes are not. For some higher-order functions it is possible to devise parameterised formulae describing their performance. This is done for divide-and-conquer functions, enabling constraints to be formulated which guarantee that they perform well. Pipelined parallelism is difficult to analyse, so a formal semantics for calculating the performance of pipelined programs is devised and used to analyse the performance of a pipelined Quicksort. By treating the performance semantics as a set of transformation rules, the simulation of parallel programs may be achieved by transforming programs. Some parallel programs perform poorly due to programming errors; a pragmatic method of debugging such errors is illustrated by some examples.
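The thesis's parallel and sequential combinators belong to a lazy functional language; as a rough analogue only, here is how the same granularity-control idea (never spark a task smaller than a threshold) looks in Python with concurrent.futures. The threshold value and all names are ours, not the thesis's.

```python
from concurrent.futures import ProcessPoolExecutor

THRESHOLD = 50_000  # below this, a task is too small to be worth a process

def chunk_sum(xs):
    return sum(xs)

def psum(xs, workers=4):
    """Parallel sum with programmer-controlled granularity: the list is cut
    into chunks no smaller than THRESHOLD, so we never create tasks whose
    evaluation is cheaper than the overhead of sparking them."""
    if len(xs) <= THRESHOLD:
        return chunk_sum(xs)          # sequential: parallelism not worth it
    size = max(THRESHOLD, len(xs) // workers)
    chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    print(psum(list(range(1_000_000))))  # 499999500000
```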

    Bundle methods for regularized risk minimization with applications to robust learning

Supervised learning in general, and regularized risk minimization in particular, is about solving an optimization problem jointly defined by a performance measure and a set of labeled training examples. The outcome of learning, a model, is then used mainly for predicting the labels of unlabeled examples in the testing environment. In real-world scenarios, a typical learning process often involves solving a sequence of similar problems with different parameters before a final model is identified. For learning to be successful, the final model must be produced in a timely manner, and the model should be robust to (mild) irregularities in the testing environment. The purpose of this thesis is to investigate ways to speed up the learning process and improve the robustness of the learned model. We first develop a batch convex optimization solver specialized to regularized risk minimization, based on standard bundle methods. The solver inherits two main properties of the standard bundle methods. Firstly, it is capable of solving both differentiable and non-differentiable problems, hence its implementation can be reused for different tasks with minimal modification. Secondly, the optimization is easily amenable to parallel and distributed computation settings, which makes the solver highly scalable in the number of training examples. However, unlike the standard bundle methods, the solver has no extra parameters which need careful tuning. Furthermore, we prove that the solver has a faster convergence rate. In addition, the solver is very efficient at computing an approximate regularization path and at model selection. We also present a convex risk formulation for incorporating invariances and prior knowledge into the learning problem. This formulation generalizes many existing approaches to robust learning in the setting of insufficient or noisy training examples and covariate shift. Lastly, we extend a non-convex risk formulation for binary classification to structured prediction. Empirical results show that the model obtained with this risk formulation is robust to outliers in the training examples.
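The cutting-plane idea behind bundle solvers of this kind is easy to state. The following is the generic iteration for regularized risk minimization, written in standard notation; it is the textbook scheme, not necessarily this thesis's exact variant.

```latex
% Regularized risk: J(w) = \lambda\,\Omega(w) + R_{\mathrm{emp}}(w).
% At each iterate w_i, take a subgradient a_i \in \partial R_{\mathrm{emp}}(w_i)
% and offset b_i = R_{\mathrm{emp}}(w_i) - \langle a_i, w_i \rangle, giving the
% piecewise-linear lower bound
\[
  R_t(w) \;=\; \max_{1 \le i \le t} \bigl( \langle a_i, w \rangle + b_i \bigr)
  \;\le\; R_{\mathrm{emp}}(w),
\]
% and each iteration minimizes the regularized model:
\[
  w_{t+1} \;=\; \operatorname*{argmin}_{w} \;\lambda\,\Omega(w) + R_t(w).
\]
```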

    Acta Scientiarum Mathematicarum : Tomus 44. Fasc. 3-4.


    Optimal admission policies for small star networks

In this thesis, stationary admission policies for small Symmetric Star telecommunication networks in which two types of calls request access are considered. Arrivals form independent Poisson streams on each route, the routing is fixed, and the holding times of the calls are exponentially distributed. Rewards are earned for carrying calls and future returns are discounted at a fixed rate. The operation of the network is viewed as a Markov Decision Process, and we solve the optimality equation for this network model numerically for a range of small examples using the policy improvement algorithm of Dynamic Programming. The optimal policies we study involve acceptance or rejection of traffic requests in order to maximise the Total Expected Discounted Reward. Our Star networks are in some respects the simplest networks more complex than single links in isolation, but even so only very small examples can be treated numerically. From those examples we find evidence suggesting that, despite their complexity, optimal policies have some interesting properties. Admission Price policies are also investigated in this thesis. These policies are not optimal, but they are believed to be asymptotically optimal for large networks; we investigate whether such policies are any good for small networks and suggest that they are. A reduced state-space model is also considered, in which a call on a 2-link route, once accepted, is split into two independent calls on the links involved. This greatly reduces the size of the state-space. We present properties of the optimal policies and the Admission Price policies and conclude that they are very good for the examples considered. Finally we look at Asymmetric Star networks with different numbers of circuits per link and different exponential holding times, and investigate properties of the optimal policies as well as Admission Price policies for such networks.
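To make the solution method concrete, here is a minimal policy-improvement sketch in Python for a toy version of the problem: a single link rather than a star, two call classes, uniformization into a discrete-time discounted MDP, and invented parameters throughout.

```python
import itertools
import numpy as np

# Toy admission control: a single link with C circuits and two Poisson call
# classes. All parameters are illustrative; the thesis's star networks have
# larger state spaces but the policy-improvement loop is the same.
C = 4                       # circuits on the link
lam = (1.0, 0.8)            # arrival rates of the two call types
mu = (1.0, 0.5)             # per-call completion rates
rew = (5.0, 1.0)            # reward earned on accepting a call
alpha = 0.1                 # continuous-time discount rate

states = [(i, j) for i in range(C + 1) for j in range(C + 1) if i + j <= C]
idx = {s: k for k, s in enumerate(states)}
Lam = sum(lam) + C * max(mu)        # uniformization constant
beta = Lam / (Lam + alpha)          # equivalent discrete discount factor
actions = list(itertools.product((0, 1), repeat=2))  # reject/accept per class

def step(s, a):
    """Transition distribution and expected one-step reward in state s
    under accept/reject decision a, after uniformization."""
    out, reward = [], 0.0
    for k in (0, 1):
        p_arr = lam[k] / Lam
        if a[k] and s[0] + s[1] < C:                 # accept the arrival
            out.append((p_arr, (s[0] + (k == 0), s[1] + (k == 1))))
            reward += p_arr * rew[k]
        else:                                        # reject, or blocked
            out.append((p_arr, s))
        if s[k] > 0:                                 # a class-k call completes
            out.append((s[k] * mu[k] / Lam,
                        (s[0] - (k == 0), s[1] - (k == 1))))
    out.append((1.0 - sum(p for p, _ in out), s))    # fictitious self-loop
    return out, reward

def evaluate(policy):
    """Policy evaluation: solve V = r + beta * P V exactly."""
    n = len(states)
    P, r = np.zeros((n, n)), np.zeros(n)
    for s in states:
        trans, r[idx[s]] = step(s, policy[s])
        for p, nxt in trans:
            P[idx[s], idx[nxt]] += p
    return np.linalg.solve(np.eye(n) - beta * P, r)

policy = {s: (1, 1) for s in states}                 # start: accept everything
while True:
    V = evaluate(policy)
    improved = {
        s: max(actions, key=lambda a: step(s, a)[1]
               + beta * sum(p * V[idx[t]] for p, t in step(s, a)[0]))
        for s in states
    }
    if improved == policy:                           # greedy policy is stable
        break
    policy = improved

for s in states:
    print(s, "-> accept:", policy[s])
```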

    A mathematical programming approach to stochastic and dynamic optimization problems

Includes bibliographical references (p. 46-50). Supported by a Presidential Young Investigator Award (DDM-9158118) and by matching funds from Draper Laboratory. Dimitris Bertsimas.

    Geometric modeling and analysis of dynamic resource allocation mechanisms

Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 159-163).

The major contribution of this thesis is the investigation of a specific resource allocation optimization problem whose solution has both practical application and theoretical interest. It is presented as a specific case of a more general modeling framework we put forth. The underlying question asks how to partition a given resource into a fixed number of parts such that the elements of the resulting partition can be scheduled among a set of user requests to minimize the worst-case difference between the schedule and the requests. This particular allocation problem has not been studied before. The general problem is difficult in part because the evaluation of the objective function is a difficult task by itself. We present a novel algorithm for its exact solution in a constrained setting and a discussion of the unconstrained setting, followed by a number of practical applications of these solutions. The solution to the constrained optimization problem is shown to provide sizable benefits in allocation efficiency in a number of contexts at a minimal implementation cost. The specific contexts we look at include communication over a shared channel, allocation of many small channels to a few users, and package delivery from a central office to a number of satellite offices.

We also present a set of new fairness results for auction-based allocation mechanisms and show how these mechanisms also fall within our modeling framework. Specifically, we look at using auctions as mechanisms to allocate an indivisible shared resource fairly among a number of users. We establish that a straightforward approach, as has been tried in the literature, does not guarantee a fair allocation over a long time scale, and we provide a modified approach that does guarantee a fair allocation. We also show that, by allowing users to strategize when bidding on the resource, we can avoid the problem of unfairness in some simple cases. This analysis has not been seen in the existing literature.

Finally, an analysis of the deterministic and stochastic stability of our class of models is presented that applies to a large subset of the models within our framework. The deterministic stability results establish the ultimate boundedness of the lag of deterministically stabilizable models in our framework under a wide variety of quantizer-based scheduling rules; this variety of available rules can be used to further control the behavior of the lag of a stable mechanism. We also discuss the application of existing stochastic stability theory to a large subset of the stochastic models in our framework. This is a straightforward usage of existing stability results based on verifying that a stochastic drift condition is satisfied.

by Matthew Secor. Ph.D.
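The "lag" that the stability results bound has a simple discrete analogue. The following toy scheduler, with invented rates and part size, grants one fixed-size part of a resource each slot to the user whose cumulative service most lags its requested share; it is only an analogue of the thesis's quantizer-based scheduling rules, not the thesis's model.

```python
# Largest-lag-first scheduling of a partitioned resource: each slot, one
# part is granted to the user with the largest deficit against its
# requested share. The worst-case lag stays bounded under this rule.
rates = [0.5, 0.3, 0.2]     # requested shares of the resource (sum to 1)
part = 1.0                  # size of each part of the partition
served = [0.0] * len(rates)

worst_lag = 0.0
for t in range(1, 10_001):
    # lag = what a user should have received by slot t minus what it got
    lags = [r * part * t - s for r, s in zip(rates, served)]
    winner = max(range(len(rates)), key=lambda i: lags[i])
    served[winner] += part
    worst_lag = max(worst_lag, max(lags))

print("final service:", served)
print("worst-case lag observed:", worst_lag)
```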