14 research outputs found

    Algorithms for lattice problems with respect to general norms

    Lattices are classical objects in the geometry of numbers. A lattice is a discrete subgroup of the n-dimensional vector space over the real numbers. In this thesis, we study the complexity of four classical problems from the geometry of numbers: the shortest vector problem (SVP), the successive minima problem (SMP), the shortest independent vectors problem (SIVP), and the closest vector problem (CVP). These problems can be defined for any norm on the vector space. The focus of this thesis is the algorithmic complexity of these four lattice problems with respect to arbitrary, in particular non-Euclidean, norms. Extending and generalizing results of Ajtai, Kumar, and Sivakumar for SVP and CVP ([AKS01], [AKS02]), we present probabilistic single exponential time algorithms for all four lattice problems using single exponential space. These algorithms solve SVP and restricted versions of the other problems optimally, and they solve the general versions of SMP, SIVP, and CVP almost optimally. To obtain algorithms that solve SMP, SIVP, and CVP exactly with respect to arbitrary norms, we consider CVP in detail, since there exist polynomial time reductions from SMP and SIVP to CVP that work for any norm, see [Micc08]. We describe a deterministic algorithm for CVP that solves the problem exactly and requires only polynomial space; it is based on Lenstra's technique for integer programming ([Len83]). Date of defense: 28.10.2011. Paderborn, Univ., Diss., 2011
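
    As an illustration only (not part of the abstract, and not the single exponential time algorithms developed in the thesis): the fact that the lattice problems above can be posed with respect to any norm can be made concrete with a toy exhaustive search for SVP, sketched in Python below. The example basis, the coefficient bound, and the choice of norms are assumptions made purely for the demonstration.

        import itertools
        import numpy as np

        def shortest_vector_bruteforce(basis, norm, bound=3):
            """Toy exhaustive search for a shortest non-zero lattice vector.

            basis: rows are the basis vectors b_1, ..., b_n of the lattice.
            norm:  any vector norm, e.g. the l1, l2 or l-infinity norm.
            bound: coefficients are searched in {-bound, ..., bound}; fine
                   for a demo, but in general the optimal coefficients need
                   not lie in such a small box.
            """
            n = basis.shape[0]
            best_vec, best_len = None, float("inf")
            for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
                if not any(coeffs):
                    continue  # skip the zero vector
                v = np.array(coeffs) @ basis
                length = norm(v)
                if length < best_len:
                    best_vec, best_len = v, length
            return best_vec, best_len

        B = np.array([[201, 37], [1648, 297]])  # arbitrary 2-dimensional example basis
        for name, p in [("l1", 1), ("l2", 2), ("l-inf", np.inf)]:
            v, length = shortest_vector_bruteforce(B, lambda x, p=p: np.linalg.norm(x, p))
            print(name, v, length)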

    On the hardness of the shortest vector problem

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 77-84). An n-dimensional lattice is the set of all integral linear combinations of n linearly independent vectors in R^m. One of the most studied algorithmic problems on lattices is the shortest vector problem (SVP): given a lattice, find the shortest non-zero vector in it. We prove that the shortest vector problem is NP-hard (for randomized reductions) to approximate within some constant factor greater than 1 in any ℓ_p norm (p ≥ 1). In particular, we prove the NP-hardness of approximating SVP in the Euclidean norm ℓ_2 within any factor less than √2. The same NP-hardness results hold for deterministic non-uniform reductions. A deterministic uniform reduction is also given under a reasonable number theoretic conjecture concerning the distribution of smooth numbers. In proving the NP-hardness of SVP we develop a number of technical tools that might be of independent interest. In particular, a lattice packing is constructed with the property that the number of unit spheres contained in an n-dimensional ball of radius greater than 1 + √2 grows exponentially in n, and a new constructive version of Sauer's lemma (a combinatorial result somehow related to the notion of VC-dimension) is presented, considerably simplifying all previously known constructions. By Daniele Micciancio. Ph.D.
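
    For reference, the approximation problem whose hardness the abstract describes is usually stated as a promise problem; a standard formulation (not quoted from the thesis) is, in LaTeX notation:

        \mathrm{GapSVP}_{\gamma}:\ \text{given } (B, d),\ \text{accept if } \lambda_1(\mathcal{L}(B)) \le d,\ \text{reject if } \lambda_1(\mathcal{L}(B)) > \gamma d,
        \qquad \text{where } \lambda_1(\mathcal{L}(B)) := \min_{v \in \mathcal{L}(B) \setminus \{0\}} \|v\|_p .

    In these terms, the abstract states NP-hardness of GapSVP_gamma under randomized reductions for some constant gamma > 1 in every ℓ_p norm, and, in the Euclidean case, for every gamma < √2.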

    Bibliographie

    Decoding complexity and trellis structure of lattices

    Conflicting Objectives in Decisions

    This book deals with quantitative approaches to making decisions when conflicting objectives are present. This problem is central to many applications of decision analysis, policy analysis, operational research, etc. in a wide range of fields, for example business, economics, engineering, psychology, and planning. The book surveys different approaches to the same problem area, and each approach is discussed in considerable detail, so that the coverage of the book is both broad and deep. The problem of conflicting objectives is of paramount importance, both in planned and in market economies, and this book represents a cross-cultural mixture of approaches from many countries to the same class of problems.

    Variational models and numerical algorithms for selective image segmentation

    This thesis deals with the numerical solution of nonlinear partial differential equations and their application in image processing. The differential equations considered here arise from the minimization of variational models for image restoration (such as denoising) and object recognition (such as segmentation). Image denoising aims to restore a digital image that has been contaminated by noise, while segmentation is a fundamental task in image analysis responsible for partitioning an image into sub-regions, or representing the image as something more meaningful and easier to analyze, such as extracting one or more specific objects of interest based on relevant information or a desired feature. Although there has been a lot of research on the restoration of images, the performance of such methods is still poor, especially when the images have a high level of noise or when the algorithms are slow. The segmentation task is an even more challenging problem because of the difficulty of delineating, even manually, the contours of the objects of interest; the problems are often due to low contrast, fuzzy contours, intensities similar to those of adjacent objects, or objects to be extracted having no real contours. The first objective of this work is to develop image restoration and segmentation methods that provide better denoising and fast, robust segmentation. The contribution presented here is the development of a restarted homotopy analysis method designed to be easily adaptable to various types of image processing problems. As a second research objective, we propose a framework for selective image segmentation which partitions an image based on information known in advance about the object or objects to be extracted (for example, the left kidney is the target to be extracted in a CT image and the prior knowledge is a few markers placed in this object of interest). This kind of segmentation appears especially in medical applications, where experts usually estimate and manually draw the boundaries of the organs based on their experience. Our aim is to introduce automatic segmentation of the object of interest as a contribution not only to the way doctors and surgeons diagnose and operate but to other fields as well. The proposed methods are successful in segmenting different objects and perform well on different types of images, not only two-dimensional but three-dimensional images as well.
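
    The abstract describes variational restoration only in general terms; purely to make the idea "restore an image by minimizing an energy" concrete, here is a minimal Python sketch of gradient descent on a smoothed total-variation (ROF-type) denoising energy. The specific model, parameters, and step size are assumptions chosen for the illustration; they are not the restarted homotopy analysis method or the selective segmentation models developed in the thesis.

        import numpy as np

        def tv_denoise(f, lam=0.2, tau=0.1, eps=0.1, iters=200):
            """Gradient descent on E(u) = sum |grad u|_eps + (lam/2) * sum (u - f)^2,
            a smoothed total-variation (ROF-type) denoising energy."""
            u = f.copy()
            for _ in range(iters):
                ux, uy = np.gradient(u)
                mag = np.sqrt(ux**2 + uy**2 + eps**2)
                # divergence of the normalised gradient field (curvature term)
                div = np.gradient(ux / mag, axis=0) + np.gradient(uy / mag, axis=1)
                u = u - tau * (lam * (u - f) - div)
            return u

        # tiny synthetic test: a noisy step edge
        rng = np.random.default_rng(0)
        clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
        noisy = clean + 0.2 * rng.standard_normal(clean.shape)
        print("mean abs error, noisy   :", np.abs(noisy - clean).mean())
        print("mean abs error, denoised:", np.abs(tv_denoise(noisy) - clean).mean())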

    Proceedings of the Workshop on Change of Representation and Problem Reformulation

    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains. We have greatly expanded from the origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.

    Acta Scientiarum Mathematicarum : Tomus 55. Fasc. 1-2.

    Parallelism and the software-hardware interface in embedded systems

    This thesis by publications addresses issues in the architecture and microarchitecture of next generation, high performance streaming Systems-on-Chip through quantifying the most important forms of parallelism in current and emerging embedded system workloads. The work consists of three major research tracks, relating to data level parallelism, thread level parallelism, and the software-hardware interface, which together reflect the research interests of the author as they have formed over the last nine years. Published works confirm that parallelism at the data level is widely accepted as the most important performance lever for the efficient execution of embedded media and telecom applications and has been exploited via a number of approaches, the most efficient being vector/SIMD architectures. A further, complementary and substantial form of parallelism exists at the thread level, but this has not been researched to the same extent in the context of embedded workloads. For the efficient execution of such applications, exploitation of both forms of parallelism is of paramount importance. This calls for a new architectural approach to the software-hardware interface, as its rigidity, manifested in all desktop-based and the majority of embedded CPUs, directly affects the performance of vectorized, threaded codes. The author advocates a holistic, mature approach where parallelism is extracted via automatic means while, at the same time, the traditionally rigid hardware-software interface is optimized to match the temporal and spatial behaviour of the embedded workload. This ultimate goal calls for the precise study of these forms of parallelism for a number of applications executing on theoretical models such as instruction set simulators and parallel RAM machines, as well as the development of highly parametric microarchitectural frameworks to encapsulate that functionality. EThOS - Electronic Theses Online Service, GB, United Kingdom
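
    As a language-level analogy only (the thesis targets vector/SIMD units and hardware threads, not Python), the sketch below contrasts the two forms of parallelism discussed above: a data-parallel kernel, in which one operation is applied to a whole block of samples at once, and thread-level parallelism, in which independent blocks of a streaming workload are processed concurrently. The kernel, function names, and sizes are invented for the illustration.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def scale_and_clip(block, gain=1.5, limit=1.0):
            # Data-level parallelism: one vectorised operation touches every
            # sample of the block at once (conceptually what a SIMD unit does).
            return np.minimum(block * gain, limit)

        def process_stream(frames, workers=4):
            # Thread-level parallelism: independent frames are handled by
            # concurrent worker threads (conceptually, hardware threads).
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(scale_and_clip, frames))

        frames = [np.random.rand(4096) for _ in range(8)]  # toy streaming workload
        out = process_stream(frames)
        print(len(out), out[0].shape)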