
    Galois correspondence for counting quantifiers

    We introduce a new type of closure operator on the set of relations, max-implementation, and its weaker analogue, max-quantification. We then show that approximation-preserving reductions between counting constraint satisfaction problems (#CSPs) are preserved by these two types of closure operators. Together with some previous results, this means that the approximation complexity of counting CSPs is determined by partial clones of relations that are additionally closed under these new types of closure operators. Galois correspondences of various kinds have proved quite helpful in the study of the complexity of the CSP. While we were unable to identify a Galois correspondence for partial clones closed under max-implementation and max-quantification, we obtain such results for a slightly different type of closure operator, k-existential quantification. Quantifiers of this type are known as counting quantifiers in model theory and are often used to enhance first-order logic languages. We characterize partial clones of relations closed under k-existential quantification as sets of relations invariant under a set of partial functions that satisfy the condition of k-subset surjectivity. Finally, we give a description of Boolean max-co-clones, that is, sets of relations on {0,1} closed under max-implementations. Comment: 28 pages, 2 figures
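To make the notion of a counting quantifier concrete, the following is a minimal sketch (the helper name and the set-of-tuples encoding are assumptions, not from the paper) of applying an "there exist at least k values" quantifier to one coordinate of a finite relation:

```python
# Hypothetical helper: apply the counting quantifier "exists at least k"
# to coordinate `coord` of a finite relation, encoded as a set of tuples
# over a finite domain.

def k_existential_quantify(relation, domain, coord, k):
    """Project out `coord`, keeping a reduced tuple iff at least k
    domain values extend it back into the relation."""
    result = set()
    projected = {t[:coord] + t[coord + 1:] for t in relation}
    for t in projected:
        witnesses = sum(
            1 for d in domain
            if t[:coord] + (d,) + t[coord:] in relation
        )
        if witnesses >= k:
            result.add(t)
    return result

# Boolean inequality relation x != y on {0, 1}: each x has exactly one
# witness y, so every tuple survives "exists >= 1" but none survives
# "exists >= 2".
neq = {(0, 1), (1, 0)}
print(sorted(k_existential_quantify(neq, [0, 1], 1, 1)))  # [(0,), (1,)]
print(sorted(k_existential_quantify(neq, [0, 1], 1, 2)))  # []
```

For k = 1 this reduces to ordinary existential quantification, which is why the k-existential operators generalise the classical clone-theoretic setting.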

    On a New Type of Information Processing for Efficient Management of Complex Systems

    It is a challenge to manage complex systems efficiently without confronting NP-hard problems. To address the situation we suggest to use self-organization processes of prime integer relations for information processing. Self-organization processes of prime integer relations define correlation structures of a complex system and can be equivalently represented by transformations of two-dimensional geometrical patterns determining the dynamics of the system and revealing its structural complexity. Computational experiments raise the possibility of an optimality condition of complex systems presenting the structural complexity of a system as a key to its optimization. From this perspective the optimization of a system could be all about the control of the structural complexity of the system to make it consistent with the structural complexity of the problem. The experiments also indicate that the performance of a complex system may behave as a concave function of the structural complexity. Therefore, once the structural complexity could be controlled as a single entity, the optimization of a complex system would be potentially reduced to a one-dimensional concave optimization irrespective of the number of variables involved its description. This might open a way to a new type of information processing for efficient management of complex systems.Comment: 5 pages, 2 figures, to be presented at the International Conference on Complex Systems, Boston, October 28 - November 2, 200
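If performance really were a concave function of a single structural-complexity parameter, its maximum could be located by one-dimensional search regardless of the system's dimensionality. A minimal sketch, with a purely hypothetical stand-in objective:

```python
# Sketch: ternary search maximizes a concave function of one variable.
# The `performance` objective below is an illustrative stand-in, not a
# model from the paper.

def ternary_search_max(f, lo, hi, tol=1e-9):
    """Locate the maximizer of a concave function f on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        # Concavity guarantees the maximum is not in the discarded third.
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

# Toy concave objective peaking at structural complexity = 4.0.
performance = lambda c: -(c - 4.0) ** 2
best = ternary_search_max(performance, 0.0, 10.0)
print(round(best, 6))  # 4.0
```

The point of the abstract's observation is that such a search touches only the single complexity parameter, never the (possibly vast) underlying variable space.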

    Complexity in Translation. An English-Norwegian Study of Two Text Types

    The present study discusses two primary research questions. Firstly, we have tried to investigate to what extent it is possible to compute the actual translation relation found in a selection of English-Norwegian parallel texts. By this we understand the generation of translations with no human intervention, and we assume an approach to machine translation (MT) based on linguistic knowledge. In order to answer this question, a measurement of translational complexity is applied to the parallel texts. Secondly, we have tried to find out if there is a difference in the degree of translational complexity between the two text types, law and fiction, included in the empirical material. The study is a strictly product-oriented approach to complexity in translation: it disregards aspects related to translation methods, and to the cognitive processes behind translation. What we have analysed are intersubjectively available relations between source texts and existing translations. The degree of translational complexity in a given translation task is determined by the types and amounts of information needed to solve it, as well as by the accessibility of these information sources, and the effort required when they are processed. For the purpose of measuring the complexity of the relation between a source text unit and its target correspondent, we apply a set of four correspondence types, organised in a hierarchy reflecting divisions between different linguistic levels, along with a gradual increase in the degree of translational complexity. In type 1, the least complex type, the corresponding strings are pragmatically, semantically, and syntactically equivalent, down to the level of the sequence of word forms. In type 2, source and target string are pragmatically and semantically equivalent, and equivalent with respect to syntactic functions, but there is at least one mismatch in the sequence of constituents or in the use of grammatical form words. 
Within type 3, source and target string are pragmatically and semantically equivalent, but there is at least one structural difference violating syntactic functional equivalence between the strings. In type 4, there is at least one linguistically non-predictable, semantic discrepancy between source and target string. The correspondence type hierarchy, ranging from 1 to 4, is characterised by an increase with respect to linguistic divergence between source and target string, an increase in the need for information and in the amount of effort required to translate, and a decrease in the extent to which there exist implications between relations of source-target equivalence at different linguistic levels. We assume that there is a translational relation between the inventories of simple and complex linguistic signs in two languages which is predictable, and hence computable, from information about source and target language systems, and about how the systems correspond. Thus, computable translations are predictable from the linguistic information coded in the source text, together with given, general information about the two languages and their interrelations. Further, we regard non-computable translations to be correspondences where it is not possible to predict the target expression from the information encoded in the source expression, together with given, general information about SL and TL and their interrelations. Non-computable translations require access to additional information sources, such as various kinds of general or task-specific extra-linguistic information, or task-specific linguistic information from the context surrounding the source expression. In our approach, correspondences of types 1–3 constitute the domain of linguistically predictable, or computable, translations, whereas type 4 correspondences belong to the non-predictable, or non-computable, domain, where semantic equivalence is not fulfilled. 
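The four-type hierarchy above can be read as a small decision procedure. The following is a sketch under the assumption that an analyst's judgments are encoded as boolean flags; the function names and encoding are illustrative, not part of the study's method:

```python
# Sketch: deciding the correspondence type (1-4) of a string pair from
# equivalence judgments at each linguistic level, mirroring the
# hierarchy described in the abstract. The boolean-flag encoding is an
# assumption for illustration.

def correspondence_type(semantic_equiv, syntactic_functions_equiv,
                        word_form_sequence_equiv):
    """Return the correspondence type in the complexity hierarchy."""
    if not semantic_equiv:
        return 4  # linguistically non-predictable semantic discrepancy
    if not syntactic_functions_equiv:
        return 3  # structural difference violating functional equivalence
    if not word_form_sequence_equiv:
        return 2  # constituent-order or grammatical form-word mismatch
    return 1      # equivalent down to the sequence of word forms

def computable(corr_type):
    """Types 1-3 form the linguistically predictable (computable) domain."""
    return corr_type <= 3

# A pair equivalent at every level except the word-form sequence is type 2:
print(correspondence_type(True, True, False))  # 2
```

Classifying every extracted string pair this way is what turns the hierarchy into the complexity measurement used later in the study.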
The empirical method involves extracting translationally corresponding strings from parallel texts, and assigning one of the types defined by the correspondence hierarchy to each recorded string pair. The analysis is applied to running text, omitting no parts of it. Thus, the distribution of the four types of translational correspondence within a set of data provides a measurement of the degree of translational complexity in the parallel texts that the data are extracted from. The complexity measurements of this study are meant to show to what extent we assume that an ideal, rule-based MT system could simulate the given translations, and for this reason the finite clause is chosen as the primary unit of analysis. The work of extracting and classifying translational correspondences is done manually as it requires a bilingually competent human analyst. In the present study, the recorded data cover about 68 000 words. They are compiled from six different text pairs: two of them are law texts, and the remaining four are fiction texts. Comparable amounts of text are included for each text type, and both directions of translation are covered. Since the scope of the investigation is limited, we cannot, on the basis of our analysis, generalise about the degree of translational complexity in the chosen text types and in the language pair English-Norwegian. Calculated in terms of string lengths, the complexity measurement across the entire collection of data shows that as little as 44.8% of all recorded string pairs are classified as computable translational correspondences, i.e. as type 1, 2, or 3, and non-computable string pairs of type 4 constitute a majority (55.2%) of the compiled data. On average, the proportion of computable correspondences is 50.2% in the law data, and 39.6% in fiction. 
In relation to the question whether it would be fruitful to apply automatic translation to the selected texts, we have considered the workload potentially involved in correcting machine output, and in this respect the difference in restrictedness between the two text types is relevant. Within the non-computable correspondences, the frequency of cases exhibiting only one minimal semantic deviation between source and target string is considerably higher among the data extracted from the law texts than among those recorded from fiction. For this reason we tentatively regard the investigated pairs of law texts as representing a text type where tools for automatic translation may be helpful, if the effort required by post-editing is smaller than that of manual translation. This is possibly the case in one of the law text pairs, where 60.9% of the data involve computable translation tasks. In the other pair of law texts the corresponding figure is merely 38.8%, and the potential helpfulness of automatisation would be even more strongly determined by the edit cost. That text might be a task for computer-aided translation, rather than for MT. As regards the investigated fiction texts, it is our view that post-editing of automatically generated translations would be laborious and not cost effective, even in the case of one text pair showing a relatively low degree of translational complexity. Hence, we concur with the common view that the translation of fiction is not a task for MT.

    Why are all dualities conformal? Theory and practical consequences

    We relate duality mappings to the "Babbage equation" F(F(z)) = z, with F a map linking weak- to strong-coupling theories. Under fairly general conditions, F may only be a specific conformal transformation of the fractional linear type. This deep general result has enormous practical consequences. For example, one can establish that weak- and strong-coupling series expansions of arbitrarily large finite-size systems are trivially related, i.e., after generating one of those series the other is automatically determined through a set of linear constraints between the series coefficients. This latter relation partially solves or, equivalently, localizes the computational complexity of evaluating the series expansion to a simple fraction of those coefficients. As a bonus, those relations also encode non-trivial equalities between different geometric constructions in general dimensions, and connect derived coefficients to polytope volumes. We illustrate our findings by examining various models including, but not limited to, ferromagnetic and spin-glass Ising, and Ising gauge type theories on hypercubic lattices in 1 < D < 9 dimensions. Comment: 41 pages (18 main text + 23 supplementary information), 2 figures, 8 tables; to appear in Nuclear Physics
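The involution property of fractional linear maps is easy to verify directly. A minimal numerical sketch (coefficients chosen arbitrarily for illustration):

```python
# Sketch: a fractional linear map of the form F(z) = (a z + b) / (c z - a)
# satisfies the Babbage equation F(F(z)) = z whenever a^2 + b c != 0,
# since F(F(z)) = z (a^2 + b c) / (a^2 + b c). Coefficients below are
# arbitrary illustrative choices.

def make_involution(a, b, c):
    assert a * a + b * c != 0, "degenerate map"
    return lambda z: (a * z + b) / (c * z - a)

F = make_involution(a=2.0, b=3.0, c=5.0)
for z in (0.1, 1.7, -4.2):
    assert abs(F(F(z)) - z) < 1e-9
print("F(F(z)) = z holds")
```

This is the algebraic structure behind the abstract's claim that a duality map relating a weak-coupling theory to a strong-coupling one, applied twice, must return the original theory.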

    The Complexity of Datalog on Linear Orders

    We study the program complexity of datalog on both finite and infinite linear orders. Our main result states that on all linear orders with at least two elements, the nonemptiness problem for datalog is EXPTIME-complete. While containment of the nonemptiness problem in EXPTIME is known for finite linear orders, and actually for arbitrary finite structures, it is not obvious for infinite linear orders. This sharply contrasts with the situation on other infinite structures; for example, the datalog nonemptiness problem on an infinite successor structure is undecidable. We extend our upper bound results to infinite linear orders with constants. As an application, we show that the datalog nonemptiness problem on Allen's interval algebra is EXPTIME-complete. Comment: 21 pages
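For readers unfamiliar with the nonemptiness problem, the following is a minimal sketch of naive bottom-up datalog evaluation over a tiny finite linear order {0 < 1 < 2}; the program, predicate names, and evaluator are illustrative only and unrelated to the paper's constructions:

```python
# Sketch: naive fixpoint evaluation of a datalog program; nonemptiness
# asks whether the goal predicate derives at least one fact. Program
# and predicate names are illustrative.

def naive_datalog(facts, rules):
    """rules: list of (head, body); atoms are tuples whose first entry
    is the predicate name and whose remaining entries are variables
    (strings) or constants. Returns the set of all derivable facts."""
    derived = set(facts)
    while True:
        new = set()
        for head, body in rules:
            for subst in matches(body, derived, {}):
                fact = (head[0],) + tuple(subst[v] for v in head[1:])
                if fact not in derived:
                    new.add(fact)
        if not new:
            return derived
        derived |= new

def matches(body, facts, subst):
    """Yield every substitution making all body atoms hold in `facts`."""
    if not body:
        yield subst
        return
    pred, *args = body[0]
    for fact in facts:
        if fact[0] != pred or len(fact) != len(body[0]):
            continue
        s = dict(subst)
        if all(s.setdefault(v, c) == c for v, c in zip(args, fact[1:])):
            yield from matches(body[1:], facts, s)

# Transitive closure of < on {0, 1, 2}; nonemptiness of `reach` holds.
facts = {("lt", 0, 1), ("lt", 1, 2)}
rules = [
    (("reach", "X", "Y"), [("lt", "X", "Y")]),
    (("reach", "X", "Z"), [("reach", "X", "Y"), ("lt", "Y", "Z")]),
]
print(("reach", 0, 2) in naive_datalog(facts, rules))  # True
```

On a finite structure this fixpoint computation terminates, which is the easy half of the containment result; the paper's contribution is handling infinite linear orders, where no such direct evaluation exists.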

    Topological model for machining of parts with complex shapes

    Complex shapes are widely used to design products in several industries such as aeronautics, automotive and domestic appliances. The many variations of their curvatures and orientations generate difficulties during their manufacturing, or during the machining of dies used in moulding, injection and forging. Analysis of several parts highlights two levels of difficulty among three types of shapes: prismatic parts with simple geometrical shapes, aeronautic structure parts composed of several shallow pockets, and forging dies composed of several deep cavities which often contain protrusions. This paper mainly concerns High Speed Machining (HSM) of these dies, which represent the highest complexity level because of the geometry and topology of their shapes. Five-axis HSM is generally required for such complex shaped parts, but three-axis machining can be sufficient for dies. Evolutions in HSM CAM software and machine tools have led to a significant increase in machining preparation time. This increase is largely induced by the analysis stages of the CAD model, which are required for a wise choice of cutting tools and machining strategies. Assistance modules for the identification of machining features of prismatic parts in CAD models are widely implemented in CAM software. Despite the latest CAM evolutions, such modules remain undeveloped for aeronautical structure parts and forging dies. The development of new CAM modules for the extraction of relevant machining areas, as well as the definition of the topological relations between these areas, should make it possible for the machining assistant to reduce the machining preparation time. In this paper, a model developed for the description of the topology of complex shaped parts is presented. It is based on machining areas extracted from CAD models of the parts for the construction of geometrical features. 
As the topology is described in order to assist the machining assistant during machining process generation, the difficulties associated with the tasks they carry out are analyzed first. The topological model presented afterwards is based on the extracted basic geometrical features. Topological relations, which form the framework of the model, are defined between the basic geometrical features, which are then gathered into macro-features. The approach used for the identification of these macro-features is also presented in this paper. A detailed application to the construction of the topological model of forging dies is presented in the last part of the paper.
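One common way to realise such a model is as a graph: extracted machining areas become nodes, topological (adjacency) relations become edges, and macro-features emerge as connected groups. The following is a minimal sketch under that assumption; the area names and the grouping-by-connectivity rule are illustrative, not the paper's actual identification approach:

```python
# Sketch: machining areas as graph nodes, adjacency relations as edges;
# macro-features recovered as connected components. Names and the
# grouping rule are illustrative assumptions.

from collections import defaultdict

def macro_features(areas, adjacent):
    """Group basic geometrical features (areas) into macro-features
    using the topological relations between them."""
    graph = defaultdict(set)
    for a, b in adjacent:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for area in areas:
        if area in seen:
            continue
        # Depth-first traversal collects one connected component.
        stack, component = [area], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node])
        seen |= component
        groups.append(component)
    return groups

areas = ["cavity_1", "flank_1", "protrusion_1", "pocket_2"]
adjacent = [("cavity_1", "flank_1"), ("flank_1", "protrusion_1")]
print([sorted(g) for g in macro_features(areas, adjacent)])
# [['cavity_1', 'flank_1', 'protrusion_1'], ['pocket_2']]
```

A structure like this lets a CAM module reason about which areas must be machined together, which is the role the paper assigns to its topological relations.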