232 research outputs found

    A vision of re-distributed manufacturing for the UK’s consumer goods industry

    The linear production of consumer goods is characterised by mass manufacture, multinational enterprises and globally dispersed supply chains. Redistributed manufacturing (RDM) is an emerging topic that seeks to enable a transition away from this linear model of production and consumption by taking advantage of new technologies. This paper explores the challenges, opportunities and further research questions involved in setting a vision of RDM for the UK's consumer goods industry. To set this vision, a literature survey was conducted, followed by a qualitative enquiry in which the PESTLE aspects of RDM were analysed. The analysis was interpreted through a roadmap, from which four RDM characteristics (customisation, use of digital technologies, local production and the development of new business models) were identified. These characteristics helped to set the future vision of RDM in the UK's consumer goods sector.

    On the speed of constraint propagation and the time complexity of arc consistency testing

    Establishing arc consistency on two relational structures is one of the most popular heuristics for the constraint satisfaction problem. We aim at determining the time complexity of arc consistency testing. The input structures $G$ and $H$ can be assumed to be connected colored graphs, as the general problem reduces to this particular case. We first observe the upper bound $O(e(G)v(H)+v(G)e(H))$, which implies the bound $O(e(G)e(H))$ in terms of the number of edges and the bound $O((v(G)+v(H))^3)$ in terms of the number of vertices. We then show that both bounds are tight up to a constant factor as long as an arc consistency algorithm is based on constraint propagation (like any algorithm currently known). Our argument for the lower bounds is based on examples of slow constraint propagation. We measure the speed of constraint propagation observed on a pair $G,H$ by the size of a proof, in a natural combinatorial proof system, that Spoiler wins the existential 2-pebble game on $G,H$. The proof size is bounded from below by the game length $D(G,H)$, and a crucial ingredient of our analysis is the existence of $G,H$ with $D(G,H)=\Omega(v(G)v(H))$. We find one such example among old benchmark instances for the arc consistency problem and also suggest a new, different construction. Comment: 19 pages, 5 figures.
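
    As a point of reference for the abstract above, arc consistency for the homomorphism problem from G to H can be established by AC-3-style constraint propagation: keep, for each vertex of G, a domain of candidate images in H, and repeatedly delete values lacking a support along some arc. A minimal Python sketch (the plain-graph setting and all names are ours; the paper's colored graphs would simply restrict the initial domains):

```python
from collections import deque

def arc_consistency(G_vertices, G_edges, H_vertices, H_edges):
    """AC-3-style propagation deciding arc consistency for the
    homomorphism problem from G to H (a sketch, not the paper's
    optimal-complexity algorithm)."""
    adjG = {u: set() for u in G_vertices}
    for u, v in G_edges:
        adjG[u].add(v)
        adjG[v].add(u)
    adjH = {a: set() for a in H_vertices}
    for a, b in H_edges:
        adjH[a].add(b)
        adjH[b].add(a)
    dom = {u: set(H_vertices) for u in G_vertices}   # candidate images
    queue = deque((u, v) for u in G_vertices for v in adjG[u])
    while queue:
        u, v = queue.popleft()
        # Remove values of u with no support along the arc (u, v).
        removed = {a for a in dom[u] if not (dom[v] & adjH[a])}
        if removed:
            dom[u] -= removed
            if not dom[u]:
                return None          # empty domain: no homomorphism G -> H
            queue.extend((w, u) for w in adjG[u])    # revisit neighbours
    return dom
```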

    Constraint satisfaction parameterized by solution size

    In the constraint satisfaction problem (CSP) corresponding to a constraint language (i.e., a set of relations) $\Gamma$, the goal is to find an assignment of values to variables so that a given set of constraints specified by relations from $\Gamma$ is satisfied. The complexity of this problem has received a substantial amount of attention in the past decade. In this paper we study the fixed-parameter tractability of constraint satisfaction problems parameterized by the size of the solution in the following sense: one of the possible values, say 0, is "free," and the number of variables allowed to take other, "expensive," values is restricted. A size constraint requires that exactly $k$ variables take nonzero values. We also study a more refined version of this restriction: a global cardinality constraint prescribes how many variables have to be assigned each particular value. We study the parameterized complexity of these types of CSPs where the parameter is the required number $k$ of nonzero variables. As special cases, we can obtain natural and well-studied parameterized problems such as Independent Set, Vertex Cover, d-Hitting Set, Biclique, etc. In the case of constraint languages closed under substitution of constants, we give a complete characterization of the fixed-parameter tractable cases of CSPs with size constraints, and we show that all the remaining problems are W[1]-hard. For CSPs with cardinality constraints, we obtain a similar classification, but for some of the problems we are only able to show that they are Biclique-hard. The exact parameterized complexity of the Biclique problem is a notorious open problem, although it is believed to be W[1]-hard. Comment: To appear in SICOMP. Conference version in ICALP 2011.
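
    To make the size-constrained setting concrete: Vertex Cover is the special case where every variable is Boolean and each edge forbids both endpoints taking the value 0. A brute-force illustration in Python (our own toy encoding, not the paper's fixed-parameter algorithms):

```python
from itertools import combinations

def csp_with_size_constraint(n_vars, constraints, k):
    """Brute-force sketch of a Boolean CSP with a size constraint:
    exactly k variables take the nonzero value 1.  `constraints` is
    a list of (scope, allowed_tuples).  Illustrative only; the paper
    asks when this is fixed-parameter tractable in k."""
    for ones in combinations(range(n_vars), k):
        assignment = [0] * n_vars
        for i in ones:
            assignment[i] = 1
        if all(tuple(assignment[i] for i in scope) in allowed
               for scope, allowed in constraints):
            return assignment
    return None

# Vertex Cover as a special case: each edge (u, v) forbids u = v = 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
vc_constraints = [((u, v), {(0, 1), (1, 0), (1, 1)}) for u, v in edges]
print(csp_with_size_constraint(4, vc_constraints, 2))  # e.g. [1, 0, 1, 0]
```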

    Solving the Sports League Scheduling Problem with Tabu Search

    In this paper we present a tabu search approach for a version of the Sports League Scheduling Problem. The approach adopted is based on a formulation of the problem as a Constraint Satisfaction Problem (CSP). Tests were carried out on problem instances of up to 40 teams, representing 780 integer variables with 780 values per variable. Experimental results show that this approach outperforms some existing methods and is one of the most promising for solving problems of this type.
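
    For readers unfamiliar with the method, a tabu search over a CSP formulation is a local search with a short-term memory that forbids recently made moves from being undone. A generic Python skeleton (the move, cost and tenure choices here are placeholders, not the paper's):

```python
def tabu_search(initial, moves, apply_move, cost, tenure=10, max_iters=100000):
    """Generic tabu-search skeleton of the kind applied to the CSP
    formulation (a sketch; for league scheduling, `moves` and `cost`
    would count violated scheduling constraints)."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    tabu = {}                        # move -> iteration when it is allowed again
    for it in range(max_iters):
        if best_cost == 0:           # a schedule satisfying all constraints
            return best
        candidates = []
        for m in moves(current):
            s = apply_move(current, m)
            c = cost(s)
            # Aspiration: a tabu move is allowed if it beats the best so far.
            if tabu.get(m, 0) <= it or c < best_cost:
                candidates.append((c, m, s))
        if not candidates:
            break
        current_cost, m, current = min(candidates, key=lambda t: t[0])
        tabu[m] = it + tenure        # forbid undoing the move for `tenure` steps
        if current_cost < best_cost:
            best, best_cost = current, current_cost
    return best
```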

    Variable and value elimination in binary constraint satisfaction via forbidden patterns

    Variable or value elimination in a constraint satisfaction problem (CSP) can be used in preprocessing or during search to reduce search space size. A variable elimination rule (value elimination rule) allows the polynomial-time identification of certain variables (domain elements) whose elimination, without the introduction of extra compensatory constraints, does not affect the satisfiability of an instance. We show that there are essentially just four variable elimination rules and three value elimination rules defined by forbidding generic sub-instances, known as irreducible existential patterns, in arc-consistent CSP instances. One of the variable elimination rules is the already-known Broken Triangle Property, whereas the other three are novel. The three value elimination rules can all be seen as strict generalisations of neighbourhood substitution. Comment: A full version of an IJCAI'13 paper, to appear in the Journal of Computer and System Sciences (JCSS).
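
    The Broken Triangle Property mentioned above admits a direct polynomial-time check: a variable x is eliminable if no pair of compatible values at two other variables has supports in the domain of x without a common support. A Python sketch of this known rule (the interface, in particular the symmetric predicate `compatible`, is our own):

```python
from itertools import combinations

def btp_eliminable(x, variables, dom, compatible):
    """Return True if variable x satisfies the Broken Triangle Property
    in a binary CSP and can therefore be eliminated (a sketch of the
    known rule the paper generalises).  `compatible(y, a, z, b)` says
    whether the constraint between y and z allows the pair (a, b)."""
    others = [y for y in variables if y != x]
    for y, z in combinations(others, 2):
        for a in dom[y]:
            for b in dom[z]:
                if not compatible(y, a, z, b):
                    continue
                sup_a = {u for u in dom[x] if compatible(y, a, x, u)}
                sup_b = {v for v in dom[x] if compatible(z, b, x, v)}
                # Broken triangle: both sides have supports on x,
                # but no single value of x supports both a and b.
                if sup_a and sup_b and not (sup_a & sup_b):
                    return False
    return True
```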

    Discovery of widespread transcription initiation at microsatellites predictable by sequence-based deep neural network

    Using the Cap Analysis of Gene Expression (CAGE) technology, the FANTOM5 consortium provided one of the most comprehensive maps of transcription start sites (TSSs) in several species. Strikingly, ~72% of them could not be assigned to a specific gene and initiate at unconventional regions, outside promoters or enhancers. Here, we probe these unassigned TSSs and show that, in all species studied, a significant fraction of CAGE peaks initiate at microsatellites, also called short tandem repeats (STRs). To confirm this transcription, we develop Cap Trap RNA-seq, a technology which combines cap trapping and long-read MinION sequencing. We train sequence-based deep learning models able to predict CAGE signal at STRs with high accuracy. These models unveil the importance of the sequences surrounding STRs, not only to distinguish STR classes but also to predict the level of transcription initiation. Importantly, genetic variants linked to human diseases are preferentially found at STRs with high transcription initiation levels, supporting the biological and clinical relevance of transcription initiation at STRs. Together, our results extend the repertoire of non-coding transcription associated with DNA tandem repeats and add a new layer of complexity to STR polymorphism.
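
    A sequence-based model of the general kind described takes one-hot-encoded DNA around an STR and regresses the CAGE initiation signal. A minimal convolutional sketch in Python/Keras (window size, layer sizes and architecture are illustrative guesses, not the paper's trained models):

```python
import numpy as np
import tensorflow as tf

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """One-hot encode a DNA window centred on an STR (assumed
    preprocessing; CAGE/FANTOM5 data handling is not shown)."""
    x = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq):
        if base in BASES:
            x[i, BASES[base]] = 1.0
    return x

# A toy convolutional regressor from STR-plus-flanking sequence
# (here a 500 bp window) to a transcription initiation level.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(500, 4)),
    tf.keras.layers.Conv1D(64, 12, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),          # predicted CAGE signal
])
model.compile(optimizer="adam", loss="mse")
```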

    Automatic discovery and exploitation of promising subproblems for tabulation

    The performance of a constraint model can often be improved by converting a subproblem into a single table constraint. In this paper we study heuristics for identifying promising subproblems. We propose a small set of heuristics to identify common cases, such as expressions that will propagate weakly. The process of discovering promising subproblems and tabulating them is entirely automated in the tool Savile Row. A cache is implemented to avoid tabulating equivalent subproblems many times. We give a simple algorithm to generate table constraints directly from a constraint expression in Savile Row. We demonstrate good performance on the benchmark problems used in earlier work on tabulation, and also for several new problem classes.
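
    Tabulation itself is conceptually simple: enumerate the assignments to a subproblem's scope and keep the satisfying tuples as a table constraint. A minimal Python sketch (our own illustration of the idea, not Savile Row's implementation):

```python
from itertools import product

def tabulate(scope_domains, expression):
    """Build a table constraint from a constraint expression by
    enumerating its satisfying tuples.  `scope_domains` maps each
    variable to its domain; `expression` takes an assignment dict."""
    variables = list(scope_domains)
    table = [values
             for values in product(*(scope_domains[v] for v in variables))
             if expression(dict(zip(variables, values)))]
    return variables, table

# Example: tabulate a weakly propagating expression x*y + z >= 6.
scope = {"x": range(4), "y": range(4), "z": range(4)}
vars_, table = tabulate(scope, lambda a: a["x"] * a["y"] + a["z"] >= 6)
print(len(table), "allowed tuples")
```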

    Bayesian Action–Perception Computational Model: Interaction of Production and Recognition of Cursive Letters

    In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. As a model of both perception and action processes, its purpose is to study how these processes interact. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement; motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments.
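
    At its core, task (i) above is an application of Bayes' rule: the posterior over letters is the sensory likelihood weighted by the prior. A toy Python illustration (the numbers are invented, not the BAP model's distributions):

```python
import numpy as np

def recognise(sensory_likelihood, prior):
    """Bayesian letter recognition reduced to its core:
    P(letter | evidence) is proportional to
    P(evidence | letter) * P(letter)."""
    posterior = sensory_likelihood * prior
    return posterior / posterior.sum()

letters = ["a", "d", "o"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])       # uniform prior over letters
likelihood = np.array([0.6, 0.1, 0.3])        # toy P(sensory trace | letter)
post = recognise(likelihood, prior)
print(dict(zip(letters, post.round(3))))      # {'a': 0.6, 'd': 0.1, 'o': 0.3}
```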

    LUNEX5: A French FEL Test Facility Light Source Proposal

    http://accelconf.web.cern.ch/AccelConf/IPAC2012/papers/tuppp005.pdf
    LUNEX5 is a new Free Electron Laser (FEL) source project aimed at delivering short and coherent X-ray pulses to probe ultrafast phenomena at the femtosecond scale, to investigate extremely low-density samples, and to image individual nm-scale objects.