103 research outputs found

    Computing and counting longest paths on circular-arc graphs in polynomial time.

    The longest path problem asks for a path with the largest number of vertices in a given graph. The first polynomial time algorithm (with running time O(n^4)) has recently been developed for interval graphs. Even though interval and circular-arc graphs look superficially similar, they differ substantially, as circular-arc graphs are not perfect. In this paper, we prove that for every path P of a circular-arc graph G, we can appropriately “cut” the circle, such that the obtained (not induced) interval subgraph G′ of G admits a path P′ on the same vertices as P. This non-trivial result is of independent interest, as it suggests a generic reduction of a number of path problems on circular-arc graphs to the case of interval graphs, with a multiplicative linear-time overhead of O(n). As an application of this reduction, we present the first polynomial algorithm for the longest path problem on circular-arc graphs, which turns out to have the same running time O(n^4) as the one on interval graphs, as we manage to get rid of the linear overhead of the reduction. This algorithm computes, within the same time, an n-approximation of the number of different vertex sets that provide a longest path; in the case where G is an interval graph, we compute the exact number. Moreover, our algorithm can be directly extended, with the same running time, to the case where every vertex has an arbitrary positive weight.
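
    The reduction lends itself to a short sketch. The toy Python below is our own illustration, not the paper's construction: the names (cut_to_intervals, longest_path_interval) and the particular truncation rule for arcs crossing the cut are assumptions, and the brute-force longest_path_interval merely stands in for the cited O(n^4) interval-graph algorithm. It only shows the shape of the reduction: try every cut point, turn arcs into intervals, solve each interval instance.

        from itertools import permutations

        def cut_to_intervals(arcs, m, cut):
            """Unroll the circle at `cut`: every arc is re-coordinatised as
            (x - cut) mod m, and an arc crossing the cut is truncated there
            (here we keep the portion preceding the cut).  All vertices
            survive but some edges are lost, mirroring the non-induced
            subgraph G' described in the abstract."""
            out = []
            for s, e in arcs:
                s2, e2 = (s - cut) % m, (e - cut) % m
                out.append((s2, e2) if s2 <= e2 else (s2, m - 1))
            return out

        def longest_path_interval(intervals):
            """Stand-in for the cited O(n^4) interval-graph algorithm:
            brute force over vertex orders, usable only on toy inputs."""
            n = len(intervals)
            adj = [[a1 <= b2 and a2 <= b1 for (a2, b2) in intervals]
                   for (a1, b1) in intervals]
            for r in range(n, 0, -1):
                for perm in permutations(range(n), r):
                    if all(adj[perm[i]][perm[i + 1]] for i in range(r - 1)):
                        return list(perm)
            return []

        def longest_path_circular_arc(arcs, m):
            """The reduction: try all m cuts, solve each interval instance."""
            best = []
            for cut in range(m):
                path = longest_path_interval(cut_to_intervals(arcs, m, cut))
                if len(path) > len(best):
                    best = path
            return best

        # Toy run: four arcs forming a 4-cycle on a circle of circumference 8.
        print(longest_path_circular_arc([(0, 2), (2, 4), (4, 6), (6, 0)], 8))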

    Pencil Puzzles for Introductory Computer Science: an Experience- and Gender-Neutral Context

    The teaching of introductory computer science can benefit from the use of real-world context to ground the abstract programming concepts. We present the domain of pencil puzzles as a context for a variety of introductory CS topics. Pencil puzzles are puzzles typically found in newspapers and magazines, intended to be solved by the reader through the means of deduction, using only a pencil. A well-known example of a pencil puzzle is Sudoku, which has been widely used as a typical backtracking assignment. However, there are dozens of other well-tried and well-liked pencil puzzles available that naturally induce computational thinking and can be used as context for many CS topics such as arrays, loops, recursion, GUIs, inheritance, and graph traversal. Our contributions in this paper are two-fold. First, we present a few pencil puzzles and map them to the introductory CS concepts that the puzzles can target in an assignment, and we point the reader to other puzzle repositories, which offer the potential for an almost limitless set of introductory CS assignments. Second, we have formally evaluated the effectiveness of such assignments used at our institution over the past three years. Students reported that they have learned the material, believe they can tackle similar problems, and have improved their coding skills. The assignments also led to a significantly higher proportion of unsolicited statements of enjoyment, as well as metacognition, when compared to a traditional assignment for the same topic. Lastly, for all but one assignment, students' gender and prior programming experience were independent of their grades, their perceptions of the assignment, and their reflections on it.
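
    To give a flavour of the kind of assignment the abstract describes, here is a minimal recursive-backtracking filler for a Latin-square-style grid (each symbol exactly once per row and column), the skeleton shared by Sudoku-type pencil puzzles. It is our own illustration, not an assignment from the paper.

        def solve(grid, n):
            """Fill the 0-cells of an n x n grid with 1..n so that no value
            repeats in a row or column; classic backtracking."""
            for r in range(n):
                for c in range(n):
                    if grid[r][c] == 0:
                        for v in range(1, n + 1):
                            if all(grid[r][j] != v for j in range(n)) and \
                               all(grid[i][c] != v for i in range(n)):
                                grid[r][c] = v
                                if solve(grid, n):
                                    return True
                                grid[r][c] = 0   # undo and backtrack
                        return False             # no value fits this cell
            return True                          # grid is complete

        grid = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
        print(solve(grid, 3), grid)  # True, plus a completed 3x3 Latin square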

    A Theoretical Reflection on the Institutional Design of the EU's Common Foreign and Security Policy after the Adoption of the Lisbon Treaty

    The topic of this thesis is a theoretical reflection on the institutional design of the EU's Common Foreign and Security Policy (CFSP) after the adoption of the Lisbon Treaty. The main aim of the thesis is to identify the reason or reasons explaining the shift towards centralization of CFSP policy-making that the Lisbon Treaty introduced. Centralization is equated with the delegation of sovereignty from the member states to the supranational level, namely the High Representative for the CFSP and the European External Action Service. To this end, the thesis tests three possible explanations built upon the three main theories of international relations: centralization as a result of (1) the EU's decreasing relative (military) power (neorealism), (2) deteriorating collaboration problems (neoliberalism), and (3) an increasing degree of Europeanization of the nation-state identities and interests of the member states over the covered period 2001-2007 (constructivism). The thesis applies a qualitative method: it is a case study. It concludes that two reasons explain the change in the institutional design of the CFSP, specifically the shift towards a more centralized CFSP policy-making process brought about by the Lisbon Treaty: the EU's decreasing relative military power (neorealism) and the increasing Europeanization of nation-state identities and interests (constructivism) over the covered period 2001-2007. Deteriorating...
    Department of International Relations, Faculty of Social Sciences

    Finding detours is fixed-parameter tractable

    We consider the following natural "above guarantee" parameterization of the classical Longest Path problem: for given vertices s and t of a graph G, and an integer k, the problem Longest Detour asks for an (s,t)-path in G that is at least k longer than a shortest (s,t)-path. Using insights from structural graph theory, we prove that Longest Detour is fixed-parameter tractable (FPT) on undirected graphs and in fact admits a single-exponential algorithm, that is, one of running time exp(O(k)) poly(n). This matches (up to the base of the exponential) the best algorithms for finding a path of length at least k. Furthermore, we study the related problem Exact Detour, which asks whether a graph G contains an (s,t)-path that is exactly k longer than a shortest (s,t)-path. For this problem, we obtain a randomized algorithm with running time about 2.746^k, and a deterministic algorithm with running time about 6.745^k, showing that this problem is FPT as well. Our algorithms for Exact Detour apply to both undirected and directed graphs. Comment: Extended abstract appears at ICALP 2017.
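
    To pin down the problem statement, here is a brute-force Python checker of our own (not the paper's algorithm, which runs in exp(O(k)) poly(n) time): it computes the shortest (s,t)-distance by BFS and then searches exhaustively for a simple (s,t)-path at least k edges longer.

        from collections import deque

        def shortest_dist(adj, s, t):
            """BFS distance (in edges) from s to t, or None if unreachable."""
            dist = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                if u == t:
                    return dist[u]
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return None

        def has_detour(adj, s, t, k):
            """Is there a simple (s,t)-path >= k edges longer than a shortest one?"""
            d = shortest_dist(adj, s, t)
            if d is None:
                return False
            target = d + k

            def extend(u, length, seen):
                if u == t:
                    return length >= target
                return any(extend(v, length + 1, seen | {v})
                           for v in adj[u] if v not in seen)

            return extend(s, 0, {s})

        # Example: a 5-cycle; the shortest (0,2)-path has 2 edges, the long
        # way around (0-4-3-2) has 3, so a detour exists for k = 1 but not 2.
        adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
        print(has_detour(adj, 0, 2, 1))  # True
        print(has_detour(adj, 0, 2, 2))  # False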

    The Consumer Rights Directive and Its Implications for Consumer Protection Regarding Intangible Digital Content

    The provision of digital content delivered by streaming or downloading, and thus not on a tangible medium of expression, came with the immense digital and technological revolution that is central to electronic commerce. Yet it is not clear what rights, if any, consumers have with respect to these transactions, as gaps in legislation create a troublesome consumer protection lacuna. To address these issues, inter alia, the new Consumer Rights Directive was adopted. This article explores the reasons for the adoption of the new measure, as well as its practical impact on consumer protection regarding these products. As the level and scope of consumer protection greatly depend on the legal nature of the product, an analysis of the legal definition of intangible digital content is provided. Moreover, since consumer protection in electronic transactions features a distinct right of withdrawal, the use and application of this right is examined as well. The last part of the article discusses the causes of consumers' detriment and seeks to evaluate whether the Consumer Rights Directive has clarified the matter of consumer remedies in cases of detriment, or whether the area remains uncertain.

    Fast sampling via spectral independence beyond bounded-degree graphs

    Spectral independence is a recently-developed framework for obtaining sharp bounds on the convergence time of the classical Glauber dynamics. This new framework has yielded optimal $O(n \log n)$ sampling algorithms on bounded-degree graphs for a large class of problems throughout the so-called uniqueness regime, including, for example, the problems of sampling independent sets, matchings, and Ising-model configurations. Our main contribution is to relax the bounded-degree assumption that has so far been important in establishing and applying spectral independence. Previous methods for avoiding degree bounds rely on using $L^p$-norms to analyse contraction on graphs with bounded connective constant (Sinclair, Srivastava, Yin; FOCS'13). The non-linearity of $L^p$-norms is an obstacle to applying these results to bound spectral independence. Our solution is to capture the $L^p$-analysis recursively by amortising over the subtrees of the recurrence used to analyse contraction. Our method generalises previous analyses that applied only to bounded-degree graphs. As a main application of our techniques, we consider the random graph $G(n,d/n)$, where the previously known algorithms run in time $n^{O(\log d)}$ or applied only to large $d$. We refine these algorithmic bounds significantly, and develop fast $n^{1+o(1)}$ algorithms based on Glauber dynamics that apply to all $d$, throughout the uniqueness regime.
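
    For concreteness, here is a minimal sketch of the Glauber dynamics in question, instantiated for the hardcore model (weighted independent sets) at fugacity lam; the function name and interface are our own. The abstract's results concern how many such update steps are needed before the chain is close to its stationary distribution.

        import random

        def glauber_hardcore(adj, lam, steps, seed=0):
            """Heat-bath Glauber dynamics for the hardcore model: each step
            picks a uniform vertex and resamples it given its neighbours; a
            vertex with an occupied neighbour must be empty, otherwise it is
            occupied with probability lam / (1 + lam)."""
            rng = random.Random(seed)
            occupied = set()            # start from the empty independent set
            vertices = list(adj)
            for _ in range(steps):
                v = rng.choice(vertices)
                occupied.discard(v)     # resample v from scratch
                if all(u not in occupied for u in adj[v]) and \
                   rng.random() < lam / (1 + lam):
                    occupied.add(v)
            return occupied             # always an independent set

        # Example: sample weighted independent sets of a 4-cycle at fugacity 1.
        cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        print(glauber_hardcore(cycle, 1.0, 1000))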

    An EPTAS for Scheduling on Unrelated Machines of Few Different Types

    In the classical problem of scheduling on unrelated parallel machines, a set of jobs has to be assigned to a set of machines. The jobs have a processing time depending on the machine, and the goal is to minimize the makespan, that is, the maximum machine load. It is well known that this problem is NP-hard and does not allow polynomial time approximation algorithms with approximation guarantees smaller than $1.5$ unless P = NP. We consider the case that there are only a constant number $K$ of machine types. Two machines have the same type if all jobs have the same processing time for them. This variant of the problem is strongly NP-hard already for $K=1$. We present an efficient polynomial time approximation scheme (EPTAS) for the problem, that is, for any $\varepsilon > 0$ an assignment with makespan at most $(1+\varepsilon)$ times the optimum can be found in time polynomial in the input length, with an exponent independent of $1/\varepsilon$. In particular we achieve a running time of $2^{\mathcal{O}(K\log(K) \frac{1}{\varepsilon}\log^4 \frac{1}{\varepsilon})}+\mathrm{poly}(|I|)$, where $|I|$ denotes the input length. Furthermore, we study three other problem variants and present an EPTAS for each of them: the Santa Claus problem, where the minimum machine load has to be maximized; the case of scheduling on unrelated parallel machines with a constant number of uniform types, where machines of the same type behave like uniformly related machines; and the multidimensional vector scheduling variant of the problem, where both the dimension and the number of machine types are constant. For the Santa Claus problem we achieve the same running time. The results are achieved using mixed integer linear programming and rounding techniques.
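
    A small sketch of the setting, under our own naming (this is scaffolding around the problem definition, not the paper's EPTAS): machines share a type exactly when their rows of the processing-time matrix coincide, and the makespan of an assignment is the maximum machine load.

        def machine_types(p):
            """Group machine indices by their processing-time vector;
            p[i][j] is the time of job j on machine i."""
            types = {}
            for i, row in enumerate(p):
                types.setdefault(tuple(row), []).append(i)
            return list(types.values())

        def makespan(p, assignment):
            """assignment[j] = machine chosen for job j;
            makespan = maximum machine load."""
            loads = [0] * len(p)
            for j, i in enumerate(assignment):
                loads[i] += p[i][j]
            return max(loads)

        # Example: machines 0 and 1 share a type, machine 2 does not (K = 2).
        p = [[2, 3, 1],
             [2, 3, 1],
             [5, 1, 4]]
        print(machine_types(p))        # [[0, 1], [2]]
        print(makespan(p, [0, 2, 1]))  # jobs 0,1,2 on machines 0,2,1 -> 2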

    Graph model selection using maximum likelihood

    In recent years, there has been a proliferation of theoretical graph models, e.g., preferential attachment and small-world models, motivated by real-world graphs such as the Internet topology. To address the natural question of which model is best for a particular data set, we propose a model selection criterion for graph models. Since each model is in fact a probability distribution over graphs, we suggest using Maximum Likelihood to compare graph models and select their parameters. Interestingly, for the case of graph models, computing likelihoods is a difficult algorithmic task. However, we design and implement MCMC algorithms for computing the maximum likelihood for four popular models: a power-law random graph model, a preferential attachment model, a small-world model, and a uniform random graph model. We hope that this novel use of ML will objectify comparisons between graph models.
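
    The easiest of the four models makes the criterion concrete: for the uniform random graph model G(n, p), the likelihood of observing a graph with m of the N = n(n-1)/2 possible edges is p^m (1-p)^(N-m), maximized at p = m/N, so no MCMC is needed for this case. The sketch below (our own, not the paper's implementation) computes that maximum log-likelihood; the other three models require MCMC because their likelihoods sum over unobserved generation histories.

        import math

        def uniform_model_log_likelihood(n, m):
            """Maximum log-likelihood of an n-vertex, m-edge graph under
            the uniform random graph model G(n, p), attained at p = m/N."""
            N = n * (n - 1) // 2
            p = m / N
            ll = 0.0
            if m > 0:
                ll += m * math.log(p)          # present edges
            if m < N:
                ll += (N - m) * math.log(1 - p)  # absent edges
            return ll

        # Example: a sparse 1000-vertex graph with 3000 edges.
        print(uniform_model_log_likelihood(1000, 3000))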