
    Energy dissipation and scattering angle distribution analysis of the classical trajectory calculations of methane scattering from a Ni(111) surface

    We present classical trajectory calculations of the rotational-vibrational scattering of a non-rigid methane molecule from a Ni(111) surface. Energy dissipation and scattering angles have been studied as a function of the translational kinetic energy, the incidence angle, the (rotational) nozzle temperature, and the surface temperature. Scattering angles are somewhat towards the surface for incidence angles of 30, 45, and 60 degrees at a translational energy of 96 kJ/mol. Energy loss occurs primarily from the normal component of the translational energy. Somewhat more than half of it is transferred to the surface, and the rest is transferred mostly to rotational motion. The spread in the change of translational energy originates in the spread of the transfer to rotational energy, and is enhanced by raising the surface temperature through the transfer to surface motion. Comment: 8 pages REVTeX, 5 figures (eps)
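
    As a hedged illustration of the bookkeeping behind the energy-dissipation analysis above, the sketch below partitions the per-trajectory energy change into surface, rotational, and vibrational channels. All field names and numbers are illustrative assumptions, not values or code from the paper.

# Minimal sketch (Python): split the translational energy lost in one
# scattering event over the receiving channels. Inputs are hypothetical.
def energy_partition(before, after):
    """before/after: dicts of channel energies (kJ/mol) for one trajectory."""
    d_trans = before["E_trans"] - after["E_trans"]   # lost by translation
    d_rot = after["E_rot"] - before["E_rot"]         # gained by rotation
    d_vib = after["E_vib"] - before["E_vib"]         # gained by vibration
    d_surf = after["E_surf"] - before["E_surf"]      # gained by the surface
    return {
        "trans_loss": d_trans,
        "to_surface": d_surf / d_trans if d_trans else 0.0,
        "to_rotation": d_rot / d_trans if d_trans else 0.0,
        "to_vibration": d_vib / d_trans if d_trans else 0.0,
    }

# Made-up example: ~55% of the lost translational energy goes to the surface,
# most of the remainder to rotation, in line with the trend described above.
print(energy_partition(
    {"E_trans": 96.0, "E_rot": 1.0, "E_vib": 0.0, "E_surf": 0.0},
    {"E_trans": 66.0, "E_rot": 12.0, "E_vib": 1.0, "E_surf": 17.0},
))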

    Graph-Based Shape Analysis Beyond Context-Freeness

    We develop a shape analysis for reasoning about relational properties of data structures. Both the concrete and the abstract domain are represented by hypergraphs. The analysis is parameterized by user-supplied indexed graph grammars to guide concretization and abstraction. This novel extension of context-free graph grammars is powerful enough to model complex data structures such as balanced binary trees with parent pointers, while preserving most of the desirable properties of context-free graph grammars. One strength of our analysis is that no artifacts apart from grammars are required from the user; it thus offers a high degree of automation. We implemented our analysis and successfully applied it to various programs manipulating AVL trees, (doubly-linked) lists, and combinations of both.
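
    The following is a simplified, hypothetical encoding of the idea described above: heaps as hypergraphs whose hyperedges are either concrete pointer fields or nonterminals of a user-supplied graph grammar, with abstraction replacing a rule's right-hand side by a single nonterminal hyperedge. The class and method names are assumptions for illustration, not the tool's actual API.

# Sketch (Python): hypergraph heap representation with grammar-driven
# abstraction and concretization steps. Purely illustrative data structures.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class HyperEdge:
    label: str            # e.g. "left", "right", "parent", or a nonterminal "AVL"
    nodes: tuple          # attached heap nodes, in order
    nonterminal: bool = False

@dataclass
class HyperGraph:
    nodes: set = field(default_factory=set)
    edges: set = field(default_factory=set)

    def abstract(self, rhs_edges, nonterminal, external_nodes):
        """Replace a concrete subgraph matching a grammar rule's right-hand
        side by one nonterminal hyperedge attached to the external nodes."""
        self.edges -= set(rhs_edges)
        self.edges.add(HyperEdge(nonterminal, tuple(external_nodes), True))

    def concretize(self, nt_edge, rhs_edges):
        """Inverse step: expand a nonterminal hyperedge back into the concrete
        edges produced by one of its grammar rules."""
        self.edges.discard(nt_edge)
        self.edges |= set(rhs_edges)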

    Mass loss out of close binaries. II

    Liberal evolution of interacting binaries has been proposed previously by several authors in order to meet various observed binary characteristics better than conservative evolution does. Since Algols are eclipsing binaries, the distribution of their orbital periods is precisely known. The distribution of their mass ratios, however, is more uncertain. We try to reproduce these two distributions theoretically using a liberal scenario in which the gainer star can lose mass into interstellar space as a consequence of its rapid rotation and the energy of a hot spot. In a recent paper (Van Rensbergen et al. 2010, A&A) we calculated the liberal evolution of binaries with a B-type primary at birth where mass transfer starts during core hydrogen burning of the donor. In this paper we include the cases where mass transfer starts during hydrogen shell burning, and it is our aim to reproduce the observed distributions of the system parameters of Algol-type semi-detached systems. Our calculations reveal the amount of time that an Algol binary lives with a well-defined value of mass ratio and orbital period. We use these data to simulate the distribution of mass ratios and orbital periods of Algols. Binaries with a late B-type initial primary hardly lose any mass, whereas those with an early B primary evolve in a non-conservative way. Conservative binary evolution predicts only ~12% of Algols with a mass ratio q above 0.4. This value is raised to ~17% by our scenario of liberal evolution, which is still far below the ~45% that is observed. Observed orbital periods of Algol binaries larger than one day are faithfully reproduced by our liberal scenario. Mass ratios are reproduced better than with conservative evolution, but the resemblance is still poor. Comment: 11 pages, 6 figures, accepted for publication in A&A; accepted version
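
    As a schematic illustration of how such evolutionary tracks can be turned into observable distributions, the sketch below weights each mass-ratio bin by the time a binary spends there during its Algol phase; the track format, bin edges, and numbers are assumptions for illustration only, not data from the paper.

# Sketch (Python): time-weighted mass-ratio distribution from evolutionary tracks.
import collections

def mass_ratio_distribution(track, q_bins):
    """track: (q, dt) samples along the semi-detached (Algol) phase;
    q_bins: ascending bin edges. Returns time-weighted bin fractions."""
    weights = collections.defaultdict(float)
    total = 0.0
    for q, dt in track:
        for lo, hi in zip(q_bins, q_bins[1:]):
            if lo <= q < hi:
                weights[(lo, hi)] += dt
                total += dt
                break
    return {b: w / total for b, w in weights.items()}

# Made-up track: fraction of Algol lifetime spent at q >= 0.4 (compare with the
# ~17% predicted vs ~45% observed figures quoted above).
dist = mass_ratio_distribution([(0.25, 5.0), (0.35, 3.0), (0.45, 1.5), (0.55, 0.5)],
                               [0.0, 0.2, 0.4, 1.0])
print(sum(w for (lo, hi), w in dist.items() if lo >= 0.4))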

    Optimal Polynomial-Time Compression for Boolean Max CSP

    In the Boolean maximum constraint satisfaction problem - Max CSP(Γ) - one is given a collection of weighted applications of constraints from a finite constraint language Γ, over a common set of variables, and the goal is to assign Boolean values to the variables so that the total weight of satisfied constraints is maximized. There exists a concise dichotomy theorem providing a criterion on Γ for the problem to be polynomial-time solvable and stating that otherwise it becomes NP-hard. We study the NP-hard cases through the lens of kernelization and provide a complete characterization of Max CSP(Γ) with respect to the optimal compression size. Namely, we prove that Max CSP(Γ) parameterized by the number of variables n is either polynomial-time solvable, or there exists an integer d ≥ 2 depending on Γ, such that: 1) An instance of Max CSP(Γ) can be compressed into an equivalent instance with O(n^d log n) bits in polynomial time, 2) Max CSP(Γ) does not admit such a compression to O(n^{d-ε}) bits unless NP ⊆ co-NP/poly. Our reductions are based on interpreting constraints as multilinear polynomials combined with the framework of constraint implementations. As another application of our reductions, we reveal tight connections between optimal running times for solving Max CSP(Γ). More precisely, we show that obtaining a running time of the form O(2^{(1-ε)n}) for particular classes of Max CSPs is as hard as breaching this barrier for Max d-SAT for some d.
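
    To make the "constraints as multilinear polynomials" ingredient concrete, here is a hedged sketch that computes the unique multilinear polynomial of a Boolean constraint by Moebius inversion over its truth table; the degree d of these polynomials is the quantity that governs the O(n^d log n) compression size above. This is a generic textbook construction, not code from the paper.

# Sketch (Python): multilinear polynomial coefficients of a Boolean constraint.
from itertools import combinations

def multilinear_coefficients(f, r):
    """Return {frozenset(S): coefficient} with f(x) = sum_S c_S * prod_{i in S} x_i."""
    coeffs = {}
    for k in range(r + 1):
        for S in combinations(range(r), k):
            c = 0
            for j in range(k + 1):
                for T in combinations(S, j):
                    point = [1 if i in T else 0 for i in range(r)]
                    c += (-1) ** (k - j) * f(point)   # Moebius inversion
            if c:
                coeffs[frozenset(S)] = c
    return coeffs

# Example: binary OR equals x0 + x1 - x0*x1, so its polynomial has degree 2.
print(multilinear_coefficients(lambda x: int(x[0] or x[1]), 2))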

    Numerical stability of the AA evolution system compared to the ADM and BSSN systems

    We explore the numerical stability properties of an evolution system suggested by Alekseenko and Arnold. We examine its behavior on a set of standardized testbeds, and we evolve a single black hole with different gauges. Based on a comparison with two other evolution systems with well-known properties, we discuss some of the strengths and limitations of such simple tests in predicting numerical stability in general. Comment: 16 pages, 12 figures
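
    A generic stability diagnostic of the kind such testbed comparisons rely on is to fit an exponential growth rate to a norm of the constraint violations (or of the error) over time; a rate consistent with zero indicates a numerically stable run. The sketch below is a hedged, illustrative version of that fit, not the paper's actual analysis.

# Sketch (Python): e-folding growth rate from a time series of norms.
import math

def growth_rate(times, norms):
    """Least-squares slope of log(norm) versus time."""
    logs = [math.log(n) for n in norms]
    t_mean = sum(times) / len(times)
    l_mean = sum(logs) / len(logs)
    num = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# A run whose constraint norm doubles every ~10 time units is clearly unstable.
print(growth_rate([0.0, 10.0, 20.0, 30.0], [1e-8, 2e-8, 4e-8, 8e-8]))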

    Prospects of plutonium fueled fast breeders


    Assessing Human Error Against a Benchmark of Perfection

    An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging even for the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time. Comment: KDD 2016; 10 pages
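
    As a hedged, minimal illustration of a predictor built from the three feature groups above (skill, time, difficulty), the sketch below fits a plain logistic regression to made-up examples labeled by whether the player erred. The features, data, and model are illustrative assumptions, not the paper's pipeline.

# Sketch (Python): logistic regression on (skill, time, difficulty) features
# predicting the probability of an error, trained by stochastic gradient descent.
import math, random

def train_logistic(samples, lr=0.1, epochs=5000, seed=0):
    """samples: list of (features, label) with label 1 = player erred."""
    random.seed(seed)
    w = [0.0] * (len(samples[0][0]) + 1)            # bias + one weight per feature
    for _ in range(epochs):
        x, y = random.choice(samples)
        z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
        p = 1.0 / (1.0 + math.exp(-z))
        grad = p - y                                 # gradient of the log-loss
        w[0] -= lr * grad
        for i, xi in enumerate(x):
            w[i + 1] -= lr * grad * xi
    return w

# features: (normalized rating, log seconds remaining, tablebase difficulty)
data = [((0.8, 1.2, 0.1), 0), ((0.7, 1.0, 0.2), 0),
        ((0.3, 0.2, 0.9), 1), ((0.5, 0.3, 0.8), 1)]
print(train_logistic(data))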

    Adaptive mesh refinement approach to construction of initial data for black hole collisions

    The initial data for black hole collisions are constructed using a conformal-imaging approach and a new adaptive mesh refinement technique, a fully threaded tree (FTT). We developed a second-order accurate approach to the solution of the constraint equations on a non-uniformly refined high-resolution Cartesian mesh, including second-order accurate treatment of boundary conditions at the black hole throats. Results of test computations show convergence of the solution as the numerical resolution is increased. FTT-based mesh refinement reduces the required memory and computer time by several orders of magnitude compared to a uniform grid. This opens up the possibility of using Cartesian meshes for very high resolution simulations of black hole collisions. Comment: 13 pages, 11 figures
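
    The toy sketch below illustrates, in 2D for brevity, the kind of tree-based Cartesian refinement described above: cells are split recursively until the local resolution near each black-hole throat reaches a target, so fine cells are concentrated only where they are needed. The real fully threaded tree also keeps parent/child/neighbor links for fast traversal; that bookkeeping, and the constraint solver itself, are omitted. Everything here is an illustrative assumption, not the paper's code.

# Sketch (Python): quadtree-style refinement around prescribed "throat" locations.
from dataclasses import dataclass, field

@dataclass
class Cell:
    x: float
    y: float
    size: float
    children: list = field(default_factory=list)

    def refine(self, throats, target_size):
        near = any(abs(self.x - tx) < self.size and abs(self.y - ty) < self.size
                   for tx, ty in throats)
        if near and self.size > target_size:
            h = self.size / 2
            self.children = [Cell(self.x + dx, self.y + dy, h)
                             for dx in (0.0, h) for dy in (0.0, h)]
            for c in self.children:
                c.refine(throats, target_size)

def count_leaves(cell):
    return 1 if not cell.children else sum(count_leaves(c) for c in cell.children)

root = Cell(0.0, 0.0, 16.0)
root.refine(throats=[(3.0, 3.0), (11.0, 11.0)], target_size=0.5)
print(count_leaves(root))   # far fewer cells than a uniform grid at the same resolution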

    Ready for university? A cross national study on students' perceived preparedness for university

    Students' preparedness for higher education is seen as one of the main factors affecting first-year attrition or study success. In this paper we report on a cross-national study in which students' preparedness for university was measured before students commenced their study at a university in New Zealand or in the Netherlands. This cross-national project provided a unique opportunity to compare students' perceptions of readiness for university where students are prepared for higher education in quite different secondary school systems. Starting from a transition framework, and comparing the results in both countries using logistic regression techniques to investigate which aspects of readiness could predict perceived preparedness, we found both similarities and differences in students' perceived readiness for university study. It could be argued that the differences are caused by the different educational systems at secondary level. However, overall we can conclude that, in spite of the differences between the educational systems in the two countries, many of the differences in perceived readiness were neither substantial nor significant. This has clear implications for how we view the relative importance of secondary school preparation and tertiary induction. We can expect greater benefit from implementing first-year pedagogical practices in universities that would assist students to develop their academic skills than from demanding that high schools prepare students better.

    Preprocessing for Outerplanar Vertex Deletion: An Elementary Kernel of Quartic Size

    In the F-Minor-Free Deletion problem one is given an undirected graph G, an integer k, and the task is to determine whether there exists a vertex set S of size at most k, so that G-S contains no graph from the finite family F as a minor. It is known that whenever F contains at least one planar graph, then F-Minor-Free Deletion admits a polynomial kernel, that is, there is a polynomial-time algorithm that outputs an equivalent instance of size k^{O(1)} [Fomin, Lokshtanov, Misra, Saurabh; FOCS 2012]. However, this result relies on non-constructive arguments based on well-quasi-ordering and does not provide a concrete bound on the kernel size. We study the Outerplanar Deletion problem, in which we want to remove at most k vertices from a graph to make it outerplanar. This is a special case of F-Minor-Free Deletion for the family F = {K_4, K_{2,3}}. The class of outerplanar graphs is arguably the simplest class of graphs for which no explicit kernelization size bounds are known. By exploiting the combinatorial properties of outerplanar graphs we present elementary reduction rules decreasing the size of a graph. This yields a constructive kernel with O(k^4) vertices and edges. As a corollary, we derive that any minor-minimal obstruction to having an outerplanar deletion set of size k has O(k^4) vertices and edges.
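
    As a hedged helper illustrating the problem setting (not the paper's reduction rules), the sketch below uses the classical fact that a graph is outerplanar exactly when adding one vertex adjacent to all others keeps it planar, so a candidate deletion set S can be verified by running this test on G-S. It assumes the networkx library; the helper names are made up.

# Sketch (Python): verify an outerplanar deletion set via a planarity test.
import networkx as nx

def is_outerplanar(G):
    """G is outerplanar iff G plus a universal apex vertex is planar."""
    H = G.copy()
    apex = object()                    # fresh hashable node not present in G
    H.add_edges_from((apex, v) for v in G.nodes)
    planar, _ = nx.check_planarity(H)
    return planar

def is_outerplanar_deletion_set(G, S):
    return is_outerplanar(G.subgraph(set(G.nodes) - set(S)))

# K_4 itself is not outerplanar (it is one of the two forbidden minors above),
# but deleting any single vertex makes the remainder outerplanar.
K4 = nx.complete_graph(4)
print(is_outerplanar(K4), is_outerplanar_deletion_set(K4, {0}))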