8,345 research outputs found

    Composition with Target Constraints

    It is known that the composition of schema mappings, each specified by source-to-target tgds (st-tgds), can be specified by a second-order tgd (SO tgd). We consider the question of what happens when target constraints are allowed. Specifically, we consider the question of specifying the composition of standard schema mappings (those specified by st-tgds, target egds, and a weakly acyclic set of target tgds). We show that SO tgds, even with the assistance of arbitrary source constraints and target constraints, cannot specify in general the composition of two standard schema mappings. Therefore, we introduce source-to-target second-order dependencies (st-SO dependencies), which are similar to SO tgds, but allow equations in the conclusion. We show that st-SO dependencies (along with target egds and target tgds) are sufficient to express the composition of every finite sequence of standard schema mappings, and further, every st-SO dependency specifies such a composition. In addition to this expressive power, we show that st-SO dependencies enjoy other desirable properties. In particular, they have a polynomial-time chase that generates a universal solution. This universal solution can be used to find the certain answers to unions of conjunctive queries in polynomial time. It is easy to show that the composition of an arbitrary number of standard schema mappings is equivalent to the composition of only two standard schema mappings. We show that surprisingly, the analogous result holds also for schema mappings specified by just st-tgds (no target constraints). This is proven by showing that every SO tgd is equivalent to an unnested SO tgd (one where there is no nesting of function symbols). Similarly, we prove unnesting results for st-SO dependencies, with the same types of consequences. Comment: This paper is an extended version of: M. Arenas, R. Fagin, and A. Nash. Composition with Target Constraints. In 13th International Conference on Database Theory (ICDT), pages 129-142, 2010.
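    The polynomial-time chase mentioned in the abstract can be illustrated on a toy example. This sketch is not the paper's procedure for st-SO dependencies; it chases a single invented st-tgd E(x,y) -> EXISTS z. F(x,z) AND F(z,y), inventing a fresh labeled null for each existential witness to build a universal solution.

    ```python
    # A minimal chase sketch, assuming the invented st-tgd
    #   E(x,y) -> EXISTS z. F(x,z) AND F(z,y).
    # For each source fact E(a,b) we create a fresh labeled null for z
    # and add the required target facts, yielding a universal solution.
    from itertools import count

    def chase(source_facts):
        fresh = count()
        target = set()
        for (x, y) in sorted(source_facts):   # sorted only for determinism
            z = f"N{next(fresh)}"             # labeled null witnessing EXISTS z
            target.add(("F", x, z))           # F(x, z)
            target.add(("F", z, y))           # F(z, y)
        return target

    print(sorted(chase({("a", "b")})))
    # -> [('F', 'N0', 'b'), ('F', 'a', 'N0')]
    ```

    Certain answers to unions of conjunctive queries can then be computed by evaluating the query over this universal solution and discarding answers that mention nulls.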

    Search for lepton flavor violating decays of a heavy neutral particle in p-pbar collisions at root(s)=1.8 TeV

    We report on a search for a high mass, narrow width particle that decays directly to e+mu, e+tau, or mu+tau. We use approximately 110 pb^-1 of data collected with the Collider Detector at Fermilab from 1992 to 1995. No evidence of lepton flavor violating decays is found. Limits are set on the production and decay of sneutrinos with R-parity violating interactions. Comment: Figure 2 fixed. Reference 4 fixed. Minor changes to text.

    Measurement of Resonance Parameters of Orbitally Excited Narrow B^0 Mesons

    We report a measurement of resonance parameters of the orbitally excited (L=1) narrow B^0 mesons in decays to B^{(*)+}\pi^- using 1.7/fb of data collected by the CDF II detector at the Fermilab Tevatron. The mass and width of the B^{*0}_2 state are measured to be m(B^{*0}_2) = 5740.2^{+1.7}_{-1.8}(stat.) ^{+0.9}_{-0.8}(syst.) MeV/c^2 and \Gamma(B^{*0}_2) = 22.7^{+3.8}_{-3.2}(stat.) ^{+3.2}_{-10.2}(syst.) MeV/c^2. The mass difference between the B^{*0}_2 and B^0_1 states is measured to be 14.9^{+2.2}_{-2.5}(stat.) ^{+1.2}_{-1.4}(syst.) MeV/c^2, resulting in a B^0_1 mass of 5725.3^{+1.6}_{-2.2}(stat.) ^{+1.4}_{-1.5}(syst.) MeV/c^2. This is currently the most precise measurement of the masses of these states and the first measurement of the B^{*0}_2 width. Comment: 7 pages, 1 figure, 1 table. Submitted to Phys. Rev. Lett.

    Measurement of the fraction of t-tbar production via gluon-gluon fusion in p-pbar collisions at sqrt(s)=1.96 TeV

    We present a measurement of the ratio of t-tbar production cross section via gluon-gluon fusion to the total t-tbar production cross section in p-pbar collisions at sqrt{s}=1.96 TeV at the Tevatron. Using a data sample with an integrated luminosity of 955/pb recorded by the CDF II detector at Fermilab, we select events based on the t-tbar decay to lepton+jets. Using an artificial neural network technique we discriminate between t-tbar events produced via q-qbar annihilation and gluon-gluon fusion, and find Cf = sigma(gg->ttbar)/sigma(ppbar->ttbar) < 0.33 at the 68% confidence level. This result is combined with a previous measurement to obtain the most precise measurement of this quantity, Cf = 0.07 +0.15/-0.07. Comment: submitted to Phys. Rev.

    A Focused Sequent Calculus Framework for Proof Search in Pure Type Systems

    Basic proof-search tactics in logic and type theory can be seen as the root-first applications of rules in an appropriate sequent calculus, preferably without the redundancies generated by permutation of rules. This paper addresses the issues of defining such sequent calculi for Pure Type Systems (PTS, which were originally presented in natural deduction style) and then organizing their rules for effective proof-search. We introduce the idea of Pure Type Sequent Calculus with meta-variables (PTSCalpha), by enriching the syntax of a permutation-free sequent calculus for propositional logic due to Herbelin, which is strongly related to natural deduction and already well adapted to proof-search. The operational semantics is adapted from Herbelin's and is defined by a system of local rewrite rules as in cut-elimination, using explicit substitutions. We prove confluence for this system. Restricting our attention to PTSC, a type system for the ground terms of this system, we obtain the Subject Reduction property and show that each PTSC is logically equivalent to its corresponding PTS, and the former is strongly normalising iff the latter is. We show how to make the logical rules of PTSC into a syntax-directed system PS for proof-search, by incorporating the conversion rules as in syntax-directed presentations of the PTS rules for type-checking. Finally, we consider how to use the explicitly scoped meta-variables of PTSCalpha to represent partial proof-terms, and use them to analyse interactive proof construction. This sets up a framework PE in which we are able to study proof-search strategies, type inhabitant enumeration and (higher-order) unification.

    Stillbirth and loss: family practices and display

    This paper explores how parents respond to their memories of their stillborn child over the years following their loss. When people die after living for several years or more, their family and friends have the residual traces of a life lived as a basis for an identity that may be remembered over a sustained period of time. For the parent of a stillborn child there is no such basis, and the claim for a continuing social identity for their son or daughter is precarious. Drawing on interviews with the parents of 22 stillborn children, this paper explores the identity work performed by parents concerned to create a lasting and meaningful identity for their child and to include him or her in their families after death. The paper draws on Finch's (2007) concept of family display and Walter's (1999) thesis that links continue to exist between the living and the dead over a continued period. The paper argues that evidence from the experience of stillbirth suggests that there is scope for development of both theoretical frameworks.

    Differentiating normal and problem gambling: a grounded theory approach.

    A previous study (Ricketts & Macaskill, 2003) delineated a theory of problem gambling based on the experiences of treatment seeking male gamblers and allowed predictions to be made regarding the processes that differentiate between normal and problem gamblers. These predictions are the focus of the present study, which also utilised a grounded theory approach, but with a sample of male high frequency normal gamblers. The findings suggest that there are common aspects of gambling associated with arousal and a sense of achievement. The use of gambling to manage negative emotional states differentiated normal and problem gambling. Perceived self-efficacy, emotion management skills and perceived likelihood of winning money back were intervening variables differentiating problem and normal gamblers.

    Assessing the Reliability and Credibility of Industry Science and Scientists

    The chemical industry extensively researches and tests its products to implement product stewardship commitments and to ensure compliance with governmental requirements. In this commentary we argue that a wide variety of mechanisms enable policymakers and the public to assure themselves that studies performed or funded by industry are identified as such, meet high scientific standards, and are not suppressed when their findings are adverse to industry’s interests. The more a given study follows these practices and standards, the more confidence one can place in it. No federal laws, rules, or policies express a presumption that scientific work should be ignored or given lesser weight because of the source of its funding. To the contrary, Congress has consistently mandated that agencies allow interested or affected parties to provide information to them and fairly consider that information. All participants in scientific review panels should disclose sources of potential biases and conflicts of interest. The former should be considered in seeking a balanced panel rather than being used as a basis for disqualification. Conflicts of interest generally do require disqualification, except where outweighed by the need for a person’s services. Within these constraints, chemical industry scientists can serve important and legitimate functions on scientific advisory panels and should not be unjustifiably prevented from contributing to their work.

    Does the process map influence the outcome of quality improvement work? A comparison of a sequential flow diagram and a hierarchical task analysis diagram

    Background: Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map - sequential or hierarchical - affects healthcare practitioners' judgments. Methods: A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counter-balanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Results: Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram compared with the sequential diagram, and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. Conclusions: The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In quality improvement work it is important to carefully consider the type of process map to be used and to consider using more than one map to ensure that different aspects of the process are captured.
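    The two representations compared in the study can be sketched as data structures. This is a hypothetical illustration (the step names are invented, not taken from the study): a sequential flow diagram is a flat ordered list of steps, while a hierarchical task analysis groups the same steps under higher-level tasks.

    ```python
    # Hypothetical sketch: the same clinic process represented two ways.
    # Step and task names are invented for illustration.

    # Sequential flow diagram: a flat, ordered list of steps.
    sequential = [
        "book appointment",
        "take blood sample",
        "measure INR",
        "adjust warfarin dose",
        "record dose",
    ]

    # Hierarchical task analysis: tasks decomposed into subtasks.
    hierarchical = {
        "run anticoagulation clinic": {
            "administrative work": ["book appointment", "record dose"],
            "clinical work": ["take blood sample", "measure INR",
                              "adjust warfarin dose"],
        },
    }

    def leaf_steps(node):
        """Flatten a hierarchy back to its leaf-level steps."""
        if isinstance(node, list):
            return list(node)
        steps = []
        for subtask in node.values():
            steps.extend(leaf_steps(subtask))
        return steps

    # Both diagrams cover the same steps; only the organisation differs,
    # which is what the study suggests shapes practitioners' judgments.
    assert set(leaf_steps(hierarchical["run anticoagulation clinic"])) == set(sequential)
    ```

    The hierarchical form makes the clinical/administrative grouping explicit, which is one plausible reason the study's participants raised more concerns when viewing it.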