1,077 research outputs found

    Almond volatiles attract neonate larvae of Anarsia lineatella (Zeller) (Lepidoptera: Gelechiidae)

    Get PDF
    Post-diapause overwintered larvae and neonates of any generation of the peach twig borer, Anarsia lineatella (Zeller), seek suitable sites to bore into and mine tissue of their host plants, including almond and peach. We tested the hypothesis that larvae are attracted to the same almond volatiles that elicit antennal responses from adult moths. Of five candidate almond semiochemicals [β-bourbonene, (E,E)-α-farnesene, (E)-β-ocimene, nonanal, decenal] tested singly or in binary combination (nonanal, decenal) in laboratory Y-tube olfactometers, only β-bourbonene attracted neonate larvae. β-Bourbonene in combination with (E,E)-α-farnesene was as attractive as the complete almond volatile blend, indicating that these two compounds are key semiochemicals for foraging larvae.

    Which mathematics for the Information Society?

    Get PDF
    MathIS is a new project that aims to reinvigorate secondary-school mathematics by exploiting insights into the dynamics of algorithmic problem solving. This paper describes the main ideas that underpin the project. In summary, we propose a central role for formal logic, the development of a calculational style of reasoning, an emphasis on the algorithmic nature of mathematics, and the promotion of self-discovery by the students. These ideas are discussed and the case is made, through a number of examples that show the teaching style that we want to introduce, for their relevance in shaping mathematics training for the years to come. In our opinion, the education of software engineers who work effectively with formal methods and mathematical abstractions should start before university and would benefit from the ideas discussed here. Long-term collaboration with J. N. Oliveira on calculational approaches to mathematics is gratefully acknowledged. We are also grateful to the anonymous referees for their valuable comments. This research was supported by FCT (the Portuguese Foundation for Science and Technology), in the context of the MATHIS Project under contract PTDC/EIA/73252/2006. The work of João F. Ferreira and Alexandra Mendes was further supported by FCT grants SFRH/BD/24269/2005 and SFRH/BD/29553/2006, respectively.

    Using domain-independent problems for introducing formal methods

    Get PDF
    The key to the integration of formal methods into engineering practice is education. In teaching, domain-independent problems (i.e., problems not requiring prior engineering background) offer many advantages. Such problems are widely available, but this paper adds two dimensions that are lacking in typical solutions yet are crucial to formal methods: (i) the translation of informal statements into formal expressions; (ii) the role of formal calculation (including proofs) in exposing risks or misunderstandings and in discovering pathways to solutions. A few example problems illustrate this: (a) a small logical one showing the importance of fully capturing informal statements; (b) a combinatorial one showing how, in going from "real-world" formulations to mathematical ones, formal methods can cover more aspects than classical mathematics, with a half-page formal program semantics suitable for beginners presented as support; (c) a larger one showing how a single problem can contain enough elements to serve as a Leitmotiv for all notational and reasoning issues in a complete introductory course. An important final observation is that, in teaching formal methods, no approach can be a substitute for an open mind, as extreme mathphobia appears resistant to any motivation.
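    As a hedged illustration of point (i), not taken from the paper: even the everyday statement "you may have coffee or tea" is ambiguous between inclusive and exclusive disjunction, and writing it formally forces the choice. In Lean notation, with hypothetical propositions Coffee and Tea:

    ```lean
    -- Not from the paper: formalizing "you may have coffee or tea".
    -- The exclusive reading is strictly stronger than the inclusive one:
    example (Coffee Tea : Prop)
        (h : (Coffee ∨ Tea) ∧ ¬(Coffee ∧ Tea)) : Coffee ∨ Tea :=
      h.1

    -- The converse fails: from Coffee ∨ Tea alone, one cannot rule out
    -- having both, so the two formalizations are genuinely different.
    ```

    Making this kind of ambiguity visible is exactly what "fully capturing informal statements" demands of a formalization.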

    Random and exhaustive generation of permutations and cycles

    Full text link
    In 1986 S. Sattolo introduced a simple algorithm for the uniform random generation of cyclic permutations on a fixed number of symbols. This algorithm is very similar to the standard method for generating a random permutation, but is less well known. We consider both methods in a unified way, and discuss their relation to exhaustive generation methods. We analyse several random variables associated with the algorithms and find their grand probability generating functions, which give easy access to moments and limit laws. Comment: 9 pages
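    The similarity between the two algorithms is easy to see in code. A minimal Python sketch (not taken from the paper), in which the only difference is the range from which the random index j is drawn:

    ```python
    import random

    def fisher_yates(a):
        """Standard shuffle: a uniformly random permutation, in place."""
        for i in range(len(a) - 1, 0, -1):
            j = random.randrange(i + 1)   # j may equal i
            a[i], a[j] = a[j], a[i]
        return a

    def sattolo(a):
        """Sattolo's variant: a uniformly random *cyclic* permutation.

        Drawing j strictly below i guarantees the result is a single n-cycle.
        """
        for i in range(len(a) - 1, 0, -1):
            j = random.randrange(i)       # j < i: the only change
            a[i], a[j] = a[j], a[i]
        return a
    ```

    Reading a result p as the map i ↦ p[i], Sattolo's variant always yields a single n-cycle, whereas the standard shuffle may produce fixed points or several shorter cycles.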

    ecocomDP: A flexible data design pattern for ecological community survey data

    Get PDF
    The idea of harmonizing data is not new. Decades of amassing data in databases according to community standards - both locally and globally - have been more successful for some research domains than others. It is particularly difficult to harmonize data across studies where sampling protocols vary greatly and complex environmental conditions need to be understood to apply analytical methods correctly. However, a body of long-term ecological community observations is increasingly becoming publicly available and has been used in important studies. Here, we discuss an approach to preparing harmonized community survey data by an environmental data repository, in collaboration with a national observatory. The workflow framework and repository infrastructure are used to create a decentralized, asynchronous model to reformat data without altering the original data through cleaning or aggregation, while retaining metadata about sampling methods and provenance, and enabling programmatic data access. This approach does not create another data ‘silo’ but will allow the repository to contribute subsets of available data to a variety of different analysis-ready data preparation efforts. With certain limitations (e.g., changes to the sampling protocol over time), data updates and downstream processing may be completely automated. In addition to supporting reuse of community observation data by synthesis science, a goal for this harmonization and workflow effort is to contribute these datasets to the Global Biodiversity Information Facility (GBIF) to increase the data's discovery and use.

    Probabilistic Topic Modeling of the Russian Text Corpus on Musicology

    Get PDF
    The paper describes the results of experiments on the development of a statistical model of the Russian text corpus on musicology. We construct a topic model based on Latent Dirichlet Allocation and process the corpus data with the help of the GenSim statistical toolkit. Results achieved in the course of the experiments allow us to distinguish general and special topics which describe the conceptual structure of the corpus in question, and to analyze paradigmatic and syntagmatic relations between lemmata within topics. The research discussed in the paper is supported by the grant of St.-Petersburg State University № 30.38.305.2014 «Quantitative linguistic parameters for defining stylistic characteristics and subject area of texts».
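    The authors fit their model with the GenSim toolkit; as a language-agnostic illustration of what LDA itself does, here is a toy collapsed Gibbs sampler in pure Python. This is a sketch under simplifying assumptions (tiny corpus, fixed hyperparameters), not the corpus pipeline used in the paper:

    ```python
    import random
    from collections import defaultdict

    def lda_gibbs(docs, num_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
        """Tiny collapsed Gibbs sampler for LDA over tokenized documents.

        Returns (ndk, nkw): per-document topic counts and per-topic word counts.
        """
        rng = random.Random(seed)
        V = len({w for d in docs for w in d})          # vocabulary size
        ndk = [[0] * num_topics for _ in docs]          # doc-topic counts
        nkw = [defaultdict(int) for _ in range(num_topics)]  # topic-word counts
        nk = [0] * num_topics                           # topic totals
        z = []                                          # topic assignment per token
        for di, d in enumerate(docs):                   # random initialization
            zs = []
            for w in d:
                t = rng.randrange(num_topics)
                zs.append(t)
                ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
            z.append(zs)
        for _ in range(iters):                          # resample each token's topic
            for di, d in enumerate(docs):
                for wi, w in enumerate(d):
                    t = z[di][wi]                       # remove current assignment
                    ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                    weights = [(ndk[di][k] + alpha) * (nkw[k][w] + beta)
                               / (nk[k] + V * beta) for k in range(num_topics)]
                    t = rng.choices(range(num_topics), weights)[0]
                    z[di][wi] = t                       # record new assignment
                    ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
        return ndk, nkw
    ```

    After sampling, the dominant entries of nkw give the characteristic words of each topic, which is the kind of output the paper's general/special topic analysis is built on.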

    A Discussion of Value Metrics for Data Repositories in Earth and Environmental Sciences

    Get PDF
    Despite growing recognition of the importance of public data to the modern economy and to scientific progress, long-term investment in the repositories that manage and disseminate scientific data in easily accessible ways remains elusive. Repositories are asked to demonstrate the net value of their data and services to justify continued funding or attract new funding sources. Here, representatives from a number of environmental and Earth science repositories evaluate approaches for assessing the costs and benefits of publishing scientific data in their repositories, identifying various metrics that repositories typically use to report on the impact and value of their data products and services, plus additional metrics that would be useful but are not typically measured. We rated each metric by (a) the difficulty of implementation by our specific repositories and (b) its importance for value determination. As managers of environmental data repositories, we find that some of the most easily obtainable data-use metrics (such as data downloads and page views) may be less indicative of value than metrics that relate to discoverability and broader use. Other intangible but equally important metrics (e.g., laws or regulations impacted, lives saved, new proposals generated) will require considerable additional research to describe and develop, plus resources to implement at scale. As value can only be determined from the point of view of a stakeholder, it is likely that multiple sets of metrics will be needed, tailored to specific stakeholder needs. Moreover, economically based analyses or the use of specialists in the field are expensive and can happen only as resources permit.

    Software Model Checking with Explicit Scheduler and Symbolic Threads

    Full text link
    In many practical application domains, the software is organized into a set of threads whose activation is exclusive and controlled by a cooperative scheduling policy: threads execute, without any interruption, until they either terminate or yield control explicitly to the scheduler. The formal verification of such software poses significant challenges. On the one hand, each thread may have an infinite state space, and might call for abstraction. On the other hand, the scheduling policy is often important for correctness, and an approach based on abstracting the scheduler may result in loss of precision and false positives. Unfortunately, the translation of the problem into a purely sequential software model checking problem turns out to be highly inefficient for the available technologies. We propose a software model checking technique that exploits the intrinsic structure of these programs. Each thread is translated into a separate sequential program and explored symbolically with lazy abstraction, while the overall verification is orchestrated by the direct execution of the scheduler. The approach is optimized by filtering the exploration of the scheduler with the integration of partial-order reduction. The technique, called ESST (Explicit Scheduler, Symbolic Threads), has been implemented and experimentally evaluated on a significant set of benchmarks. The results demonstrate that the ESST technique is considerably more effective than software model checking applied to the sequentialized programs, and that partial-order reduction can lead to further performance improvements. Comment: 40 pages, 10 figures, accepted for publication in the journal Logical Methods in Computer Science

    Automated Generation of User Guidance by Combining Computation and Deduction

    Full text link
    Herewith, a fairly old concept is published for the first time and named "Lucas Interpretation". It has been implemented in a prototype, which has proved useful in educational practice and has gained academic relevance with an emerging generation of educational mathematics assistants (EMA) based on Computer Theorem Proving (CTP). Automated Theorem Proving (ATP), i.e. deduction, is the most reliable technology used to check user input. However, ATP is inherently weak in automatically generating solutions for arbitrary problems in applied mathematics. This weakness is crucial for EMAs: when ATP finds user input incorrect and the learner gets stuck, the system should be able to suggest possible next steps. The key idea of Lucas Interpretation is to compute the steps of a calculation by following a program written in a novel CTP-based programming language, i.e. computation provides the next steps. User guidance is generated by combining deduction and computation: the latter is performed by a specific language interpreter, which works like a debugger and hands over control to the learner at breakpoints, i.e. at the tactics generating the steps of the calculation. The interpreter also builds up logical contexts providing ATP with the data required for checking user input, thus combining computation and deduction. The paper describes the concepts underlying Lucas Interpretation so that open questions can be adequately addressed, and prerequisites for further work are provided. Comment: In Proceedings THedu'11, arXiv:1202.453