
    Updating XML Views

    Update operations over XML views are essential for applications using XML views. In this dissertation work, we provide scalable solutions to support updating through XML views defined over relational databases. In particular, we focus on the update-public semantic, where updates are always public (made to the public database), and the update-local semantic, where update effects are first kept local and then made public as and when required. Towards this, we propose the clean extended-source theory for determining whether a correct view update translation exists, which then serves as a theoretical foundation for us to design practical XML view updating algorithms. Under the update-public semantic, state-of-the-art view updating works identify the correct update translation purely from the data. We instead take a schema-centric approach, which utilizes the schema of the underlying source to effectively prune updates that are guaranteed not to be translatable and to pass updates that are guaranteed to be translatable directly to the SQL engine. Only those updates that cannot be classified using schema knowledge are finally analyzed by examining the data. This required data-level check is further optimized under schema guidance to prune the search space for finding a correct translation. As the first work addressing the update-local semantic, we propose a practical framework, called LoGo. LoGo localizes the view update translation while preserving the properties that views are side-effect free and updates are always translatable. LoGo also supports on-demand merging of the local database of the subject view into the public database (also called the global database), while still guaranteeing that the subject view is free of side effects. A flexible synchronization service is provided in LoGo that enables all other views defined over the same public database to be refreshed, i.e., synchronized with the publicly committed changes, if so desired. Further, given that XML is an ordered data model, we propose an order-sensitive solution named O-HUX to support XML view updating with order. We have implemented the algorithms, along with respective optimization techniques. Experimental results confirm the effectiveness of the proposed services and highlight their performance characteristics.
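
    As a rough illustration of the schema-centric idea described above (a minimal sketch, not the dissertation's actual algorithms), the Python code below defines an XML view over a hypothetical book table and translates a view-level price update into a SQL UPDATE. Because each <book> element maps one-to-one onto a base tuple via its primary key, schema knowledge alone suffices to accept the update; the table, view shape, and mapping are all invented for this example.

        import sqlite3
        import xml.etree.ElementTree as ET

        # Hypothetical base table; names are illustrative, not from the dissertation.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, price REAL)")
        conn.execute("INSERT INTO book VALUES (1, 'XML in a Nutshell', 29.95)")

        def xml_view(conn):
            """Materialize a simple XML view over the book table."""
            root = ET.Element("books")
            for bid, title, price in conn.execute("SELECT id, title, price FROM book"):
                b = ET.SubElement(root, "book", id=str(bid))
                ET.SubElement(b, "title").text = title
                ET.SubElement(b, "price").text = str(price)
            return root

        def translate_update(conn, book_id, new_price):
            """Translate a view-level price update into a SQL UPDATE.

            Because <book> elements map one-to-one onto base tuples via the
            primary key, schema knowledge alone guarantees this update is
            translatable and side-effect free; no data-level check is needed.
            """
            conn.execute("UPDATE book SET price = ? WHERE id = ?", (new_price, book_id))

        translate_update(conn, 1, 24.95)
        print(ET.tostring(xml_view(conn), encoding="unicode"))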

    APFEL Web: a web-based application for the graphical visualization of parton distribution functions

    We present APFEL Web, a web-based application designed to provide a flexible, user-friendly tool for the graphical visualization of parton distribution functions (PDFs). In this note we describe the technical design of the APFEL Web application, motivating the choices and the framework used for the development of this project. We document the basic usage of APFEL Web and show how it can be used to provide useful input for a variety of collider phenomenological studies. Finally, we provide some examples showing the output generated by the application.
    Comment: Final version, matches published version in JPhysG. Web application available from http://apfel.mi.infn.it
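
    The actual application plots real PDF sets via the APFEL library; purely for illustration, the sketch below plots made-up functional forms of x f(x) with matplotlib, mimicking the kind of log-scale PDF plot APFEL Web produces. Every parameter value here is invented.

        import numpy as np
        import matplotlib.pyplot as plt

        # Toy stand-in for a parton distribution: xf(x) = A * x^a * (1 - x)^b.
        # The parameters below are illustrative, not a fitted PDF set.
        x = np.logspace(-4, 0, 200)

        def toy_xf(x, A, a, b):
            return A * x**a * (1.0 - x)**b

        plt.figure()
        plt.plot(x, toy_xf(x, 0.5, -0.2, 3.0), label="toy gluon-like")
        plt.plot(x, toy_xf(x, 0.8, 0.5, 3.5), label="toy valence-like")
        plt.xscale("log")
        plt.xlabel("x")
        plt.ylabel("x f(x)")
        plt.legend()
        plt.title("Toy PDF shapes (illustrative only)")
        plt.show()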

    The DeepThought Core Architecture Framework

    The research performed in the DeepThought project aims at demonstrating the potential of deep linguistic processing when combined with shallow methods for robustness. Classical information retrieval is extended by high-precision concept indexing and relation detection. On the basis of this approach, the feasibility of three ambitious applications will be demonstrated, namely: precise information extraction for business intelligence; email response management for customer relationship management; and creativity support for document production and collective brainstorming. Common to these applications, and the basis for their development, is the XML-based, RMRS-enabled core architecture framework that is described in detail in this paper. The framework is not limited to the applications envisaged in the DeepThought project; it can also be employed, e.g., to generate and make use of XML standoff annotation of documents and linguistic corpora, and in general for a wide range of NLP-based applications and research purposes.
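
    To make the standoff idea concrete, here is a minimal sketch (the element and attribute names are invented, not DeepThought's actual RMRS markup): annotations reference character offsets into the untouched source text instead of wrapping it inline, so multiple annotation layers can coexist over one document.

        import xml.etree.ElementTree as ET

        text = "DeepThought combines deep and shallow processing."

        # Standoff annotations point into the text by character offset instead
        # of modifying it; element and attribute names are illustrative only.
        annotations = [
            {"type": "token", "start": 0, "end": 11},         # "DeepThought"
            {"type": "named-entity", "start": 0, "end": 11},  # project name
        ]

        root = ET.Element("standoff")
        ET.SubElement(root, "text").text = text
        layer = ET.SubElement(root, "annotations")
        for a in annotations:
            ET.SubElement(layer, "ann", type=a["type"],
                          start=str(a["start"]), end=str(a["end"]))

        # Recover the annotated spans from their offsets.
        for a in annotations:
            print(a["type"], "->", text[a["start"]:a["end"]])

        print(ET.tostring(root, encoding="unicode"))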

    Semi Automated Partial Credit Grading of Programming Assignments

    The grading of student programs is a time-consuming process. As class sizes continue to grow, especially in entry-level courses, manually grading student programs has become an even more daunting challenge. Adding to the difficulty of grading are the needs of graphical and interactive programs such as those used as part of the UNH Computer Science curriculum (and various textbooks). There are existing tools that support the grading of introductory programming assignments (TAME and Web-CAT). There are also frameworks that can be used to test student code (JUnit, Tester, and TestNG). While these programs and frameworks are helpful, they have little or no support for programs that use real data structures or that have interactive or graphical features. In addition, the automated tests in all these tools provide only “all or nothing” evaluation. This is a significant limitation in many circumstances. Moreover, there is little or no support for dynamic alteration of grading criteria, which means that refactoring of test classes after deployment is not easily done. Our goal is to create a framework that addresses these weaknesses. This framework needs to: 1. Support assignments that have interactive and graphical components. 2. Handle data structures in student programs such as lists, stacks, trees, and hash tables. 3. Be able to assign partial credit automatically when the instructor can predict errors in advance. 4. Provide additional answer-clustering information to help graders identify and assign consistent partial credit for incorrect output that was not predefined. Most importantly, these tools, collectively called RPM (short for Rapid Program Management), should interface effectively with our current grading support framework without requiring large amounts of rewriting or refactoring of test code.
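
    A minimal sketch of requirements 3 and 4 (the rubric layout, point values, and function names are invented for illustration and are not RPM's API): predicted wrong answers earn a predefined fraction of credit, and unrecognized outputs are clustered so a grader can score each distinct answer once.

        from collections import defaultdict

        # Hypothetical rubric: the expected output, plus predicted wrong answers
        # that earn partial credit. All values are illustrative, not from RPM.
        RUBRIC = {
            "correct": ("42", 1.0),
            "predicted_errors": {
                "41": 0.5,    # off-by-one, predicted by the instructor
                "-42": 0.25,  # sign error, predicted by the instructor
            },
        }

        def grade(output: str) -> float | None:
            """Return credit for a known output, or None if it needs clustering."""
            answer, full = RUBRIC["correct"]
            if output == answer:
                return full
            return RUBRIC["predicted_errors"].get(output)

        def cluster_unknowns(outputs: list[str]) -> dict[str, list[int]]:
            """Group unrecognized outputs so a grader scores each cluster once."""
            clusters = defaultdict(list)
            for i, out in enumerate(outputs):
                if grade(out) is None:
                    clusters[out].append(i)
            return dict(clusters)

        student_outputs = ["42", "41", "forty-two", "forty-two", "-42"]
        print([grade(o) for o in student_outputs])  # [1.0, 0.5, None, None, 0.25]
        print(cluster_unknowns(student_outputs))    # {'forty-two': [2, 3]}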

    HepData reloaded: reinventing the HEP data archive

    We describe the status of the HepData database system, following a major re-development in time for the advent of LHC data. The new HepData system benefits from the use of modern database and programming-language technologies, as well as a variety of high-quality tools for interfacing the data sources and their presentation, primarily via the Web. The new back-end provides much more flexible and semantic data representations than before, on which new external applications can be built to respond to the data demands of the LHC experimental era. The HepData re-development was largely motivated by a desire to have a single source of reference data for Monte Carlo validation and tuning tools, whose status and connection to HepData we also briefly review.
    Comment: 7 pages, 3 figures. Presented at the 13th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2010), February 22-27, 2010, Jaipur, India
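
    To illustrate what a flexible, semantic representation of a reference measurement might look like (the field names below are invented for this sketch and do not match HepData's actual schema), consider a binned data table that a Monte Carlo validation tool could compare against predictions bin by bin:

        from dataclasses import dataclass, field

        # Illustrative record structure for a binned measurement; field names
        # are invented for this sketch, not taken from HepData's schema.
        @dataclass
        class Bin:
            low: float
            high: float
            value: float
            stat_unc: float
            sys_unc: float = 0.0

        @dataclass
        class DataTable:
            observable: str
            reaction: str
            bins: list[Bin] = field(default_factory=list)

        table = DataTable(
            observable="d(sigma)/d(pT) [pb/GeV]",
            reaction="p p --> jet X",
            bins=[
                Bin(20.0, 30.0, 1.2e3, 4.0e1),
                Bin(30.0, 45.0, 3.1e2, 1.5e1, sys_unc=2.0e1),
            ],
        )

        # Combine statistical and systematic uncertainties in quadrature.
        for b in table.bins:
            total_unc = (b.stat_unc**2 + b.sys_unc**2) ** 0.5
            print(f"[{b.low}, {b.high}): {b.value} +- {total_unc:.3g}")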