Event Data Definition in LHCb
We present the approach used for defining the event object model for the LHCb
experiment. This approach is based on a high level modelling language, which is
independent of the programming language used in the current implementation of
the event data processing software. Several candidate object modelling
languages are evaluated, and the advantages of a dedicated XML-based model
over the other candidates are shown. After a description of
the language itself, we explain the benefits obtained by applying this approach
in the description of the event model of an experiment such as LHCb. Examples
of these benefits are uniform and coherent mapping of the object model to the
implementation language across the experiment software development teams, easy
maintenance of the event model, conformance to experiment coding rules, etc.
The description of the object model is parsed by a so-called front-end,
which feeds several back-ends. We give an introduction to
the model itself and to the currently implemented back-ends which produce
information like programming language specific implementations of event objects
or meta information about these objects. Meta information can be used for
introspection of objects at run-time which is essential for functionalities
like object persistency or interactive analysis. This object introspection
package for C++ has been adopted by the LCG project as the starting point for
the LCG object dictionary that is going to be developed in common for the LHC
experiments.
The current status of the event object modelling and its usage in LHCb are
presented and the prospects of further developments are discussed.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003; 7 pages, LaTeX, 2 eps figures. PSN
MOJT00
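The front-end/back-end pipeline described above can be sketched roughly as follows: a high-level XML description of an event object is parsed once, and a back-end emits a language-specific implementation. This is a minimal illustrative sketch, not the actual LHCb XML dialect; the class name, attribute tags, and `cpp_backend` function are all assumptions made for the example.

```python
# Hypothetical sketch of an XML object description fed through a front-end
# (the XML parser) into a C++-generating back-end. Element and attribute
# names are illustrative, not the real LHCb schema.
import xml.etree.ElementTree as ET

MODEL = """
<class name="MCParticle" desc="Monte Carlo particle">
  <attribute name="momentum" type="double" desc="particle momentum"/>
  <attribute name="charge" type="int" desc="electric charge"/>
</class>
"""

def cpp_backend(xml_text: str) -> str:
    """Back-end that turns the parsed model into a C++ class skeleton."""
    cls = ET.fromstring(xml_text)
    lines = [f"// {cls.get('desc')}", f"class {cls.get('name')} {{", "private:"]
    for attr in cls.findall("attribute"):
        lines.append(
            f"  {attr.get('type')} m_{attr.get('name')};  // {attr.get('desc')}"
        )
    lines.append("};")
    return "\n".join(lines)

print(cpp_backend(MODEL))
```

A second back-end could walk the same parsed tree and emit meta information (attribute names and types) for run-time introspection instead of source code, which is the uniform-mapping benefit the abstract describes.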
Data Definitions in the ACL2 Sedan
We present a data definition framework that enables the convenient
specification of data types in ACL2s, the ACL2 Sedan. Our primary motivation
for developing the data definition framework was pedagogical. We were teaching
undergraduate students how to reason about programs using ACL2s and wanted to
provide them with an effective method for defining, testing, and reasoning
about data types in the context of an untyped theorem prover. Our framework is
now routinely used not only for pedagogical purposes, but also by advanced
users.
Our framework concisely supports common data definition patterns, e.g. list
types, map types, and record types. It also provides support for polymorphic
functions. A distinguishing feature of our approach is that we maintain both a
predicative and an enumerative characterization of data definitions.
In this paper we present our data definition framework via a sequence of
examples. We give a complete characterization in terms of tau rules of the
inclusion/exclusion relations a data definition induces, under suitable
restrictions. The data definition framework is a key component of
counterexample generation support in ACL2s, but can be independently used in
ACL2, and is available as a community book.
Comment: In Proceedings ACL2 2014, arXiv:1406.123
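The dual predicative/enumerative characterization mentioned above can be illustrated outside ACL2s. The following Python sketch (the names `nat_listp` and `nth_nat_list` and the particular enumeration scheme are assumptions for illustration, not ACL2s syntax) pairs a recognizer predicate for "list of naturals" with an enumerator mapping a natural number to a value of the type — the shape that makes counterexample generation possible.

```python
# Predicative view: a recognizer that decides type membership.
def nat_listp(x):
    """Recognize lists of natural numbers."""
    return isinstance(x, list) and all(isinstance(e, int) and e >= 0 for e in x)

# Enumerative view: map a natural number n to *some* n-th value of the type.
def nth_nat_list(n):
    """One of many possible enumerations: peel off bits of n+1 to build a
    small list of 0/1 digits, so early indices yield short lists."""
    xs = []
    n += 1
    while n > 1:
        n, digit = divmod(n, 2)
        xs.append(digit)
    return xs

# Every enumerated value satisfies the predicate:
assert all(nat_listp(nth_nat_list(i)) for i in range(100))
```

Counterexample generation then amounts to enumerating candidate values and filtering them through the predicates of a conjecture's hypotheses.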
Using SVG and XSLT for graphic representation
In this paper we present an XML-based framework that can be used to produce graphical visualisations of scientific data. Rather than producing ordinary histogram and function diagram graphs, the approach tries to represent the information in a more graphically appealing and easy-to-understand way. For example, the approach can represent a temperature as the level of coloured fluid in a thermometer.
The proposed framework keeps the values of the data strictly separated from the visual form of their representation (positions of elements, colours, visual style, etc.).
By defining appropriate data structures and expressing them using XML, the framework gives the user the ability to create graphic representations using standard SVG and XSLT.
Since XML can be used for describing complex data information, we represent every level of the graphic representation with an XML structure.
To describe our architecture we defined the following XML dialects, each with different markup tags reflecting the semantic roles of the elements.
Data definition level. Used to define the values of the data that can be used in the graphic representation.
Data representation level. Used to define the graphic representation; it defines how the values expressed at the data definition level are represented.
Both data representation and data definition files are based on a DTD to impose the constraints.
Data representation level is the core of the system, and defines a powerful language for representation.
Source primitives. Used to define the source of the graphic elements, for example a static file or SVG code.
Modification primitives. Used to define the modifications that can affect a graphic element, for example rotation, scaling or repetition.
Disposition primitives. Used to define the possible dispositions along the x, y and z axes, for example to impose an order on the representation of elements.
Action primitives. Used to define the possible actions that can be activated by graphic elements for different user behaviours. For example, a mouse action can activate a link to a different resource, change the value of any of the other primitives of the data structure (such as the image source or disposition), or show a tooltip.
XSLT is used to output an SVG file derived from the two files describing the graphic representation.
Our aim is to provide an abstract language that can represent the same concept in different ways. In fact, we can link a data definition file with different data representation levels, providing different kinds and levels of complexity for the same concept. An example is the temperature representation described before, where the temperature could be shown either as the level of mercury in a thermometer or as the rotation of a needle in a gauge.
The transformation process turns an XML source tree into an XML result tree, using XPath to define patterns. The XSLT transformation process is based on templates, which define actions (such as adding, removing, or sorting elements) to be performed when part of the document matches a template.
To implement some of the complex graphics operations we use XSLT extensions that allow mathematical operations to be performed.
These XSLT extensions are not yet standard and require a compliant processor such as Apache Xalan, which allows the developer to interface with Java classes, extending the areas of application of XSLT from simple node transformations to quite complex operations.
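The thermometer example above can be sketched concretely: a data-definition document holds only the value, and a transformation maps it to SVG geometry. The framework itself uses XSLT; since Python's standard library has no XSLT engine, this sketch performs the equivalent transformation in code, and all element and attribute names are illustrative assumptions.

```python
# Minimal sketch: the data value lives in its own XML document, and the
# transformation decides how it is visualised (here, as the fluid level of
# a thermometer-like column). Keeping the two separate mirrors the paper's
# data definition / data representation split.
import xml.etree.ElementTree as ET

DATA = '<data><value name="temperature" min="0" max="50">35</value></data>'

def to_svg(data_xml: str, height: int = 100) -> str:
    v = ET.fromstring(data_xml).find("value")
    lo, hi = float(v.get("min")), float(v.get("max"))
    level = (float(v.text) - lo) / (hi - lo)   # normalise value to 0..1
    fluid = int(level * height)                # height of the fluid column
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="20" height="{height}">'
        f'<rect x="5" y="0" width="10" height="{height}" fill="none" stroke="black"/>'
        f'<rect x="5" y="{height - fluid}" width="10" height="{fluid}" fill="red"/>'
        "</svg>"
    )

print(to_svg(DATA))
```

Swapping `to_svg` for a different representation (say, a rotated needle in a gauge) changes only the transformation, never the data file — the separation the framework is built around.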
Repository Analytics and Metrics Portal (RAMP) Workflow Documentation and Data Definition
The Repository Analytics & Metrics Portal (RAMP) is a web service that leverages Google Search Console (GSC) data to provide a set of baseline search engine performance metrics for a global, cross-platform group of institutional repositories (IR). Since launching in 2017, RAMP has grown from 3 to more than 50 participating repositories. The underlying data are unique in scope and size, and offer many opportunities for novel analyses of IR search engine performance. The data may be augmented to enable additional analyses, including metadata mining and bibliometrics. In November 2019, the RAMP team released a publicly available subset of the RAMP dataset, consisting of daily GSC data for 35 participating repositories harvested between January 1 and May 31, 2019. The purpose of this article is to provide information and increased transparency about how RAMP data are harvested, processed, and audited for quality control. This article is also intended to serve as more extensive, complementary documentation for the published dataset and any published research findings that use RAMP data.
ERD Generator Application from a DDL (Data Definition Language) Script
ABSTRACT: Database processing is needed by many institutions and companies. A database not only speeds up access to information; it also improves service to customers. For companies, this advantage can increase their competitiveness against other companies, which is why many companies that used manual processing have turned to databases.
Accordingly, database reverse engineering has become a necessity for database designers who need to understand the structure of a database. This structure is commonly modelled as an Entity Relationship Diagram (ERD). The graphical visualisation of a database structure in an ERD can use many notations to make it easier to understand; one easily understood notation is the ERD notation of Igor T. Hawryszkiewycz in his book Relational Database Design: An Introduction. This final project implemented an ERD generator application as one solution to support database reverse engineering. The application receives a DDL script as input and produces a graphical ERD as output. Using scanning and parsing techniques for processing, and graphics for drawing the ERD, the application can generate an ERD that matches the input DDL script. Testing showed that the application generates ERDs correctly, so users can check whether the database they have built matches the original design.
Keywords: Database, ERD, DDL
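The scanning/parsing step the abstract describes can be sketched as follows: extract entities and relationships from a DDL script, which a drawing layer would then render as an ERD. This is a hedged, simplified sketch — real SQL grammars are far richer, and this regex-based scanner only handles the plain `CREATE TABLE` / `FOREIGN KEY` shapes shown; the function and table names are illustrative.

```python
# Extract ERD building blocks from a DDL script: tables become entities,
# foreign keys become relationships between them.
import re

DDL = """
CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50));
CREATE TABLE orders (
  id INT PRIMARY KEY,
  customer_id INT,
  FOREIGN KEY (customer_id) REFERENCES customer(id)
);
"""

def parse_ddl(script):
    entities, relations = [], []
    for name, body in re.findall(r"CREATE TABLE (\w+)\s*\((.*?)\);", script, re.S):
        entities.append(name)
        for col, target in re.findall(
            r"FOREIGN KEY\s*\((\w+)\)\s*REFERENCES\s+(\w+)", body
        ):
            relations.append((name, target, col))  # child, parent, via column
    return entities, relations

entities, relations = parse_ddl(DDL)
print(entities)   # ERD entities
print(relations)  # ERD relationships
```

A drawing component would take `entities` and `relations` and lay them out in the chosen ERD notation — the part the application implements with graphics routines.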
COMPARATIVE ANALYSIS OF THE EFFICIENCY OF THE COPY, INPLACE, AND INSTANT DATA DEFINITION LANGUAGE ALGORITHMS IN THE MYSQL DATABASE
MySQL database schema changes are becoming more frequent than ever: four out of five application updates (releases) require corresponding database changes. Schema changes are often a repetitive task, for example a request from the application team to add or modify a column in a table, among many other cases. This study measures the time efficiency of modifying the schema of a sales transaction table of 50,000 records in the MySQL database under the Data Definition Language (DDL) COPY, INPLACE, and INSTANT algorithms.
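The comparison described above relies on MySQL letting the caller request a DDL algorithm explicitly via `ALTER TABLE ... , ALGORITHM=...`. The sketch below builds such statements and times one schema change per algorithm through a caller-supplied executor; no database connection is made here, and the `run_sql` callback plus the table and column names are illustrative placeholders, not the study's actual harness.

```python
# Time the same schema change under each MySQL online-DDL algorithm.
import time

ALGORITHMS = ("COPY", "INPLACE", "INSTANT")

def alter_statement(table, column, algorithm):
    """Build an ALTER TABLE that pins the DDL algorithm explicitly."""
    return (f"ALTER TABLE {table} ADD COLUMN {column} VARCHAR(50), "
            f"ALGORITHM={algorithm}")

def time_ddl(run_sql, table, column):
    """Run one schema change per algorithm and record wall-clock time."""
    timings = {}
    for algo in ALGORITHMS:
        stmt = alter_statement(table, f"{column}_{algo.lower()}", algo)
        start = time.perf_counter()
        run_sql(stmt)                      # e.g. cursor.execute with a real connection
        timings[algo] = time.perf_counter() - start
    return timings

print(alter_statement("sales_transaction", "note", "INSTANT"))
```

With a real MySQL 8.0 connection, `run_sql` would be the cursor's execute method; INSTANT changes only metadata, which is why it is expected to dominate COPY (full table rebuild) and INPLACE for supported operations.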
Translation of semantic aspects of OODINI graphical representation to ONTO OODB data definition language
In this thesis we present a system to translate the semantic elements in the graphical schema language of OODINI from the API of OODAL to the type definitions of ONTOS DB. To translate the semantic constraints of the graphical language, we attach additional information to the existing class data structure in the API of OODAL. After a brief review of OODINI, ONTOS DB, and the existing translator, which lacks the ability to translate semantic constraints, we describe in detail the methods to translate the essential relationship, dependent relationship, multi-valued essential relationship, and multi-valued dependent relationship. We employ an Inverse Reference to a Set of Type to achieve this goal. The Setof and Tupleof relationships are special cases of the above relationships. To validate the result of the translation, we give examples of the translation of a schema containing each of the relationships discussed.