Modeling of Traceability Information System for Material Flow Control Data.
This paper focuses on data modeling for the traceability of material/work flow in the information layer of a manufacturing control system. The model can trace all associated data throughout product manufacturing, from order to final product. Dynamic data processing of Quality and Purchase activities is considered in the data model, as are Order and Operation data based on lot particulars. The modeling consists of four steps whose results are integrated into one final model. Entity-Relationship modeling is proposed as the data modeling methodology, and the model is re-engineered with the Toad Data Modeler software in the physical modeling step. The developed model promises to handle the fundamental issues of a traceability system effectively: it supports customization and real-time control of material flow at all levels of the manufacturing process, and through enhanced visibility and dynamic storage/retrieval of data it serves all traceability uses and applications. The designed solution is initially applicable as a reference data model in similar lot-based traceability systems.
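The abstract's lot-based entities (Order, Lot, Operation, Quality) can be illustrated with a minimal relational sketch. All table, column, and sample-data names below are our own assumptions for illustration, not the paper's actual schema:

```python
import sqlite3

# Hypothetical lot-based traceability schema: one order spawns lots,
# lots undergo operations, operations receive quality checks.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_order (
    order_id INTEGER PRIMARY KEY,
    product  TEXT NOT NULL
);
CREATE TABLE lot (
    lot_id   INTEGER PRIMARY KEY,
    order_id INTEGER REFERENCES customer_order(order_id),
    material TEXT NOT NULL
);
CREATE TABLE operation (
    op_id    INTEGER PRIMARY KEY,
    lot_id   INTEGER REFERENCES lot(lot_id),
    station  TEXT
);
CREATE TABLE quality_check (
    qc_id  INTEGER PRIMARY KEY,
    op_id  INTEGER REFERENCES operation(op_id),
    result TEXT CHECK (result IN ('pass', 'fail'))
);
""")

# One traced lot from order to final quality check (illustrative data).
conn.execute("INSERT INTO customer_order VALUES (1, 'gearbox')")
conn.execute("INSERT INTO lot VALUES (10, 1, 'steel-42CrMo4')")
conn.execute("INSERT INTO operation VALUES (100, 10, 'milling')")
conn.execute("INSERT INTO quality_check VALUES (1000, 100, 'pass')")

# Tracing a lot = joining its history back to the originating order.
row = conn.execute("""
    SELECT o.product, l.material, op.station, qc.result
    FROM customer_order o
    JOIN lot l            ON l.order_id = o.order_id
    JOIN operation op     ON op.lot_id  = l.lot_id
    JOIN quality_check qc ON qc.op_id   = op.op_id
    WHERE l.lot_id = 10
""").fetchone()
print(row)  # ('gearbox', 'steel-42CrMo4', 'milling', 'pass')
```

The foreign-key chain is what makes both forward tracing (order to lots) and backward tracing (defective part to source order) a matter of joins rather than manual record matching.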
Clinical data wrangling using Ontological Realism and Referent Tracking
Ontological realism aims at developing high-quality ontologies that faithfully represent what is general in reality, and at using these ontologies to render heterogeneous data collections comparable. Achieving this second goal for clinical research datasets presupposes not merely (1) that the requisite ontologies already exist, but also (2) that the datasets in question are faithful to reality in the dual sense that (a) they denote only particulars and relationships between particulars that do in fact exist and (b) they do this in terms of the types and type-level relationships described in these ontologies. While much attention has been devoted to (1), work on (2), which is the topic of this paper, is comparatively rare. Using Referent Tracking as a basis, we describe a technical data wrangling strategy which consists in creating for each dataset a template that, when applied to each particular record in the dataset, leads to the generation of a collection of Referent Tracking Tuples (RTTs) built out of unique identifiers for the entities described by means of the data items in the record. The proposed strategy is based on (i) the distinction between data and what data are about, and (ii) the explicit descriptions of portions of reality which RTTs provide and which range not only over the particulars described by data items in a dataset, but also over these data items themselves. This last feature allows us to describe particulars that are only implicitly referred to by the dataset, to provide information about correspondences between data items in a dataset, and to assert which data items are unjustifiably or redundantly present in, or absent from, the dataset. The approach has been tested on a dataset collected from patients seeking treatment for orofacial pain at two German universities and made available for the NIDCR-funded OPMQoL project.
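The template-to-tuples idea can be sketched in a few lines. This is a loose illustration only: the field names, type labels, and tuple-kind tags below are our assumptions, not the formal Referent Tracking tuple families or the project's actual template:

```python
import itertools

# Each particular mentioned by a record gets a minted unique identifier
# (an "IUI" in Referent Tracking terminology); the template drives which
# tuples are emitted per field.
_counter = itertools.count(1)

def mint_iui():
    return f"IUI-{next(_counter):04d}"

# Template: field name -> ontology type assumed for the field's referent.
TEMPLATE = [
    ("patient_id", "homo sapiens"),
    ("pain_score", "pain intensity measurement datum"),
]

def wrangle(record):
    tuples = []
    for field, type_label in TEMPLATE:
        if field not in record:
            # Absence is itself asserted, so gaps in the dataset become
            # explicit, queryable statements rather than silent omissions.
            tuples.append(("A-tuple-missing", field))
            continue
        iui = mint_iui()
        tuples.append(("U-tuple", iui, type_label))            # referent is an instance of the type
        tuples.append(("D-tuple", iui, field, record[field]))  # data item is about the referent
    return tuples

rtts = wrangle({"patient_id": "P-007", "pain_score": 6})
rtts2 = wrangle({"patient_id": "P-008"})  # pain_score absent -> explicit absence tuple
```

Because the tuples range over both the referents and the data items themselves, the same machinery can flag redundant or unjustified entries, not just describe what the data are about.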
To dash or to dawdle: verb-associated speed of motion influences eye movements during spoken sentence comprehension
In describing motion events, verbs of manner provide information about the speed of agents or objects in those events. We used eye tracking to investigate how inferences about this verb-associated speed of motion would influence the time course of attention to a visual scene that matched an event described in language. Eye movements were recorded as participants heard spoken sentences with verbs that implied a fast (“dash”) or slow (“dawdle”) movement of an agent towards a goal. These sentences were heard whilst participants concurrently looked at scenes depicting the agent and a path which led to the goal object. Our results indicate a mapping of events onto the visual scene consistent with participants mentally simulating the movement of the agent along the path towards the goal: when the verb implies a slow manner of motion, participants look more often and longer along the path to the goal; when the verb implies a fast manner of motion, participants tend to look earlier at the goal and less along the path. These results reveal that event comprehension in the presence of a visual world involves establishing and dynamically updating the locations of entities in response to linguistic descriptions of events.
Modelling Provenance of Sensor Data for Food Safety Compliance Checking
The research described here was funded by an award made by the RCUK IT as a Utility Network+ (EP/K003569/1) and the UK Food Standards Agency. We thank the owner and staff of the Rye & Soda restaurant, Aberdeen, for their support throughout the project.
A core ontology for business process analysis
Business Process Management (BPM) aims at supporting the whole life-cycle necessary to deploy and maintain business processes in organisations. An important step of the BPM life-cycle is the analysis of the processes deployed in companies. However, the degree of automation currently achieved cannot support the level of adaptation required by businesses. Initial steps have been taken towards including some form of automated reasoning within Business Process Analysis (BPA), but this is typically limited to using taxonomies. We present a core ontology aimed at enhancing the state of the art in BPA. The ontology builds upon a Time Ontology and is structured around the process, resource, and object perspectives typically adopted when analysing business processes. The ontology has been extended and validated by means of an Events Ontology and an Events Analysis Ontology aimed at capturing the audit trails generated by Process-Aware Information Systems and deriving additional knowledge.
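The idea of deriving knowledge from audit trails along the process and resource perspectives can be sketched concretely. The event fields, activity names, and derived measures below are our own illustrative assumptions, not the paper's ontology:

```python
from datetime import datetime

# A toy audit trail as emitted by a Process-Aware Information System:
# (process_instance, activity, resource, timestamp) per event.
events = [
    ("po-1", "receive order", "alice", "2024-03-01T09:00"),
    ("po-1", "approve order", "bob",   "2024-03-01T11:30"),
    ("po-2", "receive order", "alice", "2024-03-01T09:15"),
]

def instance_duration(instance):
    # Process perspective: cycle time of one process instance,
    # derived from its first and last recorded events.
    ts = [datetime.fromisoformat(t) for i, _, _, t in events if i == instance]
    return max(ts) - min(ts)

def workload(resource):
    # Resource perspective: how many events a resource participated in.
    return sum(1 for _, _, r, _ in events if r == resource)

print(instance_duration("po-1"))  # 2:30:00
print(workload("alice"))          # 2
```

An ontology-backed analysis goes further than such ad-hoc queries by typing events, activities, and resources against shared definitions, so the same derived measures become comparable across systems.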
The Xeros data model: tracking interpretations of archaeological finds
At an archaeological dig, interpretations are built around discovered artifacts based on measurements and informed intuition. These interpretations are semi-structured and organic, yet existing tools do not capture their creation or evolution. Patina of Notes (PoN) is an application designed to tackle this, and is underpinned by the Xeros data model. Xeros is a graph structure and a set of operations that handle the addition, modification, and removal of interpretations. The data model is a specialisation of the W3C PROV provenance data model that tracks the evolution of interpretations. The model is presented, its operations are defined formally, and characteristics of the representation that are beneficial to implementations are discussed.
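The PROV-style treatment of interpretation edits can be sketched as follows. The class, operation names, and sample interpretations are our assumptions for illustration, not the formal Xeros definitions:

```python
# Sketch of provenance-tracked interpretations: editing never overwrites;
# it derives a new node from the old one (in the spirit of PROV's
# wasDerivedFrom), so the full revision history stays queryable.
class InterpretationGraph:
    def __init__(self):
        self.nodes = {}         # id -> interpretation text
        self.derived_from = {}  # id -> id of the version it revises
        self.invalidated = set()
        self._next = 0

    def add(self, text):
        self._next += 1
        self.nodes[self._next] = text
        return self._next

    def edit(self, old_id, new_text):
        new_id = self.add(new_text)
        self.derived_from[new_id] = old_id  # provenance link to the old version
        self.invalidated.add(old_id)        # old version superseded, not deleted
        return new_id

    def remove(self, node_id):
        self.invalidated.add(node_id)       # removal is also just invalidation

    def history(self, node_id):
        # Walk the derivation chain back to the original interpretation.
        chain = [node_id]
        while chain[-1] in self.derived_from:
            chain.append(self.derived_from[chain[-1]])
        return chain

g = InterpretationGraph()
a = g.add("sherd: Roman amphora")
b = g.edit(a, "sherd: late-Roman amphora, local fabric")
print(g.history(b))  # [2, 1]
```

The design choice worth noting is that removal and edits are modelled as invalidation plus derivation rather than deletion, which is what lets a later reader ask how an interpretation reached its current form.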
From manuscript catalogues to a handbook of Syriac literature: Modeling an infrastructure for Syriaca.org
Despite increasing interest in Syriac studies and growing digital
availability of Syriac texts, there is currently no up-to-date infrastructure
for discovering, identifying, classifying, and referencing works of Syriac
literature. The standard reference work (Baumstark's Geschichte) is over ninety
years old, and the perhaps 20,000 Syriac manuscripts extant worldwide can be
accessed only through disparate catalogues and databases. The present article
proposes a tentative data model for Syriaca.org's New Handbook of Syriac
Literature, an open-access digital publication that will serve as both an
authority file for Syriac works and a guide to accessing their manuscript
representations, editions, and translations. The authors hope that by
publishing a draft data model they can receive feedback and incorporate
suggestions into the next stage of the project.
Comment: Part of special issue: Computer-Aided Processing of Intertextuality in Ancient Languages. 15 pages, 4 figures.