124 research outputs found

    Production and evaluation of an instructional video (CD) for the subject Principles of Economics (BPA 1013) on demand and supply at KUITTHO

    This study was conducted to evaluate the effectiveness of an instructional video (CD) for the subject Principles of Economics (BPA 1013) on the topic of Demand and Supply. For that purpose, an instructional video was produced to help students understand the subject during the teaching and learning process. The video was then evaluated with respect to the teaching and learning process and to respondents' interest in and perception of its audio and visual features. Sixty second-semester students of the Bachelor of Science in Management at Kolej Universiti Teknologi Tun Hussein Onn were selected to assess the usability of the product as a teaching aid in the classroom. All collected data were analysed using the Statistical Package for the Social Sciences (SPSS). The findings clearly show that the instructional video produced and evaluated here is well suited to meeting the needs of the teaching and learning process for this subject in the classroom.

    Towards interoperability of i* models using iStarML

    Goal-oriented and agent-oriented modelling provides an effective approach to understanding distributed information systems that need to operate in open, heterogeneous and evolving environments. Such frameworks, first introduced more than ten years ago, have been extended with language variants, analysis methods and CASE tools, raising issues of language semantics and tool interoperability. Among them, the i* framework is one of the most widespread. We focus on i*-based modelling languages and tools and on the problem of supporting model exchange between them. In this paper, we introduce the i* interoperability problem and derive an XML interchange format, called iStarML, as a practical solution to it. We first discuss the main requirements for its definition, then characterise the core concepts of i* and detail the tags and options of the interchange format. We complete the presentation of iStarML by showing some possible applications. Finally, a survey on the i* community's perception of iStarML is included for assessment purposes.
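As a rough illustration of what an XML interchange format for i* models affords tooling, the sketch below parses a small iStarML-style fragment with Python's standard library. The tag and attribute names (`istarml`, `diagram`, `actor`, `ielement`, `type`) are assumptions for demonstration, not a normative rendering of the iStarML specification.

```python
# Illustrative sketch: reading an iStarML-like fragment with the
# standard library. Element/attribute names are assumed, not normative.
import xml.etree.ElementTree as ET

ISTARML = """\
<istarml version="1.0">
  <diagram name="meeting-scheduler">
    <actor id="a1" name="Meeting Initiator" type="role">
      <boundary>
        <ielement id="g1" name="Meeting be scheduled" type="goal"/>
        <ielement id="t1" name="Collect timetables" type="task"/>
      </boundary>
    </actor>
  </diagram>
</istarml>
"""

def list_intentional_elements(xml_text):
    """Return (name, type) pairs for every intentional element."""
    root = ET.fromstring(xml_text)
    return [(el.get("name"), el.get("type"))
            for el in root.iter("ielement")]

print(list_intentional_elements(ISTARML))
# [('Meeting be scheduled', 'goal'), ('Collect timetables', 'task')]
```

A tool that imports models from another i* editor would walk such a tree and map each `ielement` onto its own internal model classes.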

    Validation Framework for RDF-based Constraint Languages

    In this thesis, a validation framework is introduced that makes it possible to consistently execute RDF-based constraint languages on RDF data and to formulate constraints of any type. The framework reduces the representation of constraints to the absolute minimum, is based on formal logics, consists of a small lightweight vocabulary, ensures consistent validation results, and enables constraint transformations for each constraint type across RDF-based constraint languages.
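To make the idea of "executing a constraint on RDF data" concrete, here is a minimal, self-contained sketch of one common constraint type (minimum cardinality) checked over triples held as plain Python tuples. The encoding is invented for illustration and is not the thesis framework's vocabulary.

```python
# Minimal sketch: a minimum-cardinality constraint evaluated over RDF
# data represented as (subject, predicate, object) tuples. The names
# and the constraint encoding are illustrative only.

TYPE = "rdf:type"

def validate_min_card(triples, cls, prop, min_count=1):
    """Report every instance of `cls` having fewer than `min_count`
    values for `prop`."""
    instances = {s for (s, p, o) in triples if p == TYPE and o == cls}
    violations = []
    for inst in sorted(instances):
        n = sum(1 for (s, p, o) in triples if s == inst and p == prop)
        if n < min_count:
            violations.append(inst)
    return violations

data = [
    ("ex:alice", TYPE, "ex:Person"),
    ("ex:alice", "ex:name", "Alice"),
    ("ex:bob", TYPE, "ex:Person"),   # no ex:name -> violation
]
print(validate_min_card(data, "ex:Person", "ex:name"))  # ['ex:bob']
```

A framework of the kind described would express this same check once, abstractly, and translate it into each concrete constraint language.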

    Data quality evaluation through data quality rules and data provenance.

    The application and exploitation of large amounts of data play an ever-increasing role in today’s research, government, and economy. Data understanding and decision making heavily rely on high-quality data; therefore, in many different contexts, it is important to assess the quality of a dataset in order to determine whether it is suitable for a specific purpose. Moreover, as the access to and exchange of datasets have become easier and more frequent, and as scientists increasingly use the World Wide Web to share scientific data, there is a growing need to know the provenance of a dataset (i.e., information about the processes and data sources that led to its creation) in order to evaluate its trustworthiness. In this work, data quality rules and data provenance are used to evaluate the quality of datasets. Concerning the first topic, the applied solution consists of identifying types of data constraints that are useful as data quality rules and developing a software tool that evaluates a dataset on the basis of a set of rules expressed in the XML markup language. We selected some of the data constraints and dependencies already considered in the data quality field, but we also used order dependencies and existence constraints as quality rules. In addition, we developed algorithms to discover the types of dependencies used in the tool. To deal with the provenance of data, the Open Provenance Model (OPM) was adopted, an experimental query language for querying OPM graphs stored in a relational database was implemented, and an approach to designing OPM graphs was proposed.
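As a hedged sketch of the rule-evaluation idea, the example below reads a hypothetical XML rule document containing existence constraints and applies it to tabular records. The `<existence>` vocabulary is invented here for illustration and is not the tool's actual rule format.

```python
# Sketch: data quality rules expressed in XML (a made-up <existence>
# rule meaning "this column must be non-empty") applied to records.
import xml.etree.ElementTree as ET

RULES = """\
<rules>
  <existence column="email"/>
  <existence column="country"/>
</rules>
"""

def check_existence(rules_xml, rows):
    """Return {column: [indices of rows violating the rule]}."""
    root = ET.fromstring(rules_xml)
    report = {}
    for rule in root.findall("existence"):
        col = rule.get("column")
        report[col] = [i for i, row in enumerate(rows)
                       if not row.get(col)]
    return report

rows = [{"email": "a@x.org", "country": "IT"},
        {"email": "", "country": "IT"}]
print(check_existence(RULES, rows))  # {'email': [1], 'country': []}
```

Order dependencies and functional dependencies would be evaluated analogously, with each rule type mapped to its own checking routine.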

    Formalization of molecular interaction maps in systems biology; Application to simulations of the relationship between DNA damage response and circadian rhythms

    Quantitative exploration of biological pathway networks must begin with a qualitative understanding of them. Often researchers aggregate and disseminate experimental data using regulatory diagrams with ad hoc notations leading to ambiguous interpretations of presented results. This thesis has two main aims. First, it develops software to allow researchers to aggregate pathway data diagrammatically using the Molecular Interaction Map (MIM) notation in order to gain a better qualitative understanding of biological systems. Secondly, it develops a quantitative biological model to study the effect of DNA damage on circadian rhythms. The second aim benefits from the first by making use of visual representations to identify potential system boundaries for the quantitative model. I focus first on software for the MIM notation - a notation to concisely visualize bioregulatory complexity and to reduce ambiguity for readers. The thesis provides a formalized MIM specification for software implementation along with a base layer of software components for the inclusion of the MIM notation in other software packages. It also provides an implementation of the specification as a user-friendly tool, PathVisio-MIM, for creating and editing MIM diagrams along with software to validate and overlay external data onto the diagrams. I focus secondly on the application of the MIM software to the quantitative exploration of the poorly understood role of SIRT1 and PARP1, two NAD+-dependent enzymes, in the regulation of circadian rhythms during DNA damage response. SIRT1 and PARP1 participate in the regulation of several key DNA damage-repair proteins and are the subjects of study as potential cancer therapeutic targets. In this part of the thesis, I present an ordinary differential equation (ODE) model that simulates the core circadian clock and the involvement of SIRT1 in both the positive and negative arms of circadian regulation. 
    I then use this model to predict a potential role for the competition for NAD+ supplies between SIRT1 and PARP1 in producing the observed behavior of primarily phase advancement of circadian oscillations during the DNA damage response. The model further predicts a potential mechanism by which multiple forms of post-transcriptional modification may cooperate to produce this predominant phase advancement.
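The thesis model is a full ODE system of the circadian clock; purely as an illustration of the simulation approach, the toy sketch below integrates a Goodwin-style negative-feedback loop (mRNA → protein → repressor) with forward Euler. All parameter values are illustrative, not fitted to circadian data.

```python
# Toy Goodwin-style oscillator: transcription is repressed by the end
# product of its own pathway. Parameters are illustrative only and are
# chosen near the oscillatory regime (high Hill coefficient).

def goodwin_step(state, dt, n=9, k=1.0):
    m, p, r = state                      # mRNA, protein, repressor
    dm = k / (1.0 + r**n) - 0.1 * m      # repressed transcription, decay
    dp = 0.5 * m - 0.1 * p               # translation, decay
    dr = 0.5 * p - 0.1 * r               # repressor production, decay
    return (m + dm * dt, p + dp * dt, r + dr * dt)

state = (0.1, 0.1, 0.1)
trace = []                               # record mRNA concentration
for _ in range(20000):
    state = goodwin_step(state, 0.01)
    trace.append(state[0])

print(len(trace))  # 20000 samples of the mRNA trajectory
```

A realistic circadian model adds many more species (e.g. PER/CRY, BMAL1, SIRT1, PARP1) and exchanges the toy kinetics for measured or fitted rate laws, but the numerical integration loop has the same shape.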

    Extensible metadata repository for information systems

    Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science.
    Information Systems usually have a strong integration component, and some of them rely on integration solutions based on metadata (data that describes data). In that situation, there is a need to deal with metadata as if it were "normal" information, so a metadata repository that handles integrity, storage and validity, and that eases information-integration processes in the information system, is a wise choice. Several metadata repositories are available on the market, but none of them is prepared to deal with the needs of information systems, or is generic enough to handle the multitude of situations/domains of information and the necessary integration features. In the SESS project (a European Space Agency project), a generic metadata repository was developed, based on XML technologies. This repository provided tools for information integration, validity, storage, sharing and import, as well as system and data integration, but it required fixed syntactic rules stored in the content of the XML files. This causes severe problems when importing documents from external data sources that are unaware of these syntactic rules. In this thesis, a metadata repository was developed that provides the same mechanisms of storage, integrity and validity, but focuses on easy integration of metadata from any type of external source (in XML format) and offers an environment that simplifies reusing existing types of metadata to build new ones, all without having to modify the documents it stores.
    The repository stores XML documents (known as Instances), each an instance of a Concept; the Concept defines an XML structure that validates its Instances. To support reuse, a special unit named Fragment allows defining an XML structure (which can be created by composing other Fragments) that Concepts can reuse when defining their own structure. Elements of the repository (Instances, Concepts and Fragments) have an identifier based on (and compatible with) URIs, named Metadata Repository Identifier (MRI). These identifiers, as well as management information (including relations), are managed by the repository without fixed syntactic rules, easing integration. A set of tests using documents from the SESS project and from the software house ITDS successfully validated the repository against the thesis objectives of easy integration and promotion of reuse.
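The Concept / Fragment / Instance split can be sketched naively: a Fragment contributes a reusable set of required elements, a Concept composes Fragments with its own elements, and an Instance validates against its Concept. The identifiers below imitate MRIs; the real repository's structures and validation are of course far richer than this presence check.

```python
# Naive sketch of Fragment reuse and Instance validation. MRI-style
# identifiers and the structures are invented for illustration.
import xml.etree.ElementTree as ET

fragments = {"mri:frag/contact": ["email", "phone"]}

concepts = {
    "mri:concept/person": {
        "own": ["name"],
        "reuses": ["mri:frag/contact"],   # Fragment reuse
    }
}

def required_elements(concept_mri):
    """Flatten a Concept's own elements plus its reused Fragments."""
    c = concepts[concept_mri]
    req = list(c["own"])
    for frag in c["reuses"]:
        req.extend(fragments[frag])
    return req

def validate_instance(xml_text, concept_mri):
    """Return the required child elements missing from the Instance."""
    root = ET.fromstring(xml_text)
    present = {child.tag for child in root}
    return [tag for tag in required_elements(concept_mri)
            if tag not in present]

doc = "<person><name>Ana</name><email>a@b.pt</email></person>"
print(validate_instance(doc, "mri:concept/person"))  # ['phone']
```

Because relations between elements are kept in repository-managed metadata rather than in the documents themselves, an imported document needs no embedded syntactic rules to participate.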

    Schema Languages & Internationalization Issues: A survey

    Many XML-related activities (e.g. the creation of a new schema) already address issues with different languages, scripts, and cultures. Nevertheless, a need exists for additional mechanisms and guidelines for more effective internationalization (i18n) and localization (l10n) in XML-related contents and processes. The W3C Internationalization Tag Set Working Group (W3C ITS WG) addresses this need and works on data categories, representation mechanisms and guidelines related to i18n and l10n support in the XML realm. This paper describes initial findings from the W3C ITS WG. Furthermore, the paper discusses how these findings relate to specific schema languages and to complementary technologies such as namespace sectioning, schema annotation and the description of processing chains. The paper exemplifies why certain requirements can only be met by a combination of technologies, and discusses these technologies.

    Representation of User Stories in Descriptive Markup

    The environment in which a software system is developed is in a constant state of flux. The changes at higher levels of software development often manifest themselves in changes at lower levels, especially its activities and artifacts. In the past decade, a notable change has been the emergence of agile methodologies for software development. In a number of agile methodologies, user stories have been adopted as a style of expressing software requirements. This thesis is about theory and practice of describing user stories so as to make them amenable to both humans and machines. In that regard, relevant concerns in describing user stories must be considered and treated separately. In this thesis, a number of concerns in describing user stories are identified, and a collection of conceptual models to help create an understanding of those concerns are formulated. In particular, conceptual models for user story description, stakeholders, information, representation, and presentation are proposed. To facilitate structured descriptions of user stories, a User Story Markup Language (USML) is specified. USML depends on the requisite conceptual models for theoretical foundation. It is informed by experiential knowledge, especially conventions, guidelines, patterns, principles, recommended practices, and standards in markup language engineering. In doing so, USML aims to make the decisions underlying its development explicit. USML provides conformance criteria that include validation against multiple schema documents. In particular, USML is equipped with a grammar-based schema document and a rule-based schema document that constrain USML instance documents in different ways. USML aims to be flexible and extensible. In particular, USML enables a variety of user story forms, which allow a range of user story descriptions. 
    USML instance documents can be intermixed with markup fragments of other languages, presented on conventional user agents, and organized and manipulated in different ways. USML can also be extended in a number of ways to accommodate the needs of those user story stakeholders who wish to personalize the language.
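As a toy illustration of a structured user story description (not the USML vocabulary itself, whose element names are fixed by its schema documents), the snippet below serializes the common role/goal/benefit form as markup.

```python
# Hypothetical markup for the "As a <role>, I want <goal>, so that
# <benefit>" user story form. Element names are invented stand-ins,
# not USML's actual vocabulary.
import xml.etree.ElementTree as ET

def user_story(role, goal, benefit):
    story = ET.Element("userStory")
    ET.SubElement(story, "role").text = role
    ET.SubElement(story, "goal").text = goal
    ET.SubElement(story, "benefit").text = benefit
    return story

s = user_story("registered user",
               "reset my password",
               "I can regain access to my account")
print(ET.tostring(s, encoding="unicode"))
```

A grammar-based schema would constrain which child elements may appear and in what order, while a rule-based schema could add cross-element checks (e.g. that the goal is non-empty whenever a benefit is given).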

    Resolving the Durand Conundrum

    This paper proposes a minor but significant modification to the TEI ODD language and explores some of its implications. Can we improve on the present compromise whereby TEI content models are expressed in RELAX NG? A very small set of additional elements would permit the ODD language to cut its ties with any existing schema language, and thus permit it to support exactly and only the subset or intersection of their facilities which makes sense in the TEI context. It would make the ODD language an integrated and independent whole rather than an uneasy hybrid, and pave the way for future developments in the management of structured text beyond the XML paradigm.

    Formalization of neuro-biological models for spiking neurons.

    When modelling cortical neuronal maps (here, spiking neuronal networks) within the scope of the FACETS project, researchers in neuroscience and computer science use NeuroML, an XML language, to specify biological neuronal networks. These networks can be simulated using either analogue or event-based techniques. Specifications include parametric model specification, symbolic definition of model equations, and formalization of related semantic aspects (paradigms, ...), and they are used by non-computer-scientists. In this context XML is used to specify data structures, not documents. The first version of NeuroML uses Java to map XML biological data, which can later be simulated within GENESIS, NEURON, etc. The second version uses tools for handling XML data, such as XSL, to transform an XML file. To allow NeuroML to be used intensively within the scope of the FACETS project, we analyse the software in full. First we evaluate the software in depth in the Technical Report section; then we propose a prototype for writing NeuroML code easily.
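As a small illustration of treating NeuroML-style XML as data structures rather than documents, the sketch below extracts model parameters from a simplified fragment. The element and attribute names are simplified stand-ins for demonstration, not the actual NeuroML schema.

```python
# Illustrative only: reading model parameters from a NeuroML-style
# fragment with the standard library. Names are stand-ins, not the
# real NeuroML schema.
import xml.etree.ElementTree as ET

NML = """\
<neuroml>
  <cell id="pyramidal">
    <parameter name="threshold" value="-55.0"/>
    <parameter name="restingPotential" value="-70.0"/>
  </cell>
</neuroml>
"""

def cell_parameters(xml_text, cell_id):
    """Return {name: value} for the parameters of one cell model."""
    root = ET.fromstring(xml_text)
    for cell in root.iter("cell"):
        if cell.get("id") == cell_id:
            return {p.get("name"): float(p.get("value"))
                    for p in cell.iter("parameter")}
    return {}

print(cell_parameters(NML, "pyramidal"))
# {'threshold': -55.0, 'restingPotential': -70.0}
```

A simulator back-end (GENESIS, NEURON, ...) would consume such parameter dictionaries when instantiating the corresponding neuron models.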