
    Production and evaluation of an instructional video (CD) for the subject Principles of Economics (BPA 1013) on demand and supply at KUITTHO

    This study was conducted to evaluate the effectiveness of an instructional video (CD) for the subject Principles of Economics (BPA 1013) on the topic of demand and supply. For that purpose, an instructional video was produced to help students understand the subject during the teaching and learning process. The video was then evaluated in terms of the teaching and learning process, and the respondents' interest in and perception of its audio and visual features. Sixty second-semester students of the Bachelor of Management Science programme at Kolej Universiti Teknologi Tun Hussein Onn were selected to assess the usability of the product as a teaching aid in the classroom. All the data obtained were then compiled and analysed using the Statistical Package for the Social Sciences (SPSS). The findings clearly show that the instructional video produced and evaluated here is well suited to meeting the needs of the teaching and learning process for this subject in the classroom.

    State-of-the-art on evolution and reactivity

    This report starts, in Chapter 1, by outlining aspects of querying and updating resources on the Web and on the Semantic Web, including the development of query and update languages to be carried out within the Rewerse project. From this outline, it becomes clear that several existing research areas and topics are of interest for this work in Rewerse. In the remainder of this report we present state-of-the-art surveys in a selection of such areas and topics. More precisely: in Chapter 2 we give an overview of logics for reasoning about state change and updates; Chapter 3 is devoted to briefly describing existing update languages for the Web, and also for updating logic programs; in Chapter 4 event-condition-action rules, both in the context of active database systems and in the context of semistructured data, are surveyed; in Chapter 5 we give an overview of some relevant rule-based agent frameworks.
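    As a concrete illustration of the event-condition-action rules surveyed in Chapter 4, the sketch below shows a minimal ECA dispatcher in Python. It is not code from the report; every name and the event/payload format are invented for illustration.

```python
# Minimal sketch of an event-condition-action (ECA) rule engine.
# All names and the event/payload format are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    event: str                          # event type the rule reacts to
    condition: Callable[[dict], bool]   # predicate over the event payload
    action: Callable[[dict], None]      # effect executed if the condition holds

def dispatch(rules: list[EcaRule], event_type: str, payload: dict) -> None:
    """On an event, fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event_type and rule.condition(payload):
            rule.action(payload)

# Example: when a Web resource is updated, re-index it if it is public.
rules = [EcaRule(
    event="resource_updated",
    condition=lambda p: p.get("visibility") == "public",
    action=lambda p: print(f"re-indexing {p['uri']}"),
)]
dispatch(rules, "resource_updated",
         {"uri": "http://example.org/r1", "visibility": "public"})
```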

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    Modeling views in the layered view model for XML using UML

    In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Meanwhile, since its introduction, Extensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing, and interchanging data among various web-based and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, which makes it difficult to support views for semi-structured data models. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work is threefold: first, we propose an approach that separates the implementation and conceptual aspects of views, providing a clear separation of concerns and thus allowing the analysis and design of views to be decoupled from their implementation. Second, we define representations to express and construct these views at the conceptual level. Third, we define a view transformation methodology for XML views in the LVM, which automatically transforms a conceptual view into a view schema and a view query expression in an appropriate query language. Finally, to validate and apply the LVM concepts, methods and transformations developed, we propose a view-driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
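    The layered-view idea can be sketched quickly: a view stated at the conceptual level ("titles of in-stock books") is realised as a query over the stored XML. The schema, element names, and the Python realisation below are all invented for illustration; the LVM itself produces a view schema plus a query expression in an appropriate query language.

```python
# Toy materialisation of an XML view; the source schema is invented.
import xml.etree.ElementTree as ET

SOURCE = """
<catalog>
  <book status="in-stock"><title>XML in a Nutshell</title></book>
  <book status="sold-out"><title>Semantic Web Primer</title></book>
</catalog>
"""

def in_stock_titles(doc: ET.Element) -> ET.Element:
    """Materialise the conceptual view 'titles of in-stock books' as a new fragment."""
    view = ET.Element("inStockTitles")
    for book in doc.findall(".//book[@status='in-stock']"):
        view.append(book.find("title"))
    return view

doc = ET.fromstring(SOURCE)
print(ET.tostring(in_stock_titles(doc), encoding="unicode"))
# <inStockTitles><title>XML in a Nutshell</title></inStockTitles>
```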

    Ontology mapping: the state of the art

    Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web; multiple ontologies need to be accessed from several applications. Mapping could provide a common layer through which several ontologies could be accessed, and hence could exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.
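    The core idea of such a mapping layer can be shown in miniature: a table of correspondences between two vocabularies lets a query phrased against one ontology be answered against another. The vocabularies and table below are invented; real mapping systems rely on far richer lexical and structural matching.

```python
# Toy mapping layer between two invented ontologies, A and B.

# Correspondences from ontology A's terms to ontology B's terms.
MAPPING = {
    "a:Person":   "b:Human",
    "a:Employer": "b:Organisation",
}

def translate(terms: list[str]) -> list[str]:
    """Rewrite a query phrased in ontology A into ontology B's vocabulary;
    terms without a known correspondence are left unchanged."""
    return [MAPPING.get(term, term) for term in terms]

print(translate(["a:Person", "a:worksFor", "a:Employer"]))
# ['b:Human', 'a:worksFor', 'b:Organisation']
```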

    A framework for digital model checking

    Master's dissertation in the European Master in Building Information Modelling. Digital model checking (DMC) is a solution with the potential to become a key player in addressing the concerns of the AEC industry. Despite the research achievements in DMC, gaps remain before it can be applied practically to real-world problems. DMC, as an emerging research discipline, is still under development and not yet completely formalized. This means there is still a need for enhanced system capabilities, updated processes, adjustments to the current project delivery documents, and proper standardization of DMC aspects. This dissertation proposes a diagnostic approach, based on pre-defined principles, for analysing digital model checking, together with a formal framework and an implementation plan. These principles are the Digital Information Model (DIM), the rule-set, and the checking platform. To set up the formal framework, a modularization approach was used, focused on "what things are", "what is the logic behind extending the pre-existing concepts", and "how it assists the DMC process". These modules play a fundamental role and must be captured, tracked, and interconnected during the development of the framework. In expanding the principles, the modules were built on the basis that: 1) DIMs are the totality of information and should cover existing physical systems, not only buildings; 2) verification rules are sourced not only from regulatory codes and standards, and other sources of rules should be taken into consideration; 3) the role of the stakeholders involved, the native systems, and the project phases is not ignored; 4) the effectiveness of DIMs in integrating, exchanging, identifying, and verifying their content is evaluated; and 5) existing classification systems that could aid the DMC process are highlighted. Moreover, DMC is a dependent activity, affected by preceding activities and affecting subsequent ones. Thus, this dissertation also proposes a DMC implementation plan that can fit within the other project activities.
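    The three principles can be pictured in miniature: a DIM supplies the information, a rule-set supplies checks drawn from several sources, and a checking platform evaluates one against the other. Everything below (the model content, the rules, the threshold) is invented for illustration; real DMC operates on rich digital information models, not a dictionary.

```python
# Toy sketch of the three DMC principles: a DIM (here a plain dict),
# a rule-set (predicates with descriptions), and a checking routine
# standing in for the platform. All names, rules and values are invented.

dim = {
    "door_widths_mm": [850, 700, 910],   # stand-in for model content
    "has_fire_exits": True,
}

# Per principle 2, rules need not come only from regulatory codes.
rules = [
    ("Doors at least 800 mm wide (hypothetical code clause)",
     lambda m: all(w >= 800 for w in m["door_widths_mm"])),
    ("Fire exits present (hypothetical client requirement)",
     lambda m: m["has_fire_exits"]),
]

def run_checks(model: dict, rule_set) -> None:
    """Evaluate each rule against the model and report pass/fail."""
    for description, check in rule_set:
        print(("PASS" if check(model) else "FAIL") + ": " + description)

run_checks(dim, rules)
# FAIL: Doors at least 800 mm wide (hypothetical code clause)
# PASS: Fire exits present (hypothetical client requirement)
```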

    Engineering Agile Big-Data Systems

    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.

    UML models consistency management: guidelines for software quality manager

    Unified Modeling Language (UML) has become the de facto standard for designing today's large-scale object-oriented systems. However, maintaining multiple UML diagrams is a major source of consistency problems, which ultimately reduce the overall quality of the software model. Consistency management techniques are widely used to ensure model consistency through correct model-to-model and model-to-code transformations, and consistency management has become a promising research area, especially for model-driven architecture. In this paper, we extensively review UML consistency management techniques. The proposed techniques are classified based on parameters identified from the research literature. Moreover, we performed a qualitative comparison of consistency management techniques in order to identify current research trends, challenges and research gaps in this field of study. Based on the results, we conclude that researchers have paid insufficient attention to inter-model and semantic consistency problems. Furthermore, state-of-the-art consistency management techniques mostly focus on only three UML diagrams (i.e., class, sequence and statechart diagrams), while the remaining UML diagrams have been overlooked. Consequently, due to this incomplete body of knowledge, researchers cannot take full advantage of the overlooked UML diagrams, which might otherwise help handle the consistency management challenge efficiently.
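    One kind of inter-model check such techniques automate can be sketched simply: every message in a sequence diagram should name an operation declared by the receiver's class in the class diagram. The model encoding below is invented for this sketch; real tools work on XMI and the UML metamodel.

```python
# Toy inter-model consistency check between a class diagram and a
# sequence diagram. The encoding of both models is invented.

class_diagram = {
    "Order":   {"addItem", "total"},
    "Invoice": {"issue"},
}

sequence_messages = [
    ("Order", "addItem"),
    ("Order", "cancel"),   # not declared in the class diagram
]

def undeclared_operations(classes: dict[str, set[str]],
                          messages: list[tuple[str, str]]) -> list[str]:
    """Report sequence-diagram messages whose operation is missing
    from the receiving class in the class diagram."""
    return [f"{cls}.{op} used in sequence diagram but not declared"
            for cls, op in messages if op not in classes.get(cls, set())]

for problem in undeclared_operations(class_diagram, sequence_messages):
    print(problem)
# Order.cancel used in sequence diagram but not declared
```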
    • 
