Towards MKM in the Large: Modular Representation and Scalable Software Architecture
MKM has been defined as the quest for technologies to manage mathematical
knowledge. MKM "in the small" is well-studied, so the real problem is to scale
up to large, highly interconnected corpora: "MKM in the large". We contend that
advances in two areas are needed to reach this goal. We need representation
languages that support incremental processing of all primitive MKM operations,
and we need software architectures and implementations that implement these
operations scalably on large knowledge bases.
We present instances of both in this paper: the MMT framework for modular
theory-graphs that integrates meta-logical foundations, which forms the base of
the next OMDoc version; and TNTBase, a versioned storage system for XML-based
document formats. TNTBase becomes an MMT database by instantiating it with
special MKM operations for MMT.
Comment: To appear in The 9th International Conference on Mathematical
Knowledge Management: MKM 201
Tacit Knowledge Database Management
This document provides a progress report for the IT/IS final year project
titled 'Tacit Knowledge Database Management'. Through this research, I aim to
identify the best method for storing information in the database. The study
continues the existing system developed by Mr. Ho Yik Min (Acquisition of
Tacit Knowledge Using Intervention Method). The further research focuses on
increasing the number of interventions and on arranging and organizing the
knowledge captured in the database in order to improve the system. Knowledge
Management taxonomy and ontology have also been studied to expand the current
work. The current system has been proven able to capture tacit knowledge.
Thus, captured knowledge should be organized and tagged so that it can be
arranged and stored accordingly to facilitate the knowledge retrieval and
knowledge upgrade processes. As the knowledge grows, the system will be able
to store and retrieve knowledge more effectively and efficiently.
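The organizing-and-tagging idea described above can be sketched as an inverted index from tags to stored entries. This is a hypothetical illustration, not the project's actual implementation; the class and method names are assumptions.

```python
# Hypothetical sketch of tagging captured knowledge for retrieval (names are
# assumptions, not the project's actual code): each entry is stored with a set
# of descriptive tags, and an inverted index maps tags back to entries.
from collections import defaultdict

class KnowledgeBase:
    def __init__(self):
        self.entries = {}                  # entry_id -> captured knowledge text
        self.tag_index = defaultdict(set)  # tag -> set of entry_ids

    def store(self, entry_id, text, tags):
        """Store a knowledge entry and index it under each of its tags."""
        self.entries[entry_id] = text
        for tag in tags:
            self.tag_index[tag.lower()].add(entry_id)

    def retrieve(self, tag):
        """Return all entries tagged with the given tag."""
        return [self.entries[i] for i in sorted(self.tag_index[tag.lower()])]

kb = KnowledgeBase()
kb.store(1, "Always confirm supplier lead times by phone.", ["procurement", "suppliers"])
kb.store(2, "Escalate payment disputes within 48 hours.", ["finance"])
print(kb.retrieve("procurement"))  # entries tagged 'procurement'
```

As the knowledge base grows, the index keeps retrieval proportional to the number of matching entries rather than the total number stored.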
Tacit knowledge is the personal form of knowledge that results when a person
reflects upon theory in the light of praxis or practical judgment. The
introduction, problem statement, objectives, and scope of the study for the
project are further explained in Chapter 1 - INTRODUCTION. This document also
gives further information about the system in the literature review/theory
section. This section includes the standard features of a database system, as
well as the benefits of using one. Thus we can see that what we are looking at
are 'human' factors, and this links to the previous section relating to data,
information and knowledge, showing that knowledge management is a
technological 'fix' achieved via structuring a good database system. One of
the most important issues that becomes clear is that trust needs to be taken
very seriously, including trust in the technology solution and in fellow
workers, for without trust knowledge will not be shared.
Adapting the Access Northwind Database to Support a Database Course
A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small ‘toy’ databases that are chapter-objective specific, and thus do not support application over the complete domain of design, implementation and management concepts across a single database. The Northwind Traders sample database is made available by Microsoft for download and use in Microsoft Access, and illustrates transactional processing for a fictitious company that imports (purchases) and exports (sells) specialty foods from around the world. The database contains sample tables, queries, forms, reports, macros, VBA class objects, functions and modules, and other database features. Although the primary purpose of the database is to serve as an illustrative design template for students and practitioners, the database and business processes are unfortunately largely undocumented. This paper attempts to more completely document the business processes, including establishing business rules and describing relationships and participations, and discusses some problems with the existing design. Once the database and its associated business processes are understood, this paper can be used as both a teaching tool and a guide for practitioners using the Northwind Traders sample database as a design template. Additionally, it has been used successfully to introduce the concept of business processes and mapping them in an underlying database in introductory ERP courses.
online clearance system (OC System)
The purpose of this project is to design a system for the UTP Security
Department, Information Resource Centre Department and Finance Department,
specifically for the administration management of students and staff. The
system will be a web-based application that can be executed using a normal web
browser for inter-platform capability. The project is divided into two terms:
first, research on the clearance system for final-semester students, and
second, development of the Online Clearance System (OC System). Research on
the OC System will be based on the problem statement and objective of the
project, while the research on the current clearance system for the final
semester at UTP supports the idea for the project.
This document also gives further information about the system in the literature
review/theory section. This section includes the features of the system, the benefits
from using the system, and the data flow diagram of the intended system.
As part of the Final Year Project, the student managed to become familiar with
the business environment: how the departments manage their database, the
clearance system and the performance of the staff. Using the Online Clearance
System (OC System) gives the best solution for the staff, as the database is
an important asset for them. I would use a distributed database in my system.
A distributed database is a database under the control of a central database
management system in which the storage devices are not all attached to a
common CPU. It may be stored on multiple computers located in the same
physical location, or may be dispersed over a network of interconnected
computers. Collections of data (e.g. in a database) can be distributed across
multiple physical locations. A distributed database is divided into separate
partitions/fragments. Each partition/fragment of a distributed database may be
replicated (i.e. redundant fail-overs, RAID-like).
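The partition-and-replicate scheme described above can be sketched in a few lines. This is an assumed minimal design for illustration, not the project's actual code: records are hash-partitioned across nodes, and each fragment is copied to the next node as a redundant fail-over.

```python
# A minimal sketch (assumed design, not the project's code) of a distributed
# database: records are fragmented across nodes by hash of the key, and each
# fragment is replicated on the next node for fail-over.
class DistributedStore:
    def __init__(self, num_nodes):
        self.nodes = [dict() for _ in range(num_nodes)]  # one dict per node

    def _primary(self, key):
        return hash(key) % len(self.nodes)

    def put(self, key, value):
        p = self._primary(key)
        r = (p + 1) % len(self.nodes)   # replica lives on the next node
        self.nodes[p][key] = value
        self.nodes[r][key] = value

    def get(self, key, failed=()):
        """Read the record, falling back to the replica if the primary is down."""
        p = self._primary(key)
        for node in (p, (p + 1) % len(self.nodes)):
            if node not in failed and key in self.nodes[node]:
                return self.nodes[node][key]
        raise KeyError(key)

store = DistributedStore(num_nodes=3)
store.put("student:1001", "cleared")
# The record survives the loss of its primary node:
print(store.get("student:1001", failed={store._primary("student:1001")}))  # cleared
```

Real systems add consistency protocols on top, but the fragment/replica split is the core idea the abstract refers to.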
A systematic approach for monitoring and evaluating the construction project progress
A persistent problem in construction is to document changes which occur in the field and to prepare the as-built schedule. In current practice, deviations from planned performance can only be reported after significant time has elapsed, and manual monitoring of construction activities is costly and error-prone. The availability of advanced portable computing, multimedia and wireless communication allows, and even encourages, fundamental changes in many jobsite processes. However, a recent investigation indicated that there is a lack of systematic and automated evaluation and monitoring in construction projects. The aim of this study is to identify techniques that can be used in the construction industry for monitoring and evaluating physical progress, and also to establish how current computer technology can be utilised for monitoring actual physical progress at the construction site. This study discusses the results of a questionnaire survey conducted within the Malaysian construction industry and suggests a prototype system, namely Digitalising Construction Monitoring (DCM). The DCM prototype system integrates information from construction drawings, digital images of construction site progress and the planned schedule of work. Using emerging technologies and information systems, DCM re-engineers the traditional practice for monitoring project progress. The system can automatically interpret CAD drawings of buildings, extract data on their structural components and store it in a database. It can also extract engineering information from digital images; when these two databases are compared, the percentage of progress can be calculated and viewed in Microsoft Project automatically. The application of the DCM system for monitoring project progress enables project management teams to better track and control the productivity and quality of construction projects. The use of DCM can help the resident engineer, construction manager and site engineer in monitoring and evaluating project performance. This model will improve the decision-making process and provide a better mechanism for advanced project management.
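The progress calculation described above amounts to comparing the set of structural components extracted from the CAD drawings against those recognised in the site images. The sketch below is hypothetical (the function and identifier names are assumptions, not DCM's actual design):

```python
# Hypothetical sketch of DCM's progress calculation (names are assumptions):
# components extracted from CAD drawings form the planned set, components
# recognised in site photographs form the built set, and progress is the
# fraction of the plan detected as built.
def progress_percentage(planned_components, built_components):
    """Percentage of planned structural components detected as built."""
    planned = set(planned_components)
    built = set(built_components) & planned  # ignore anything not in the plan
    return 100.0 * len(built) / len(planned) if planned else 0.0

planned = ["column-A1", "column-A2", "beam-B1", "slab-S1"]   # from CAD database
built = ["column-A1", "column-A2", "beam-B1"]                # from image database
print(progress_percentage(planned, built))  # 75.0
```

A percentage computed this way can then be written back into the planned schedule, which is how the abstract's automatic Microsoft Project view would be driven.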
A Model-Based Approach for the Management of Electronic Invoices
The globalized market pushes companies to expand their business boundaries to a whole new level. In order to efficiently support this environment, business transactions must be executed over the Internet. However, several factors complicate this process, such as the current state of electronic invoices. Electronic invoice adoption is not widespread because of the format fragmentation created by national regulations. In this paper we present an approach based on Model-Driven Engineering techniques and abstractions for supporting the core functions of invoice management systems. We compare our solution with traditional implementations and analyze the advantages MDE can bring to this specific domain.
Proceedings of the ECSCW'95 Workshop on the Role of Version Control in CSCW Applications
The workshop entitled "The Role of Version Control in Computer Supported Cooperative Work Applications" was held on September 10, 1995 in Stockholm, Sweden in conjunction with the ECSCW'95 conference. Version control, the ability to manage relationships between successive instances of artifacts, organize those instances into meaningful structures, and support navigation and other operations on those structures, is an important problem in CSCW applications. It has long been recognized as a critical issue for inherently cooperative tasks such as software engineering, technical documentation, and authoring. The primary challenge for versioning in these areas is to support opportunistic, open-ended design processes requiring the preservation of historical perspectives in the design process, the reuse of previous designs, and the exploitation of alternative designs.
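The definition of version control given above (relationships between successive instances, meaningful structures, navigation over them) can be illustrated with a tiny version graph. This sketch is not drawn from the workshop papers; it is a minimal assumed model showing history navigation and alternative (branching) designs.

```python
# An illustrative sketch (not from the workshop papers) of versioning as
# defined above: successive instances of an artifact organised into a graph
# that supports navigation, including alternative branching designs.
class VersionGraph:
    def __init__(self):
        self.parents = {}  # version -> parent version (None for the root)

    def commit(self, version, parent=None):
        """Record a new instance derived from an existing one (or a root)."""
        self.parents[version] = parent

    def history(self, version):
        """Navigate from a version back to the root, preserving the design history."""
        trail = []
        while version is not None:
            trail.append(version)
            version = self.parents[version]
        return trail

g = VersionGraph()
g.commit("v1")
g.commit("v2", parent="v1")
g.commit("v2-alt", parent="v1")  # an alternative design branching from v1
print(g.history("v2-alt"))  # ['v2-alt', 'v1']
```

Keeping both branches reachable is what the workshop calls preserving historical perspectives and exploiting alternative designs.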
The primary goal of this workshop was to bring together a diverse group of individuals interested in examining the role of versioning in Computer Supported Cooperative Work. Participation was encouraged from members of the research community currently investigating the versioning process in CSCW as well as application designers and developers who are familiar with the real-world requirements for versioning in CSCW. Both groups were represented at the workshop resulting in an exchange of ideas and information that helped to familiarize developers with the most recent research results in the area, and to provide researchers with an updated view of the needs and challenges faced by application developers. In preparing for this workshop, the organizers were able to build upon the results of their previous one entitled "The Workshop on Versioning in Hypertext" held in conjunction with the ECHT'94 conference. The following section of this report contains a summary in which the workshop organizers report the major results of the workshop. The summary is followed by a section that contains the position papers that were accepted to the workshop. The position papers provide more detailed information describing recent research efforts of the workshop participants as well as current challenges that are being encountered in the development of CSCW applications. A list of workshop participants is provided at the end of the report.
The organizers would like to thank all of the participants for their contributions, which were, of course, vital to the success of the workshop. We would also like to thank the ECSCW'95 conference organizers for providing a forum in which this workshop was possible.
Impliance: A Next Generation Information Management Appliance
ably successful in building a large market and adapting to the changes of the
last three decades, its impact on the broader market of information management
is surprisingly limited. If we were to design an information management system
from scratch, based upon today's requirements and hardware capabilities, would
it look anything like today's database systems?" In this paper, we introduce
Impliance, a next-generation information management system consisting of
hardware and software components integrated to form an easy-to-administer
appliance that can store, retrieve, and analyze all types of structured,
semi-structured, and unstructured information. We first summarize the trends
that will shape information management for the foreseeable future. Those trends
imply three major requirements for Impliance: (1) to be able to store, manage,
and uniformly query all data, not just structured records; (2) to be able to
scale out as the volume of this data grows; and (3) to be simple and robust in
operation. We then describe four key ideas that are uniquely combined in
Impliance to address these requirements, namely the ideas of: (a) integrating
software and off-the-shelf hardware into a generic information appliance; (b)
automatically discovering, organizing, and managing all data - unstructured as
well as structured - in a uniform way; (c) achieving scale-out by exploiting
simple, massive parallel processing, and (d) virtualizing compute and storage
resources to unify, simplify, and streamline the management of Impliance.
Impliance is an ambitious, long-term effort to define simpler, more robust, and
more scalable information systems for tomorrow's enterprises.
Comment: This article is published under a Creative Commons License Agreement
(http://creativecommons.org/licenses/by/2.5/). You may copy, distribute,
display, and perform the work, make derivative works and make commercial use
of the work, but you must attribute the work to the author and CIDR 2007.
3rd Biennial Conference on Innovative Data Systems Research (CIDR), January
7-10, 2007, Asilomar, California, US