3,471 research outputs found

    Extraction transformation load (ETL) solution for data integration: a case study of rubber import and export information

    Data integration is important in consolidating all the data within or outside an organization to provide a unified view of that organization's information. An Extraction Transformation Load (ETL) solution is the back-end process of data integration, which involves collecting data from various data sources, preparing and transforming the data according to business requirements, and loading it into a Data Warehouse (DW). This paper explains the integration of rubber import and export data between the Malaysian Rubber Board (MRB) and the Royal Malaysian Customs Department (Customs) using an ETL solution. Microsoft SQL Server Integration Services (SSIS) and Microsoft SQL Server Agent Jobs were used as the ETL tool and the ETL scheduler, respectively
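    As a rough illustration of the ETL pattern this abstract describes (not the authors' SSIS packages), the sketch below extracts records from a hypothetical CSV export, applies a small transformation, and loads the result into a SQLite table standing in for the Data Warehouse; all file, table, and column names are invented.

```python
# Minimal ETL sketch (illustrative only): extract rubber trade records from a
# hypothetical CSV export, transform them to a warehouse-friendly shape, and
# load them into a SQLite table standing in for the Data Warehouse. The paper
# itself uses SSIS; none of the names below come from it.
import csv
import sqlite3

def extract(path):
    """Read raw rows from the source system's CSV export."""
    with open(path, newline="", encoding="utf-8") as f:
        yield from csv.DictReader(f)

def transform(row):
    """Normalise units and codes to the warehouse's conventions."""
    return (
        row["hs_code"].strip(),
        row["trade_flow"].upper(),           # IMPORT / EXPORT
        float(row["quantity_kg"]) / 1000.0,  # store tonnes in the warehouse
        row["period"],                       # e.g. '2023-07'
    )

def load(rows, db_path="warehouse.db"):
    """Append transformed rows into the fact table."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS fact_rubber_trade (
               hs_code TEXT, trade_flow TEXT, quantity_tonnes REAL, period TEXT)"""
    )
    con.executemany("INSERT INTO fact_rubber_trade VALUES (?, ?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(r) for r in extract("customs_export.csv"))
```

    In the setup the abstract describes, SSIS packages would take the place of such a script and SQL Server Agent Jobs would handle the scheduling.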

    The Use of OLAP Reporting Technology to Improve Patient Care Services Decision Making Within the University Health Center

    The purpose of this paper is to demonstrate that it is feasible for the student health center to leverage existing clinical data in a data warehouse by using OLAP reporting, in order to improve patient care and health care services decision making. Historically, university health care centers have relied mainly on operational data sources for critical health care decision making. These sources do not contain enough information to allow health officials to recognize trends or to predict how future changes in health care services might improve overall health care. Four proof-of-concept artifacts are constructed through a design science research methodology, and a feasibility study is presented to build the case for the adoption of OLAP reporting technology. The study concludes that it is feasible to implement an OLAP reporting infrastructure at the student health center if physicians, clinical staff, and administration clearly define the need for the new technology, develop a proper data extraction, transformation, and loading strategy, and provide adequate project management and data warehouse design for the implementation of the solution
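    The kind of OLAP roll-up the study argues for can be pictured with a small pandas pivot that aggregates visit counts along month and clinic dimensions; this is only a sketch with invented column names and figures, not one of the artifacts built in the paper.

```python
# Illustrative OLAP-style roll-up: aggregate clinical visit records along
# dimensions (month, clinic) with pandas standing in for a reporting cube.
# All column names and numbers are invented for the example.
import pandas as pd

visits = pd.DataFrame(
    {
        "visit_month": ["2024-01", "2024-01", "2024-02", "2024-02"],
        "clinic": ["Primary Care", "Mental Health", "Primary Care", "Primary Care"],
        "visit_count": [120, 45, 98, 30],
    }
)

# Roll up visits by month and clinic -- the sort of trend view that
# operational data sources alone do not readily provide.
cube = visits.pivot_table(
    index="visit_month",
    columns="clinic",
    values="visit_count",
    aggfunc="sum",
    fill_value=0,
)
print(cube)
```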

    Data Warehouse Design and Management: Theory and Practice

    The need to store data and information permanently, for reuse at later stages, is a very relevant problem in the modern world and now affects a large number of people and economic agents. The storage and subsequent use of data can indeed be a valuable source for decision making or for increasing commercial activity. The step beyond data storage is the efficient and effective use of information, particularly through Business Intelligence, which rests on the implementation of a Data Warehouse. In the present paper we analyze Data Warehouses and their theoretical models, and illustrate a practical implementation in a specific case study on a pharmaceutical distribution company. Keywords: data warehouse, database, data model.
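    A Data Warehouse of the kind discussed here is commonly organized as a star schema. The sketch below sets one up in SQLite with invented table names (a shipment fact table plus date and product dimensions) and runs a typical Business Intelligence query; it is illustrative only and not drawn from the case study.

```python
# Minimal star-schema sketch: one fact table joined to two dimension tables.
# SQLite is used as a stand-in engine; all table and column names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript(
    """
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_date TEXT, month TEXT);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT, category TEXT);
    CREATE TABLE fact_shipment (
        date_key    INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        revenue     REAL
    );
    """
)

# A typical BI query over the star schema: monthly revenue by product category.
query = """
    SELECT d.month, p.category, SUM(f.revenue) AS revenue
    FROM fact_shipment f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.month, p.category
"""
print(con.execute(query).fetchall())
```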

    Database migration processes and optimization using BSMS (bank staff management system)

    Databases are fundamentally a storage technology designed to carry out tasks involving complex data, and data integrity is essential. For many companies, a database is literally an electronic representation of the company's business, and losing any piece of data during migration is unacceptable. There are various business reasons for migrating data, among them archiving, data warehousing, and moving to a new environment, platform, or technology. Database migration is a complex, multi-phase process that typically includes assessment, database schema conversion, data migration, and functional testing. Online Transaction Processing (OLTP) databases are usually highly normalized for efficiency, which serves purposes such as ensuring data integrity, eliminating data redundancy, and reducing record locking. However, this design approach yields a large number of tables, and each of these tables and their foreign key constraints must be taken into account during data migration. Moreover, unlike conventional tasks, the acceptance criterion for a data migration job is a full 100%, because errors are not tolerated in databases and quality is paramount. This thesis presents the challenges and concerns that arose while successfully migrating data from a slow, inefficient, and outdated database platform called Paradox to a far more advanced database called Oracle. An indexing technique was used to improve query performance by retrieving data quickly, without any inconsistency or data loss
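    The migration pattern described above (copy the data, verify completeness, then index the target for query performance) can be sketched as follows, with SQLite standing in for both the legacy Paradox source and the Oracle target; the staff table and its columns are invented for the example.

```python
# Sketch of a copy-verify-index migration step. SQLite stands in for both the
# legacy source and the new target; the table and file names are invented.
import sqlite3

source = sqlite3.connect("legacy_bsms.db")
target = sqlite3.connect("new_bsms.db")

source.execute("CREATE TABLE IF NOT EXISTS staff (id INTEGER, branch TEXT, name TEXT)")
target.execute("CREATE TABLE IF NOT EXISTS staff (id INTEGER, branch TEXT, name TEXT)")

# 1. Copy the data table by table.
rows = source.execute("SELECT id, branch, name FROM staff").fetchall()
target.executemany("INSERT INTO staff VALUES (?, ?, ?)", rows)
target.commit()

# 2. Acceptance check: the migration only passes if nothing was lost.
src_count = source.execute("SELECT COUNT(*) FROM staff").fetchone()[0]
tgt_count = target.execute("SELECT COUNT(*) FROM staff").fetchone()[0]
assert src_count == tgt_count, "row counts differ; migration must be 100% complete"

# 3. Index the migrated table so queries on the new platform are fast.
target.execute("CREATE INDEX IF NOT EXISTS idx_staff_branch ON staff(branch)")
target.commit()
```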

    IMSMA V3.0: Experiences From the “IMSMA Diaspora”

    The Information Management System for Mine Action (IMSMA) V3.0 was released in June 2003, and early experience with the system has been positive. The article describes the new version, summarizes its salient features, including geographic information system (GIS) capabilities based on ArcView GIS, and discusses local customization. Recommendations include operations-oriented training that focuses on reporting information from IMSMA. The authors also describe upgrading to IMSMA V3.0, based on their experience as IMSMA administrators and trainers within their organizations

    Utilizing Big Data Analytics to Improve Education

    Analytics can be defined as the process of determining, assessing, and interpreting meaning from volumes of data. It is commonly divided into three categories: descriptive, predictive, and prescriptive. Predictive analytics can serve many segments of society, as it can reveal hidden relationships that may not be apparent with descriptive modeling. Advances in analytics play an important role in higher education planning. They help answer several questions, such as which students will enroll in a particular course, which courses are trending or becoming obsolete, how satisfied students are with the current education system, how effective the online study environment is, how to design a better curriculum, and how likely students are to transfer, drop out, or fail to complete a course. Data analytics not only helps in analyzing these points but can also support predictive modeling for faculty, administrative, and student groups who are looking for reliable results about university rankings, on which they base their decisions. Using the dataset “Academic Ranking of World Universities, 2003-2014”, we analyzed how a university’s management and faculty could adapt to changes to improve their education and thereby the ranking of their university in the upcoming years. Microsoft SQL Server Data Mining Add-ins for Excel 2008 was employed as the data mining tool for predicting the trend in university rankings. This research paper concentrates on predictive analysis of university rankings using forecasting based on data mining techniques
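    As a toy stand-in for the ranking forecast (the paper itself uses the SQL Server Data Mining Add-ins in Excel), one could fit a simple linear trend to a university's historical rank and extrapolate it one year ahead; the rank values below are invented, not taken from the dataset.

```python
# Toy forecasting sketch: fit rank = a * year + b over the 2003-2014 window
# and extrapolate to the next year. The rank values are invented.
import numpy as np

years = np.arange(2003, 2015)
ranks = np.array([88, 85, 83, 80, 78, 77, 75, 74, 72, 70, 69, 67])  # invented

a, b = np.polyfit(years, ranks, deg=1)
forecast_2015 = a * 2015 + b
print(f"trend: {a:.2f} rank positions per year, forecast for 2015: {forecast_2015:.0f}")
```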

    Type Ahead Search in Database using SQL

    A type-ahead search system computes answers on the fly as a user types in a keyword query character by character. We study how to support type-ahead search on data in a relational DBMS, focusing on how to support this kind of search using SQL. A key challenge is how to leverage existing database functionalities to meet the high performance needed for interactive speed. We extend the approach to the case of fuzzy queries and suggest various techniques to improve query performance. We propose an incremental computation method to answer multi-keyword queries, and investigate how to support first-N queries and incremental updates. Our experimental results on large, real data sets show that the proposed techniques enable a DBMS to support search-as-you-type on large tables. DOI: 10.17762/ijritcc2321-8169.15024
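    The core idea can be sketched with plain SQL prefix queries in which the result set for a shorter prefix is reused to answer the next keystroke incrementally; SQLite and the sample titles below are illustrative, not the authors' implementation.

```python
# Search-as-you-type sketch on a plain relational engine: each keystroke runs a
# prefix query, and the candidate set from the previous prefix is narrowed
# incrementally instead of rescanning the whole table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE papers (id INTEGER PRIMARY KEY, title TEXT)")
con.executemany(
    "INSERT INTO papers (title) VALUES (?)",
    [("data warehouse design",), ("database migration",), ("data mining",)],
)

def prefix_search(prefix, previous_ids=None):
    """Return ids of rows whose title starts with `prefix`.

    If `previous_ids` (results for a shorter prefix) is given, only those rows
    are re-checked -- the incremental step that keeps typing interactive.
    """
    if previous_ids:
        placeholders = ",".join("?" * len(previous_ids))
        sql = f"SELECT id FROM papers WHERE id IN ({placeholders}) AND title LIKE ?"
        args = list(previous_ids) + [prefix + "%"]
    else:
        sql = "SELECT id FROM papers WHERE title LIKE ?"
        args = [prefix + "%"]
    return [row[0] for row in con.execute(sql, args)]

ids = prefix_search("data")          # user has typed "data"
ids = prefix_search("datab", ids)    # next keystroke narrows the previous result
print(ids)
```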

    XML content warehousing: Improving sociological studies of mailing lists and web data

    In this paper, we present the guidelines for an XML-based approach to the sociological study of Web data, such as the analysis of mailing lists or databases available online. The use of an XML warehouse is a flexible solution for storing and processing this kind of data. We propose an implemented solution and show possible applications with our case study of the profiles of experts involved in W3C standard-setting activity. We illustrate the sociological use of semi-structured databases by presenting our XML Schema for mailing-list warehousing. An XML Schema allows many additions or crossings of data sources without modifying existing data sets, while allowing for possible structural evolution. We also show that the existence of hidden data implies increased complexity for traditional SQL users. XML content warehousing allows both exhaustive warehousing and recursive queries over content, with far less dependence on the initial storage. We finally present the possibility of exporting the data stored in the warehouse to commonly used advanced software devoted to sociological analysis
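    A minimal illustration of content-oriented querying over warehoused mailing-list messages, using Python's standard XML tooling; the element names are invented for the example and do not reproduce the authors' XML Schema.

```python
# Query semi-structured mailing-list messages by content, without a fixed
# relational schema. Element and attribute names are invented.
import xml.etree.ElementTree as ET

archive = ET.fromstring(
    """
    <archive>
      <message id="1">
        <author affiliation="W3C">alice</author>
        <thread>xml-schema</thread>
        <body>Comments on the draft...</body>
      </message>
      <message id="2">
        <author affiliation="univ">bob</author>
        <thread>xml-schema</thread>
        <body>Re: comments on the draft...</body>
      </message>
    </archive>
    """
)

# Content-oriented query: who posted in a given thread, and from which affiliation?
for msg in archive.findall("./message[thread='xml-schema']"):
    author = msg.find("author")
    print(author.text, author.get("affiliation"))
```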