
    Comparison of OLTP Database and Data Warehouse Usage

    As permanent storage for business-process transactions, a database is a crucial component of any system. In practice, a single database is often asked to serve purposes beyond its intended capability. Separating the transactional database from the database used for decision making exploits the strengths of each as fully as possible. Moreover, daily transactions grow the database month by month and year by year, degrading performance, especially for daily customer services. Separating the two reduces the load on the daily transactional database, speeds up the transactions run by the application, and thereby improves customer satisfaction; producing strategic reports for decision making no longer becomes a nightmare or an afterthought. Efficiency, measured as the number of bytes stored, and effectiveness, measured as the speed of the SQL queries that build the decision-making reports, are used as the criteria for this comparison
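    The trade-off the abstract describes can be sketched in a few lines. The following is an illustrative example only (the schema, table names, and data are invented, not taken from the paper): the same reporting query is timed against a normalized OLTP-style schema and against a pre-joined, warehouse-style table.

```python
# Hypothetical sketch: the same aggregate report computed over a
# normalized OLTP schema (join required) and over a denormalized
# warehouse-style table. All names and data are invented.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized OLTP tables
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, region TEXT)")
cur.execute("CREATE TABLE sale (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO customer VALUES (?, ?)",
                [(i, f"region-{i % 5}") for i in range(1000)])
cur.executemany("INSERT INTO sale VALUES (?, ?, ?)",
                [(i, i % 1000, float(i % 100)) for i in range(50000)])

# Denormalized warehouse-style table: region copied into every row
cur.execute("CREATE TABLE sale_fact (region TEXT, amount REAL)")
cur.execute("""INSERT INTO sale_fact
               SELECT c.region, s.amount
               FROM sale s JOIN customer c ON s.customer_id = c.id""")
conn.commit()

def timed(query):
    """Run a query and return (rows, elapsed seconds)."""
    start = time.perf_counter()
    rows = cur.execute(query).fetchall()
    return rows, time.perf_counter() - start

oltp_rows, t_oltp = timed("""SELECT c.region, SUM(s.amount)
                             FROM sale s JOIN customer c ON s.customer_id = c.id
                             GROUP BY c.region""")
dw_rows, t_dw = timed("SELECT region, SUM(amount) FROM sale_fact GROUP BY region")

assert sorted(oltp_rows) == sorted(dw_rows)  # same answer, different cost
print(f"join over OLTP schema: {t_oltp:.4f}s, warehouse table: {t_dw:.4f}s")
```

    The warehouse table trades extra stored bytes (the duplicated `region` values) for a cheaper reporting query, which is exactly the efficiency-versus-effectiveness comparison the paper uses as its justification.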

    In-Memory and Column Storage Changes IS Curriculum

    Random Access Memory (RAM) prices have been dropping precipitously. This has given rise to the possibility of keeping all gathered data in RAM rather than on disk. This technological capability, together with the benefits of a columnar storage database system, reduces the advantage of the Relational Database Management System (RDBMS) and eliminates the need to keep Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) activities separate. The RDBMS structure was required by the need to separate daily OLTP activity from OLAP analysis of that data. In-memory processing allows both activities to run simultaneously: data analysis can be done at the speed of data capture. Relational databases are no longer the only option for organizations. In-memory technology is emerging, and university curricula need to innovate and build skills in denormalizing existing (legacy) database systems to prepare the next generation of data managers
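    The row-versus-column distinction the abstract relies on can be shown with plain Python data structures. This is a conceptual sketch, not the paper's system: the same records are held once row-oriented and once column-oriented, and an analytical aggregate only needs to touch the two columns it uses.

```python
# Conceptual sketch: row layout vs. columnar layout of the same records.
# Field names and values are invented for illustration.
rows = [{"id": i, "price": i * 0.5, "qty": i % 7} for i in range(10000)]

# Columnar layout: one contiguous list per attribute. An OLTP lookup
# wants a whole row; an OLAP scan wants a whole column.
columns = {
    "id":    [r["id"] for r in rows],
    "price": [r["price"] for r in rows],
    "qty":   [r["qty"] for r in rows],
}

# Analytical query: total revenue. The row layout walks every full
# record; the columnar layout reads only the two columns involved.
total_row = sum(r["price"] * r["qty"] for r in rows)
total_col = sum(p * q for p, q in zip(columns["price"], columns["qty"]))
assert total_row == total_col  # same answer from either layout
```

    In a real in-memory columnar engine the per-column lists are contiguous, compressible arrays, which is what lets OLAP scans run at capture speed alongside OLTP lookups.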

    Workload Performance Evaluation of a Large Spatial Database for DSS-Based Disaster Management

    Workload performance evaluation can be applied during disaster management, especially in the response phase, to handle large spatial data in the event of an eruption; this study concerns the Merapi volcano in Indonesia, known for one of the largest eruptions in the world. After an eruption, the affected areas are isolated and difficult for rescuers to access, making it very hard both to reach the isolated areas and to rescue victims from them. Although previous research has produced solutions to this issue, other aspects, including how workload is sent to the database, must be considered for the process to be effective and efficient. In addition, the shortest route should be determined quickly and accurately so that victims can leave the isolated area and reach an evacuation point safely. This research studies workload performance, which is crucial to the working mechanism of the Database Management System (DBMS). A review of recent studies makes clear that research in this particular area is scarce. The general objective of this research is therefore to evaluate and predict the workload performance of a spatial DBMS, here PostgreSQL as distinct from MySQL. Based on the incoming workload, the approach predicts whether the workload belongs to the OLTP or the DSS performance type. From the SQL statements, the DBMS can capture and record the process, measure the analyzed performance, and supply the workload classifier with snapshots taken from the DBMS. For example, Dijkstra's algorithm has been shown to determine the shortest and safest path. All workloads obtained during the process are then recorded into a single Excel file.
    Case-Based Reasoning (CBR) optimized with a Hash Search technique is adopted in this study to evaluate and predict the workload performance of the PostgreSQL DBMS. Data recorded during the shortest-path analysis shows that evaluation and prediction of workload performance for shortest-path analysis using Dijkstra's algorithm were implemented successfully. The proposed CBR with Hash Search produced excellent prediction accuracy, and evaluation with a confusion matrix showed excellent accuracy as well as improved execution time. The results further indicate that the prediction model, using CBR optimized with Hash Search to classify workload data from shortest-path analysis with Dijkstra's algorithm, can predict incoming workload from the state of the DBMS parameters. Delivering this information about incoming workload to the DBMS is crucial for keeping the PostgreSQL DBMS running smoothly
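    The shortest-path step the abstract attributes to Dijkstra's algorithm can be sketched with a standard heap-based implementation. The toy graph below is an invented stand-in for the road network between an isolated area and an evacuation point, not data from the study.

```python
# Minimal Dijkstra sketch using a binary heap. The graph is a
# dict-of-dicts: graph[node] maps each neighbor to an edge weight.
import heapq

def dijkstra(graph, source):
    """Return the shortest distance from source to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, weight in graph.get(node, {}).items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Invented road network: distances in kilometres
roads = {
    "village":  {"junction": 4.0, "bridge": 9.0},
    "junction": {"bridge": 3.0, "shelter": 8.0},
    "bridge":   {"shelter": 2.0},
}
print(dijkstra(roads, "village"))
# → {'village': 0.0, 'junction': 4.0, 'bridge': 7.0, 'shelter': 9.0}
```

    The route village → junction → bridge → shelter (9.0 km) beats both the direct edge to the bridge and the junction-to-shelter edge, which is the kind of result the response phase needs to compute timely and accurately.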

    Growth of relational model: Interdependence and complementary to big data

    A database management system is a long-established applied discipline that provides a platform for the creation, movement, and use of voluminous data. The area has witnessed a series of developments and technological advances, from the conventional structured database to the recent buzzword, big data. This paper aims to provide a complete picture of the relational database model, which is still widely used because of its well-known ACID properties: atomicity, consistency, isolation, and durability. Specifically, the objective of this paper is to highlight the adoption of relational-model approaches by big-data techniques. To explain the reason for this incorporation, the paper qualitatively studies the advancements made over time in the relational data model. First, variations in data storage layout are illustrated according to the needs of the application. Second, fast data-retrieval techniques such as indexing, query processing, and concurrency-control methods are reviewed. The paper provides vital insights for appraising the efficiency of the structured database in an unstructured environment, particularly when both consistency and scalability become an issue in a hybrid transactional and analytical database management system
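    The atomicity half of the ACID guarantee the abstract leans on can be demonstrated with a short, self-contained sketch (an invented two-account transfer, not an example from the paper): if a transaction fails partway through, none of its writes survive.

```python
# Hedged sketch of atomicity: a simulated failure mid-transfer is
# rolled back, so neither half of the transfer becomes visible.
# Table and account names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO account VALUES ('a', 100.0), ('b', 50.0)")
conn.commit()

try:
    with conn:  # commits on success, rolls back on an exception
        conn.execute("UPDATE account SET balance = balance - 70 WHERE name = 'a'")
        raise RuntimeError("simulated failure mid-transfer")
        conn.execute("UPDATE account SET balance = balance + 70 WHERE name = 'b'")
except RuntimeError:
    pass  # the second leg never ran; the first leg was rolled back

balances = dict(conn.execute("SELECT name, balance FROM account"))
assert balances == {"a": 100.0, "b": 50.0}  # transaction was atomic
```

    It is exactly this all-or-nothing behavior that becomes hard to retain once scalability pressures push a system toward big-data storage, which is the tension the paper examines.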

    Analysis and Design of a Library Data Warehouse (Case Study: Universitas Binadarma Palembang Library)

    A library is a facility used to obtain information, since its collections can be used by the academic community for that purpose. This study designs a Data Warehouse: a very large data repository that provides a subject-oriented database of historical information and can be used to support a decision-making system. The Data Warehouse designed here serves as the library's repository and is implemented with the Pentaho Kettle tool. The design follows the standard steps for developing a Data Warehouse. The result of this study is a Data Warehouse design used as a repository for library data

    Analysis of Factors Affecting the ETL Process in a Data Warehouse

    ETL (Extract, Transform, Load) is the most time-consuming step in developing a data warehouse. The success of the ETL process is strongly influenced by the quality of the data in the OLTP database. This study aims to find the noise that may arise during the ETL process, using a data warehouse development method. The OLTP database used for the research is the library database of STMIK AMIKOM Yogyakarta, and the data warehouse is built around a fact table of library loan transactions. The tests show that the failures in the ETL process from the OLTP database to the data warehouse are caused by noise. Analysis revealed that the noise lies in the pinjam_mhs table: rows with a null value in the kd_pinjam_mhs column. Therefore, before the ETL process is run, this noise must be removed from the source (OLTP) database
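    A pre-ETL check for the null-key noise the study found can be sketched in a few lines. The table and column names (`pinjam_mhs`, `kd_pinjam_mhs`) come from the abstract, but the helper function and sample rows below are invented for illustration.

```python
# Hypothetical pre-ETL data-quality check: flag source rows whose key
# column is NULL so they can be cleaned before loading the warehouse.
def find_null_noise(rows, key_column):
    """Return indices of rows where the key column is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(key_column) is None]

# Invented sample of the pinjam_mhs (student loan) source table
pinjam_mhs = [
    {"kd_pinjam_mhs": "P001", "nim": "10.1.001"},
    {"kd_pinjam_mhs": None,   "nim": "10.1.002"},  # noise: null key
    {"kd_pinjam_mhs": "P003", "nim": "10.1.003"},
]

print(find_null_noise(pinjam_mhs, "kd_pinjam_mhs"))  # → [1]
```

    In a real pipeline this kind of check would run against the OLTP source (e.g. as a filter step in Pentaho Kettle) before the load, so that null-key rows are repaired or excluded rather than failing the ETL run.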