
    Content warehouses

    Nowadays, content management systems are an established technology. Based on experience from several application scenarios, we discuss the points of contact between content management systems and other disciplines of information systems engineering, such as data warehouses, data mining, and data integration. We derive a system architecture called a "content warehouse" that integrates these technologies and defines a more general, more sophisticated view of content management. As an example, a system for the collection, maintenance, and evaluation of biological content, such as survey data and multimedia resources, is presented as a case study.
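
    A minimal sketch of the content-warehouse idea this abstract describes: content items are stored together with structured metadata so that warehouse-style aggregate queries can be run over them. All class and field names below are illustrative, not taken from the paper.

```python
# Hypothetical sketch: content items plus metadata, with a roll-up query.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    item_id: str
    kind: str                      # e.g. "survey", "image", "audio"
    payload: bytes                 # the raw content itself
    metadata: dict = field(default_factory=dict)

class ContentWarehouse:
    def __init__(self):
        self._items = {}

    def ingest(self, item: ContentItem):
        """Store an item; a real system would also run integration and
        cleaning steps here, as in a classic ETL pipeline."""
        self._items[item.item_id] = item

    def count_by(self, dimension: str):
        """A warehouse-style roll-up over one metadata dimension."""
        counts = defaultdict(int)
        for item in self._items.values():
            counts[item.metadata.get(dimension, "unknown")] += 1
        return dict(counts)

# Usage: group biological survey content by sampling site.
cw = ContentWarehouse()
cw.ingest(ContentItem("s1", "survey", b"...", {"site": "lake-a"}))
cw.ingest(ContentItem("s2", "survey", b"...", {"site": "lake-a"}))
cw.ingest(ContentItem("m1", "image", b"...", {"site": "lake-b"}))
print(cw.count_by("site"))   # {'lake-a': 2, 'lake-b': 1}
```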

    Database migration processes and optimization using BSMS (bank staff management system)

    Databases are, fundamentally, a storage technology designed to carry out tasks that depend on complex data, and data integrity is essential. For many companies, the database is quite literally an electronic representation of the company's business, and losing any piece of data during migration is unacceptable. There are various business reasons for moving data, among them archiving, data warehousing, and moving to a new environment, platform, or technology. Database migration is a complex, multi-phase process that typically includes assessment, database schema conversion, data migration, and functional testing. Online Transaction Processing (OLTP) databases are usually highly normalized for efficiency, which ensures data integrity, eliminates data redundancy, and reduces record locking. However, this design approach yields a large number of tables, and each of these tables and its foreign-key constraints must be taken into account during data migration. Moreover, unlike conventional tasks, the acceptance criterion for a data migration job is a full 100%, because errors are not tolerated in databases and quality matters. This thesis presents the challenges and concerns that arose while successfully migrating data from a slow, inefficient, and outdated database platform, Paradox, to a far more advanced database, Oracle. An indexing technique was used to improve query performance, retrieving data quickly without any inconsistency or data loss.
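
    A minimal sketch of the migration pattern the abstract describes: copy rows in batches from a source to a target database, verify row counts against the 100% acceptance criterion, and index frequently queried columns afterwards. sqlite3 stands in for the Paradox source and the Oracle target here; the table and column names are illustrative.

```python
# Hypothetical batched migration with a row-count acceptance check.
import sqlite3

def migrate(src: sqlite3.Connection, dst: sqlite3.Connection, batch=500):
    dst.execute("CREATE TABLE IF NOT EXISTS staff (id INTEGER, name TEXT)")
    cur = src.execute("SELECT id, name FROM staff")
    while True:
        rows = cur.fetchmany(batch)
        if not rows:
            break
        dst.executemany("INSERT INTO staff VALUES (?, ?)", rows)
    dst.commit()
    # Acceptance check: no data loss is tolerated, so counts must match.
    n_src = src.execute("SELECT COUNT(*) FROM staff").fetchone()[0]
    n_dst = dst.execute("SELECT COUNT(*) FROM staff").fetchone()[0]
    assert n_src == n_dst, f"lost {n_src - n_dst} rows during migration"
    # Index a frequently queried column to speed up later retrieval.
    dst.execute("CREATE INDEX IF NOT EXISTS idx_staff_name ON staff (name)")

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE staff (id INTEGER, name TEXT)")
src.executemany("INSERT INTO staff VALUES (?, ?)", [(1, "a"), (2, "b")])
dst = sqlite3.connect(":memory:")
migrate(src, dst)
```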

    A comparison of statistical machine learning methods in heartbeat detection and classification

    In health care, patients with heart problems require quick responsiveness in a clinical setting or in the operating theatre. Towards that end, automated classification of heartbeats is vital, as some heartbeat irregularities are time consuming to detect. Therefore, analysis of electrocardiogram (ECG) signals is an active area of research. The methods proposed in the literature depend on the structure of a heartbeat cycle. In this paper, we use interval- and amplitude-based features together with a few samples from the ECG signal as a feature vector. We studied a variety of classification algorithms, focusing especially on a type of arrhythmia known as the ventricular ectopic beat (VEB). We compare the performance of the classifiers against algorithms proposed in the literature and make recommendations regarding features, sampling rate, and the choice of classifier to apply in a real-time clinical setting. The extensive study is based on the MIT-BIH arrhythmia database. Our main contributions are the evaluation of existing classifiers over a range of sampling rates, a recommendation of a detection methodology to employ in a practical setting, and the extension of the notion of a mixture of experts to a larger class of algorithms.
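
    An illustrative sketch of the evaluation setup described in the abstract: build a feature vector from interval and amplitude features plus a few subsampled signal values around each beat, then compare classifiers with cross-validation. The data below is synthetic; the paper itself evaluates on MIT-BIH records, and the particular classifiers shown are stand-ins.

```python
# Hypothetical feature extraction and classifier comparison on synthetic beats.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def features(beat, rr_prev, rr_next):
    """Interval + amplitude features plus every 16th sample of the window."""
    return np.concatenate(([rr_prev, rr_next, beat.max(), beat.min()],
                           beat[::16]))

# Synthetic stand-in for labelled beats (0 = normal, 1 = ventricular ectopic).
X = np.array([features(rng.normal(size=128),
                       rng.uniform(0.6, 1.2),
                       rng.uniform(0.6, 1.2)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

for clf in (LogisticRegression(max_iter=1000), KNeighborsClassifier()):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```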

    Support for taxonomic data in systematics

    The systematics community works to increase our understanding of biological diversity by identifying and classifying organisms and by using phylogenies to understand the relationships between those organisms. It has made great progress in the building of phylogenies and in the development of algorithms. However, it makes insufficient provision for preserving research outcomes and making them widely accessible and queryable, and this is where database technologies can help. This thesis makes a contribution in the area of database usability by addressing the query needs present in the community, as supported by the analysis of query logs. It formulates clearly the user requirements in the area of phylogeny and classification queries. It then reports on the use of warehousing techniques in the integration of data from many sources to satisfy those requirements. It shows how to perform query expansion with synonyms and vernacular names, and how to implement hierarchical query expansion effectively. A detailed analysis of the improvements offered by those query expansion techniques is presented. This is supported by an exposition of the database techniques underlying this development and of the user and programming interfaces (web services) that make this novel development available to both end-users and programs.
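
    A minimal sketch of the two expansion steps the abstract mentions: expanding a taxon name with its synonyms and vernacular names, and expanding a hierarchical query to cover all descendant taxa. The names and lookup tables here are invented for illustration and are not taken from the thesis.

```python
# Hypothetical synonym and hierarchical query expansion over a small taxonomy.
SYNONYMS = {"Puma concolor": ["Felis concolor", "cougar", "mountain lion"]}
CHILDREN = {"Felidae": ["Puma", "Felis"],
            "Puma": ["Puma concolor"],
            "Felis": ["Felis catus"]}

def expand_names(taxon):
    """Synonym/vernacular expansion for one taxon name."""
    return [taxon] + SYNONYMS.get(taxon, [])

def expand_hierarchy(taxon):
    """Hierarchical expansion: the taxon plus all of its descendants."""
    result = [taxon]
    for child in CHILDREN.get(taxon, []):
        result.extend(expand_hierarchy(child))
    return result

# A query for the family Felidae matches every descendant taxon, under any
# synonym or vernacular name its records may have been published with.
terms = [n for t in expand_hierarchy("Felidae") for n in expand_names(t)]
print(terms)
```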

    Integrating data warehouses with web data : a survey

    This paper surveys the most relevant research on combining Data Warehouse (DW) and Web data. It studies the XML technologies that are currently being used to integrate, store, query, and retrieve Web data, and their application to DWs. The paper reviews different DW distributed architectures and the use of XML languages as an integration tool in these systems. It also introduces the problem of dealing with semistructured data in a DW. It studies Web data repositories, the design of multidimensional databases for XML data sources, and the XML extensions of OnLine Analytical Processing techniques. The paper addresses the application of information retrieval technology in a DW to exploit text-rich document collections. The authors hope that the paper will help to uncover the main limitations and opportunities that the combination of the DW and Web fields offers, as well as to identify open research lines.
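
    A small sketch of one integration step the survey covers: flattening semistructured XML Web data into rows of a fact table so it can be loaded into a multidimensional (star-schema) DW. The XML layout and the column names are invented for illustration.

```python
# Hypothetical flattening of XML into (dimension, dimension, measure) rows.
import xml.etree.ElementTree as ET

doc = """<sales>
  <sale region="EU" product="book" amount="12.5"/>
  <sale region="US" product="book" amount="7.0"/>
</sales>"""

def xml_to_facts(xml_text):
    root = ET.fromstring(xml_text)
    # Each <sale> element becomes one fact row: two dimension keys plus
    # one numeric measure, ready for OLAP-style aggregation.
    return [(s.get("region"), s.get("product"), float(s.get("amount")))
            for s in root.iter("sale")]

print(xml_to_facts(doc))  # [('EU', 'book', 12.5), ('US', 'book', 7.0)]
```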

    The Modernization Process of a Data Pipeline

    Data plays an integral part in a company’s decision-making. Therefore, decision-makers must have the right data available at the right time. Data volumes grow constantly, and new data is continuously needed for analytical purposes. Many companies use data warehouses to store data in an easy-to-use format for reporting and analytics. The challenge with data warehousing is presenting data from many differently structured source systems in one unified structure. A process called extract, transform, and load (ETL) or extract, load, and transform (ELT) is used to load data into the data warehouse. This thesis describes the modernization process of one such pipeline. The previous solution, which used an on-premises Teradata platform for computation and SQL stored procedures for the transformation logic, is replaced by a new solution. The goal of the new solution is a process that uses modern tools, is scalable, and follows programming best practices. The cloud-based Databricks platform is used for computation, and dbt is used as the transformation tool. Lastly, a comparison is made between the new and old solutions, and their benefits and drawbacks are discussed.
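
    A minimal sketch of the ELT pattern this abstract contrasts with classic ETL: raw source rows are loaded into the warehouse first, and the transformation then runs as SQL inside the warehouse, which is how dbt-style tools work. sqlite3 stands in for the Databricks warehouse here, and all table names are illustrative.

```python
# Hypothetical ELT step: load raw rows, then transform in-warehouse with SQL.
import sqlite3

wh = sqlite3.connect(":memory:")

# Extract + Load: raw data lands in the warehouse unmodified.
wh.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT, country TEXT)")
wh.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
               [(1, "10.5", "FI"), (2, "3.0", "FI"), (3, "8.25", "SE")])

# Transform: a dbt model is essentially a SELECT like this one, which the
# tool materializes as a table or view with one unified structure.
wh.execute("""
CREATE TABLE orders_by_country AS
SELECT country, SUM(CAST(amount AS REAL)) AS total
FROM raw_orders
GROUP BY country
""")
print(wh.execute("SELECT * FROM orders_by_country").fetchall())
```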