
    Examining Quality Factors Influencing the Success of Data Warehouse

    Increased organizational dependence on data warehouse (DW) systems has driven management attention toward making these systems successful. However, the successful implementation rate of data warehouse systems is low, and many firms do not achieve their intended goals. A recent study shows that improving and evaluating data warehouse success is one of the top concerns facing IT/DW executives. Nevertheless, there is a lack of research addressing the success of data warehouse systems. In addition, it is important for organizations to learn which quality aspects need to be emphasized before the actual data warehouse is built, and to determine which aspects of data warehouse success are critical to organizations, so that IT/DW executives can devise effective data warehouse success improvement strategies. Therefore, the purpose of this study is to further the understanding of the factors that are critical to evaluating the success of data warehouse systems. The study developed a comprehensive model for the success of data warehouse systems by adapting the updated DeLone and McLean IS Success Model, relating the quality factors on one side to the net benefits of the data warehouse on the other. The study used a quantitative method, testing the research hypotheses with data collected through a web-based survey. The sample consisted of 244 members of The Data Warehousing Institute (TDWI) working in a variety of industries around the world. The questionnaire measured six independent variables (system quality, information quality, service quality, relationship quality, user quality, and business quality) and one dependent variable, the net benefits of data warehouse systems. 
Analysis using descriptive statistics, factor analysis, correlation analysis, and regression analysis supported all hypotheses. The results indicated a statistically significant positive relationship between each quality factor and the net benefits of data warehouse systems, implying that the net benefits increase as the overall qualities increase. Yet little thought seems to have been given to what data warehouse success is, what is necessary to achieve it, and what benefits can realistically be expected. It therefore appears plausible that the way data warehouse systems success is pursued could change in the future.
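The core of the analysis described above, regressing net benefits on six quality factors, can be sketched in a few lines. The data below is synthetic and the factor weights are invented for illustration, so this shows the form of the regression rather than the study's actual results.

```python
import numpy as np

# Hypothetical survey data: each row is a respondent, each column a
# quality factor score (e.g. averaged Likert items on a 7-point scale).
# Factor names follow the abstract; the values are synthetic.
rng = np.random.default_rng(0)
factors = ["system", "information", "service",
           "relationship", "user", "business"]
X = rng.uniform(1, 7, size=(244, len(factors)))  # 244 respondents, as in the study

# Simulate a positive relationship: net benefits rise with every factor.
# These weights are invented, not taken from the thesis.
true_weights = np.array([0.4, 0.5, 0.3, 0.2, 0.3, 0.4])
y = X @ true_weights + rng.normal(0, 0.5, size=244)

# Ordinary least squares with an intercept term.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, b in zip(factors, coef[1:]):
    print(f"{name:>12} quality: slope = {b:+.2f}")
```

With data generated this way, every estimated slope comes out positive, mirroring the abstract's finding of a positive relationship between each quality factor and net benefits.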

    Business intelligence for sustainable competitive advantage: the case of telecommunications companies in Malaysia

    The concept of Business Intelligence (BI) as an essential competitive tool has been widely emphasized in the strategic management literature. Yet the sustainability of the competitive advantage that BI capability provides to firms is not well explained. To fill this gap, this study develops a model for successful BI deployment and empirically examines the association between BI deployment and sustainable competitive advantage. Taking the telecommunications industry in Malaysia as a case example, the research focuses on the perceptions held by telecommunications decision makers and executives of the factors that affect successful BI deployment. The research further investigates the relationship between successful BI deployment and the sustainable competitive advantage of telecommunications organizations. Another important aim of this study is to determine the effect of moderating factors such as organizational culture, business strategy, and use of BI tools on BI deployment and the sustainability of the firm's competitive advantage. This research uses a combination of resource-based theory and diffusion of innovation theory to examine BI success and its relationship with the firm's sustainability. The research adopts the positivist paradigm, and a two-phase sequential mixed method combining qualitative and quantitative approaches is employed. A tentative research model is first developed from an extensive literature review; a qualitative field study is then carried out to fine-tune it. Findings from the qualitative phase are also used to develop measures and instruments for the subsequent quantitative phase. 
A survey is carried out with a sample of business analysts and decision makers in telecommunications firms and analyzed using Partial Least Squares-based Structural Equation Modeling (PLS-SEM). The findings reveal that internal resources of the organization, such as BI governance, and perceptions of BI's characteristics influence the successful deployment of BI. Organizations that practice good BI governance, with strong moral and financial support from upper management, have a better chance of realizing successful BI initiatives. The scope of BI governance includes providing sufficient support and commitment in BI funding and implementation, laying out proper BI infrastructure and staffing, and establishing corporate-wide policy and procedures regarding BI. Perceptions of the characteristics of BI, such as its relative advantage, complexity, compatibility, and observability, are also significant in ensuring BI success, implying that executives' positive perceptions of BI initiatives are necessary. Moreover, the most important results of this study indicate that once BI is successfully deployed, executives use the knowledge it provides to sustain the organization's competitive advantage with respect to economic, social, and environmental issues. The BI model explains well how BI was deployed in Malaysian telecommunications companies. This study thus contributes to the existing literature and will assist future BI researchers, especially regarding sustainable competitive advantage. In particular, the model will help practitioners consider the resources they are likely to need when deploying BI. Finally, the study can be extended through further adaptation to other industries and geographic contexts.

    Data quality and data cleaning in database applications

    Today, data plays an important role in people's daily activities. With the help of database applications such as decision support systems and customer relationship management (CRM) systems, useful information or knowledge can be derived from large quantities of data. However, investigations show that many such applications fail to work successfully. There are many possible causes of failure, such as poor system infrastructure design or query performance, but nothing is more certain to yield failure than a lack of concern for data quality. High-quality data is a key to today's business success. The quality of any large real-world data set depends on a number of factors, among which the source of the data is often crucial. It is now recognized that an inordinate proportion of the data in most data sources is dirty. Obviously, a database application with a high proportion of dirty data is not reliable for data mining or for deriving business intelligence, and the quality of decisions made on the basis of such business intelligence is likewise unreliable. To ensure high-quality data, enterprises need processes, methodologies, and resources to monitor and analyze the quality of their data, and methodologies for preventing, detecting, and repairing dirty data. This thesis focuses on improving data quality in database applications with the help of current data cleaning methods. It provides a systematic and comparative description of the research issues related to improving the quality of data, and addresses a number of research issues related to data cleaning. In the first part of the thesis, the literature on data cleaning and data quality is reviewed and discussed. Building on this review, a rule-based taxonomy of dirty data is proposed in the second part of the thesis. 
The proposed taxonomy not only summarizes the most common dirty data types but also serves as the basis for the proposed method for solving the Dirty Data Selection (DDS) problem during the data cleaning process. This informs the design of the DDS process in the data cleaning framework described in the third part of the thesis. The framework retains the most appealing characteristics of existing data cleaning approaches while improving the efficiency, effectiveness, and degree of automation of data cleaning. Finally, a set of approximate string matching algorithms is studied and experimental work undertaken. Approximate string matching, well studied for many years, is an important part of many data cleaning approaches. The experimental work in the thesis confirms that there is no clear best technique: the characteristics of the data, such as the size of a dataset, its error rate, the type of strings it contains, and even the type of typo in a string, significantly affect the performance of the selected techniques. These characteristics also affect the selection of suitable threshold values for the matching algorithms. The results underpin the design of the 'algorithm selection mechanism' in the data cleaning framework, which enhances the performance of data cleaning systems in database applications.
    EThOS - Electronic Theses Online Service, United Kingdom
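One of the classic approximate string matching techniques compared in such experiments is edit (Levenshtein) distance with a normalized threshold. The sketch below is a minimal illustration; the threshold value is a hypothetical example, since, as the abstract notes, suitable thresholds depend on the characteristics of the data.

```python
def levenshtein(a: str, b: str) -> int:
    """Number of single-character edits (insert, delete, substitute)
    needed to turn string a into string b, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_match(a: str, b: str, threshold: float = 0.2) -> bool:
    """Treat two strings as probable duplicates if their normalized
    edit distance falls under the (data-dependent) threshold."""
    dist = levenshtein(a.lower(), b.lower())
    return dist / max(len(a), len(b), 1) <= threshold

print(levenshtein("warehouse", "warehose"))   # → 1 (one deleted letter)
print(is_match("Smith, John", "Smith, Jon"))  # → True
```

The choice of `threshold` is exactly the kind of data-dependent tuning parameter the experiments examine: a value that catches typos in long name strings may wrongly merge short product codes that differ in a single meaningful character.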
