Challenges to Developing Interoperable Data Architecture to Support Sustainable Consumption and Sustainable Supply Chains
This chapter focuses on identifying the key challenges to building a data architecture that improves sustainability in supply chains and provides consumers with better information for decision support. The chapter builds on the trends of sustainable consumption and sustainable supply chain management and incorporates the views of key stakeholders in the coffee supply chain that we interviewed. Key challenges relate to the accuracy and credibility of data in the system, to the availability of technical expertise and infrastructure across the supply chain, and to legal aspects of data ownership, privacy, and confidentiality. Finally, finding appropriate ways to fund the architecture constitutes another important challenge.
Full Information Product Pricing: An Information Strategy for Harnessing Consumer Choice to Create a More Sustainable World
Research and practice in the information systems (IS) field have been evolving over time, nourishing and promoting the development of applications that transform the relationships of individuals, corporations, and governments. Building on this evolution, we push forward a vision of the potential influence of the IS field on one of the most important problems of our times: an increasingly unsustainable world, traditionally considered the product of imperfect markets or market externalities. We describe our work in Full Information Product Pricing (FIPP) and our vision of a FIPP global socio-technical system, I-Choose, as a way to connect consumer choice and values with the environmental, social, and economic effects of production and distribution practices. FIPP and I-Choose represent a vision of how information systems research can contribute to interdisciplinary research in supply chains, governance, and market economies to provide consumers with information packages that help them better understand how, where, and by whom the products they buy are produced. We believe that such a system will have important implications for international trade and agreements, for public policy, and for making a more sustainable world.
Using Ontologies to Develop and Test a Certification and Inspection Data Infrastructure Building Block
Global markets for information-intensive products contain sharp information asymmetries that lead to market inefficiencies resulting from consumer purchasing decisions based on incomplete information. Eliminating or reducing such information asymmetries has long been the goal of governments as well as various nongovernmental entities, which recognize that addressing issues such as sustainable production, socially just labor practices, and reduction in energy needs and health expenditure is closely linked to consumers being fully aware of the economic, environmental, and social impacts of their purchasing decisions. This chapter reports on the creation of an ontology-enabled interoperable data infrastructure, based on semantic technologies, that would enable information sharing in traditionally information-restricted markets. The main technical result is a proof-of-concept set of data standards built on semantic technology applications and the functionalities of a formal ontology of certification and inspection processes. The current proof of concept focuses specifically on certified fair-trade coffee, and while its applicability is currently limited, it has the potential to become universally applicable to any certification and inspection process for any product and service. In addition to producing a number of artifacts relevant to the expandability of the work, such as domain ontologies, the research indicates that while big data systems are necessary, they are not sufficient to create high levels of consumer trust. By testing the criteria using both hand-generated and automated queries, we demonstrate not only that CIDIBB (Certification and Inspection Data Infrastructure Building Block) can test the trustworthiness of certification schemes but also that our ontology generates consistent results.
Impact of customers' digital banking adoption on hidden defection: a combined analytical-empirical approach
The implementation of digital channels as avenues for economic transactions (e.g., online and mobile banking/FinTech) has shifted the paradigm of customer-bank interactions, providing unprecedented opportunities for both parties. The prevailing belief is that digital banking has several advantages, such as lower costs and higher information transferability for customers. These benefits can also promote competition between banks given customers' predilection for "multi-homing," or engagement with multiple banks. This study investigated the impact of customers' digital banking adoption on hidden defection, in which customers purchase financial products from competing banks instead of their primary banks. To this end, we developed an analytical model to provide insights into the effects of digital banking adoption while taking customers' multi-homing behaviors into consideration. We then conducted a series of empirical analyses using comprehensive individual-level transaction data to provide evidence of hidden defection. Our findings indicate that customers with higher loyalty exhibit greater hidden defection after digital banking adoption. Customers who engage primarily with personal-service channels (e.g., branches) show stronger hidden defection than do self-service channel (e.g., ATM) users, and this effect is more prevalent among loyal customers. Our results provide valuable implications for omni-channel services in a market characterized by multi-homing behavior of customers.
A General Approach to Incorporate Data Quality Matrices into Data Mining Algorithms
Data quality is a central issue for many information-oriented organizations. Recent advances in the data quality field reflect the view that a database is the product of a manufacturing process. While routine errors, such as non-existent zip codes, can be detected and corrected using traditional data cleansing tools, many errors systemic to the manufacturing process cannot be addressed. The product of the data manufacturing process is therefore an imprecise recording of information about the entities of interest (i.e., customers, transactions, or assets). In this way, the database is only one (flawed) version of the entities it is supposed to represent. Quality assurance systems such as Motorola's Six Sigma and other continuous improvement methods document the data manufacturing process's shortcomings. A widespread method of documentation is quality matrices. In this paper, we explore the use of readily available data quality matrices for the data mining classification task. We first illustrate that if we do not factor in these quality matrices, our prediction results are sub-optimal. We then suggest a general-purpose ensemble approach that perturbs the data according to these quality matrices to improve predictive accuracy, and we show that the improvement is due to a reduction in variance.
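The perturbation-ensemble idea in this abstract can be sketched with NumPy: a quality matrix gives, for each recorded value of a noisy categorical feature, a distribution over the likely true values; each ensemble member trains or predicts on an independently resampled copy, and the members' votes are combined. The quality matrix, base learner, and data below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quality matrix for a categorical feature with values
# {0, 1, 2}: Q[i, j] = P(true value is j | recorded value is i), as a
# Six Sigma-style quality-assurance audit might document it.
Q = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.0, 0.1, 0.9],
])

def perturb(column, Q, rng):
    """Resample each recorded value from the distribution over true
    values implied by the quality matrix."""
    return np.array([rng.choice(len(Q), p=Q[v]) for v in column])

def classify(column, prototypes):
    """Stand-in base learner: map each feature value to a class label
    via a lookup table (any real classifier could be used here)."""
    return np.array([prototypes[v] for v in column])

def majority_vote(predictions):
    """Combine the ensemble members' predictions by majority vote."""
    stacked = np.stack(predictions)
    return np.apply_along_axis(
        lambda col: np.bincount(col, minlength=3).argmax(), 0, stacked)

recorded = np.array([0, 1, 2, 1, 0])      # noisy recorded feature
prototypes = {0: 0, 1: 1, 2: 2}           # identity map, for clarity

# Each ensemble member sees an independently perturbed copy of the data.
ensemble = [classify(perturb(recorded, Q, rng), prototypes)
            for _ in range(25)]
final = majority_vote(ensemble)
```

Averaging over many perturbed copies is what drives the variance reduction the paper reports: any single perturbed sample is noisy, but the vote concentrates on the most probable true values.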