
    A New Reconceptualization of System Usage Based on a Work System Perspective

    The DIGIT 2021 CFP emphasizes “building resilience with information technology in a time of disruptions.” This paper addresses that issue by going back to basics. It argues that typical uses of the common concept of system usage are insufficient for supporting insightful discussions of using IT for resilience. It proposes a reconceptualization in which IS usage is defined from a work system perspective that treats an IS as a type of work system. The reconceptualization is applied to two case studies, one involving mission-critical workarounds of an ERP system and one involving an electronic medical records (EMR) system. Those examples illustrate many important aspects of IS usage that tend to be ignored in discussions of variables related to a typical notion of system usage. This reconceptualization of system usage has many implications for describing information systems and the ways they support, control, or perform activities in other work systems. That approach provides a deeper appreciation of where and how IT can support resilience in a time of disruptions.

    A framework for personalized dynamic cross-selling in e-commerce retailing

    Cross-selling and product bundling are prevalent strategies in the retail sector. Instead of static bundling offers, i.e., giving the same offer to everyone, personalized dynamic cross-selling generates targeted bundle offers and can help maximize revenues and profits. In resolving the two basic problems of dynamic cross-selling, selecting the right complementary products and optimizing the discount, the issue of computational complexity becomes central as the customer base and the product catalog grow. Traditional recommender systems are built upon simple collaborative filtering techniques, which exploit the informational cues gained from users in the form of product ratings and rating differences across users. The retail setting differs in that there are only records of transactions (in period X, customer Y purchased product Z). Instead of a range of explicit rating scores, transactions form binary data sets: 1 for purchased and 0 for not purchased. This makes it a one-class collaborative filtering (OCCF) problem. Notwithstanding the wider application domains of the OCCF problem, very little work has been done in the retail setting. This research addresses that gap by developing an effective framework for dynamic cross-selling in online retailing. In the first part of the research, we propose an effective yet intuitive approach that integrates temporal information about a product's lifecycle (i.e., the non-stationary nature of the sales history) as a weight component in latent-factor-based OCCF models, improving the quality of personalized product recommendations. To improve scalability for the large product catalogs and transaction sparsity typical of online retailing, the approach relies on the product catalog hierarchy and on segments (rather than individual SKUs) for collaborative filtering.
In the second part of the work, we propose effective bundle discount policies, which estimate a specific customer's interest in potential cross-selling products (identified using the proposed OCCF methods) and calibrate the discount to strike an effective balance between the probability of offer acceptance and the size of the discount. We also develop a simulation platform that generates e-retailer transactions under various settings, which we use to test and validate the proposed methods. To the best of our knowledge, this is the first study to address real-time personalized dynamic cross-selling with discounting. The proposed techniques are applicable to cross-selling, up-selling, and personalized and targeted selling within the e-retail business domain. Through extensive analysis of various market scenario setups, we also provide a number of managerial insights on the performance of cross-selling strategies.
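    The discount-calibration trade-off described above can be sketched in a few lines: expected profit is the acceptance probability times the margin retained after the discount, so the best discount maximizes their product. The logistic acceptance model and all parameters below are illustrative assumptions, not the thesis's fitted policy.

```python
import math

def acceptance_prob(discount, interest, k=8.0, d0=0.15):
    """P(offer accepted): rises with the discount and with estimated interest.
    k (steepness) and d0 (midpoint discount) are hypothetical parameters."""
    return interest / (1.0 + math.exp(-k * (discount - d0)))

def expected_profit(discount, interest, bundle_price=100.0, cost=60.0):
    """Expected profit = acceptance probability * margin after discount."""
    margin = bundle_price * (1.0 - discount) - cost
    return acceptance_prob(discount, interest) * margin

def best_discount(interest, grid=None):
    """Grid-search the discount (0%..40%) maximizing expected profit."""
    grid = grid if grid is not None else [i / 100.0 for i in range(41)]
    return max(grid, key=lambda d: expected_profit(d, interest))
```

    For a customer with a high estimated interest the search settles on a moderate discount: a deeper cut raises acceptance probability but gives away more margin than it gains.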

    Improving the consumer demand forecast to generate more accurate suggested orders at the store-item level

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2008. Includes bibliographical references (p. 57). One of the biggest opportunities for this consumer goods company today is reducing retail stock-outs at its Direct Store Delivery (DSD) customers via pre-selling, which represents approximately 70% of the company's total sales volume. But reducing retail stock-outs is becoming steadily more challenging with an ever-growing number of SKUs driven by new product introductions and packaging innovations. The main tool the company uses to combat retail stock-outs is the pre-sell handheld, which it provides to all field sales reps. The handheld runs proprietary software, developed by the company, that creates suggested orders based on a number of factors, including:
    * Baseline forecast (specific to each store-item combination)
    * Seasonality effects (i.e., higher demand for products during particular seasons)
    * Promotional effects (i.e., lift created by sale prices)
    * Presence of in-store displays (i.e., more space for product than just shelf space)
    * Weekday effects (i.e., selling more on weekends, when most people shop)
    * Holiday effects (i.e., higher demand for products around holidays)
    * Inventory levels on the shelves and in the back room
    * In-transit orders (i.e., orders that may already be on their way to the customer)
    The more accurate the suggested orders are, the fewer retail stock-outs will occur. This project seeks to increase the accuracy of the consumer demand forecast, and ultimately the suggested orders, by improving the baseline forecast and accounting for the effect of cannibalization on demand. by Susan D. Bankston. S.M., M.B.A.
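    The factor list above can be sketched as a simple calculation: scale the baseline forecast by the multiplicative effects, then net out stock that is already on hand or on its way. The function and its multiplier names are illustrative assumptions, not the company's proprietary handheld logic.

```python
import math

def suggested_order(baseline, seasonality=1.0, promo_lift=1.0,
                    display_lift=1.0, weekday_factor=1.0, holiday_factor=1.0,
                    shelf_stock=0, backroom_stock=0, in_transit=0):
    """Suggested order for one store-item combination (hypothetical model):
    demand forecast = baseline * multiplicative effects, minus units already
    on the shelf, in the back room, or in transit; never negative."""
    forecast = (baseline * seasonality * promo_lift * display_lift
                * weekday_factor * holiday_factor)
    available = shelf_stock + backroom_stock + in_transit
    return max(0, math.ceil(forecast - available))

# e.g., baseline of 10 units, 20% seasonal uplift, 4 on the shelf,
# 2 already in transit -> order ceil(12 - 6) = 6 units
order = suggested_order(10, seasonality=1.2, shelf_stock=4, in_transit=2)
```

    Cannibalization, the effect the project adds, would enter this sketch as a further (typically below-1) multiplier on the baseline when a substitute product is promoted.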

    Mapping crime: understanding hotspots


    Information visualisation and data analysis using web mash-up systems

    A thesis submitted in partial fulfilment for the degree of Doctor of Philosophy. The arrival of e-commerce systems has contributed greatly to the economy and has played a vital role in collecting huge amounts of transactional data. Analysing business and consumer behaviour is becoming more difficult by the day given the production of such a colossal volume of data. Enterprise 2.0 has the ability to store and create an enormous amount of transactional data; the purpose for which the data was collected can easily be lost as the essential information goes unnoticed in large and complex data sets. Information overflow is a major contributor to this dilemma. In the current environment, where hardware systems can store such large volumes of data and software systems are capable of substantial data production, data exploration problems are on the rise. The problem lies not with the production or storage of data but with the effectiveness of the systems and techniques by which essential information can be retrieved from complex data sets in a comprehensive and logical way as questions are asked of the data. With existing information retrieval systems and visualisation tools, the more specific the questions asked, the more definitive and unambiguous the visualised results that can be attained; but with complex and large data sets there are no elementary or simple questions. A profound information visualisation model and system is therefore required to analyse complex data sets through data analysis and information visualisation, making it possible for decision makers to identify the expected and discover the unexpected. To address complex data problems, a comprehensive and robust visualisation model and system is introduced.
The visualisation model consists of four major layers: (i) acquisition and data analysis, (ii) data representation, (iii) user and computer interaction, and (iv) results repositories. There are major contributions in all four layers, particularly in data acquisition and data representation. Multiple-attribute and multidimensional data visualisation techniques are identified in the Enterprise 2.0 and Web 2.0 environment. Transactional tagging and linked data are explored, which is a novel contribution in information visualisation. The visualisation model and system is first realised as a tangible software system, which is then validated in three experiments on different, large data sets. The first experiment is based on the large Royal Mail postcode data set. The second is based on a large transactional data set in an enterprise environment, while the same data set is also processed in a non-enterprise environment. The system interaction, facilitated through new mashup techniques, enables users to interact more fluently with the data and the representation layer. The results are exported into various reusable formats and retrieved for further comparison and analysis. The information visualisation model introduced in this research is a compact process for data sets of any size and type, which is a major contribution in information visualisation and data analysis. Advanced data representation techniques are employed using various web mashup technologies. New visualisation techniques have emerged from the research, such as transactional tagging visualisation and linked data visualisation. The information visualisation model and system is extremely useful in addressing complex data problems with strategies that are easy to interact with and integrate.

    Multivariate discretization of continuous valued attributes.

    The area of knowledge discovery and data mining is growing rapidly. Feature discretization is a crucial issue in Knowledge Discovery in Databases (KDD), or data mining, because most data sets used in real-world applications have features with continuous values. Discretization is performed as a preprocessing step to make data mining techniques useful for these data sets. This thesis addresses the discretization issue by proposing a multivariate discretization (MVD) algorithm. It begins with a number of common discretization algorithms, such as equal-width discretization, equal-frequency discretization, Naïve discretization, entropy-based discretization, chi-square discretization, and orthogonal hyperplanes, and then compares the results achieved by the MVD algorithm with the accuracy results of those algorithms. The thesis is divided into six chapters. It covers a few common discretization algorithms, tests them on real-world data sets varying in size and complexity, and shows how data visualization techniques can be effective in determining the degree of complexity of a given data set. We examined the MVD algorithm on the same data sets. We then classified the discretized data using artificial neural networks: a single-layer perceptron and a multilayer perceptron with the back-propagation algorithm. We trained the classifier on the training data set and tested its accuracy on the testing data set. Our experiments lead to better accuracy results with some data sets and lower accuracy with others, subject to the degree of data complexity. We then compared the accuracy results of the MVD algorithm with those achieved by the other discretization algorithms and found that the MVD algorithm produces good accuracy results in comparison with them.
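    Two of the baseline discretizers named above, equal-width and equal-frequency, can be sketched directly; this is a minimal illustration of those standard baselines, not a reproduction of the thesis's multivariate MVD algorithm.

```python
def equal_width_bins(values, k):
    """Assign each value to one of k bins of equal numeric width."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0          # guard against a constant feature
    return [min(int((v - lo) / width), k - 1) for v in values]

def equal_frequency_bins(values, k):
    """Assign each value to one of k bins holding (roughly) equal counts."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    per_bin = len(values) / k
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = min(int(rank / per_bin), k - 1)
    return bins
```

    On uniformly spread data the two methods agree; on skewed data equal-width leaves some bins nearly empty while equal-frequency balances them, which is one reason univariate cut points can mislead and motivates multivariate approaches like MVD.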

    MyPHRMachines: personal health desktops in the cloud

    Personal Health Records (PHRs) should remain the lifelong property of patients, who should be enabled to show them conveniently and securely to selected caregivers and institutions. Current solutions for PHRs focus on standard data exchange formats and transformations to move data across health information systems. In this paper we present MyPHRMachines, a PHR system that takes a radically new architectural approach to health record interoperability. In MyPHRMachines, health-related data and the application software to view and/or analyze it are deployed separately in the PHR system. After uploading their medical data to MyPHRMachines, patients can access them again from remote virtual machines that contain the right software to visualize and analyze them without any conversion. Patients can share their remote virtual machine session with selected caregivers, who need only a Web browser to access the pre-loaded fragments of their lifelong PHR. We discuss a prototype of MyPHRMachines applied to two use cases, i.e., radiology image sharing and personalized medicine. The first use case demonstrates the ability of patients to build robust PHRs across the space and time dimensions, whereas the second demonstrates the ability of MyPHRMachines to preserve the privacy of PHR data deployed in the cloud.

    Data analytics 2016: proceedings of the fifth international conference on data analytics
