
    Scalable intelligent electronic catalogs

    The world today is full of information systems that make huge quantities of information available. This incredible amount of information is clearly overwhelming Internet end-users. As a consequence, intelligent tools to identify worthwhile information are needed in order to fully assist people in finding the right information. Moreover, most systems are ultimately used not just to provide information, but also to solve problems. Encouraged by the growing popular success of the Internet and the enormous business potential of electronic commerce, e-catalogs have been consolidated as one of the most relevant types of information systems. Nearly all currently available electronic catalogs offer tools for extracting product information based on key-attribute filtering methods. The most advanced electronic catalogs are implemented as recommender systems using collaborative filtering techniques. This dissertation focuses on strategies for coping with the difficulty of building intelligent catalogs that fully support the user in the purchase decision-making process, while maintaining the scalability of the whole system. The contributions of this thesis lie in a mixed-initiative system inspired by observations of traditional commerce activities. This conversational model consists basically of a dialog between the customer and the system, where the user criticizes proposed products and the catalog suggests new products accordingly. Constraint satisfaction techniques are analyzed in order to provide a uniform framework for modeling electronic catalogs for configurable products. Within the same framework, user preferences and optimization constraints are also easily modeled. Search strategies for proposing adequate products according to these criteria are described in detail.
Another dimension of this dissertation faces the problem of scalability, i.e., the problem of supporting hundreds or thousands of users simultaneously using intelligent electronic catalogs. Traditional wisdom would presume that in order to provide full assistance to users in complex tasks, the business logic of the system must be complex, thus preventing scalability. SmartClient is a software architectural model that uses constraint satisfaction problems for representing solution spaces, instead of traditional models that represent solution spaces as collections of single solutions. This main idea is supported by the fact that constraint solvers are extremely compact and simple while providing sophisticated business logic. Different SmartClient architecture configurations are provided for different uses and architectural requirements. In order to illustrate the use of constraint satisfaction techniques for complex electronic catalogs with the SmartClient architecture, a commercial Internet-based application for travel planning, called reality, has been successfully developed. Travel planning is a particularly appropriate domain for validating the results of this research, since travel information is dynamic, travel planning problems are combinatorial, and moreover, complex user preferences and optimization constraints must be taken into consideration.
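The constraint-based catalog idea described above can be illustrated with a rough sketch (not the thesis's actual model: the attributes, constraints, and preference weights below are invented). Each product attribute is a CSP variable with a finite domain; hard constraints prune invalid configurations, and soft preferences rank the remaining solution space.

```python
from itertools import product

# Hypothetical configurable travel product: each attribute is a CSP
# variable with a finite domain.
domains = {
    "cabin":   ["economy", "business"],
    "stops":   [0, 1, 2],
    "airline": ["A", "B"],
}

# Hard constraints: a configuration is valid only if all of them hold.
constraints = [
    # invented rule: airline B sells no business class
    lambda c: not (c["cabin"] == "business" and c["airline"] == "B"),
]

# Soft preferences: each returns a penalty; lower total is better.
preferences = [
    lambda c: c["stops"],                             # prefer fewer stops
    lambda c: 0 if c["cabin"] == "business" else 1,   # prefer business class
]

def solutions():
    """Enumerate all configurations satisfying the hard constraints."""
    keys = list(domains)
    for values in product(*(domains[k] for k in keys)):
        config = dict(zip(keys, values))
        if all(check(config) for check in constraints):
            yield config

def best(k=3):
    """Rank the solution space by total preference penalty."""
    return sorted(solutions(), key=lambda c: sum(p(c) for p in preferences))[:k]

for cfg in best():
    print(cfg)
```

A user critique ("fewer stops", "cheaper cabin") would translate into adding or re-weighting a preference and re-ranking, which is what makes the solver a compact carrier of the business logic.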

    BIG DATA AND ANALYTICS AS A NEW FRONTIER OF ENTERPRISE DATA MANAGEMENT

    Big Data and Analytics (BDA) promises significant value generation opportunities across industries. Even though companies increase their investments, their BDA initiatives fall short of expectations and they struggle to guarantee a return on investments. In order to create business value from BDA, companies must build and extend their data-related capabilities. While BDA literature has emphasized the capabilities needed to analyze the increasing volumes of data from heterogeneous sources, EDM researchers have suggested organizational capabilities to improve data quality. However, to date, little is known about how companies actually orchestrate the allocated resources, especially regarding the quality and use of data to create value from BDA. Considering these gaps, this thesis - through five interrelated essays - investigates how companies adapt their EDM capabilities to create additional business value from BDA. The first essay lays the foundation of the thesis by investigating how companies extend their Business Intelligence and Analytics (BI&A) capabilities to build more comprehensive enterprise analytics platforms. The second and third essays contribute to fundamental reflections on how organizations are changing and designing data governance in the context of BDA. The fourth and fifth essays look at how companies provide high-quality data to an increasing number of users with innovative EDM tools, namely machine learning (ML) and enterprise data catalogs (EDCs). The thesis outcomes show that BDA has profound implications for EDM practices. In the past, operational data processing and analytical data processing were two "worlds" that were managed separately from each other. With BDA, these "worlds" are becoming increasingly interdependent and organizations must manage the lifecycles of data and analytics products in close coordination. Also, with BDA, data have become the long-expected, strategically relevant resource.
As such, data must now be viewed as a distinct value driver, separate from IT, as it requires specific mechanisms to foster value creation from BDA. BDA thus extends data governance goals: in addition to data quality and regulatory compliance, governance should facilitate data use by broadening data availability and enabling data monetization. Accordingly, companies establish comprehensive data governance designs including structural, procedural, and relational mechanisms to enable a broad network of employees to work with data. Existing EDM practices therefore need to be rethought to meet the emerging BDA requirements. While ML is a promising solution to improve data quality in a scalable and adaptable way, EDCs help companies democratize data to a broader range of employees.
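The thesis does not specify its data-quality tooling, but one common scalable mechanism in this space is similarity-based duplicate detection over master data. A minimal sketch, with invented records and a stdlib string-similarity measure standing in for a learned matcher:

```python
from difflib import SequenceMatcher

# Hypothetical customer master records; names and cities are invented.
records = [
    {"id": 1, "name": "Acme Corporation", "city": "Zurich"},
    {"id": 2, "name": "ACME Corp.",       "city": "Zurich"},
    {"id": 3, "name": "Globex GmbH",      "city": "Berlin"},
]

def similarity(a, b):
    """String similarity in [0, 1] between two normalized record names."""
    return SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()

def candidate_duplicates(rows, threshold=0.6):
    """Flag record pairs in the same city whose names look near-identical."""
    pairs = []
    for i, a in enumerate(rows):
        for b in rows[i + 1:]:
            if a["city"] == b["city"] and similarity(a, b) >= threshold:
                pairs.append((a["id"], b["id"]))
    return pairs

print(candidate_duplicates(records))
```

In a production EDM setting the blocking key (here, `city`) and the matcher would be tuned or learned, which is where the ML adaptability the abstract mentions comes in.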

    Proceedings of the First Karlsruhe Service Summit Workshop - Advances in Service Research, Karlsruhe, Germany, February 2015 (KIT Scientific Reports ; 7692)

    Since April 2008, KSRI has fostered interdisciplinary research in order to support and advance progress in the service domain. KSRI brings together academia and industry while serving as a European research hub for service science. For the KSS2015 Research Workshop, we invited submissions of theoretical and empirical research dealing with relevant topics in the context of services, including energy, mobility, health care, social collaboration, and web technologies.

    Explainability in Music Recommender Systems

    The most common way to listen to recorded music nowadays is via streaming platforms which provide access to tens of millions of tracks. To assist users in effectively browsing these large catalogs, the integration of Music Recommender Systems (MRSs) has become essential. Current real-world MRSs are often quite complex and optimized for recommendation accuracy. They combine several building blocks based on collaborative filtering and content-based recommendation. This complexity can hinder the ability to explain recommendations to end users, which is particularly important for recommendations perceived as unexpected or inappropriate. While pure recommendation performance often correlates with user satisfaction, explainability has a positive impact on other factors such as trust and forgiveness, which are ultimately essential to maintain user loyalty. In this article, we discuss how explainability can be addressed in the context of MRSs. We provide perspectives on how explainability could improve music recommendation algorithms and enhance user experience. First, we review common dimensions and goals of recommenders' explainability and, in general, of eXplainable Artificial Intelligence (XAI), and elaborate on the extent to which these apply -- or need to be adapted -- to the specific characteristics of music consumption and recommendation. Then, we show how explainability components can be integrated within an MRS and in what form explanations can be provided. Since the evaluation of explanation quality is decoupled from pure accuracy-based evaluation criteria, we also discuss requirements and strategies for evaluating explanations of music recommendations. Finally, we describe the current challenges for introducing explainability within a large-scale industrial music recommender system and provide research perspectives.
    Comment: To appear in AI Magazine, Special Topic on Recommender Systems 202
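One simple form such an explanation component can take is content-based: the explanation cites the tags a recommended track shares with tracks in the user's listening history. A minimal sketch (the catalog, tags, and wording below are invented, not the article's system):

```python
# Toy music catalog: each track carries content tags (invented).
CATALOG = {
    "track_a": {"genre": "jazz", "mood": "calm"},
    "track_b": {"genre": "jazz", "mood": "energetic"},
    "track_c": {"genre": "metal", "mood": "energetic"},
}

def explain(recommended, history):
    """Justify a recommendation by the tags it shares with listened tracks."""
    reasons = []
    for past in history:
        shared = {
            k: v for k, v in CATALOG[recommended].items()
            if CATALOG[past].get(k) == v
        }
        if shared:
            reasons.append(f"shares {', '.join(shared.values())} with {past}")
    return f"Recommended {recommended} because it " + "; ".join(reasons)

print(explain("track_b", ["track_a", "track_c"]))
```

Evaluating whether such templated reasons actually build trust is the separate, decoupled evaluation problem the abstract points out.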

    Big Data Computing for Geospatial Applications

    The convergence of big data and geospatial computing has brought forth challenges and opportunities to Geographic Information Science with regard to geospatial data management, processing, analysis, modeling, and visualization. This book highlights recent advancements in integrating new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges, and demonstrates opportunities for using big data for geospatial applications. Crucial to the advancements highlighted in this book is the integration of computational thinking and spatial thinking and the transformation of abstract ideas and models into concrete data structures and algorithms.

    Recommender Systems for Scientific and Technical Information Providers

    Providers of scientific and technical information are a promising application area for recommender systems due to the high search costs for their goods and the general problem of assessing the quality of information products. Nevertheless, the usage of recommendation services in this market is still in its infancy. This book presents economic concepts, statistical methods and algorithms, technical architectures, as well as experiences from case studies on how recommender systems can be integrated.

    Software Product Line

    The Software Product Line (SPL) is an emerging methodology for developing software products. Currently, there are two hot issues in the SPL: modelling and analysis. Variability modelling techniques have been developed to assist engineers in dealing with the complications of variability management. The principal goal of variability modelling techniques is to configure a successful software product by managing variability in domain engineering. In other words, a good method for modelling variability is a prerequisite for a successful SPL. On the other hand, analysis of the SPL aids the extraction of useful information from the SPL and provides a control and planning strategy mechanism for engineers or experts. In addition, the analysis of the SPL provides a clear view for users. Moreover, it ensures the accuracy of the SPL. This book presents new techniques for modelling and new methods for SPL analysis.
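A common concrete form of variability modelling is a feature model: features with cross-tree constraints, where one standard SPL analysis is enumerating (or counting) the valid product configurations. A minimal sketch with an invented feature set and constraints (not from the book):

```python
from itertools import product

# Hypothetical feature model for a small product line; feature names
# and constraints are invented for illustration.
FEATURES = ["gui", "cli", "encryption", "cloud_sync"]

def valid(sel):
    """Cross-tree constraints over a feature selection (True = selected)."""
    # at least one front end must be selected
    if not (sel["gui"] or sel["cli"]):
        return False
    # cloud sync requires encryption
    if sel["cloud_sync"] and not sel["encryption"]:
        return False
    return True

def products():
    """Enumerate all valid product configurations (a basic SPL analysis)."""
    for bits in product([False, True], repeat=len(FEATURES)):
        sel = dict(zip(FEATURES, bits))
        if valid(sel):
            yield sel

configs = list(products())
print(len(configs), "valid products out of", 2 ** len(FEATURES))
```

Real feature models are analyzed with SAT or BDD solvers rather than brute-force enumeration, but the question answered is the same: which selections of features yield a consistent product.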

    Recommender systems in industrial contexts

    This thesis consists of four parts:
    - An analysis of the core functions and the prerequisites for recommender systems in an industrial context: we identify four core functions for recommendation systems: Help to Decide, Help to Compare, Help to Explore, Help to Discover. The implementation of these functions has implications for the choices at the heart of algorithmic recommender systems.
    - A state of the art, which deals with the main techniques used in automated recommendation systems: the two most commonly used algorithmic methods, the K-Nearest-Neighbor (KNN) methods and the fast factorization methods, are detailed. The state of the art also presents purely content-based methods, hybridization techniques, and the classical performance metrics used to evaluate recommender systems. It then gives an overview of several systems, both from academia and industry (Amazon, Google ...).
    - An analysis of the performance and implications of a recommendation system developed during this thesis: this system, Reperio, is a hybrid recommender engine using KNN methods. We study the performance of the KNN methods, including the impact of the similarity functions used. Then we study the performance of the KNN method in critical use cases in cold-start situations.
    - A methodology for analyzing the performance of recommender systems in an industrial context: this methodology assesses the added value of algorithmic strategies and recommendation systems according to their core functions.
    Comment: version 3.30, May 201
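The KNN collaborative-filtering approach the thesis builds on can be sketched in a few lines: find the rating profiles most similar to the target user (here with cosine similarity, one of the similarity functions whose impact such studies compare) and predict a rating as the similarity-weighted mean of the neighbours' ratings. The rating matrix below is invented, and this is a generic illustration, not Reperio's implementation:

```python
import math

# Toy user-item rating matrix (ratings are invented).
RATINGS = {
    "alice": {"i1": 5, "i2": 3, "i3": 4},
    "bob":   {"i1": 4, "i2": 1, "i4": 5},
    "carol": {"i2": 2, "i3": 5, "i4": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

def predict(user, item, k=2):
    """User-based KNN: similarity-weighted mean of neighbours' ratings."""
    neighbours = sorted(
        ((cosine(RATINGS[user], RATINGS[o]), o)
         for o in RATINGS if o != user and item in RATINGS[o]),
        reverse=True)[:k]
    num = sum(sim * RATINGS[o][item] for sim, o in neighbours)
    den = sum(sim for sim, _ in neighbours)
    return num / den if den else 0.0

print(round(predict("alice", "i4"), 2))
```

The cold-start problem the thesis studies is visible even here: a brand-new user has no co-rated items with anyone, every similarity is zero, and the prediction degenerates.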