
    PREDICTING CONSUMER INFORMATION SEARCH BENEFITS FOR PERSONALIZED ONLINE PRODUCT RANKING: A CONFIDENCE-BASED APPROACH

    Product ranking mechanisms are an important e-commerce service that facilitates consumers' decision-making. This paper studies online product ranking under uncertainty. Unlike previous studies, which generally rank products based merely on predicted ratings, a new personalized product ranking method is proposed that estimates consumer information search benefits and takes prediction uncertainty and confidence into consideration. Experiments using real movie-rating data illustrate that the proposed method is advantageous over traditional point-estimation methods and may thus help enhance customers' satisfaction with the decision-making process and their choices by saving them time and effort.
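
    For intuition, the snippet below sketches one common way to fold prediction confidence into a ranking: demote items whose predicted rating carries a large standard error. The scoring rule and the Item fields are illustrative assumptions; the paper's actual method ranks by estimated information search benefits rather than a penalized point estimate.

```python
# Illustrative sketch (not the paper's exact formulation): rank items by a
# confidence-adjusted score instead of the raw point-estimated rating.
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    item_id: str
    predicted_rating: float  # point estimate of the user's rating
    std_error: float         # uncertainty of that prediction (assumed available)

def confidence_adjusted_rank(items: List[Item], risk_weight: float = 1.0) -> List[Item]:
    """Order items by predicted rating penalized by prediction uncertainty.

    risk_weight controls how strongly low-confidence items are demoted;
    risk_weight = 0 reduces to ranking by the point estimate alone.
    """
    return sorted(items,
                  key=lambda it: it.predicted_rating - risk_weight * it.std_error,
                  reverse=True)

if __name__ == "__main__":
    catalog = [
        Item("A", predicted_rating=4.6, std_error=1.2),
        Item("B", predicted_rating=4.4, std_error=0.3),
        Item("C", predicted_rating=3.9, std_error=0.1),
    ]
    # "B" overtakes "A" once uncertainty is penalized (4.1 vs. 3.4).
    print([it.item_id for it in confidence_adjusted_rank(catalog)])
```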

    Uncertainty of the implementation time of geodynamic monitoring system in multi-criteria ranking of alternatives

    The paper deals with the problem of ranking alternatives for geodynamic monitoring systems when their implementation time is uncertain. The problem is characterized by the fact that the choice of an alternative, and its effect, depend on the quality properties of the applied organizational and technical solutions, taking the implementation time into account. An ordering of alternatives is proposed that accounts for the uncertainty of implementation-time factors. Ranking is realized by comparing the trees of functional characteristics of the alternatives, taking into account the compliance of their characteristics with time-varying requirements.
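
    As a rough illustration of the idea (assumed structure, not the paper's formalism), the sketch below scores each alternative by how well its characteristics comply with requirements that change with the implementation time, and averages that compliance over a set of candidate times to reflect the uncertainty.

```python
# Hypothetical sketch: rank alternatives by average compliance with
# time-varying requirements over uncertain implementation times.
# All names and the compliance rule are illustrative assumptions.
from typing import Callable, Dict, List

def compliance(characteristics: Dict[str, float],
               requirements: Dict[str, float]) -> float:
    """Fraction of requirements met by an alternative's characteristics."""
    met = sum(1 for key, required in requirements.items()
              if characteristics.get(key, 0.0) >= required)
    return met / len(requirements)

def rank_alternatives(alternatives: Dict[str, Dict[str, float]],
                      requirements_at: Callable[[float], Dict[str, float]],
                      candidate_times: List[float]) -> List[str]:
    """Order alternatives by mean compliance over the candidate implementation times."""
    def mean_compliance(chars: Dict[str, float]) -> float:
        return sum(compliance(chars, requirements_at(t))
                   for t in candidate_times) / len(candidate_times)
    return sorted(alternatives,
                  key=lambda name: mean_compliance(alternatives[name]),
                  reverse=True)

if __name__ == "__main__":
    # Requirements become stricter the later the system is implemented.
    requirements_at = lambda t: {"accuracy": 0.7 + 0.05 * t, "coverage": 0.6}
    alternatives = {
        "system_A": {"accuracy": 0.80, "coverage": 0.65},
        "system_B": {"accuracy": 0.95, "coverage": 0.55},
    }
    print(rank_alternatives(alternatives, requirements_at, candidate_times=[0, 1, 2]))
```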

    Auditing and Generating Synthetic Data with Controllable Trust Trade-offs

    Data collected from the real world tends to be biased, unbalanced, and at risk of exposing sensitive and private information. This reality has given rise to the idea of creating synthetic datasets to alleviate the risk, bias, harm, and privacy concerns inherent in real data. The concept relies on generative AI models to produce unbiased, privacy-preserving synthetic data that remains true to the real data. In this new paradigm, how can we tell whether this approach delivers on its promises? We present an auditing framework that offers a holistic assessment of synthetic datasets and of AI models trained on them, centered around bias and discrimination prevention, fidelity to the real data, utility, robustness, and privacy preservation. We showcase our framework by auditing multiple generative models on diverse use cases, including education, healthcare, banking, and human resources, and across different modalities, from tabular to time-series to natural language data. Our use cases demonstrate the importance of a holistic assessment for ensuring compliance with the socio-technical safeguards that regulators and policymakers are increasingly enforcing. For this purpose, we introduce the trust index, which ranks multiple synthetic datasets based on their prescribed safeguards and their desired trade-offs. Moreover, we devise a trust-index-driven model selection and cross-validation procedure via auditing in the training loop, which we showcase on a class of transformer models that we dub TrustFormers, across different modalities. This trust-driven model selection allows for controllable trust trade-offs in the resulting synthetic data. We instrument our auditing framework with workflows that connect different stakeholders, from model development to audit and certification, via a synthetic data auditing report.
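
    A minimal sketch of how such a trust index could aggregate audit scores and rank candidate synthetic datasets is shown below. The metric names, weights, and weighted-average aggregation are assumptions for illustration; the paper's own index is defined over its prescribed safeguards and trade-offs.

```python
# Illustrative sketch of a weighted trust index over audit metrics; the metric
# names and weighting scheme are assumptions, not the paper's exact definition.
from typing import Dict, List

DEFAULT_WEIGHTS = {
    "fidelity": 0.25, "utility": 0.25, "privacy": 0.2,
    "fairness": 0.2, "robustness": 0.1,
}

def trust_index(scores: Dict[str, float],
                weights: Dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Aggregate per-dimension audit scores (each in [0, 1]) into one index."""
    total_weight = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total_weight

def rank_datasets(audits: Dict[str, Dict[str, float]],
                  weights: Dict[str, float] = DEFAULT_WEIGHTS) -> List[str]:
    """Rank candidate synthetic datasets by their trust index, best first."""
    return sorted(audits, key=lambda name: trust_index(audits[name], weights),
                  reverse=True)

if __name__ == "__main__":
    audits = {
        "synth_v1": {"fidelity": 0.9, "utility": 0.8, "privacy": 0.6,
                     "fairness": 0.7, "robustness": 0.8},
        "synth_v2": {"fidelity": 0.8, "utility": 0.7, "privacy": 0.9,
                     "fairness": 0.9, "robustness": 0.7},
    }
    # Shifting the weights toward privacy/fairness realizes a different trade-off.
    print(rank_datasets(audits))
```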

    Risk-Based Regulatory Reform and Public Participation

    Meaningful public participation has been perceived as difficult to accommodate in regulatory proceedings requiring technical scientific judgments, especially those involving quantitative risk assessments. Quantitative risk assessment, however, is not a purely technical exercise; it involves the application of policy preferences in the form of assumptions, extrapolation from animal data to humans and from high to low doses, management of incomplete data sets, and resolution of scientific uncertainties. The junctures at which policy preferences are applied are opportunities to reflect social value choices that are not wholly scientific, and they should be explicitly identified as such by the regulator. These considerations argue for a soft form of risk assessment that expressly takes societal values into account. Non-adversarial, consensus-based mechanisms of public participation that encourage greater dialogue and interaction among interested parties and with the regulator may enhance the potential for non-scientific social values to be effectively accommodated in regulatory proceedings with a heavy technical component.

    Decision making under uncertainty

    Almost all important decision problems are inevitably subject to some level of uncertainty, whether about data measurements, parameters, or predictions describing future evolution. The significance of handling uncertainty is further amplified by the large volume of uncertain data automatically generated by modern data gathering or integration systems. Various types of decision-making problems under uncertainty have been subject to extensive research in computer science, economics, and social science. In this dissertation, I study three major problems in this context: ranking, utility maximization, and matching, all involving uncertain datasets. First, we consider the problem of ranking and top-k query processing over probabilistic datasets. By illustrating the diverse and conflicting behaviors of prior proposals, we contend that a single, specific ranking function may not suffice for probabilistic datasets. Instead, we propose the notion of parameterized ranking functions, which generalize or can approximate many of the previously proposed ranking functions. We present novel exact and approximate algorithms for efficiently ranking large datasets according to these ranking functions, even when the datasets exhibit complex correlations or the probability distributions are continuous. The second problem concerns the stochastic versions of a broad class of combinatorial optimization problems. We observe that the expected value is inadequate for capturing different types of risk-averse or risk-prone behaviors, and we instead consider a more general objective: maximizing the expected utility of the solution for some given utility function. We present a polynomial-time approximation algorithm with additive error ε for any ε > 0, under certain conditions. Our result generalizes and improves several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack. The third is the stochastic matching problem, which finds interesting applications in online dating, kidney exchange, and online ad assignment. In this problem, the existence of each edge is uncertain and can only be determined by probing the edge. The goal is to design a probing strategy that maximizes the expected weight of the matching. We give linear-programming-based constant-factor approximation algorithms for weighted stochastic matching, which answer an open question raised in prior work.
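
    To make the first problem concrete, the sketch below estimates a parameterized ranking function over probabilistic tuples by Monte Carlo sampling of possible worlds: each tuple's score is the rank-weighted probability of the positions it occupies. The independence assumption, the weight function, and all names are illustrative; the dissertation itself develops more efficient exact and approximate algorithms.

```python
# Hedged sketch of a parameterized ranking function over independent
# probabilistic tuples: score(t) = sum_i weight(i) * Pr(t is ranked at position i),
# estimated by sampling possible worlds.
import random
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

def prf_scores(tuples: List[Tuple[str, float, float]],   # (id, value, existence prob)
               weight: Callable[[int], float],
               n_samples: int = 10_000,
               seed: int = 0) -> Dict[str, float]:
    """Estimate rank-weighted scores by Monte Carlo over possible worlds."""
    rng = random.Random(seed)
    scores: Dict[str, float] = defaultdict(float)
    for _ in range(n_samples):
        # Sample one possible world: each tuple exists independently.
        world = [(tid, value) for tid, value, prob in tuples if rng.random() < prob]
        world.sort(key=lambda tv: tv[1], reverse=True)
        for rank, (tid, _) in enumerate(world):
            scores[tid] += weight(rank) / n_samples
    return dict(scores)

if __name__ == "__main__":
    data = [("a", 9.0, 0.4), ("b", 7.0, 0.9), ("c", 8.0, 0.6)]
    top2 = lambda rank: 1.0 if rank < 2 else 0.0  # weight function for "top-2" semantics
    ranking = sorted(prf_scores(data, top2).items(), key=lambda kv: -kv[1])
    print(ranking)  # each score approximates Pr(tuple appears in the top 2)
```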

    A Theoretical Approach to Characterize the Accuracy-Fairness Trade-off Pareto Frontier

    While the accuracy-fairness trade-off has frequently been observed in the fair machine learning literature, rigorous theoretical analyses have been scarce. To demystify this long-standing challenge, this work develops a theoretical framework by characterizing the shape of the accuracy-fairness trade-off Pareto frontier (FairFrontier), determined by the set of all Pareto-optimal classifiers that no other classifier can dominate. Specifically, we first demonstrate the existence of the trade-off in real-world scenarios and then propose four potential categories to characterize the important properties of the accuracy-fairness Pareto frontier. For each category, we identify the necessary conditions that lead to the corresponding trade-off. Experimental results on synthetic data yield insightful findings for the proposed framework: (1) when sensitive attributes can be fully interpreted by non-sensitive attributes, FairFrontier is mostly continuous; (2) accuracy can suffer a sharp decline when fairness is over-pursued; (3) the trade-off can be eliminated via a streamlined two-step approach. The proposed research enables an in-depth understanding of the accuracy-fairness trade-off, pushing current fair machine-learning research to a new frontier.
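
    Although the paper's contribution is theoretical, the empirical side of such an analysis typically starts from a set of trained classifiers scored on both axes. The sketch below is a generic non-dominated filter over (accuracy, fairness) points, included only to illustrate what a Pareto frontier is; it is not the paper's characterization.

```python
# Illustrative sketch: extract the Pareto frontier from a set of classifiers
# evaluated on (accuracy, fairness), where higher is better on both axes.
from typing import List, Tuple

Point = Tuple[str, float, float]  # (classifier name, accuracy, fairness)

def pareto_frontier(points: List[Point]) -> List[Point]:
    """Return the classifiers not dominated on both accuracy and fairness."""
    frontier = []
    for name, acc, fair in points:
        dominated = any(a >= acc and f >= fair and (a > acc or f > fair)
                        for _, a, f in points)
        if not dominated:
            frontier.append((name, acc, fair))
    # Sort along the accuracy axis to trace the frontier's shape.
    return sorted(frontier, key=lambda p: p[1])

if __name__ == "__main__":
    classifiers = [("h1", 0.91, 0.60), ("h2", 0.88, 0.75),
                   ("h3", 0.85, 0.72), ("h4", 0.80, 0.90)]
    print(pareto_frontier(classifiers))  # h3 is dropped: it is dominated by h2
```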