
    An investigation of the trading agent competition : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University, Albany, New Zealand

    The Internet has spread across the whole world and influences almost every aspect of society. The growth of electronic commerce on the back of the Internet further increases globalisation and free trade. However, the Internet will never reach its full potential as a new electronic medium or marketplace unless software agents are developed. The Trading Agent Competition (TAC), which simulates online auctions, was designed to create a standard problem in the complex domain of electronic marketplaces and to inspire researchers from all over the world to apply distinctive software agents to a common exercise. This thesis presents a detailed study of intelligent software agents and a comprehensive investigation of the Trading Agent Competition. The design of the Risker Wise agent and a fuzzy logic system that predicts bid increases in the TAC hotel auctions are discussed in detail.
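
    As a rough sketch of the kind of fuzzy inference such a system might perform, the snippet below maps two hypothetical inputs (normalized time remaining and demand) to a predicted hotel bid increase using ramp membership functions, a two-rule base, and centroid defuzzification. The inputs, rules, and membership functions are illustrative assumptions, not the thesis's actual design.

```python
# Illustrative fuzzy bid-increase predictor (all rules and shapes assumed).
import numpy as np

def ramp_up(x, a, b):
    """Membership rising from 0 at a to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def ramp_down(x, a, b):
    """Membership falling from 1 at a to 0 at b."""
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def predict_bid_increase(minutes_left, demand):
    # Fuzzify the two hypothetical inputs, both scaled to [0, 1].
    late = ramp_down(minutes_left, 0.2, 0.6)
    high = ramp_up(demand, 0.4, 0.8)

    # Two illustrative rules:
    #   IF auction is late AND demand is high THEN increase is LARGE
    #   otherwise the increase tends to be SMALL
    fire_large = min(late, high)
    fire_small = 1.0 - fire_large

    # Centroid defuzzification over a coarse output universe (price units).
    u = np.linspace(0.0, 50.0, 101)
    mu_large = np.minimum(fire_large, ramp_up(u, 20.0, 40.0))
    mu_small = np.minimum(fire_small, ramp_down(u, 10.0, 30.0))
    mu = np.maximum(mu_large, mu_small)
    return float((u * mu).sum() / mu.sum())

print(predict_bid_increase(minutes_left=0.1, demand=0.9))  # closing, hot hotel
print(predict_bid_increase(minutes_left=0.9, demand=0.2))  # early, quiet
```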

    The Effect of Uncertainty on Contingent Valuation Estimates: A Comparison

    We examine the impact of uncertainty on contingent valuation responses using (1) a survey of Canadian landowners about willingness to accept compensation for converting cropland to forestry and (2) a survey of Swedish residents about willingness to pay for forest conservation. Five approaches from the literature for incorporating respondent uncertainty are used and compared to the traditional RUM model with assumed certainty. The results indicate that incorporating uncertainty has the potential to improve model fit but can introduce additional variance. While some methods for handling uncertainty improve on traditional approaches, we caution against systematic judgments about the effect of uncertainty on contingent valuation responses.
    Keywords: respondent uncertainty, willingness to accept, contingent valuation
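
    One widely used way to fold respondent uncertainty into a dichotomous-choice model, recoding "uncertain yes" answers as "no" before fitting the usual binary response model, can be sketched as follows. The data, certainty scale, and cutoff are synthetic assumptions, not the paper's surveys or its five specific approaches.

```python
# Recoding-based uncertainty adjustment for dichotomous-choice CV (synthetic).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
bid = rng.choice([10, 25, 50, 100], size=n).astype(float)  # offered bid amount
latent_wtp = rng.lognormal(mean=3.5, sigma=0.6, size=n)    # unobserved WTP
yes = (latent_wtp > bid).astype(int)
certainty = rng.integers(1, 11, size=n)                    # 1-10 follow-up scale

def fit_logit(responses, bids):
    X = sm.add_constant(np.log(bids))
    return sm.Logit(responses, X).fit(disp=False)

baseline = fit_logit(yes, bid)             # assumed-certain RUM benchmark
recoded = yes.copy()
recoded[(yes == 1) & (certainty < 7)] = 0  # "uncertain yes" treated as "no"
adjusted = fit_logit(recoded, bid)

print(baseline.params, adjusted.params)    # recoding typically shifts WTP down
```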

    Lesion boundary segmentation using level set methods

    This paper addresses the issue of accurate lesion segmentation in retinal imagery, using level set methods and a novel stopping mechanism: an elementary features scheme. Specifically, the curve propagation is guided by a gradient map built using a combination of histogram equalization and robust statistics. The stopping mechanism uses elementary features gathered as the curve deforms over time and then, using a lesionness measure defined herein, "looks back in time" to find the point at which the curve best fits the real object. We implement the level set using a fast upwind scheme and compare the proposed method against five other segmentation algorithms on 50 randomly selected images of exudates, with a database of clinician-marked boundaries as ground truth.
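
    The core numerical step, propagating the front with an upwind scheme under an edge-stopping speed, might look roughly like the sketch below. The toy image, the generic stopping function, and all parameters are assumptions; the paper's particular gradient map and lesionness-based look-back are not reproduced.

```python
# One Godunov upwind step of phi_t + speed * |grad phi| = 0 (speed >= 0),
# so the zero level set expands and stalls where the speed is small.
import numpy as np

def upwind_step(phi, speed, dt=0.25):
    dxm = phi - np.roll(phi, 1, axis=1)   # backward differences
    dxp = np.roll(phi, -1, axis=1) - phi  # forward differences
    dym = phi - np.roll(phi, 1, axis=0)
    dyp = np.roll(phi, -1, axis=0) - phi
    grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                   np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    return phi - dt * speed * grad

# Toy image: a bright blob; speed ~ 1 / (1 + |grad I|^2) is a generic
# edge-stopping term, not the paper's robust-statistics gradient map.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32)**2 + (y - 32)**2) / 120.0)
gy, gx = np.gradient(img)
speed = 1.0 / (1.0 + 50.0 * (gx**2 + gy**2))

phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 3.0  # small seed circle
for _ in range(200):
    phi = upwind_step(phi, speed)
print("segmented pixels:", int((phi < 0).sum()))
```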

    Multimodal decision-level fusion for person authentication

    In this paper, the use of clustering algorithms for decision-level data fusion is proposed. Person authentication results from several modalities (e.g., still image, speech) are combined using the fuzzy k-means (FKM) and fuzzy vector quantization (FVQ) algorithms and a median radial basis function (MRBF) network. A quality measure of each modality's data is used for fuzzification. Two modifications of the FKM and FVQ algorithms, based on a novel fuzzy vector distance definition, are proposed to handle the fuzzy data and exploit the quality measure. Simulations show that the fuzzy clustering algorithms outperform classical clustering algorithms and other known fusion algorithms, and that the MRBF network performs particularly well when two modalities are combined. Moreover, using the quality measure through the proposed modified algorithms further improves the performance of the fusion system.
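
    For orientation, a minimal sketch of standard (unmodified) fuzzy k-means applied to two-modality score vectors is shown below. The score distributions and decision rule are synthetic assumptions, and the paper's quality-weighted fuzzy vector distance is not reproduced.

```python
# Plain fuzzy k-means over (face, speech) score vectors, two clusters.
import numpy as np

def fuzzy_kmeans(X, k=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]  # weighted centroids
    return u, centers

rng = np.random.default_rng(1)
genuine  = rng.normal([0.8, 0.7], 0.08, size=(100, 2))  # per-modality scores
impostor = rng.normal([0.3, 0.2], 0.08, size=(100, 2))
X = np.vstack([genuine, impostor])

u, centers = fuzzy_kmeans(X)
accept = u[:, centers[:, 0].argmax()] > 0.5  # cluster with the higher scores
print("accept rate  genuine:", accept[:100].mean(), " impostor:", accept[100:].mean())
```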

    Zombie Lending and Depressed Restructuring in Japan

    In this paper, we propose a bank-based explanation for the decade-long Japanese slowdown following the asset price collapse in the early 1990s. We start with the well-known observation that most large Japanese banks were only able to comply with capital standards because regulators were lax in their inspections. To facilitate this forbearance, the banks often engaged in sham loan restructurings that kept credit flowing to otherwise insolvent borrowers (which we call zombies). Thus, the normal competitive outcome, whereby the zombies would shed workers and lose market share, was thwarted. Our model highlights the restructuring implications of the zombie problem. The counterpart of the congestion created by the zombies is a reduction in profits for healthy firms, which discourages their entry and investment. In this context, even solvent banks do not find good lending opportunities. We confirm our story's key predictions that zombie-dominated industries exhibit more depressed job creation and destruction, and lower productivity. We present firm-level regressions showing that the increase in zombies depressed the investment and employment growth of non-zombies and widened the productivity gap between zombies and non-zombies.
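
    The paper's firm-level tests can be illustrated, purely schematically, by a regression of non-zombie employment growth on the industry zombie share, which the theory predicts enters negatively. The data below are entirely synthetic and the variable names are this sketch's assumptions.

```python
# Synthetic illustration of a firm-level zombie-congestion regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_firms = 2000
zombie_share = rng.uniform(0.0, 0.5, size=n_firms)  # industry zombie share
size = rng.normal(0.0, 1.0, size=n_firms)           # log firm size control
emp_growth = (0.05 - 0.20 * zombie_share + 0.01 * size
              + rng.normal(0.0, 0.05, size=n_firms))

X = sm.add_constant(np.column_stack([zombie_share, size]))
res = sm.OLS(emp_growth, X).fit(cov_type="HC1")      # robust standard errors
print(res.params)  # coefficient on zombie_share recovers about -0.20
```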

    Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to the Grading of Astrocytoma Tissues

    We use partial class memberships in soft classification to model uncertain labelling and mixtures of classes. Partial class memberships are not restricted to predictions, but may also occur in reference labels (ground truth, gold standard diagnosis) for training and validation data. Classifier performance is usually expressed as fractions of the confusion matrix, such as sensitivity, specificity, and negative and positive predictive values. We extend this concept to soft classification and discuss the bias and variance properties of the extended performance measures. Ambiguity in reference labels translates into differences between best-case, expected and worst-case performance. We present a second set of measures comparing expected and ideal performance, which are closely related to regression performance measures, namely the root mean squared error (RMSE) and the mean absolute error (MAE). All calculations apply to classical crisp classification as well as to soft classification (partial class memberships and/or one-class classifiers). The proposed performance measures make it possible to test classifiers with actual borderline cases. In addition, hardening of, e.g., posterior probabilities into class labels is not necessary, avoiding the corresponding information loss and increase in variance. We implement the proposed performance measures in the R package "softclassval", which is available from CRAN and at http://softclassval.r-forge.r-project.org. Our reasoning, as well as the importance of partial memberships for chemometric classification, is illustrated by a real-world application: astrocytoma brain tumor tissue grading (80 patients, 37,000 spectra) for finding surgical excision borders. As borderline cases are the actual target of the analytical technique, samples diagnosed as borderline cases must be included in the validation.
    Comment: The manuscript has been accepted for publication in Chemometrics and Intelligent Laboratory Systems. Supplementary figures and tables are at the end of the PDF.
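
    A minimal re-expression of the idea (in Python rather than the authors' R package, so the function names are this sketch's assumptions) is shown below: sensitivity and specificity become reference-weighted agreement between partial memberships, using the product as the agreement operator here (other conjunctions, e.g. min, are equally defensible), while RMSE and MAE compare memberships directly.

```python
# Performance measures extended to partial class memberships in [0, 1].
import numpy as np

ref  = np.array([1.0, 1.0, 0.5, 0.0, 0.0])  # reference membership in "tumor"
pred = np.array([0.9, 0.6, 0.5, 0.2, 0.0])  # predicted membership in "tumor"

def soft_sensitivity(ref, pred):
    """Reference-weighted agreement; reduces to TP/(TP+FN) for crisp labels."""
    return np.sum(ref * pred) / np.sum(ref)

def soft_specificity(ref, pred):
    """Complement-weighted agreement; reduces to TN/(TN+FP) for crisp labels."""
    return np.sum((1 - ref) * (1 - pred)) / np.sum(1 - ref)

rmse = np.sqrt(np.mean((ref - pred) ** 2))  # regression-style comparison
mae = np.mean(np.abs(ref - pred))
print(soft_sensitivity(ref, pred), soft_specificity(ref, pred), rmse, mae)
```

    Note how the borderline sample (reference membership 0.5) contributes to both measures in proportion to its membership instead of being forced into one class.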

    Deep Generative Models for Reject Inference in Credit Scoring

    Credit scoring models based only on accepted applications may be biased, with both statistical and economic consequences. Reject inference is the process of attempting to infer the creditworthiness status of the rejected applications. In this research, we use deep generative models to develop two new semi-supervised Bayesian models for reject inference in credit scoring, in which we model the data generating process as dependent on a Gaussian mixture. The goal is to improve the classification accuracy of credit scoring models by incorporating rejected applications. Our proposed models infer the unknown creditworthiness of the rejected applications by exact enumeration of the two possible outcomes of the loan (default or non-default). The efficient stochastic gradient optimization technique used in deep generative models makes our models suitable for large data sets. Finally, the experiments in this research show that our proposed models perform better than classical and alternative machine learning models for reject inference in credit scoring.
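
    The enumeration step can be illustrated with a deliberately simplified likelihood in which each outcome's feature distribution is a one-dimensional Gaussian instead of a deep generative model: accepted loans contribute their observed-outcome term, while each rejected loan contributes the sum over both possible outcomes. Everything below is an illustrative assumption.

```python
# Toy semi-supervised likelihood: exact enumeration over loan outcomes.
import numpy as np
from scipy.stats import norm

def log_lik(params, x_acc, y_acc, x_rej):
    pi, mu0, mu1, s = params  # P(default), class means, shared scale
    lp0 = np.log(pi) + norm.logpdf(x_acc, mu0, s)
    lp1 = np.log(1 - pi) + norm.logpdf(x_acc, mu1, s)
    labeled = np.where(y_acc == 0, lp0, lp1).sum()
    # Rejected applications: marginalize the unknown outcome by summing
    # both branches (default and non-default) for each observation.
    la = np.log(pi) + norm.logpdf(x_rej, mu0, s)
    lb = np.log(1 - pi) + norm.logpdf(x_rej, mu1, s)
    return labeled + np.logaddexp(la, lb).sum()

rng = np.random.default_rng(0)
x_acc = np.concatenate([rng.normal(-1, 1, 80), rng.normal(1, 1, 320)])
y_acc = np.concatenate([np.zeros(80), np.ones(320)])  # 0 = default
x_rej = rng.normal(-0.5, 1.2, 200)                    # outcomes unobserved
print(log_lik((0.3, -1.0, 1.0, 1.0), x_acc, y_acc, x_rej))
```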

    Median evidential c-means algorithm and its application to community detection

    Median clustering is of great value for partitioning relational data. In this paper, a new prototype-based clustering method called Median Evidential C-Means (MECM) is proposed; it extends median c-means and median fuzzy c-means within the theoretical framework of belief functions. The median variant relaxes the requirement of a metric-space embedding for the objects but constrains the prototypes to be members of the original data set. Thanks to these properties, MECM can be applied to graph clustering problems. A community detection scheme for social networks based on MECM is investigated, and the resulting credal partitions of graphs, which are more refined than crisp and fuzzy ones, enable a better understanding of graph structure. An initial prototype-selection scheme based on evidential semi-centrality is presented to avoid premature local convergence, and an evidential modularity function is defined to choose the optimal number of communities. Finally, experiments on synthetic and real data sets illustrate the performance of MECM and show how it differs from other methods.
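
    As a baseline for intuition, a compact sketch of plain median c-means, the crisp ancestor of MECM, is given below: it operates on a pairwise dissimilarity matrix alone and constrains each prototype to be an actual object of the data set. The belief-function machinery of MECM (credal partitions, evidential semi-centrality, evidential modularity) is not reproduced here.

```python
# Median c-means on a dissimilarity matrix; prototypes are data objects.
import numpy as np

def median_c_means(D, c=2, iters=50, seed=0):
    """D: (n, n) symmetric dissimilarities. Returns labels and prototype ids."""
    rng = np.random.default_rng(seed)
    protos = rng.choice(len(D), size=c, replace=False)
    for _ in range(iters):
        labels = D[:, protos].argmin(axis=1)  # assign to nearest prototype
        new_protos = protos.copy()
        for k in range(c):
            members = np.flatnonzero(labels == k)
            if members.size:
                # The "median" object: member minimizing total dissimilarity
                # to the rest of its cluster.
                cost = D[np.ix_(members, members)].sum(axis=0)
                new_protos[k] = members[cost.argmin()]
        if np.array_equal(new_protos, protos):
            break
        protos = new_protos
    return labels, protos

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.5, 20), rng.normal(5, 0.5, 20)])
D = np.abs(x[:, None] - x[None, :])  # any dissimilarity works, no vectors needed
labels, protos = median_c_means(D)
print(labels, "prototypes:", protos)
```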