3,854 research outputs found

    Using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection.

    Feature selection is essential in the medical domain; however, the process becomes complicated in the presence of censoring, a defining characteristic of survival analysis. Most survival feature selection methods are based on Cox's proportional hazards model, even though machine learning classifiers are often preferred; they are less commonly employed in survival analysis because censoring prevents them from being applied directly to survival data. Among the few works that employ machine learning classifiers, the partial logistic artificial neural network with automatic relevance determination is a well-known method that handles censoring and performs feature selection for survival data. However, it relies on data replication to handle censoring, which leads to unbalanced and biased prediction results, especially on highly censored data; other methods cannot cope with high levels of censoring at all. Therefore, this article proposes a new hybrid feature selection method that addresses high levels of censoring. It combines support vector machine, neural network, and K-nearest neighbor classifiers into a multiple classifier system using simple majority voting and a new weighted majority voting method based on a survival metric. The hybrid feature selection process uses the multiple classifier system as a wrapper method and merges it with an iterated feature ranking filter to further reduce the feature set. Two endovascular aortic repair datasets collected from two centers, each containing 91% censored patients, were used to construct a multicenter study evaluating the performance of the proposed approach. The results show that the proposed technique outperforms the individual classifiers and Cox-based variable selection methods, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in terms of log-rank test p-values, sensitivity, and concordance index. This indicates that the proposed classifier is more powerful at correctly predicting the risk of re-intervention, helping doctors select patients' future follow-up plans.
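    To make the ensemble idea concrete, the following is a minimal sketch, not the authors' implementation, of combining SVM, neural network, and K-nearest neighbor classifiers with simple and weighted majority voting. The use of scikit-learn, the hyperparameters, and the generic per-model validation weights are illustrative assumptions; the paper weights votes by a survival metric such as the concordance index.

```python
# Minimal sketch (assumed, not the authors' code): a multiple classifier system
# built from SVM, neural network, and KNN base learners, combined with simple
# majority voting and a weighted majority vote. Assumes binary integer labels {0, 1}.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

def fit_ensemble(X_train, y_train):
    """Fit the three base classifiers on the same training data."""
    models = [
        SVC(kernel="rbf", probability=True),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
        KNeighborsClassifier(n_neighbors=5),
    ]
    for m in models:
        m.fit(X_train, y_train)
    return models

def simple_majority_vote(models, X):
    """Each classifier casts one vote per sample; the most common label wins."""
    votes = np.stack([m.predict(X) for m in models]).astype(int)  # (n_models, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

def weighted_majority_vote(models, weights, X):
    """Votes are weighted per model, e.g. by a validation score such as the
    concordance index (the weighting metric here is a placeholder assumption)."""
    n_samples = X.shape[0]
    scores = np.zeros((n_samples, 2))               # accumulated weight per label
    for m, w in zip(models, weights):
        preds = m.predict(X).astype(int)
        scores[np.arange(n_samples), preds] += w
    return scores.argmax(axis=1)
```

    In a wrapper-style selection loop, candidate feature subsets would be scored by how well the ensemble's predictions separate risk groups, with the filter ranking step pruning features between iterations.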

    Agent-Based Simulations of Blockchain protocols illustrated via Kadena's Chainweb

    While many distributed consensus protocols provide robust liveness and consistency guarantees in the presence of malicious actors, quantitative estimates of how economic incentives affect security are few and far between. In this paper, we describe a system for simulating how adversarial agents, both economically rational and Byzantine, interact with a blockchain protocol. This system provides statistical estimates for the economic difficulty of an attack and for how the presence of certain actors influences protocol-level statistics, such as the expected time to regain liveness. The simulation system is influenced by the design of algorithmic trading and reinforcement learning systems that use explicit modeling of an agent's reward mechanism to evaluate and optimize a fully autonomous agent. We implement and apply this simulation framework to Kadena's Chainweb, a parallelized Proof-of-Work system whose security and censorship resistance depend in complex ways on miner incentive compliance. We provide the first formal description of Chainweb in the literature and use it to motivate our simulation design. Our simulation results include a phase transition in block height growth rate as a function of shard connectivity and empirical evidence that censorship in Chainweb is too costly for rational miners to engage in. We conclude with an outlook on how simulation can guide and optimize protocol development in a variety of contexts, including Proof-of-Stake parameter optimization and peer-to-peer networking design.
    Comment: 10 pages, 7 figures, accepted to the IEEE S&B 2019 conference
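    As an illustration of the agent-based approach, here is a hypothetical toy sketch in which miners with explicit reward models compete under a simple single-chain Proof-of-Work draw. It is not Chainweb or the authors' simulator; the block reward, hash cost, and censoring strategy are invented purely to show how attack cost and censorship statistics can be estimated by simulation.

```python
# Toy agent-based simulation sketch (assumed, not the paper's framework):
# agents with hash power and an explicit profit model mine blocks over many
# rounds; a censoring agent's profit and the resulting censorship rate are tracked.
import random
from dataclasses import dataclass

BLOCK_REWARD = 1.0          # reward per mined block (arbitrary units, assumed)
HASH_COST_PER_ROUND = 0.1   # operating cost per unit of hash power per round (assumed)

@dataclass
class MinerAgent:
    name: str
    hash_power: float        # fraction of total network hash power
    censoring: bool = False  # Byzantine/censoring strategy if True
    reward: float = 0.0

    def step_cost(self):
        return self.hash_power * HASH_COST_PER_ROUND

def simulate(agents, rounds=10_000, seed=0):
    """Run the toy simulation; return per-agent profit and the censorship rate."""
    rng = random.Random(seed)
    censored_blocks = 0
    for _ in range(rounds):
        # The round's winner is drawn proportionally to hash power.
        winner = rng.choices(agents, weights=[a.hash_power for a in agents])[0]
        winner.reward += BLOCK_REWARD
        if winner.censoring:
            censored_blocks += 1       # the block excludes the targeted transactions
        for a in agents:
            a.reward -= a.step_cost()  # every agent pays its operating cost
    return {a.name: round(a.reward, 2) for a in agents}, censored_blocks / rounds

if __name__ == "__main__":
    agents = [
        MinerAgent("honest-1", hash_power=0.4),
        MinerAgent("honest-2", hash_power=0.4),
        MinerAgent("censor", hash_power=0.2, censoring=True),
    ]
    profits, censorship_rate = simulate(agents)
    print(profits, f"censorship rate: {censorship_rate:.2%}")
```

    In the paper's setting the protocol model is far richer (parallel chains, shard connectivity, liveness recovery), but the same loop structure of strategy, reward, and aggregate statistics applies.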

    "May I borrow Your Filter?" Exchanging Filters to Combat Spam in a Community

    Leveraging social networks in computer systems can be effective in dealing with a number of trust and security issues. Spam is one such issue where the "wisdom of crowds" can be harnessed by mining the collective knowledge of ordinary individuals. In this paper, we present a mechanism through which members of a virtual community can exchange information to combat spam. Previous attempts at collaborative spam filtering have concentrated on digest-based indexing techniques to share digests or fingerprints of emails that are known to be spam. We take a different approach and allow users to share their spam filters instead, thus dramatically reducing the amount of traffic generated in the network. The resulting diversity of filters and cooperation within a community allow it to respond to spam in an autonomic fashion. As a test case for exchanging filters we use the popular SpamAssassin spam filtering software and show that exchanging spam filters provides an alternative method for improving spam filtering performance.
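    To illustrate the filter-exchange idea, here is a minimal hypothetical sketch in which users share simplified regex/score rules (in the spirit of, but not the actual format of, SpamAssassin rules) and an email is flagged when the combined score from the exchanged filters crosses a threshold. The rule patterns, scores, and threshold are made-up examples.

```python
# Hypothetical sketch (assumed, not the paper's mechanism or SpamAssassin's format):
# community members exchange their scoring rules rather than spam digests, and an
# incoming email is scored against the union of the exchanged rule sets.
import re

# Each member's "filter" is a list of (pattern, score) rules.
alice_filter = [(r"(?i)viagra", 3.0), (r"(?i)free money", 2.5)]
bob_filter   = [(r"(?i)lottery winner", 3.5), (r"(?i)click here now", 1.5)]

def score_email(text, filters, threshold=4.0):
    """Score an email against every exchanged filter; flag it as spam if the
    total score of matching rules reaches the threshold."""
    total = 0.0
    for rules in filters:
        for pattern, score in rules:
            if re.search(pattern, text):
                total += score
    return total, total >= threshold

msg = "Congratulations lottery winner! Click here now to claim your free money."
score, is_spam = score_email(msg, [alice_filter, bob_filter])
print(f"score={score}, spam={is_spam}")
```

    Sharing rules rather than per-message digests keeps the exchanged payload small and lets each recipient benefit from rules tuned to spam it has never seen.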