Essential guidelines for computational method benchmarking
In computational biology and other sciences, researchers are frequently faced with a choice between several computational methods for performing data analyses. Benchmarking studies aim to rigorously compare the performance of different methods using well-characterized benchmark datasets, to determine the strengths of each method or to provide recommendations regarding suitable choices of methods for an analysis. However, benchmarking studies must be carefully designed and implemented to provide accurate, unbiased, and informative results. Here, we summarize key practical guidelines and recommendations for performing high-quality benchmarking analyses, based on our experiences in computational biology.
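To make the comparison such a study formalizes concrete, below is a minimal, hypothetical sketch of the core evaluation loop: every method is run on every well-characterized benchmark dataset and scored with one shared metric. The method names, dataset, and metric are illustrative placeholders, not taken from the paper; the paper's guidelines concern the design choices around this loop (which datasets, metrics, and parameter settings make the comparison unbiased).

def accuracy(pred, truth):
    """Fraction of predictions that match the ground truth."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def benchmark(methods, datasets, metric):
    """Return {method: {dataset: score}} for every method/dataset pair,
    so all methods are scored on the same data with the same metric."""
    return {
        m_name: {d_name: metric(fn(x), y) for d_name, (x, y) in datasets.items()}
        for m_name, fn in methods.items()
    }

# Toy placeholder methods and dataset, purely for illustration.
datasets = {"toy": ([1, 2, 3, 4], [0, 1, 0, 1])}
methods = {
    "threshold_ge_2": lambda xs: [int(x >= 2) for x in xs],
    "always_zero": lambda xs: [0 for _ in xs],
}
print(benchmark(methods, datasets, accuracy))
# {'threshold_ge_2': {'toy': 0.75}, 'always_zero': {'toy': 0.5}}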
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
Automated data-driven decision-making systems are increasingly being used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
Comment: To appear in Proceedings of the 26th International World Wide Web Conference (WWW), 2017. Code available at: https://github.com/mbilalzafar/fair-classificatio
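The notion introduced here can be read off directly as a gap between group-conditional error rates. Below is a minimal, hypothetical sketch assuming binary labels, binary predictions, and a binary sensitive attribute; the function name and interface are illustrative, not the API of the linked repository, which implements the constrained training itself.

import numpy as np

def disparate_mistreatment_gaps(y_true, y_pred, group):
    """Gaps in group-conditional error rates between groups 0 and 1
    (illustrative helper: disparate mistreatment is present when these
    rates differ across sensitive groups)."""
    y_true, y_pred, group = (np.asarray(a) for a in (y_true, y_pred, group))

    def cond_rate(event, cond):
        # P(event | cond); 0.0 when the conditioning set is empty.
        return float(event[cond].mean()) if cond.any() else 0.0

    g0, g1 = group == 0, group == 1
    mis = y_pred != y_true
    return {
        # Gap in overall misclassification rates.
        "error_rate_gap": abs(cond_rate(mis, g0) - cond_rate(mis, g1)),
        # Gap in false positive rates: P(y_pred = 1 | y_true = 0, group).
        "fpr_gap": abs(cond_rate(y_pred == 1, g0 & (y_true == 0))
                       - cond_rate(y_pred == 1, g1 & (y_true == 0))),
        # Gap in false negative rates: P(y_pred = 0 | y_true = 1, group).
        "fnr_gap": abs(cond_rate(y_pred == 0, g0 & (y_true == 1))
                       - cond_rate(y_pred == 0, g1 & (y_true == 1))),
    }

# Toy example: the classifier over-predicts positives only for group 1.
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_mistreatment_gaps(y_true, y_pred, group))
# {'error_rate_gap': 0.5, 'fpr_gap': 1.0, 'fnr_gap': 0.0}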
Debiasing Community Detection: The Importance of Lowly-Connected Nodes
Community detection is an important task in social network analysis, allowing us to identify and understand the communities within social structures. However, many community detection approaches either fail to assign low-degree (or lowly-connected) users to communities, or assign them to trivially small communities that prevent them from being included in analysis. In this work, we investigate how excluding these users can bias analysis results. We then introduce an approach that is more inclusive of lowly-connected users by incorporating them into larger groups. Experiments show that our approach outperforms the existing state of the art in terms of F1 and Jaccard similarity scores while reducing the bias against low-degree users.
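Since the evaluation hinges on Jaccard similarity between detected and ground-truth communities, here is a minimal, hypothetical sketch of such a score; the best-match averaging is a common simplification, not necessarily the paper's exact protocol. It also illustrates the bias the paper targets: a detector that drops a low-degree node is directly penalized.

def jaccard(a, b):
    """Jaccard similarity of two node sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a or b else 0.0

def avg_best_match_jaccard(detected, ground_truth):
    """Average, over ground-truth communities, of the Jaccard score of
    the best-matching detected community."""
    if not ground_truth or not detected:
        return 0.0
    return sum(
        max(jaccard(gt, d) for d in detected) for gt in ground_truth
    ) / len(ground_truth)

# Toy example: node 5 is a low-degree user the detector left unassigned,
# so the second ground-truth community is only partially recovered.
ground_truth = [{1, 2, 3}, {4, 5, 6}]
detected = [{1, 2, 3}, {4, 6}]
print(avg_best_match_jaccard(detected, ground_truth))  # (1.0 + 2/3) / 2 ≈ 0.833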