175 research outputs found

    Socially Responsible Machine Learning: On the Preservation of Individual Privacy and Fairness

    Machine learning (ML) techniques have seen significant advances over the last decade and are playing an increasingly critical role in people's lives. While their potential societal benefits are enormous, they can also inflict great harm if not developed or used with care. In this thesis, we focus on two critical ethical issues in ML systems, the violation of privacy and fairness, and explore mitigating approaches in various scenarios. On the privacy front, when ML systems are developed with private data from individuals, it is critical to prevent privacy violation. Differential privacy (DP), a widely used notion of privacy, ensures that no observer of the computational outcome can infer any particular individual's data with high confidence. However, DP is typically achieved by randomizing algorithms (e.g., adding noise), which inevitably leads to a trade-off between individual privacy and outcome accuracy. This trade-off can be difficult to balance, especially in settings where the same or correlated data is repeatedly used or exposed during the computation. In the first part of the thesis, we illustrate two key ideas that can be used to balance an algorithm's privacy-accuracy trade-off: (1) reusing intermediate computational results to reduce information leakage; and (2) improving algorithmic robustness to accommodate more randomness. We introduce a number of randomized, privacy-preserving algorithms that leverage these ideas in contexts such as distributed optimization and sequential computation, and we show that they can significantly improve the privacy-accuracy trade-off over existing solutions. On the fairness front, ML systems trained with real-world data can inherit biases and exhibit discrimination against already-disadvantaged or marginalized social groups. Recent works have proposed many fairness notions to measure and remedy such biases.
    However, their effectiveness is mostly studied in a static framework that does not account for the interactions between individuals and ML systems. Since individuals inevitably react to the algorithmic decisions they are subjected to, understanding the downstream impacts of ML decisions is critical to ensuring that these decisions are socially responsible. In the second part of the thesis, we present our research on evaluating the long-term impacts of (fair) ML decisions. Specifically, we establish a number of theoretically rigorous frameworks to model the interactions and feedback between ML systems and individuals, and conduct equilibrium analysis to evaluate the impact each has on the other. We illustrate how ML decisions and individual behavior evolve in such a system, and how imposing common fairness criteria intended to promote fairness may nevertheless lead to pernicious effects. Aided by such understanding, mitigation approaches are also discussed.
    PhD, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169960/1/xueru_1.pd
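    The noise-based randomization behind DP, and the privacy-accuracy trade-off it creates, can be illustrated with a minimal sketch of the standard Laplace mechanism (the function and parameter names here are illustrative, not the thesis's actual algorithms): a smaller privacy parameter ε forces a larger noise scale, so the released value is more private but less accurate.

    ```python
    import math
    import random

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        """Release true_value with Laplace noise of scale sensitivity/epsilon.

        Smaller epsilon (stronger privacy) means a larger noise scale,
        hence lower accuracy -- the privacy-accuracy trade-off.
        """
        rng = rng or random.Random(0)
        scale = sensitivity / epsilon
        # Sample Laplace(0, scale) via inverse-CDF of a uniform draw.
        u = rng.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        return true_value + noise

    # Averaging many releases shows the effect of epsilon on error:
    # with epsilon = 0.1 the expected absolute error is ~10x the sensitivity,
    # while with epsilon = 10 it is ~0.1x the sensitivity.
    ```

    The expected absolute error of a single release equals the noise scale, sensitivity/ε, which is why repeatedly exposing the same data (each exposure consuming privacy budget) degrades either privacy or accuracy.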

    Application of artificial intelligence in computer network technology

    With the continuous development of science and technology in China, artificial intelligence has come to play an important role in many industries. Applying artificial intelligence to computer network technology can provide greater help for people's life and work, enrich the capabilities of computer network technology, and improve the efficiency of daily life and work. In the era of big data, computer systems have gradually revealed certain defects, and introducing artificial intelligence technology can greatly improve the efficiency and quality of computer data processing. In view of this, this paper analyzes the application of artificial intelligence in computer network technology and puts forward some strategies for reference.

    Research on the Path of Teaching Staff Construction in Independent Colleges

    Independent colleges are an important part of higher education in China and a guarantee of applied talent for China's economic and social development. The construction of the teaching staff in independent colleges is therefore particularly important, as the comprehensive strength of the faculty directly affects the quality of personnel training. Compared with public colleges, the overall faculty of independent colleges is still relatively weak, which is not conducive to the overall improvement of their school-running level. To better promote the construction of teaching staff in independent colleges and to improve their school-running strength and teachers' level, this paper takes the teaching staff construction of an independent college in Zhejiang Province as an example. On the basis of an in-depth analysis of the shortcomings facing the construction of the teaching staff in this college, it puts forward countermeasures for improvement, in order to offer suggestions for the construction of this college's talent team and to provide a case reference for other independent colleges of the same type in China.

    Improving Fairness and Privacy in Selection Problems

    Supervised learning models have been increasingly used for making decisions about individuals in applications such as hiring, lending, and college admission. These models may inherit pre-existing biases from training datasets and discriminate against protected attributes (e.g., race or gender). In addition to unfairness, privacy concerns also arise when the use of models reveals sensitive personal information. Among various privacy notions, differential privacy has become popular in recent years. In this work, we study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both the fairness and privacy of supervised learning models. Unlike many existing works, we consider a scenario where a supervised model is used to select a limited number of applicants, since the number of available positions is limited. This assumption is well-suited to settings such as job application and college admission. We use "equal opportunity" as the fairness notion and show that the exponential mechanism can make the decision-making process perfectly fair. Moreover, experiments on real-world datasets show that the exponential mechanism can improve both privacy and fairness, with a slight decrease in accuracy compared to the model without post-processing.
    Comment: This paper has been accepted for publication in the 35th AAAI Conference on Artificial Intelligence.
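    The core idea of the exponential mechanism used in the abstract above can be sketched as follows. This is a generic textbook-style sketch under assumed names and parameters, not the paper's actual post-processing procedure: each candidate is chosen with probability proportional to exp(ε · score / (2 · sensitivity)), so higher-scored applicants are favored while every applicant retains some selection probability, which is what provides the privacy (and, in the paper's setting, fairness) guarantees.

    ```python
    import math
    import random

    def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=None):
        """Pick one index with probability proportional to exp(eps*score/(2*sens)).

        Scores are shifted by their maximum before exponentiating, which leaves
        the sampling distribution unchanged but avoids overflow.
        """
        rng = rng or random.Random(0)
        m = max(scores)
        weights = [math.exp(epsilon * (s - m) / (2.0 * sensitivity)) for s in scores]
        r = rng.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                return i
        return len(scores) - 1  # guard against floating-point rounding

    def select_k(scores, k, epsilon, rng=None):
        """Select k applicants without replacement, splitting the budget evenly."""
        rng = rng or random.Random(0)
        remaining = list(range(len(scores)))
        chosen = []
        for _ in range(k):
            idx = exponential_mechanism(
                [scores[i] for i in remaining], epsilon / k, rng=rng)
            chosen.append(remaining.pop(idx))
        return chosen
    ```

    With a large ε the mechanism almost always selects the top-scored applicants (high accuracy, weak privacy); with a small ε the selection approaches uniform randomness, mirroring the privacy-accuracy trade-off discussed in the thesis abstract above.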