
    Three Essays on HRM Algorithms: Where Do We Go from Here?

    The field of Human Resource Management (HRM) has experienced a significant transformation with the emergence of big data and algorithms. Major technology companies have introduced software and platforms for analyzing various HRM practices, such as hiring, compensation, employee engagement, and turnover management, utilizing algorithmic approaches. However, scholarly research has taken a cautious stance, questioning the strategic value and causal inference basis of these tools, while also raising concerns about bias, discrimination, and ethical issues in the applications of algorithms. Despite these concerns, algorithmic management has gained prominence in large organizations, shaping workforce management practices. This thesis aims to address the gap between the rapidly changing market of HRM algorithms and the lack of theoretical understanding. The thesis begins by conducting a comprehensive review of HRM algorithms in HRM practice and scholarship, clarifying their definition, exploring their unique features, and identifying specific topics and research questions in the field. It aims to bridge the gap between academia and practice to enhance the understanding and utilization of algorithms in HRM. I then explore the legal, causal, and moral issues associated with HR algorithms, comparing fairness criteria and advocating for the use of causal modeling to evaluate algorithmic fairness. The multifaceted nature of fairness is illustrated and practical strategies for enhancing justice perceptions and incorporating fairness into HR algorithms are proposed. Finally, the thesis adopts an artifact-centric approach to examine the ethical implications of HRM algorithms. It explores competing views on moral responsibility, introduces the concept of "ethical affordances," and analyzes the distribution of moral responsibility based on different types of ethical affordances. 
The paper provides a framework for analyzing and assigning moral responsibility to stakeholders involved in the design, use, and regulation of HRM algorithms. Together, these papers contribute to the understanding of algorithms in HRM by addressing the research-practice gap, exploring fairness and accountability issues, and investigating the ethical implications. They offer theoretical insights, practical recommendations, and future research directions for both researchers and practitioners.

    Thesis (Doctor of Philosophy, PhD). This thesis explores the use of advanced algorithms in Human Resource Management (HRM) and how they affect decision-making in organizations. With the rise of big data and powerful algorithms, companies can analyze various HR practices like hiring, compensation, and employee engagement. However, there are concerns about biases and ethical issues in algorithmic decision-making. This research examines the benefits and challenges of HRM algorithms and suggests ways to ensure fairness and ethical considerations in their design and application. By bridging the gap between theory and practice, this thesis provides insights into the responsible use of algorithms in HRM. The findings of this research can help organizations make better decisions while maintaining fairness and upholding ethical standards in HR practices.
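
The fairness criteria that the thesis compares can be made concrete with a small sketch. The two observational criteria below, demographic parity and equal opportunity, are standard in the algorithmic-fairness literature; the toy hiring data, variable names, and group labels are illustrative assumptions, not taken from the thesis (which ultimately advocates going beyond such observational criteria via causal modeling).

```python
# Hedged sketch: two observational fairness criteria on hypothetical hiring data.
# Group labels, qualifications, and model decisions below are all invented.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in selection rate between the two groups."""
    rate = {}
    for g in set(groups):
        sel = [d for d, gr in zip(decisions, groups) if gr == g]
        rate[g] = sum(sel) / len(sel)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

def equal_opportunity_gap(decisions, groups, qualified):
    """Absolute difference in selection rate among *qualified* applicants."""
    rate = {}
    for g in set(groups):
        sel = [d for d, gr, q in zip(decisions, groups, qualified) if gr == g and q]
        rate[g] = sum(sel) / len(sel)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

# Toy data: 8 applicants from two groups "A" and "B".
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
qualified = [1,   1,   0,   0,   1,   1,   0,   0]
decisions = [1,   1,   1,   0,   1,   0,   0,   0]

dp = demographic_parity_gap(decisions, groups)            # |3/4 - 1/4| = 0.5
eo = equal_opportunity_gap(decisions, groups, qualified)  # |2/2 - 1/2| = 0.5
print(dp, eo)
```

Note that the same decisions can violate the two criteria by different amounts in general; the thesis's point is that such criteria can conflict, which is why choosing among them is a substantive modeling decision.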

    When Can an Expander Code Correct Ω(n) Errors in O(n) Time?

    Tanner codes are graph-based linear codes whose parity-check matrices can be characterized by a bipartite graph G together with a linear inner code C₀. Expander codes are Tanner codes whose defining bipartite graph G has good expansion properties. This paper is motivated by the following natural and fundamental problem in decoding expander codes: What are the sufficient and necessary conditions that δ and d₀ must satisfy, so that every bipartite expander G with vertex expansion ratio δ and every linear inner code C₀ with minimum distance d₀ together define an expander code that corrects Ω(n) errors in O(n) time? For C₀ being the parity-check code, the landmark work of Sipser and Spielman (IEEE-TIT'96) showed that δ > 3/4 is sufficient; later Viderman (ACM-TOCT'13) improved this to δ > 2/3-Ω(1) and also showed that δ > 1/2 is necessary. For a general linear code C₀, the previously best-known result of Dowling and Gao (IEEE-TIT'18) showed that d₀ = Ω(cδ^{-2}) is sufficient, where c is the left-degree of G. In this paper, we give a near-optimal solution to the above question for general C₀ by showing that δd₀ > 3 is sufficient and δd₀ > 1 is necessary, thereby also significantly improving Dowling-Gao's result. We present two novel algorithms for decoding expander codes, where the first algorithm is deterministic, and the second one is randomized and has a larger decoding radius.
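
The flip-style decoding this line of work builds on (Sipser and Spielman's bit-flipping algorithm for the parity-check case) can be sketched in a few lines. As a stand-in for the bipartite graph G, the sketch uses the Hamming(7,4) parity-check matrix, which is small enough to trace by hand but is *not* an expander; the greedy rule below is a simplification for illustration, not either of the paper's two algorithms.

```python
# Minimal sketch of sequential bit-flipping decoding on a Tanner graph.
# The Hamming(7,4) parity-check matrix is only a toy stand-in for G.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def unsatisfied(word):
    """Indices of parity checks that fail on `word`."""
    return [r for r, row in enumerate(H)
            if sum(h * w for h, w in zip(row, word)) % 2 == 1]

def flip_decode(word, max_rounds=20):
    """Repeatedly flip the bit lying in the most unsatisfied checks, net of satisfied ones."""
    word = list(word)
    for _ in range(max_rounds):
        unsat = set(unsatisfied(word))
        if not unsat:
            return word  # all checks satisfied: done
        def margin(i):
            touch = [r for r in range(len(H)) if H[r][i]]
            u = sum(1 for r in touch if r in unsat)
            return u - (len(touch) - u)
        best = max(range(len(word)), key=margin)
        if margin(best) <= 0:
            return word  # no flip helps; give up
        word[best] ^= 1
    return word

received = [0, 0, 1, 0, 0, 0, 0]  # all-zero codeword with one channel error
print(flip_decode(received))      # recovers the all-zero codeword
```

On a graph with sufficient vertex expansion, each flip strictly reduces the number of unsatisfied checks, which is what yields the linear-time, Ω(n)-error guarantees discussed in the abstract.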

    Improved Decoding of Expander Codes

    We study the classical expander codes, introduced by Sipser and Spielman [M. Sipser and D. A. Spielman, 1996]. Given any constants 0 < α, ε < 1/2, and an arbitrary bipartite graph with N vertices on the left, M < N vertices on the right, and left degree D such that any left subset S of size at most αN has at least (1-ε)|S|D neighbors, we show that the corresponding linear code given by parity checks on the right has distance at least roughly αN/(2ε). This is strictly better than the best known previous result of 2(1-ε)αN [Madhu Sudan, 2000; Viderman, 2013] whenever ε < 1/2, and improves the previous result significantly when ε is small. Furthermore, we show that this distance is tight in general, thus providing a complete characterization of the distance of general expander codes. Next, we provide several efficient decoding algorithms, which vastly improve previous results in terms of the fraction of errors corrected, whenever ε < 1/4. Finally, we also give a bound on the list-decoding radius of general expander codes, which beats the classical Johnson bound in certain situations (e.g., when the graph is almost regular and the code has a high rate). Our techniques exploit novel combinatorial properties of bipartite expander graphs. In particular, we establish a new size-expansion tradeoff, which may be of independent interest.
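
The size of the improvement over the earlier 2(1-ε)αN distance bound is easiest to see numerically; the parameter values below are illustrative, not from the paper.

```python
# Comparing the paper's distance bound alpha*N/(2*eps) with the earlier
# bound 2*(1-eps)*alpha*N, for a few illustrative expansion parameters.
def new_bound(alpha, eps, n):
    return alpha * n / (2 * eps)

def old_bound(alpha, eps, n):
    return 2 * (1 - eps) * alpha * n

n, alpha = 1_000_000, 0.05
for eps in (0.45, 0.25, 0.1):
    print(f"eps={eps}: new ~ {new_bound(alpha, eps, n):.0f}, "
          f"old ~ {old_bound(alpha, eps, n):.0f}")
# The ratio new/old equals 1/(4*eps*(1-eps)), which grows as eps shrinks,
# matching the claim that the improvement is significant for small eps.
```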

    FedRFQ: Prototype-Based Federated Learning with Reduced Redundancy, Minimal Failure, and Enhanced Quality

    Federated learning is a powerful technique that enables collaborative learning among different clients. Prototype-based federated learning is a specific approach that improves the performance of local models under non-IID (non-Independently and Identically Distributed) settings by integrating class prototypes. However, prototype-based federated learning faces several challenges, such as prototype redundancy and prototype failure, which limit its accuracy. It is also susceptible to poisoning attacks and server malfunctions, which can degrade the prototype quality. To address these issues, we propose FedRFQ, a prototype-based federated learning approach that aims to reduce redundancy, minimize failures, and improve quality. FedRFQ leverages a SoftPool mechanism, which effectively mitigates prototype redundancy and prototype failure on non-IID data. Furthermore, we introduce BFT-detect, a Byzantine Fault Tolerance (BFT) detectable aggregation algorithm, to ensure the security of FedRFQ against poisoning attacks and server malfunctions. Finally, we conduct experiments on three different datasets, namely MNIST, FEMNIST, and CIFAR-10, and the results demonstrate that FedRFQ outperforms existing baselines in terms of accuracy when handling non-IID data.
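
FedRFQ's exact SoftPool mechanism is specific to the paper, but the two underlying ideas, per-class prototypes as mean embeddings and softmax-weighted (SoftPool-style) aggregation instead of plain averaging, can be sketched as follows. The function names, scoring scheme, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Per-class prototype = mean embedding of that class (standard in prototype-based FL)."""
    protos = {}
    for c in set(labels):
        rows = embeddings[[i for i, l in enumerate(labels) if l == c]]
        protos[c] = rows.mean(axis=0)
    return protos

def softpool_aggregate(client_protos, scores):
    """Softmax-weighted aggregation of one class's prototypes across clients.

    `scores` rate each client's prototype (illustrative); softmax weighting
    down-weights poor prototypes rather than averaging all of them uniformly.
    """
    w = np.exp(scores - np.max(scores))
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_protos))

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 4))          # toy local embeddings
labels = [0, 0, 1, 1, 1, 0]
protos = class_prototypes(emb, labels)

# Aggregate class-0 prototypes reported by three hypothetical clients.
agg = softpool_aggregate([protos[0], protos[0] + 0.1, protos[0] - 0.1],
                         scores=np.array([2.0, 1.0, 1.0]))
print(agg.shape)  # (4,)
```

With equal scores the scheme reduces to a plain mean, so the softmax weights only matter when prototype quality actually differs across clients.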

    Spatial Crowdsourcing Task Allocation Scheme for Massive Data with Spatial Heterogeneity

    Spatial crowdsourcing (SC) engages large worker pools for location-based tasks and has attracted growing research interest. However, prior SC task allocation approaches exhibit limitations in computational efficiency, balanced matching, and participation incentives. To address these challenges, we propose a graph-based allocation framework optimized for massive heterogeneous spatial data. The framework first clusters similar tasks and workers separately to reduce the allocation scale. Next, it constructs novel non-crossing graph structures to model balanced adjacencies between unevenly distributed tasks and workers. Based on these graphs, a bidirectional worker-task matching scheme is designed to produce allocations optimized for mutual interests. Extensive experiments on real-world datasets evaluate the framework's performance under various parameter settings.
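
The "match bidirectionally for mutual interests" idea can be sketched with a mutual-nearest-neighbor rule: a task and a worker are paired only when each is the other's closest available counterpart. This is an illustrative simplification; the paper's non-crossing graph structures and clustering stage are more involved.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) locations."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mutual_nearest_matching(tasks, workers):
    """Pair a task and worker only when each is the other's nearest free counterpart."""
    matches = []
    free_t, free_w = set(range(len(tasks))), set(range(len(workers)))
    while free_t and free_w:
        progress = False
        for t in sorted(free_t):
            if t not in free_t or not free_w:
                continue
            w = min(free_w, key=lambda w2: dist(tasks[t], workers[w2]))
            t_back = min(free_t, key=lambda t2: dist(tasks[t2], workers[w]))
            if t_back == t:  # mutual nearest: accept the pair
                matches.append((t, w))
                free_t.discard(t)
                free_w.discard(w)
                progress = True
        if not progress:
            break
    return matches

tasks   = [(0, 0), (5, 5), (9, 1)]          # toy task locations
workers = [(1, 0), (5, 4), (8, 1), (20, 20)]  # toy worker locations
print(mutual_nearest_matching(tasks, workers))
```

The bidirectional check is what distinguishes this from one-sided greedy assignment: a distant worker (like the one at (20, 20)) is simply left unmatched rather than dragged into a poor pairing.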

    Learning-Based Client Selection for Federated Learning Services Over Wireless Networks with Constrained Monetary Budgets

    We investigate a data quality-aware dynamic client selection problem for multiple federated learning (FL) services in a wireless network, where each client offers dynamic datasets for the simultaneous training of multiple FL services, and each FL service demander has to pay for the clients under constrained monetary budgets. The problem is formalized as a non-cooperative Markov game over the training rounds. A multi-agent hybrid deep reinforcement learning-based algorithm is proposed to optimize the joint client selection and payment actions, while avoiding action conflicts. Simulation results indicate that our proposed algorithm can significantly improve training performance. Comment: 6 pages, 8 figures.
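
The paper's multi-agent reinforcement-learning solution is beyond a short sketch, but the budget constraint it optimizes under can be illustrated with a naive baseline: greedily select clients by data-quality-per-cost until the monetary budget is exhausted. The client names, quality scores, prices, and budget below are invented for illustration.

```python
# Naive budgeted client selection: pick clients in order of price-per-quality
# until the budget runs out. This is a baseline for intuition, not the
# paper's learning-based algorithm.

def greedy_select(clients, budget):
    """clients: list of (name, quality, price). Returns names of chosen clients."""
    chosen, spent = [], 0.0
    for name, quality, price in sorted(clients, key=lambda c: c[2] / c[1]):
        if spent + price <= budget:
            chosen.append(name)
            spent += price
    return chosen

clients = [("c1", 0.9, 3.0), ("c2", 0.5, 1.0), ("c3", 0.8, 4.0), ("c4", 0.3, 2.5)]
print(greedy_select(clients, budget=5.0))  # ['c2', 'c1']
```

A learning-based approach improves on this baseline precisely because client quality and prices change across training rounds, which a one-shot greedy rule cannot anticipate.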

    Contemporary Recommendation Systems on Big Data and Their Applications: A Survey

    This survey paper conducts a comprehensive analysis of the evolution and contemporary landscape of recommendation systems, which have been extensively incorporated across a myriad of web applications. It delves into the progression of personalized recommendation methodologies tailored for online products or services, organizing the array of recommendation techniques into four main categories: content-based, collaborative filtering, knowledge-based, and hybrid approaches, each designed to cater to specific contexts. The document provides an in-depth review of both the historical underpinnings and the cutting-edge innovations in the domain of recommendation systems, with a special focus on implementations leveraging big data analytics. The paper also highlights the utilization of prominent datasets such as MovieLens, Amazon Reviews, Netflix Prize, Last.fm, and Yelp in evaluating recommendation algorithms. It further outlines and explores the predominant challenges encountered in the current generation of recommendation systems, including issues related to data sparsity, scalability, and the imperative for diversified recommendation outputs. The survey underscores these challenges as promising directions for subsequent research endeavors within the discipline. Additionally, the paper examines various real-life applications driven by recommendation systems, addressing the hurdles involved in seamlessly integrating these systems into everyday life. Ultimately, the survey underscores how the advancements in recommendation systems, propelled by big data technologies, have the potential to significantly enhance real-world experiences. Comment: 34 pages, 8 figures, 2 tables.
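
One of the four families the survey covers, collaborative filtering, can be sketched in its simplest user-based form: predict a user's rating for an unseen item as a similarity-weighted average of other users' ratings. The tiny rating matrix is invented for illustration.

```python
import math

# User-based collaborative filtering with cosine similarity.
# Ratings are on a 1-5 scale; the data below is a toy example.
ratings = {
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 3, "m3": 5},
    "carol": {"m1": 1, "m2": 5, "m3": 2},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    du = math.sqrt(sum(x * x for x in u.values()))
    dv = math.sqrt(sum(x * x for x in v.values()))
    return num / (du * dv) if du and dv else 0.0

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], ratings[other])
        num += s * r[item]
        den += abs(s)
    return num / den if den else 0.0

# Predict alice's rating for a movie only bob and carol have rated.
ratings["bob"]["m4"] = 5
ratings["carol"]["m4"] = 1
print(round(predict("alice", "m4"), 2))
```

Since alice's tastes align more with bob's than with carol's, the prediction lands above the midpoint of their two ratings; the data-sparsity challenge the survey highlights appears here as the `item not in r` skips, which shrink the evidence behind each prediction.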