    Significance of Task Significance in Online Marketplaces for Work

    Get PDF
    Online marketplaces for work, such as Amazon Mechanical Turk, facilitate the sourcing of low-expertise tasks in a fast and cost-effective way. In this study, we explore the impact of task significance on work quality by informing workers of the purpose of the task and who benefits from it. Results from a laboratory experiment and a field experiment showed that perceived task significance improved work quality, but only for participants who recalled the purpose statement. In contrast, increasing monetary payment by 50% had no impact on work quality. A majority of participants who received the purpose statement were unable to recall it. Further analysis showed that worker attributes such as English ability and personality traits influenced the likelihood of recall, whereas a rich media format had no effect. Overall, our work highlights the promise of task significance as a way to motivate online workers and the challenge of promoting task significance online.

    Managing Expertise in a Distributed Environment

    Get PDF
    Expertise is the primary resource and product of professional service and technical firms. These firms often organize around project teams that advise and work under contract for clients. A key problem for management is to deploy expertise in project teams so as to meet the expertise requirements of projects and clients. Because expertise may be geographically distributed across multiple sites, many of these firms create virtual or distributed teams. Doing so gives these firms access to a larger pool of knowledge resources than would be available at one site and helps leverage expertise across the organization. However, geographically distributed collaboration in teams incurs coordination and other costs that local work does not. Is a distributed team worth these costs? We studied a professional service firm with distributed and collocated project teams. In this firm, domain expertise tended to be concentrated within geographic sites, whereas methodological expertise was distributed across the firm. We examined whether a better match of domain and methodological expertise to the needs of projects resulted in more profitable projects, and whether distributed teams matched these two types of expertise to the requirements of projects as well as or better than collocated teams did. We found that most projects were collocated, with members drawn from one site whose domain expertise matched project requirements as well as that of members drawn from other sites. Project profits were unrelated to the match of domain expertise with project requirements. However, project profits were significantly and positively related to the match of methodological expertise with project requirements. Furthermore, distributed projects showed a stronger match of methodological expertise with project requirements than did collocated projects, and this match predicted disproportionately higher profits. We conclude that appropriate utilization of organizationally distributed expertise has a positive impact on project performance.
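
    The study's central construct, the match between a team's expertise and a project's requirements, can be made concrete with a toy scoring function. The sketch below is illustrative only; the cosine-similarity operationalization, the max-pooling of member profiles, and all names are our assumptions, not the study's actual measures.

```python
import numpy as np

def expertise_match(team_profiles, project_requirements):
    """Toy match score: cosine similarity between a team's pooled
    expertise and a project's requirement vector.

    team_profiles: (n_members, n_skills) array of expertise ratings.
    project_requirements: (n_skills,) array of required skill levels.
    """
    team = team_profiles.max(axis=0)  # best available expertise per skill
    denom = np.linalg.norm(team) * np.linalg.norm(project_requirements)
    return float(team @ project_requirements) / denom if denom > 0 else 0.0

# Example: a two-person team scored on three skills
team = np.array([[3.0, 1.0, 0.0],   # member A: strong domain expertise
                 [0.0, 2.0, 3.0]])  # member B: strong methodological expertise
project = np.array([2.0, 1.0, 3.0])
print(f"match score: {expertise_match(team, project):.2f}")
```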

    Object Segmentation with Audio Context

    Full text link
    Visual objects often have acoustic signatures that are naturally synchronized with them in audio-bearing video recordings. In this project, we explore multimodal feature aggregation for the video instance segmentation task, integrating audio features into our video segmentation model to conduct an audio-visual learning scheme. Our method builds on an existing video instance segmentation method that leverages rich contextual information across video frames. Since this is the first attempt to investigate audio-visual instance segmentation, we collected a novel dataset comprising 20 vocal classes with synchronized video and audio recordings. By utilizing a combined decoder to fuse both video and audio features, our model shows a slight improvement over the base model. Additionally, we show the effectiveness of the different modules through extensive ablations.
    Comment: Research project for Introduction to Deep Learning (11785) at Carnegie Mellon University
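
    The combined decoder described above, fusing an audio representation into per-frame visual features before segmentation, can be sketched in a few lines of PyTorch. The module names, dimensions, and concatenation-based fusion below are our assumptions for illustration, since the abstract does not give the exact architecture.

```python
import torch
import torch.nn as nn

class AudioVisualFusionDecoder(nn.Module):
    """Toy combined decoder: projects an audio clip embedding, broadcasts it
    over per-frame visual feature maps, and decodes the fused features.
    Fusion-by-concatenation is an assumption, not the paper's design."""

    def __init__(self, vis_channels=256, audio_dim=128, n_classes=20):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, vis_channels)
        self.fuse = nn.Conv2d(2 * vis_channels, vis_channels, kernel_size=1)
        self.head = nn.Conv2d(vis_channels, n_classes, kernel_size=1)

    def forward(self, vis_feats, audio_emb):
        # vis_feats: (B, T, C, H, W) per-frame features from a video backbone
        # audio_emb: (B, audio_dim) embedding of the synchronized audio track
        B, T, C, H, W = vis_feats.shape
        a = self.audio_proj(audio_emb)                      # (B, C)
        a = a[:, None, :, None, None].expand(B, T, C, H, W)
        x = torch.cat([vis_feats, a], dim=2)                # (B, T, 2C, H, W)
        x = x.flatten(0, 1)                                 # fold time into batch
        logits = self.head(torch.relu(self.fuse(x)))        # (B*T, n_classes, H, W)
        return logits.view(B, T, -1, H, W)

# Smoke test with random tensors
dec = AudioVisualFusionDecoder()
out = dec(torch.randn(2, 4, 256, 32, 32), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 4, 20, 32, 32])
```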

    Pipelined Architecture for Soft-decision Iterative Projection Aggregation Decoding for RM Codes

    Full text link
    The recently proposed recursive projection-aggregation (RPA) decoding algorithm for Reed-Muller codes has received significant attention, as it provides near-ML decoding performance at reasonable complexity for short codes. However, its complicated structure makes it unsuitable for hardware implementation. Iterative projection-aggregation (IPA) decoding is a modified version of RPA decoding that simplifies the hardware implementation. In this work, we present a flexible hardware architecture for the IPA decoder that can be configured from fully sequential to fully parallel, making it suitable for a wide range of applications with different constraints and resource budgets. Our simulation and implementation results show that, for a code with a block length of 128 and an information length of 29, the IPA decoder has 41% lower area consumption, 44% lower latency, and four times higher throughput, but currently seven times higher power consumption, than a state-of-the-art polar successive cancellation list (SCL) decoder with comparable decoding performance.
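
    The projection-aggregation idea shared by RPA and IPA can be sketched compactly for second-order Reed-Muller codes: project the channel LLRs onto every one-dimensional subspace, ML-decode each projected first-order code with a fast Hadamard transform, and aggregate the decisions back into updated LLRs. The Python below is a schematic, software-only, single-level variant of our own devising (fixed full-projection schedule, no early stopping, none of the paper's hardware simplifications), not the paper's decoder.

```python
import numpy as np

def boxplus(a, b):
    """Soft XOR of two LLRs (tanh rule, numerically clipped)."""
    return 2.0 * np.arctanh(np.clip(np.tanh(a / 2.0) * np.tanh(b / 2.0),
                                    -0.999999, 0.999999))

def fht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def decode_rm1(llr):
    """ML decoding of a first-order RM code: correlate the LLRs against all
    Walsh functions via the FHT and keep the best. Returns a +/-1 codeword."""
    corr = fht(llr)
    k = int(np.argmax(np.abs(corr)))
    sign = 1.0 if corr[k] >= 0 else -1.0
    parity = np.array([bin(k & z).count("1") & 1 for z in range(len(llr))])
    return sign * (1.0 - 2.0 * parity)

def coset_index(z, b, i):
    """Linear label of the coset {z, z^b} of subspace {0, b}; i = lowest set bit of b."""
    rep = z ^ b if (z >> i) & 1 else z            # clear bit i of the representative
    return ((rep >> (i + 1)) << i) | (rep & ((1 << i) - 1))

def pa_decode_rm2(llr, m, n_iter=3):
    """Schematic projection-aggregation decoding of RM(2, m)."""
    n = 1 << m
    L = np.asarray(llr, dtype=float).copy()
    zs = np.arange(n)
    for _ in range(n_iter):
        acc = np.zeros(n)
        for b in range(1, n):                     # each 1-D subspace {0, b}
            i = (b & -b).bit_length() - 1         # lowest set bit of b
            idx = np.array([coset_index(z, b, i) for z in range(n)])
            proj = np.zeros(n // 2)
            for t in range(n // 2):               # box-plus over each coset pair
                z0 = ((t >> i) << (i + 1)) | (t & ((1 << i) - 1))
                proj[t] = boxplus(L[z0], L[z0 ^ b])
            y = decode_rm1(proj)                  # +/-1 decision per coset
            acc += y[idx] * L[zs ^ b]             # aggregation step
        L = acc / (n - 1)
    return (L < 0).astype(int)                    # hard decisions

# Smoke test: noisy all-zero codeword of RM(2, 4) over BPSK/AWGN
rng = np.random.default_rng(0)
m, sigma = 4, 0.6
rx = np.ones(1 << m) + sigma * rng.standard_normal(1 << m)
print(pa_decode_rm2(2.0 * rx / sigma**2, m))      # should recover all zeros
```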

    MLCopilot: Unleashing the Power of Large Language Models in Solving Machine Learning Tasks

    Full text link
    The field of machine learning (ML) has gained widespread adoption, leading to significant demand for adapting ML to specific scenarios, which remains expensive and non-trivial. The predominant approaches to automating the solution of ML tasks (e.g., AutoML) are often time-consuming and hard for human developers to understand. In contrast, though human engineers have an incredible ability to understand tasks and reason about solutions, their experience and knowledge are often sparse and difficult for quantitative approaches to utilize. In this paper, we aim to bridge the gap between machine intelligence and human knowledge by introducing a novel framework, MLCopilot, which leverages state-of-the-art LLMs to develop ML solutions for novel tasks. We showcase the possibility of extending the capability of LLMs to comprehend structured inputs and perform thorough reasoning for solving novel ML tasks. We find that, after some dedicated design, the LLM can (i) learn from existing experience on ML tasks and (ii) reason effectively to deliver promising results for new tasks. The generated solutions can be used directly to achieve a high level of competitiveness.
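
    The abstract's recipe, observe existing experience and then reason about the new task, maps naturally onto a retrieve-then-prompt loop. The sketch below is a minimal illustration of that shape; the `Experience` schema, the word-overlap retrieval, the prompt wording, and the `call_llm` hook are all our placeholders, not MLCopilot's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    task: str        # natural-language description of a past ML task
    solution: str    # the configuration/pipeline that worked
    score: float     # how well it performed

def retrieve(history, new_task, k=3):
    """Toy retrieval: rank past experiences by word overlap with the new
    task. A real system would use embeddings; this keeps the sketch
    dependency-free."""
    words = set(new_task.lower().split())
    return sorted(history,
                  key=lambda e: -len(words & set(e.task.lower().split())))[:k]

def build_prompt(history, new_task):
    """Assemble structured past experience plus the new task into one prompt."""
    shots = "\n".join(f"- Task: {e.task}\n  Solution: {e.solution}"
                      f" (score {e.score:.2f})"
                      for e in retrieve(history, new_task))
    return (f"You are an ML engineer. Past experience:\n{shots}\n\n"
            f"New task: {new_task}\nPropose a solution, reasoning step by step.")

def solve(history, new_task, call_llm):
    # call_llm: any text-in/text-out LLM client supplied by the caller
    return call_llm(build_prompt(history, new_task))

# Usage with any LLM client, e.g.:
# solve(past_experiences, "tabular classification, 10k rows", my_llm)
```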

    A High-Performance and Low-Complexity 5G LDPC Decoder: Algorithm and Implementation

    Full text link
    5G New Radio (NR) imposes stringent demands on both the performance and the complexity of low-density parity-check (LDPC) decoding algorithms and their corresponding VLSI implementations. Furthermore, decoders must fully support the wide range of 5G NR blocklengths and code rates, which is a significant challenge. In this paper, we present a high-performance and low-complexity LDPC decoder tailor-made to fulfill the 5G requirements. First, to close the gap between belief propagation (BP) decoding and its approximations in hardware, we propose an extension of adjusted min-sum decoding, called generalized adjusted min-sum (GA-MS) decoding. This decoding algorithm flexibly truncates the incoming messages at the check-node level and carefully approximates the non-linear functions of BP decoding to balance error rate and hardware complexity. Numerical results demonstrate that the proposed fixed-point GA-MS has only a minor gap of 0.1 dB compared to floating-point BP under various scenarios of the 5G standard specifications. Second, we present a fully reconfigurable 5G NR LDPC decoder implementation based on GA-MS decoding. Given that memory occupies a substantial portion of the decoder area, we adopt multiple data compression and approximation techniques to reduce the memory overhead by 42.2%. The corresponding 28nm FD-SOI ASIC decoder has a core area of 1.823 mm² and operates at 895 MHz. It is compatible with all 5G NR LDPC codes and achieves a peak throughput of 24.42 Gbps and a maximum area efficiency of 13.40 Gbps/mm² at 4 decoding iterations.
    Comment: 14 pages, 14 figures
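
    The abstract names the gap between exact BP check-node processing and its hardware-friendly min-sum approximations as the problem GA-MS addresses, but does not spell out the GA-MS rule itself. As context, the sketch below shows the two endpoints on one toy check node; this is plain normalized min-sum, not the paper's GA-MS, and the normalization factor is an illustrative assumption.

```python
import numpy as np

def check_node(msgs, i, mode="min-sum", alpha=0.75):
    """Outgoing LLR on edge i of one LDPC check node.

    mode="bp":      exact box-plus (tanh) rule of belief propagation.
    mode="min-sum": normalized min-sum approximation (alpha is illustrative).
    """
    others = np.delete(np.asarray(msgs, dtype=float), i)
    sign = np.prod(np.sign(others))
    if mode == "bp":
        p = np.prod(np.tanh(np.abs(others) / 2.0))
        return sign * 2.0 * np.arctanh(min(p, 0.999999))
    return alpha * sign * np.min(np.abs(others))

# The gap that GA-MS-style corrections aim to close, on one toy check node:
msgs = [1.2, -0.4, 2.5, 0.8]
for i in range(len(msgs)):
    print(f"edge {i}: bp={check_node(msgs, i, 'bp'):+.3f}  "
          f"min-sum={check_node(msgs, i):+.3f}")
```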

    Benchmarking Data Science Agents

    Full text link
    In the era of data-driven decision-making, the complexity of data analysis demands advanced data science expertise and tools, presenting significant challenges even for specialists. Large Language Models (LLMs) have emerged as promising data science agents, assisting humans in data analysis and processing. Yet their practical efficacy remains constrained by the varied demands of real-world applications and complicated analytical processes. In this paper, we introduce DSEval, a novel evaluation paradigm, along with a series of innovative benchmarks tailored to assessing the performance of these agents throughout the entire data science lifecycle. By incorporating a novel bootstrapped annotation method, we streamline dataset preparation, improve evaluation coverage, and expand benchmarking comprehensiveness. Our findings uncover prevalent obstacles and provide critical insights to inform future advancements in the field.
    Comment: Source code and data are available at https://github.com/MetaCopilot/dseval
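
    Concretely, a benchmark of this kind reduces to running an agent against task specifications and validating the artifacts it produces. The toy harness below illustrates that shape only; the `DSTask` fields, validator signature, and pass/fail scoring are our inventions, not DSEval's actual API (see the linked repository for that).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DSTask:
    name: str
    spec: str                              # natural-language task description
    dataset: object                        # input data handed to the agent
    validate: Callable[[object], bool]     # checks the agent's output artifact

def run_benchmark(agent: Callable[[str, object], object], tasks: list[DSTask]):
    """Run an agent over every task and report a simple pass rate.
    Real harnesses also sandbox execution and score intermediate steps."""
    results = {}
    for t in tasks:
        try:
            results[t.name] = bool(t.validate(agent(t.spec, t.dataset)))
        except Exception:
            results[t.name] = False        # a crashing agent fails the task
    passed = sum(results.values())
    print(f"passed {passed}/{len(tasks)}: {results}")
    return results

# Toy task plus a trivial agent
task = DSTask("mean", "Compute the mean of the dataset.",
              [1, 2, 3, 4], lambda out: out == 2.5)
run_benchmark(lambda spec, data: sum(data) / len(data), [task])
```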