258 research outputs found
Significance of Task Significance in Online Marketplaces for Work
Online marketplaces for work like Amazon Mechanical Turk facilitate the sourcing of low-expertise tasks in a fast and cost-effective way. In this study, we explore the impact of task significance on work quality by informing workers of the purpose of the task and who benefits from it. Results from a laboratory experiment and a field experiment showed that perceived task significance improved work quality, but only for participants who recalled the purpose statement. In contrast, increasing monetary payment by 50% had no impact on work quality. A majority of participants who received the purpose statement were not able to recall it. Further analysis showed that worker attributes such as English ability and personality traits influenced the likelihood of recall, whereas rich media format had no effect. Overall, our work highlights the promise of task significance as a way to motivate online workers and the challenge of promoting task significance online.
Managing Expertise in a Distributed Environment
Expertise is the primary resource and product of professional service and technical firms. These firms often organize around project teams that advise and work under contract for clients. A key problem for management is to deploy expertise in project teams so as to meet the expertise requirements of projects and clients. Because expertise may be geographically distributed across multiple sites, many of these firms create virtual or distributed teams. Doing so gives these firms access to a larger pool of knowledge resources than would be available at one site and helps leverage expertise across the organization. However, geographically distributed collaboration in teams incurs coordination and other costs that local work does not. Is a distributed team worth these costs? We studied a professional service firm with distributed and collocated project teams. In this firm, domain expertise tended to be concentrated within geographic sites, whereas methodological expertise was distributed across the firm. We examined whether a better match of domain and methodological expertise to the needs of projects resulted in more profitable projects, and whether distributed teams matched these two types of expertise to the requirements of projects as well as or better than did collocated teams. We found that most projects were collocated, with members drawn from one site who had domain expertise that matched project requirements as well as when members were drawn from other sites. The profits of projects were unrelated to the match of domain expertise with project requirements. However, project profits were significantly and positively related to a match of methodological expertise with project requirements. Furthermore, distributed projects showed a stronger match of methodological expertise with project requirements than did collocated projects, and predicted disproportionately more profits. 
We conclude that an appropriate utilization of organizationally distributed expertise has a positive impact on project performance.
Object Segmentation with Audio Context
Visual objects often have acoustic signatures that are naturally synchronized
with them in audio-bearing video recordings. In this project, we explore
multimodal feature aggregation for the video instance segmentation task,
integrating audio features into our video segmentation model to realize an
audio-visual learning scheme. Our method builds on an existing video instance
segmentation method that leverages rich contextual information across video
frames. Since this is the first attempt to investigate audio-visual instance
segmentation, we collect a novel dataset comprising 20 vocal classes with
synchronized video and audio recordings. By using a combined decoder to fuse
video and audio features, our model shows a slight improvement over the base
model. Additionally, we demonstrate the effectiveness of different modules
through extensive ablations.
Comment: Research project for Introduction to Deep Learning (11785) at
Carnegie Mellon University
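The abstract does not detail the combined decoder's fusion mechanism. As an illustration only (hypothetical shapes and random projection matrices, not the authors' model), a minimal late-fusion sketch might project both modalities into a shared space and broadcast a clip-level audio embedding over per-frame visual features:

```python
import numpy as np

def fuse_features(video_feats, audio_feat, w_v, w_a):
    """Late-fuse per-frame video features with a clip-level audio
    embedding by projecting both into a shared space and summing.

    video_feats: (T, Dv) per-frame visual features
    audio_feat:  (Da,)  clip-level audio embedding
    w_v: (Dv, D) visual projection; w_a: (Da, D) audio projection
    """
    v = video_feats @ w_v   # (T, D) projected visual features
    a = audio_feat @ w_a    # (D,)  projected audio embedding
    return v + a            # audio broadcast over all T frames

rng = np.random.default_rng(0)
T, Dv, Da, D = 5, 16, 8, 4
fused = fuse_features(rng.normal(size=(T, Dv)),
                      rng.normal(size=Da),
                      rng.normal(size=(Dv, D)),
                      rng.normal(size=(Da, D)))
print(fused.shape)  # (5, 4)
```

In a real segmentation model the projections would be learned layers and the fused features would feed the mask decoder; the sketch only shows the shape bookkeeping of such a fusion.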
Pipelined Architecture for Soft-decision Iterative Projection Aggregation Decoding for RM Codes
The recently proposed recursive projection-aggregation (RPA) decoding
algorithm for Reed-Muller codes has received significant attention as it
provides near-ML decoding performance at reasonable complexity for short codes.
However, its complicated structure makes it unsuitable for hardware
implementation. Iterative projection-aggregation (IPA) decoding is a modified
version of RPA decoding that simplifies the hardware implementation. In this
work, we present a flexible hardware architecture for the IPA decoder that can
be configured from fully-sequential to fully-parallel, thus making it suitable
for a wide range of applications with different constraints and resource
budgets. Our simulation and implementation results show that, for a code
with block length 128 and information length 29, the IPA decoder has 41%
lower area consumption, 44% lower latency, and four times higher throughput,
but currently seven times higher power consumption than a state-of-the-art
polar successive cancellation list (SCL) decoder with comparable decoding
performance.
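The abstract does not spell out the projection step shared by RPA and IPA decoding. In the projection-aggregation literature, a one-dimensional projection merges the LLRs of the two coordinates that fall into the same coset, using the standard LLR-domain combination rule; a min-sum variant of that rule is what makes hardware implementation tractable. A minimal sketch of both rules (illustrative only, not the paper's full decoder) might look like:

```python
import math

def box_plus(l1, l2):
    """Exact LLR combination ("box-plus") used when projecting a pair
    of coordinates onto one coset: ln((1 + e^(l1+l2)) / (e^l1 + e^l2))."""
    return math.log((1.0 + math.exp(l1 + l2)) /
                    (math.exp(l1) + math.exp(l2)))

def box_plus_minsum(l1, l2):
    """Hardware-friendly min-sum approximation: product of signs times
    the minimum magnitude of the two incoming LLRs."""
    sign = (1 if l1 >= 0 else -1) * (1 if l2 >= 0 else -1)
    return sign * min(abs(l1), abs(l2))

# Two confident, agreeing LLRs project to a confident positive LLR;
# min-sum slightly overestimates the exact value.
print(round(box_plus(4.0, 5.0), 2), box_plus_minsum(4.0, 5.0))
```

The aggregation step (not shown) averages the back-projected sub-code estimates to update each channel LLR; IPA repeats this iteratively rather than recursively.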
MLCopilot: Unleashing the Power of Large Language Models in Solving Machine Learning Tasks
The field of machine learning (ML) has gained widespread adoption, leading to
significant demand for adapting ML to specific scenarios, which remains
expensive and non-trivial. The predominant approaches to automating the
solution of ML tasks (e.g., AutoML) are often time-consuming and hard for
human developers to understand. In contrast, though human engineers have the
remarkable ability to understand tasks and reason about solutions, their
experience and knowledge are often sparse and difficult for quantitative
approaches to utilize. In this paper, we aim to bridge the gap between machine
intelligence and human knowledge by introducing a novel framework, MLCopilot,
which leverages state-of-the-art LLMs to develop ML solutions for novel tasks.
We showcase the possibility of extending the capability of LLMs to comprehend
structured inputs and perform thorough reasoning when solving novel ML tasks.
We find that, with some dedicated design, the LLM can (i) learn from existing
experience on ML tasks and (ii) reason effectively to deliver promising
results for new tasks. The generated solutions can be used directly to
achieve competitive results.
A High-Performance and Low-Complexity 5G LDPC Decoder: Algorithm and Implementation
5G New Radio (NR) has stringent demands on both performance and complexity
for the design of low-density parity-check (LDPC) decoding algorithms and
corresponding VLSI implementations. Furthermore, decoders must fully support
the wide range of all 5G NR blocklengths and code rates, which is a significant
challenge. In this paper, we present a high-performance and low-complexity LDPC
decoder, tailor-made to fulfill the 5G requirements. First, to close the gap
between belief propagation (BP) decoding and its approximations in hardware, we
propose an extension of adjusted min-sum decoding, called generalized adjusted
min-sum (GA-MS) decoding. This decoding algorithm flexibly truncates the
incoming messages at the check node level and carefully approximates the
non-linear functions of BP decoding to balance the error-rate and hardware
complexity. Numerical results demonstrate that the proposed fixed-point GA-MS
decoder has only a minor gap of 0.1 dB compared to floating-point BP under
various scenarios of the 5G standard specifications. Second, we present a
fully reconfigurable 5G NR LDPC decoder implementation based on GA-MS
decoding. Given that memory occupies a substantial portion of the decoder
area, we adopt multiple data compression and approximation techniques to
reduce the memory overhead by 42.2%. The corresponding 28nm FD-SOI ASIC
decoder has a core area of 1.823 mm2 and operates at 895 MHz. It is
compatible with all 5G NR LDPC codes and achieves a peak throughput of 24.42
Gbps and a maximum area efficiency of 13.40 Gbps/mm2 at 4 decoding
iterations.
Comment: 14 pages, 14 figures
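To illustrate the kind of BP approximation that GA-MS refines, consider the standard normalized min-sum check-node update, the usual hardware baseline (this is not the paper's GA-MS rule; the scaling factor alpha is illustrative). Each outgoing edge message takes the sign product and minimum magnitude of all *other* incoming LLRs, scaled to compensate min-sum's overestimate relative to exact BP:

```python
import numpy as np

def check_node_minsum(llrs, alpha=0.75):
    """Normalized min-sum check-node update.

    For edge i, the extrinsic message is
      alpha * (product of signs of all other LLRs)
            * (minimum magnitude of all other LLRs),
    computed for all edges at once using the two smallest magnitudes.
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)          # product over all edges
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    # the argmin edge receives the second-smallest magnitude
    out_mag = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    # total_sign * signs[i] equals the sign product excluding edge i
    return alpha * total_sign * signs * out_mag

print(check_node_minsum([2.0, -3.0, 0.5]))  # [-0.375  0.375 -1.5 ]
```

GA-MS, as described in the abstract, goes further by truncating incoming messages at the check node and approximating BP's non-linear correction terms, trading accuracy against hardware cost.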
Benchmarking Data Science Agents
In the era of data-driven decision-making, the complexity of data analysis
necessitates advanced expertise and tools of data science, presenting
significant challenges even for specialists. Large Language Models (LLMs) have
emerged as promising aids as data science agents, assisting humans in data
analysis and processing. Yet their practical efficacy remains constrained by
the varied demands of real-world applications and complicated analytical
processes. In this paper, we introduce DSEval -- a novel evaluation paradigm, as
well as a series of innovative benchmarks tailored for assessing the
performance of these agents throughout the entire data science lifecycle.
Incorporating a novel bootstrapped annotation method, we streamline dataset
preparation, improve the evaluation coverage, and expand benchmarking
comprehensiveness. Our findings uncover prevalent obstacles and provide
critical insights to inform future advancements in the field.
Comment: Source code and data are available at
https://github.com/MetaCopilot/dseva
ST2-104 attenuates neuronal injuries in A beta(25-35)-induced AD rats by inhibiting CRMP2-NMDAR2B signaling pathways
Collapsin response mediator protein 2 (CRMP2), traditionally regarded as an axon/dendrite growth and guidance protein, plays an important role in the regulation of both post- and pre-synaptic Ca2+ channels, such as N-methyl-D-aspartate receptors (NMDARs). The Ca2+ channel-binding domain 3 (CBD3) peptide derived from CRMP2 has recently emerged as a Ca2+ channel blocker: linked to the transduction domain of the HIV TAT protein, it suppressed neuropathic pain in a spared nerve injury (SNI) model and reduced neuronal death in a middle cerebral artery occlusion model and a traumatic brain injury (TBI) model. The present study aimed to examine the neuroprotective effects and biochemical mechanisms of ST2-104 (a non-arginine-conjugated CBD3 peptide) in an A beta(25-35)-induced Alzheimer's disease (AD) rat model. This study demonstrated that CRMP2 and the NMDAR subunit NMDAR2B form a direct biochemical complex, which regulates NMDAR activity in a rat model. The ST2-104 peptide, given via tail vein injections, significantly reduced spatial learning and memory impairment. ST2-104 relieved neuronal injuries by suppressing expression of NMDAR2B and p-CRMP2 and increasing expression of CRMP2 in the hippocampus. Remarkably, ST2-104 attenuated levels of intracellular Ca2+ by disrupting the interaction between p-CRMP2 and NMDAR2B. Taken together, these findings support ST2-104 as a novel neuroprotective agent, potentially representing a novel direction for channel-targeting therapeutics in AD.
Funding: National Natural Science Foundation of China [81571231]; Health and Family Planning Commission of Jilin Province [2015Z043]; Department of Education Foundation of Jilin Province [JJKH20190102KJ]; Department Science and Technology Foundation of Jilin Province [20190701058GH]; Talent Development Fund of Jilin Province.
Open access journal. This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries.