Fair Labor Practices: Are Workers Truly Protected?
Private businesses often claim that they employ fair labor practices and care about employees' well-being. However, firms may face contradictory pressures, since fair labor practices can increase costs and lower profits. In this thesis, I examine historic and contemporary workers' conditions and fair labor practices. Workers' conditions during the Industrial Revolution and the basis of relevant Marxist theories are examined. Contemporary labor practices are also examined, and a secondary analysis of data on worker attitudes in two major companies is conducted. Results and discussion suggest that although progress in worker conditions has been made, further improvements are still needed.
BehAVExplor: Behavior Diversity Guided Testing for Autonomous Driving Systems
Testing Autonomous Driving Systems (ADSs) is a critical task for ensuring the
reliability and safety of autonomous vehicles. Existing methods mainly focus on
searching for safety violations while ignoring the diversity of the generated test cases, which can produce many redundant test cases and failures. Such
redundant failures can reduce testing performance and increase failure analysis
costs. In this paper, we present a novel behavior-guided fuzzing technique
(BehAVExplor) to explore the different behaviors of the ego vehicle (i.e., the
vehicle controlled by the ADS under test) and detect diverse violations.
Specifically, we design an efficient unsupervised model, called BehaviorMiner,
to characterize the behavior of the ego vehicle. BehaviorMiner extracts the
temporal features from the given scenarios and performs a clustering-based
abstraction to group behaviors with similar features into abstract states. A
new test case will be added to the seed corpus if it triggers new behaviors
(e.g., covering new abstract states). Due to the potential conflict between the
behavior diversity and the general violation feedback, we further propose an
energy mechanism to guide the seed selection and the mutation. The energy of a
seed quantifies how promising it is for selection and mutation. We evaluated BehAVExplor on Apollo, an industrial-grade ADS, in the LGSVL simulation environment. Empirical evaluation
results show that BehAVExplor can effectively find more diverse violations than
the state-of-the-art.
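The feedback loop described above can be sketched in a few lines: cluster temporal features of the ego vehicle's trace into abstract states, keep a mutated test case only when it covers a new state, and bias seed selection by an energy value. The Python sketch below, using scikit-learn for clustering, is an illustration under assumptions; the helper names (run_scenario, mutate, extract_temporal_features), the feature windowing, and the energy-decay rule are hypothetical, not BehAVExplor's actual implementation.

```python
# Hypothetical sketch of behavior-diversity guided fuzzing.
# run_scenario and mutate are assumed callables supplied by the test harness.
import random
import numpy as np
from sklearn.cluster import KMeans

def extract_temporal_features(trace):
    """Summarize an ego-vehicle trace (e.g., speed, acceleration, heading) per time window."""
    trace = np.asarray(trace)                      # shape: (timesteps, n_signals)
    windows = np.array_split(trace, 10)            # fixed number of temporal windows
    return np.concatenate([w.mean(axis=0) for w in windows])

class BehaviorMiner:
    """Cluster behavior features into abstract states (unsupervised)."""
    def __init__(self, n_states=8):
        self.kmeans = KMeans(n_clusters=n_states, n_init=10, random_state=0)

    def fit(self, feature_matrix):
        # Assumes at least n_states initial seeds are available.
        self.kmeans.fit(feature_matrix)

    def abstract_states(self, feature_matrix):
        return set(self.kmeans.predict(feature_matrix))

def fuzz(initial_seeds, run_scenario, mutate, budget=100):
    miner = BehaviorMiner()
    features = [extract_temporal_features(run_scenario(s)) for s in initial_seeds]
    miner.fit(np.vstack(features))

    corpus = [{"seed": s, "energy": 1.0} for s in initial_seeds]
    covered = set()
    for _ in range(budget):
        # Energy-weighted seed selection: higher energy => more likely to be mutated.
        weights = [c["energy"] for c in corpus]
        parent = random.choices(corpus, weights=weights, k=1)[0]
        child = mutate(parent["seed"])

        trace = run_scenario(child)
        feats = extract_temporal_features(trace).reshape(1, -1)
        states = miner.abstract_states(feats)
        if states - covered:                       # triggers a new abstract behavior
            covered |= states
            corpus.append({"seed": child, "energy": 1.0})
        else:
            parent["energy"] *= 0.9                # decay energy of unproductive seeds
    return corpus
```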
Evaluating AIGC Detectors on Code Content
Artificial Intelligence Generated Content (AIGC) has garnered considerable
attention for its impressive performance, with ChatGPT emerging as a leading
AIGC model that produces high-quality responses across various applications,
including software development and maintenance. Despite its potential, the
misuse of ChatGPT poses significant concerns, especially in education and
safety-critical domains. Numerous AIGC detectors have been developed and
evaluated on natural language data. However, their performance on code-related
content generated by ChatGPT remains unexplored. To fill this gap, in this
paper, we present the first empirical study on evaluating existing AIGC
detectors in the software domain. We created a comprehensive dataset of 492.5K samples of code-related content produced by ChatGPT,
encompassing popular software activities like Q&A (115K), code summarization
(126K), and code generation (226.5K). We evaluated six AIGC detectors,
including three commercial and three open-source solutions, assessing their
performance on this dataset. Additionally, we conducted a human study to
understand human detection capabilities and compare them with the existing AIGC
detectors. Our results indicate that AIGC detectors demonstrate lower
performance on code-related data compared to natural language data. Fine-tuning
can enhance detector performance, especially for content within the same
domain, but generalization remains a challenge. The human evaluation reveals that detection by humans is quite challenging.
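As a rough illustration of the evaluation protocol, the sketch below scores a set of labeled samples with a detector and reports AUC, which makes it easy to compare the same detector on code-related versus natural-language content. The detector_score interface and the data splits are assumptions for illustration, not one of the six detectors or the actual dataset from the study.

```python
# Hypothetical sketch: evaluate an AIGC detector on labeled samples.
# detector_score is an assumed interface returning a higher score for AI-generated text.
from sklearn.metrics import roc_auc_score

def evaluate(detector_score, samples):
    """samples: list of (text, label) pairs, label 1 = AI-generated, 0 = human-written."""
    labels = [label for _, label in samples]
    scores = [detector_score(text) for text, _ in samples]
    return roc_auc_score(labels, scores)

# Compare the same detector across content types (illustrative splits, not the study's data):
# auc_code = evaluate(detector_score, code_generation_samples)
# auc_nl   = evaluate(detector_score, natural_language_samples)
# print(f"AUC on code: {auc_code:.3f}, AUC on natural language: {auc_nl:.3f}")
```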
CommitBART: A Large Pre-trained Model for GitHub Commits
GitHub commits, which record code changes together with descriptive natural language messages, play a critical role in helping software developers comprehend software evolution. To promote the development of the open-source software
community, we collect a commit benchmark including over 7.99 million commits
across 7 programming languages. Based on this benchmark, we present CommitBART,
a large pre-trained encoder-decoder Transformer model for GitHub commits. The
model is pre-trained with six tasks spanning three categories (denoising objectives, cross-modal generation, and contrastive learning) to learn representations of commit fragments. Furthermore, we present a unified "commit intelligence" framework with one understanding task and three generation tasks
for commits. The comprehensive experiments on these tasks demonstrate that
CommitBART significantly outperforms previous pre-trained models for code.
Further analysis also reveals that each pre-training task enhances the model's performance.
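To make the generation side of the framework concrete, here is a minimal sketch of a commit-message generation task on top of a pre-trained encoder-decoder using the Hugging Face transformers API. The facebook/bart-base checkpoint, the diff formatting, and the decoding settings are stand-ins for illustration; they are not the released CommitBART model or its exact input format.

```python
# Hypothetical sketch: generate a commit message from a code change with a
# pre-trained encoder-decoder. "facebook/bart-base" is a stand-in checkpoint.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Illustrative diff; real commit data would come from the benchmark.
diff = (
    "- return a + b\n"
    "+ return a + b + c  # support three operands"
)

# Encode the code change and decode a natural-language commit message.
inputs = tokenizer(diff, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```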
Toward a More PERMA(nent) Conceptualization of Worker Well-Being? A Cross-Cultural Study of the Workplace PERMA Profiler
We examined the factor structure of the recently developed worker well-being measure, the Workplace PERMA Profiler, and the relationships between PERMA dimensions (i.e., positive emotions, engagement, positive relationships, meaning, accomplishment) and job performance (viz., task performance and organizational citizenship behaviors benefiting individuals and the organization at large). The measure exhibited metric (i.e., weak) invariance across samples of participants from the U.S. (N = 284) and China (N = 420). Additionally, for participants who responded to both the Workplace PERMA Profiler and the performance measures, there was a general pattern of positive PERMA–performance relationships across both samples (U.S.: N = 147; China: N = 202). Overall, the Workplace PERMA Profiler may have problematic psychometric properties and item wordings and thus would benefit from further refinement.
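For readers who want to see what the PERMA–performance analysis looks like in practice, below is a minimal sketch that computes correlations between PERMA dimension scores and performance variables with pandas. The column names and file names are hypothetical; the study's actual data and the invariance testing (which requires a structural equation modeling tool) are not reproduced here.

```python
# Hypothetical sketch of the PERMA-performance correlation analysis.
import pandas as pd

PERMA_DIMS = ["positive_emotions", "engagement", "relationships", "meaning", "accomplishment"]
PERF_VARS = ["task_performance", "ocb_individual", "ocb_organization"]

def perma_performance_correlations(df):
    """Pearson correlations between each PERMA dimension and each performance variable."""
    return df[PERMA_DIMS + PERF_VARS].corr().loc[PERMA_DIMS, PERF_VARS]

# Illustrative usage on per-sample data files (names are assumptions):
# us = pd.read_csv("us_sample.csv")        # participants with both measures, N = 147
# china = pd.read_csv("china_sample.csv")  # participants with both measures, N = 202
# print(perma_performance_correlations(us))
# print(perma_performance_correlations(china))
```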
ELLA-V: Stable Neural Codec Language Modeling with Alignment-guided Sequence Reordering
The language model (LM) approach based on acoustic and linguistic prompts,
such as VALL-E, has achieved remarkable progress in the field of zero-shot
audio generation. However, existing methods still have some limitations: 1)
repetitions, transpositions, and omissions in the output synthesized speech due
to limited alignment constraints between audio and phoneme tokens; 2)
challenges of fine-grained control over the synthesized speech with autoregressive (AR) language models; 3) infinite silence generation due to the
nature of AR-based decoding, especially under the greedy strategy. To alleviate
these issues, we propose ELLA-V, a simple but efficient LM-based zero-shot
text-to-speech (TTS) framework, which enables fine-grained control over
synthesized audio at the phoneme level. The key to ELLA-V is interleaving
sequences of acoustic and phoneme tokens, where phoneme tokens appear ahead of
the corresponding acoustic tokens. The experimental findings reveal that our
model outperforms VALL-E in terms of accuracy and delivers more stable results
using both greedy and sampling-based decoding strategies. The code of ELLA-V
will be open-sourced after cleanups. Audio samples are available at
https://ereboas.github.io/ELLAV/. Comment: Work in progress.
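The core interleaving idea can be illustrated directly: given a phoneme sequence, the codec token stream, and an alignment, each phoneme token is placed immediately before the acoustic tokens aligned to it. The sketch below is a minimal illustration; the token values and alignment format are assumptions, not ELLA-V's actual data pipeline.

```python
# Hypothetical sketch of the interleaved phoneme/acoustic sequence construction.
def interleave(phonemes, acoustic_tokens, alignment):
    """
    phonemes:        list of phoneme tokens, e.g. ["HH", "AH"]
    acoustic_tokens: flat list of codec tokens for the utterance
    alignment:       list of (start, end) index pairs into acoustic_tokens,
                     one pair per phoneme
    Returns a single sequence with each phoneme ahead of its acoustic span.
    """
    sequence = []
    for phoneme, (start, end) in zip(phonemes, alignment):
        sequence.append(phoneme)
        sequence.extend(acoustic_tokens[start:end])
    return sequence

# Example: two phonemes, the first aligned to 3 codec tokens, the second to 2.
print(interleave(["HH", "AH"], [101, 102, 103, 104, 105], [(0, 3), (3, 5)]))
# -> ['HH', 101, 102, 103, 'AH', 104, 105]
```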
- …