The role of product portfolio management in market expansion: a case of the mobile gaming industry
Abstract. The rapid growth of mobile game consumer spending has led Free-to-Play mobile game developers to compete constantly for players by offering new games. The product portfolio management (PPM) approach helps tackle questions about markets, products and technologies based on a company’s strategic targets. However, discovering game genre diversity by aligning the product portfolio with business strategy and existing capabilities in the new product development process is challenging. A single-case study was conducted to examine the important connection between PPM, business strategy and existing capabilities, and to propose a practical approach for seeking game genre portfolio expansion opportunities. The main results include an analysis framework that uses PPM and mobile app intelligence software to identify game genres for market expansion that are a strategic fit, bring the best economic value and resonate with the company’s existing capabilities and competences. PPM focus areas and key performance indicators are proposed. This study is the first attempt to apply the PPM approach with targets and KPIs in mobile game development. It contributes to previous studies by extending the application of the PPM approach to the initial, discovery and innovation stage of the product development process. The results can also be applied to other mobile game companies with similar new product development processes.
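The kind of analysis the abstract describes can be illustrated with a small weighted-scoring sketch. All genre names, weights and scores below are invented for illustration and are not taken from the study:

```python
# Hypothetical weighted-scoring sketch for ranking candidate game genres.
# The criteria mirror the three dimensions named in the abstract:
# strategic fit, economic value, and fit with existing capabilities.

WEIGHTS = {"strategic_fit": 0.4, "economic_value": 0.35, "capability_fit": 0.25}

# Illustrative 1-5 scores per candidate genre (invented for this example).
candidates = {
    "puzzle":   {"strategic_fit": 4, "economic_value": 4, "capability_fit": 5},
    "strategy": {"strategic_fit": 5, "economic_value": 4, "capability_fit": 2},
    "casino":   {"strategic_fit": 2, "economic_value": 5, "capability_fit": 1},
}

def score(genre_scores):
    """Weighted sum of the three criteria."""
    return sum(WEIGHTS[c] * s for c, s in genre_scores.items())

ranking = sorted(candidates, key=lambda g: score(candidates[g]), reverse=True)
print(ranking)  # genres ordered by composite score, best first
```

In practice the per-genre scores would come from the PPM targets, KPIs and app intelligence data the study proposes, not from hand-entered constants.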
Propaganda for Democracy: The Vexed History of the Federal Theatre Project
My thesis explores and analyzes the Federal Theatre Project’s cultural and political impact during the Depression, as well as the contested legacy of this unique experiment in government-sponsored, broadly accessible cultural expression. Part of the New Deal’s Works Progress Administration, the FTP aimed to provide jobs for playwrights, actors, designers, stagehands, and other theater professionals on relief in the stark period from 1935 to 1939. But the project became a nationwide political and artistic flashpoint, spurring fierce debate over the leadership, politics and impact of this “people’s theater.” The FTP gave professional theater an unprecedented reach into working-class and black communities. The project was marked by the participation of many prominent leftist and Communist writers, performers, and technicians, but its productions nonetheless reflected a broadly rebellious, economically desperate, culturally inclusive popular spirit sparked by the Depression. Refuting charges that the FTP was a thinly veiled, subversive propaganda tool, the project’s leaders countered that its work was educational “propaganda for democracy.” As in today’s political and artistic conflicts, the dispute centered on which principles and ideals actually constitute core American values. I examine the FTP’s achievements and controversies, which centered on the purported contradiction between education and entertainment, and on the boundaries of acceptable political discourse in publicly funded arts. I include two case studies that exemplify these artistic and political conflicts: the Negro Theatre Project and the Children’s Theatre Project, through, respectively, Big White Fog and Revolt of the Beavers. During its short lifespan, the FTP was derided as discredited, dogmatic propaganda with scant artistic merit. But it left an honorable legacy grounded in democratic American principles and values.
Perhaps such a grassroots cultural phenomenon that celebrated ordinary, struggling people, and explicitly confronted racism and economic deprivation, could only flourish under extreme circumstances like the Depression. But the Federal Theatre Project, with its subsidized, high-quality, innovative and widely accessible performances, stands as a compelling reminder of a unique moment in our country’s cultural history.
Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM
Recently, Large Language Models (LLMs) have made significant advancements and
are now widely used across various domains. Unfortunately, there has been a
rising concern that LLMs can be misused to generate harmful or malicious
content. Though a line of research has focused on aligning LLMs with human
values and preventing them from producing inappropriate content, such
alignments are usually vulnerable and can be bypassed by alignment-breaking
attacks via adversarially optimized or handcrafted jailbreaking prompts. In
this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against
potential alignment-breaking attacks. RA-LLM can be directly constructed upon
an existing aligned LLM with a robust alignment checking function, without
requiring any expensive retraining or fine-tuning process of the original LLM.
Furthermore, we also provide a theoretical analysis for RA-LLM to verify its
effectiveness in defending against alignment-breaking attacks. Through
real-world experiments on open-source large language models, we demonstrate
that RA-LLM can successfully defend against both state-of-the-art adversarial
prompts and popular handcrafted jailbreaking prompts by reducing their attack
success rates from nearly 100% to around 10% or less.
Comment: 16 pages, 5 figures, 3 tables.
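The random-dropping idea behind a robust alignment check can be sketched as follows. The function signature, parameter names and defaults are illustrative, not the paper's; the model and refusal detector are passed in as black boxes:

```python
import random

def robust_alignment_check(prompt, generate, is_refusal,
                           drop_ratio=0.3, n_checks=10, threshold=0.5,
                           seed=0):
    """Flag a prompt as malicious if the aligned model refuses enough
    randomly dropped variants of it (a sketch of the RA-LLM idea; the
    drop ratio, check count and threshold here are invented defaults).

    generate:   callable, prompt -> model response (the aligned LLM)
    is_refusal: callable, response -> bool (detects a refusal)
    """
    rng = random.Random(seed)
    words = prompt.split()
    refusals = 0
    for _ in range(n_checks):
        # Randomly drop a fraction of the words and re-query the model.
        kept = [w for w in words if rng.random() > drop_ratio]
        if is_refusal(generate(" ".join(kept))):
            refusals += 1
    # Reject the prompt if the model refuses a majority of the variants.
    return refusals / n_checks >= threshold
```

The intuition is that an adversarial suffix only works intact: dropping random pieces of a jailbreak prompt tends to restore the model's refusal, while a benign prompt stays benign under dropping.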
A Sentence-level Hierarchical BERT Model for Document Classification with Limited Labelled Data
Training deep learning models with limited labelled data is an attractive
scenario for many NLP tasks, including document classification. While with the
recent emergence of BERT, deep learning language models can achieve reasonably
good performance in document classification with few labelled instances, there
is a lack of evidence in the utility of applying BERT-like models on long
document classification. This work introduces a long-text-specific model -- the
Hierarchical BERT Model (HBM) -- that learns sentence-level features of the
text and works well in scenarios with limited labelled data. Various evaluation
experiments have demonstrated that HBM can achieve higher performance in
document classification than the previous state-of-the-art methods with only 50
to 200 labelled instances, especially when documents are long. Also, as an
extra benefit, the salient sentences identified by a learned HBM are
useful as explanations for labelling documents, as shown in a user study.
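The sentence-salience idea can be sketched in miniature: score each sentence vector against a learned query vector and rank by attention weight. This is a toy stand-in only — the real HBM attends over BERT sentence encodings, not the hand-made vectors used here:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def salient_sentences(sent_vecs, query, top_k=2):
    """Rank sentences by attention weight against a query vector.

    sent_vecs: list of sentence vectors (one per sentence)
    query:     a learned query vector of the same dimension
    Returns the indices of the top_k most salient sentences and the
    full attention-weight distribution.
    """
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in sent_vecs]
    weights = softmax(scores)
    ranked = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    return ranked[:top_k], weights
```

The top-ranked indices are the sentences one would surface to a human as the explanation for the document's label.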
PUnifiedNER: a Prompting-based Unified NER System for Diverse Datasets
Much of named entity recognition (NER) research focuses on developing
dataset-specific models based on data from the domain of interest, and a
limited set of related entity types. This is frustrating as each new dataset
requires a new model to be trained and stored. In this work, we present a
"versatile" model -- the Prompting-based Unified NER system (PUnifiedNER) --
that works with data from different domains and can recognise up to 37 entity
types simultaneously, and in principle even more. By
using prompt learning, PUnifiedNER is a novel approach that is able to jointly
train across multiple corpora, implementing intelligent on-demand entity
recognition. Experimental results show that PUnifiedNER leads to significant
prediction benefits compared to dataset-specific models with impressively
reduced model deployment costs. Furthermore, PUnifiedNER achieves
competitive or even better performance than state-of-the-art
domain-specific methods on some datasets. We also perform comprehensive pilot
and ablation studies to support in-depth analysis of each component in
PUnifiedNER.
Comment: Accepted to AAAI 202
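The "on-demand" aspect — asking one model for only the entity types a caller cares about — can be sketched as prompt construction. The template below is invented for illustration; PUnifiedNER's actual prompt templates differ:

```python
def build_ner_prompt(sentence, entity_types):
    """Compose a prompt that requests only the given entity types,
    so a single prompt-based model can serve many datasets on demand
    (illustrative template, not the one used by PUnifiedNER)."""
    wanted = ", ".join(entity_types)
    return (f"Sentence: {sentence}\n"
            f"Extract entities of types: {wanted}\n"
            "Answer:")

# One model, different entity-type requests per call:
p1 = build_ner_prompt("Jack lives in Beijing.", ["person", "location"])
p2 = build_ner_prompt("Jack lives in Beijing.", ["organisation"])
```

Because the requested types are part of the input rather than baked into the label space, the same weights can be trained jointly across corpora with disjoint annotation schemes.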
SDA: Simple Discrete Augmentation for Contrastive Sentence Representation Learning
Contrastive learning methods achieve state-of-the-art results in unsupervised
sentence representation learning. Although playing essential roles in
contrastive learning, data augmentation methods applied on sentences have not
been fully explored. The current state-of-the-art method, SimCSE, utilizes a simple dropout
mechanism as continuous augmentation, which outperforms discrete augmentations
such as cropping, word deletion and synonym replacement. To understand the
underlying rationales, we revisit existing approaches and attempt to
hypothesize the desiderata of reasonable data augmentation methods: balance of
semantic consistency and expression diversity. Based on the hypothesis, we
propose three simple yet effective discrete sentence augmentation methods,
i.e., punctuation insertion, affirmative auxiliary and double negation. The
punctuation marks, auxiliaries and negative words act as minimal noise at
the lexical level to produce diverse sentence expressions. Unlike traditional
augmentation methods which randomly modify the sentence, our augmentation rules
are well designed for generating semantically consistent and grammatically
correct sentences. We conduct extensive experiments on both English and Chinese
semantic textual similarity datasets. The results show the robustness and
effectiveness of the proposed methods.
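Two of the three rules can be sketched directly. The mark set, insertion positions and the double-negation template below are illustrative — the paper's actual rules are more carefully designed:

```python
import random

PUNCT = [",", ".", ";", ":", "!", "?"]

def punctuation_insertion(sentence, n=2, seed=0):
    """Attach n punctuation marks at random word boundaries (a sketch
    of the punctuation-insertion rule; the mark set and positions are
    illustrative). Preserves every word, so meaning is unchanged."""
    rng = random.Random(seed)
    words = sentence.split()
    for _ in range(n):
        pos = rng.randrange(len(words))
        words[pos] = words[pos] + rng.choice(PUNCT)
    return " ".join(words)

def double_negation(sentence):
    """Wrap the sentence in two negations, which preserves its truth
    value (one hand-written template; the paper's rules are richer)."""
    return ("It is not true that it is not the case that "
            f"{sentence[0].lower()}{sentence[1:]}")
```

Both transformations aim at the hypothesized balance: the surface form changes (expression diversity) while the proposition expressed does not (semantic consistency).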
Laser Beam Propagation through Oceanic Turbulence
Using a recently proposed model for refractive index fluctuations in oceanic turbulence, optical beam propagation through seawater is explored. The model provides an accurate depiction of the ocean through the inclusion of both temperature and salinity fluctuations in the refractive index. Several important statistical characteristics are explored, including the spatial coherence radius, angle-of-arrival fluctuations, and beam wander. Theoretical values of these parameters are found under weak fluctuation theory using the Rytov method. The results presented serve as a foundation for the study of optical beam propagation in oceanic turbulence, and may support further research into underwater communication, imaging, and sensing systems.
Deeply Coupled Cross-Modal Prompt Learning
Recent advancements in multimodal foundation models (e.g., CLIP) have
excelled in zero-shot generalization. Prompt tuning involved in the knowledge
transfer from foundation models to downstream tasks has gained significant
attention recently. Existing prompt-tuning methods in cross-modal learning,
however, either focus solely on the language branch, or learn vision-language
interaction through a shallow mechanism. In this context, we propose a Deeply
coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly
accommodates the interplay between vision and language with a Cross-Modal
Prompt Attention (CMPA) mechanism, which enables a progressive mutual exchange
of representations through a well-connected multi-head attention module. We
then conduct comprehensive few-shot learning
experiments on 11 image classification datasets and analyze the robustness to
domain shift as well. Thorough experimental analysis evidently demonstrates the
superb few-shot generalization and compelling domain adaptation capacity of a
well-executed DCP. The code can be found at https://github.com/GingL/CMPA.
Comment: Accepted by ACL 2023 Findings.
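The cross-modal exchange can be sketched as a single-head scaled dot-product attention in each direction, where each branch's prompts attend to the other branch's prompts. The real CMPA is multi-head and applied layer by layer inside CLIP; the shapes and the shared embedding dimension below are invented:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_prompt_attention(lang_prompts, vis_prompts):
    """One single-head sketch of cross-modal prompt attention.

    lang_prompts: (n_lang, d) language-branch prompt vectors
    vis_prompts:  (n_vis, d)  vision-branch prompt vectors
    Each branch's prompts are updated by attending to the other branch's,
    so information flows in both directions.
    """
    d = lang_prompts.shape[-1]
    # Language prompts attend to vision prompts.
    attn_l = softmax(lang_prompts @ vis_prompts.T / np.sqrt(d))
    new_lang = attn_l @ vis_prompts
    # Vision prompts attend to language prompts.
    attn_v = softmax(vis_prompts @ lang_prompts.T / np.sqrt(d))
    new_vis = attn_v @ lang_prompts
    return new_lang, new_vis
```

Output shapes match the input prompt sets, so the exchanged prompts can be fed to the next transformer layer of each branch.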