Learning to be Homo Economicus: Can an LLM Learn Preferences from Choice
This paper explores the use of Large Language Models (LLMs) as decision aids,
with a focus on their ability to learn preferences and provide personalized
recommendations. To establish a baseline, we replicate standard economic
experiments on choice under risk (Choi et al., 2007) with GPT, one of the most
prominent LLMs, prompted to respond as (i) a human decision maker or (ii) a
recommendation system for customers. With these baselines established, GPT is
provided with a sample set of choices and prompted to make recommendations
based on the provided data. From the data generated by GPT, we identify its
(revealed) preferences and explore its ability to learn from data. Our analysis
yields three results. First, GPT's choices are consistent with (expected)
utility maximization theory. Second, GPT can align its recommendations with
people's risk aversion by recommending less risky portfolios to more
risk-averse decision makers, highlighting GPT's potential as a personalized
decision aid. Third, however, GPT demonstrates limited alignment when it comes
to disappointment aversion.
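The underlying choice task (a Choi et al.-style allocation of a budget between two equiprobable assets) can be sketched as expected-utility maximization. The snippet below is a toy illustration, not the paper's code: the CRRA utility function, the prices, and the grid search are all assumptions made here for concreteness. It shows the qualitative pattern the paper tests for, namely that a more risk-averse agent holds a more balanced, less risky portfolio.

```python
import numpy as np

def crra(x, rho):
    """CRRA utility; rho is the coefficient of relative risk aversion (assumed form)."""
    return np.log(x) if rho == 1.0 else x ** (1.0 - rho) / (1.0 - rho)

def optimal_portfolio(budget, p1, p2, rho, prob=0.5, grid=10_000):
    """Split a budget between two equiprobable Arrow securities priced p1 and p2,
    maximizing expected CRRA utility by simple grid search."""
    x1 = np.linspace(1e-6, budget / p1 - 1e-6, grid)
    x2 = (budget - p1 * x1) / p2          # remaining budget buys security 2
    eu = prob * crra(x1, rho) + (1 - prob) * crra(x2, rho)
    i = int(np.argmax(eu))
    return x1[i], x2[i]

# A mildly risk-averse agent tilts heavily toward the cheaper security;
# a highly risk-averse agent (higher rho) chooses a more balanced portfolio.
mild = optimal_portfolio(100, 1.0, 2.0, rho=0.5)
strong = optimal_portfolio(100, 1.0, 2.0, rho=5.0)
```

A recommender aligned with a user's risk aversion would, in this stylized setting, steer high-rho users toward the `strong`-style balanced allocation.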
Extension and Application of Event-driven Process Chain for Information System Security Risk Management
Security engineering is one of the important concerns during system development and should be addressed throughout the whole development process. Several languages for security modeling help with security risk management at the requirements stage. In this thesis, we focus first on the Event-driven Process Chain (EPC), which is used for business process modeling.
More specifically, we investigate how this language supports information system security risk management (ISSRM). The purpose of this investigation is to identify the security requirements that EPC needs to support. As a result, we obtain an alignment table between EPC constructs and ISSRM domain model concepts. Next, we extend the EPC language and its constructs with respect to this alignment table, and call the extended language "Security-Oriented EPC". The extended language contains a new set of constructs that refer to ISSRM concepts. Lastly, after clarifying the importance of security requirements early in system development, we present transformation guidelines to perform forward model translations from Security-Oriented EPC to Mal-Activity Diagrams (MAD). Our proposal is based on systematic and grounded extensions of the EPC language and its interdependency with the ISSRM domain model. The alignment results may help business analysts understand how to model security risks at the system requirements and design stages. Also, the transformation results pave the way for interoperability between modeling languages that are analysed using the same conceptual framework.
An incremental three-pass system combination framework by combining multiple hypothesis alignment methods
System combination has been applied successfully to various machine translation tasks in recent years. As is known, the hypothesis alignment method is a critical factor for the
translation quality of system combination. To date, many effective hypothesis alignment metrics have been proposed and applied to the system combination, such as TER, HMM,
ITER, IHMM, and SSCI. In addition, Minimum Bayes-risk (MBR) decoding and confusion networks (CN) have become state-of-the-art techniques in system combination. In this paper,
we examine different hypothesis alignment approaches and investigate how much the hypothesis alignment results impact system combination, and finally present a three-pass system combination strategy that can combine hypothesis alignment results derived from multiple alignment metrics to generate a better translation. First, the different alignment metrics are applied to align the backbone and hypotheses, and individual CNs are built corresponding to each set of alignment results; then we construct a "super network" by merging the multiple metric-based CNs to generate a consensus output. Finally, a modified MBR network approach is employed to find the best overall translation. Our proposed strategy outperforms the best single confusion network as well as the best single system in our experiments on the NIST Chinese-to-English test set and the WMT2009 English-to-French system combination shared test set.
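The three passes can be sketched in miniature. Everything in the snippet below is a simplification made here, not the paper's implementation: real confusion networks handle insertions and deletions, `align` stands in for any of the alignment metrics (TER, HMM, ITER, IHMM, SSCI), and greedy per-position voting stands in for the MBR-based decoding.

```python
from collections import Counter

def build_cn(backbone, hypotheses, align):
    """Pass 1: one confusion network per metric. For each backbone position,
    collect the hypothesis words that the alignment maps to it."""
    cn = [Counter([w]) for w in backbone]      # backbone word gets one vote
    for hyp in hypotheses:
        for b, h in align(backbone, hyp):      # align returns (bb_pos, hyp_pos) links
            cn[b][hyp[h]] += 1
    return cn

def super_network(cns):
    """Pass 2: merge the metric-specific CNs by pooling votes per position."""
    merged = [Counter() for _ in cns[0]]
    for cn in cns:
        for pos, votes in enumerate(cn):
            merged[pos].update(votes)
    return merged

def consensus(merged):
    """Pass 3: pick the majority word at each position (a greedy stand-in
    for the modified MBR network search)."""
    return [votes.most_common(1)[0][0] for votes in merged]
```

With several alignment metrics plugged in as different `align` functions, words that many metrics agree on accumulate votes across CNs and dominate the merged network.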
An augmented three-pass system combination framework: DCU combination system for WMT 2010
This paper describes the augmented three-pass system combination framework of the Dublin City University (DCU) MT group for the WMT 2010 system combination task. The basic three-pass framework includes building individual confusion networks (CNs), a super network, and a modified Minimum Bayes-risk (mConMBR) decoder. The augmented parts for the WMT 2010 tasks include 1) a rescoring component used to re-rank the N-best lists generated from the individual CNs and the super network, 2) a new hypothesis alignment metric, TERp, used to carry out English-targeted hypothesis alignment, and 3) additional backbone-based CNs employed to increase the diversity of the mConMBR decoding phase. We took part in the combination tasks of English-to-Czech and French-to-English. Experimental results show that our proposed combination framework improved on the best single system by 2.17 absolute BLEU points (13.36 relative) on the English-to-Czech task and 1.52 absolute BLEU points (5.37 relative) on the French-to-English task. We also achieved better performance in human evaluation.
Source-side context-informed hypothesis alignment for combining outputs from machine translation systems
This paper presents a new hypothesis alignment method for combining outputs of multiple machine translation (MT) systems. Traditional hypothesis alignment algorithms such
as TER, HMM and IHMM do not directly utilise the context information of the source side but rather address the alignment issues via the output data itself. In this paper, a source-side context-informed (SSCI) hypothesis alignment method is proposed to address the word alignment and word reordering issues. First of all, the source-target word alignment links are produced as hidden variables by exporting source phrase spans during the translation decoding process. Secondly, a mapping strategy and a normalisation model are employed to acquire the 1-to-1 alignment links and build the confusion network (CN). The source-side context-based method outperforms the state-of-the-art TER-based alignment model in our experiments on the WMT09 English-to-French and NIST Chinese-to-English data sets. Experimental results demonstrate that our proposed approach scores consistently among the best results across different data and language pair conditions.
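The pivoting idea (linking backbone and hypothesis words through the source positions they share, then normalising to 1-to-1 links) can be sketched as follows. This is a toy reconstruction built on assumptions made here, not the paper's actual mapping strategy or normalisation model: ties and unlinked words are ignored, and "most shared source positions" stands in for the real normalisation.

```python
from collections import defaultdict

def pivot_links(bb_src, hyp_src):
    """Derive backbone-to-hypothesis word links by pivoting through the
    source side. bb_src / hyp_src: iterables of (target_pos, source_pos)
    links exported during decoding."""
    src_to_bb = defaultdict(set)
    for b, s in bb_src:
        src_to_bb[s].add(b)
    # Count, for each (backbone word, hypothesis word) pair, how many
    # source positions they are both linked to.
    overlap = defaultdict(lambda: defaultdict(int))
    for h, s in hyp_src:
        for b in src_to_bb[s]:
            overlap[b][h] += 1
    # Normalise many-to-many links to 1-to-1: keep, for each backbone
    # position, the hypothesis position sharing the most source words.
    return {b: max(cands, key=cands.get) for b, cands in overlap.items()}
```

Because the links are mediated by the source sentence, a reordered hypothesis word is still matched to the right backbone word, which is exactly what output-only metrics like TER struggle with.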
Sentence-level quality estimation for MT system combination
This paper provides the system description of the Dublin City University system combination module for our participation in the system combination task in the Second Workshop on Applying Machine Learning Techniques to Optimize the Division of Labour in Hybrid MT (ML4HMT-12). We incorporated a sentence-level quality score, obtained by sentence-level Quality Estimation (QE), as meta information guiding system combination. Instead of using BLEU or (minimum average) TER, we select a backbone for the confusion network using the estimated quality score. For the Spanish-English data, our strategy improved by 0.89 absolute BLEU points over the best single score and by 0.20 absolute BLEU points over the standard system combination strategy.
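The backbone-selection idea can be sketched as below. Both functions are illustrative assumptions made here, not the paper's implementation: `difflib` string similarity stands in for minimum average TER, and `qe_score` is a placeholder for any sentence-level QE estimator (higher meaning better estimated quality).

```python
import difflib

def backbone_by_agreement(hyps):
    """Baseline: pick the hypothesis most similar on average to the others
    (a crude stand-in for minimum-average-TER backbone selection)."""
    def avg_sim(h):
        others = [o for o in hyps if o is not h]
        return sum(difflib.SequenceMatcher(None, h, o).ratio()
                   for o in others) / len(others)
    return max(hyps, key=avg_sim)

def backbone_by_qe(hyps, qe_score):
    """The paper's strategy in miniature: choose the backbone with the
    best estimated sentence-level quality instead of inter-system agreement."""
    return max(hyps, key=qe_score)
```

The point of the switch is that agreement-based selection rewards hypotheses that resemble the pool, while a QE score can favour an outlier that is actually the better translation.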
Topic modeling-based domain adaptation for system combination
This paper gives the system description of the domain adaptation team of Dublin City University for our participation in the system combination task in the Second Workshop on Applying Machine Learning Techniques to Optimise the Division of Labour in Hybrid MT (ML4HMT-12). We used the results of unsupervised document classification as meta information for the system combination module. For the Spanish-English data, our strategy achieved 26.33 BLEU points, a 0.33-point absolute improvement over the standard confusion-network-based system combination. This was the best score in terms of BLEU among the six participants in ML4HMT-12.
Large-scale Hierarchical Alignment for Data-driven Text Rewriting
We propose a simple unsupervised method for extracting pseudo-parallel
monolingual sentence pairs from comparable corpora representative of two
different text styles, such as news articles and scientific papers. Our
approach does not require a seed parallel corpus, but instead relies solely on
hierarchical search over pre-trained embeddings of documents and sentences. We
demonstrate the effectiveness of our method through automatic and extrinsic
evaluation on text simplification from Normal to Simple Wikipedia. We
show that pseudo-parallel sentences extracted with our method not only
supplement existing parallel data, but can even lead to competitive performance
on their own. Comment: RANLP 201
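The two-level search can be sketched as below. The code is an illustrative assumption, not the authors' implementation: document and sentence embeddings are taken as given NumPy arrays, greedy nearest-neighbour matching stands in for the paper's hierarchical search, and the similarity threshold `tau` is hypothetical.

```python
import numpy as np

def cosine_matrix(A, B):
    """Pairwise cosine similarity between two stacks of embeddings."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def hierarchical_pairs(doc_emb_a, doc_emb_b, sent_emb_a, sent_emb_b, tau=0.8):
    """Extract pseudo-parallel sentence pairs hierarchically: first match
    each style-A document to its nearest style-B document, then pair
    sentences inside matched documents whose similarity clears tau.
    sent_emb_*: dict mapping doc index -> (n_sentences, dim) array."""
    pairs = []
    doc_sim = cosine_matrix(doc_emb_a, doc_emb_b)
    for i in range(len(doc_emb_a)):
        j = int(np.argmax(doc_sim[i]))            # nearest document in corpus B
        sim = cosine_matrix(sent_emb_a[i], sent_emb_b[j])
        for s in range(sim.shape[0]):
            t = int(np.argmax(sim[s]))            # nearest sentence in doc j
            if sim[s, t] >= tau:
                pairs.append((i, s, j, t))        # (doc_a, sent_a, doc_b, sent_b)
    return pairs
```

Restricting the sentence search to matched documents is what makes the method scale: it avoids comparing every sentence in one corpus against every sentence in the other.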