Disclosure measurement in the empirical accounting literature: A review article
This is the first study to provide an extensive and critical review of different
techniques used in the empirical accounting literature to measure disclosure. The
purpose is to help future researchers to identify exemplars and to select suitable
techniques or to develop their own. It also provides an in-depth discussion of current measurement issues related to disclosure and identifies gaps in the literature that future research may aim to cover.
Forecasting Mortality Rate Using a Neural Network with Fuzzy Inference System
Various methods have been developed to improve mortality forecasts. The authors propose a neuro-fuzzy model to forecast mortality. The forecasting is carried out by an ANFIS model that uses a first-order Sugeno-type FIS. The model predicts yearly mortality in a one-step-ahead prediction scheme. Trial and error was used to decide which type of membership function best describes the model and yields the minimum error. The output of the model is the next year's mortality. The results are presented and compared using three kinds of error: RMSE, MAE, and MAPE. The ANFIS model gives good results with two gbell membership functions and 500 epochs. Finally, the ANFIS model gives better results than the AR and ARMA models.
Keywords: ANFIS, Forecasting, Mortality, Modeling.
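The one-step-ahead evaluation described above can be illustrated with a minimal sketch. The error measures (RMSE, MAE, MAPE) and the generalised bell membership function are standard definitions; the function names and the synthetic data are illustrative, not taken from the paper:

```python
import numpy as np

def forecast_errors(actual, predicted):
    """RMSE, MAE and MAPE (%) for a one-step-ahead forecast."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    mape = float(np.mean(np.abs(err / actual)) * 100.0)
    return rmse, mae, mape

def gbell(x, a, b, c):
    """Generalised bell membership function, as used in a
    Sugeno-type ANFIS: 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Illustrative yearly mortality rates and one-step-ahead predictions.
actual = [0.0102, 0.0098, 0.0095, 0.0093]
predicted = [0.0100, 0.0099, 0.0096, 0.0092]
rmse, mae, mape = forecast_errors(actual, predicted)
```

A full ANFIS would tune the `a`, `b`, `c` parameters of each membership function during training; the sketch only shows the quantities the abstract compares.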
Organic Farming in Europe by 2010: Scenarios for the future
How will organic farming in Europe evolve by the year 2010? The answer provides a basis for the development of different policy options and for anticipating the future relative competitiveness of organic and conventional farming. The authors tackle the question using an innovative approach based on scenario analysis, offering the reader a range of scenarios that encompass the main possible evolutions of the organic farming sector.
This book constitutes an innovative and reliable decision-supporting tool for policy makers, farmers and the private sector. Researchers and students operating in the field of agricultural economics will also benefit from the methodological approach adopted for the scenario analysis.
Acquiring Correct Knowledge for Natural Language Generation
Natural language generation (NLG) systems are computer software systems that
produce texts in English and other human languages, often from non-linguistic
input data. NLG systems, like most AI systems, need substantial amounts of
knowledge. However, our experience in two NLG projects suggests that it is
difficult to acquire correct knowledge for NLG systems; indeed, every knowledge
acquisition (KA) technique we tried had significant problems. In general terms,
these problems were due to the complexity, novelty, and poorly understood
nature of the tasks our systems attempted, and were worsened by the fact that
people write so differently. This meant in particular that corpus-based KA
approaches suffered because it was impossible to assemble a sizable corpus of
high-quality consistent manually written texts in our domains; and structured
expert-oriented KA techniques suffered because experts disagreed and because we
could not get enough information about special and unusual cases to build
robust systems. We believe that such problems are likely to affect many other
NLG systems as well. In the long term, we hope that new KA techniques may
emerge to help NLG system builders. In the shorter term, we believe that
understanding how individual KA techniques can fail, and using a mixture of
different KA techniques with different strengths and weaknesses, can help
developers acquire NLG knowledge that is mostly correct.
Exploiting Qualitative Information for Decision Support in Scenario Analysis
The use of scenario analysis (SA) to assist decision makers and stakeholders has grown over the last few years, mainly by exploiting qualitative information provided by experts. In this study, we present SA based on qualitative data for strategy planning. We discuss the potential of SA as a decision-support tool, provide a structured approach for the interpretation of SA data, and offer an empirical validation of expert evaluations that can help to measure the consistency of the analysis. An application to a specific case study is provided, with reference to the European organic farming business.
Forecasting Long-Term Government Bond Yields: An Application of Statistical and AI Models
This paper evaluates several artificial intelligence and classical algorithms on their ability to forecast the monthly yield of US 10-year Treasury bonds from a set of four economic indicators. Due to the complexity of the prediction problem, the task represents a challenging test for the algorithms under evaluation. At the same time, the study is of particular significance for the important and paradigmatic role played by the US market in the world economy. Four data-driven artificial intelligence approaches are considered, namely, a manually built fuzzy logic model, a machine-learned fuzzy logic model, a self-organising map model and a multi-layer perceptron model. Their performance is compared with that of two classical approaches, namely, a statistical ARIMA model and an econometric error correction model. The algorithms are evaluated on a complete series of end-month US 10-year Treasury bond yields and economic indicators from 1986:1 to 2004:12. In terms of prediction accuracy and reliability of the modelling procedure, the best results are obtained by the three parametric regression algorithms, namely the econometric, the statistical and the multi-layer perceptron model. Due to the sparseness of the learning data samples, the manual and the automatic fuzzy logic approaches fail to follow with adequate precision the range of variation of the US 10-year Treasury bonds. For similar reasons, the self-organising map model gives an unsatisfactory performance. Analysis of the results indicates that the econometric model has a slight edge over the statistical and the multi-layer perceptron models. This suggests that pure data-driven induction may not fully capture the complicated mechanisms ruling the changes in interest rates. Overall, the prediction accuracy of the best models is only marginally better than that of a basic one-step lag predictor.
This result highlights the difficulty of the modelling task and, in general, the difficulty of building reliable predictors for financial markets.
Keywords: interest rates; forecasting; neural networks; fuzzy logic.
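The one-step lag predictor mentioned above, against which the abstract benchmarks all models, can be sketched in a few lines. The function names and the short yield series are illustrative, not from the paper:

```python
import numpy as np

def lag_baseline(series):
    """Naive one-step lag predictor: the forecast for month t+1
    is simply the observed value at month t."""
    series = np.asarray(series, dtype=float)
    return series[:-1]

def rmse(actual, predicted):
    """Root mean squared error between two aligned series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Hypothetical end-month yields (percent). Any candidate model is
# only useful if its RMSE beats this trivial baseline.
yields = [7.1, 7.3, 7.0, 6.8, 6.9]
preds = lag_baseline(yields)          # forecasts for months 2..5
baseline_rmse = rmse(yields[1:], preds)
```

The abstract's finding that the best models are "only marginally better" than this baseline is a common outcome for near-random-walk financial series, which is why the lag predictor is the standard sanity check.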
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures adopted in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.