Using Wittgenstein's family resemblance principle to learn exemplars
The introduction of the notion of family resemblance represented a major shift in Wittgenstein's thoughts on the meaning of words, moving away from a belief that words
were well defined, to a view that words denoted less well defined categories of meaning.
This paper presents the use of the notion of family resemblance in the area of machine learning as an example of the benefits that can accrue from adopting the kind of paradigm shift taken by Wittgenstein. The paper presents a model capable of learning exemplars using the principle of family resemblance and adopting Bayesian networks for a representation of exemplars. An empirical evaluation is presented on three data sets and shows promising results that suggest that previous assumptions about the way we categorise need reopening.
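The family-resemblance principle the abstract invokes can be shown with a toy sketch (a deliberate simplification, not the paper's Bayesian-network model): an item is assigned to the category whose stored exemplars share the most features with it, rather than to a category defined by necessary and sufficient features. All categories and features below are invented for illustration.

```python
# Toy family-resemblance categorisation: membership by feature overlap with
# stored exemplars, not by a fixed definition. Data here is illustrative only.

def resemblance(item, exemplar):
    """Count the features the item shares with one stored exemplar."""
    return len(item & exemplar)

def classify(item, categories):
    """Assign the category whose best-matching exemplar overlaps most with the item."""
    return max(
        categories,
        key=lambda cat: max(resemblance(item, ex) for ex in categories[cat]),
    )

categories = {
    "game": [{"rules", "competition", "winner"}, {"rules", "play", "luck"}],
    "tool": [{"handle", "metal", "grip"}, {"handle", "blade"}],
}

print(classify({"play", "luck", "dice"}, categories))   # prints "game"
```

Note that the test item shares no single feature with every "game" exemplar; it is classified by resemblance to one of them, which is exactly the point of the family-resemblance view.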
Heterogeneous beliefs and instability
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000057/
While Rational Expectations have dominated the paradigm of expectations formation,
they have been more recently challenged on the empirical ground such as, for
instance, in the dynamics of the exchange rate. This challenge has led to the
introduction of heterogeneous expectations in economic modeling. More specifically,
the forecasts of the market participants have been drawn from competing views. Two
behaviours are usually considered: agents are either fundamentalist or chartist.
Moreover, the possibility of switching from one behaviour to the other one is also
assumed.
In a simple cobweb model, we study the dynamics associated with different
endogenous switching processes based on the path of prices. We provide an example
with an asymmetric endogenous switching process built on the dynamics of past
prices. This example confirms the widespread belief that fundamentalist market
behaviour, as compared with that of chartists, tends to promote market stability.
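A minimal simulation sketch of the kind of model described: a linear cobweb market in which chartists extrapolate the recent price trend, fundamentalists expect the equilibrium price, and the weight on each forecast switches endogenously with the path of past prices. The functional forms, parameter values, and the particular switching rule are all assumptions made for illustration, not the paper's specification.

```python
import math

# Illustrative cobweb market with fundamentalist/chartist switching.
# Parameters and the switching rule are assumptions, not the paper's.

a, b = 10.0, 1.0             # linear demand: D(p) = a - b*p
c, d = 1.0, 0.8              # linear supply: S(pe) = c + d*pe (pe = expected price)
g = 1.2                      # chartist trend-extrapolation strength
p_star = (a - c) / (b + d)   # fundamental (equilibrium) price

def step(p_prev, p_prev2):
    chartist = p_prev + g * (p_prev - p_prev2)   # extrapolate the recent trend
    fundamentalist = p_star                      # expect a return to fundamentals
    # Endogenous switching on past prices: the larger the recent deviation
    # from the fundamental, the more agents use the fundamentalist forecast.
    w_fund = 1.0 - math.exp(-abs(p_prev - p_star))
    pe = w_fund * fundamentalist + (1.0 - w_fund) * chartist
    return (a - c - d * pe) / b                  # market-clearing price

prices = [p_star + 1.0, p_star + 1.5]            # start away from equilibrium
for _ in range(100):
    prices.append(step(prices[-1], prices[-2]))
```

With these parameters, pure chartist expectations (w_fund fixed at 0) give explosive oscillations; the switching rule keeps the price path bounded near the fundamental, illustrating the stabilising role attributed to fundamentalist behaviour.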
Detecting Causes of Variances In Operational Outputs of Manufacturing Organizations: A Forensic Accounting Investigation Approach.
With the introduction of the International Standard on Auditing number 240 (ISA240)
there has been a paradigm shift in auditing as auditors are now required to identify and
assess the risks of material misstatements due to fraud at the financial statement level and
to evaluate the sufficiency, implementation and the effectiveness of the controls related to
those assessed prone to fraud. This, of course, implies that statutory audit must now take
the garb of forensic investigations. The problem with the present system of forensic
investigation is that it is focused more on financial transactions than on the totality of the
entity's operations and often neglects areas where there have been constant leakages
of other organizational resources that are of financial consequences but which are not
easily detected with a normal analysis of the financial statement. This paper attempts to
offer suggestions, using a real case problem, on how to apply forensic accounting in
investigating variances and suspected fraudulent activities in manufacturing processes. It
employs both empirical and supervised experimental modules integrated with the normal
audit tools in unearthing fraudulent acts perpetrated over many accounting periods
Hierarchical elimination-by-aspects and nested logit models of stated preferences for alternative fuel vehicles
1. INTRODUCTION
Since the late 1960s, transport demand analysis has been the context for significant developments in model forms for the representation of discrete choice behaviour. Such developments have adhered almost exclusively to
the behavioural paradigm of Random Utility Maximisation (RUM), first proposed by Marschak (1960) and Block and Marschak (1960). A common argument for the allegiance to RUM is that it ensures consistency with the fundamental axioms of microeconomic consumer theory and, it follows,
permits an interface between the demand model and the concepts of welfare economics (e.g. Koppelman and Wen, 2001). The desire to better represent observed choice, which has driven developments in RUM models, has been somewhat at odds, however, with the frequent assault on the utility maximisation paradigm, and by implication
RUM, from a range of literatures. This critique has challenged the empirical validity of the fundamental axioms (e.g. Kahneman and Tversky, 2000; McIntosh and Ryan, 2002; Saelensminde, 1999) and, more generally, the
realism of the notion of instrumental rationality inherent in utility maximisation (e.g. Hargreaves-Heap, 1992; McFadden, 1999; Camerer, 1998). Emanating from these literatures has been an alternative family of so-called
non-RUM models, which seek to offer greater realism in the representation of how individuals actually process choice tasks. The workshop on Methodological Developments at the 2000 Conference of the International Association for Travel Behaviour Research concluded: 'Non-RUM models
deserve to be evaluated side-by-side with RUM models to determine their practicality, ability to describe behaviour, and usefulness for transportation policy. The research agenda should include tests of these models' (Bolduc and McFadden, 2001, p.326). The present paper, together with a companion paper, Batley and Daly (2003), offers a timely contribution to this research
priority. Batley and Daly (2003) present a detailed account of the theoretical derivation of RUM, and consider the relationships of two specific RUM forms,
nested logit [NL] (Ben-Akiva, 1974; Williams, 1977; Daly and Zachary, 1976; McFadden, 1978) and recursive nested extreme value [RNEV] (Daly, 2001; Bierlaire, 2002; Daly and Bierlaire, 2003), to two specific non-RUM forms,
elimination-by-aspects [EBA] (Tversky, 1972a, 1972b) and hierarchical EBA [HEBA] (Tversky and Sattath, 1979). In particular, Batley and Daly (2003) establish conditions under which NL and RNEV derive equivalent choice
probabilities to HEBA and EBA, respectively. These findings would seem to ameliorate the concern that the application of RUM models to data generated by non-RUM choice processes could introduce significant biases. That
aside, substantive issues remain as to how non-RUM models can best be specified so as to yield useful and robust information in both estimation and forecasting contexts, and how their empirical performance compares with
RUM models. Such issues are the focus of the present paper, which applies non-RUM models to a real empirical context
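The elimination-by-aspects rule named above can be sketched briefly: aspects are drawn with probability proportional to their weights, and every alternative lacking the drawn aspect is eliminated, until one alternative remains. This is a generic illustration of Tversky's EBA procedure; the alternatives, aspects, and weights below are invented, and are not the paper's stated-preference data.

```python
import random

# Sketch of Tversky's elimination-by-aspects (EBA) choice rule.
# Alternatives, aspects, and weights are invented for illustration.

def eba_choose(alternatives, weights, rng):
    """alternatives: name -> set of aspects; weights: aspect -> positive weight."""
    remaining = dict(alternatives)
    while len(remaining) > 1:
        # Aspects shared by every remaining alternative cannot discriminate.
        common = set.intersection(*remaining.values())
        live = sorted(set().union(*remaining.values()) - common)
        if not live:                      # aspect-identical: choose at random
            return rng.choice(sorted(remaining))
        # Draw an aspect with probability proportional to its weight...
        total = sum(weights[a] for a in live)
        r = rng.random() * total
        for aspect in live:
            r -= weights[aspect]
            if r <= 0:
                break
        # ...and eliminate every alternative lacking that aspect.
        remaining = {k: v for k, v in remaining.items() if aspect in v}
    return next(iter(remaining))

vehicles = {
    "electric": {"quiet", "cheap_to_run", "short_range"},
    "hybrid":   {"quiet", "long_range"},
    "petrol":   {"long_range", "cheap_to_buy"},
}
weights = {"quiet": 3, "cheap_to_run": 2, "short_range": 1,
           "long_range": 2, "cheap_to_buy": 1}

choice = eba_choose(vehicles, weights, random.Random(0))
```

Each drawn aspect strictly shrinks the choice set (an aspect common to all remaining alternatives is never drawn), so the loop always terminates; the resulting choice probabilities are those of the EBA process rather than of a random-utility model.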
Presentation of Design Science Research in Information Systems and Engineering Disciplines - Empirical Investigation of Common Structures and Differences
Design Science Research is a research paradigm suitable for application-oriented disciplines that develop (construct) artifacts as solutions to practical problems. Design Science Research is known to be a mainstream research paradigm in engineering and other disciplines. In recent years, Design Science Research (DSR) has become an established research approach in the field of Information Systems (IS). Nevertheless, there is an ongoing debate about the methodology and guidelines for Design Science Research in Information Systems (IS-DSR). This paper proposes to gather and leverage insights from other design disciplines, such as engineering, to provide clarity and inspiration for IS-DSR and to work towards a common understanding of design science research across disciplines. This paper provides results of an initial empirical analysis of research literature from engineering disciplines. The results provide suggestions for validating DSR results and contribute to the understanding of research guidelines for DSR. In addition, a novel, fine-grained, and operational framework for analyzing DSR papers and projects is presented. The third contribution is a proposal to develop a common basic schema for design science research, analogous to the standard IMRaD schema for empirical research. Based on the analysis of samples of papers, this paper proposes IDEaD as the standard schema for Design Science Research, i.e., Introduction, Description, Evaluation, and Discussion
The Need for Empirically-Led Synthetic Philosophy
The problem of unifying knowledge represents the frontier between science and philosophy. Science approaches the problem analytically bottom-up whereas, prior to the end of the nineteenth century, philosophy approached the problem synthetically top-down. In the late nineteenth century, the approach of speculative metaphysics was rejected outright by science. Unfortunately, in the rush for science to break with speculative metaphysics, synthetic or top-down philosophy as a whole was rejected. This meant not only the rejection of speculative metaphysics, but also the implicit rejection of empirically-led synthetic philosophy and the philosophy of nature. Since a change in the paradigm of science requires a change in the philosophy of nature underpinning science, the rejection of the philosophy of nature closes science to the possibility of a paradigm change. Given the foundational problems faced by science, there is a need for empirically-led synthetic philosophy in order to discover a new empirically-based philosophy of nature. Such a philosophy of nature may open science to the possibility of a paradigm change