
    Investment benchmarks: their ontological and epistemological roots

    This paper investigates the ontology and epistemological justification behind investment benchmarks through the lens of a logical positivist paradigm. This is relevant because the field of finance is often critiqued by moral philosophers and has to justify itself as having a philosophical basis. The paper opens with a Greek δείκνυμι thought experiment on the social good of such benchmarks, intended to draw the reader into a mindset that questions their epistemological underpinning. It asks what the nature of benchmarks is and answers this within the context of the broad academic finance tradition, which is dominated by positivists, empiricists and a few critical realists.

    Ontology Driven Knowledge Discovery Process: a proposal to integrate Ontology Engineering and KDD

    This paper is concerned with the integration of ontology engineering and the process of knowledge discovery in databases (KDD). It presents a hybrid life cycle, the Ontology Driven Knowledge Discovery process and methodology (ODKD), which leverages both ontology engineering and KDD, taking into consideration the best industry and research practices. A brief application of the life cycle is described at the end of the paper.

    An Evolutionary Argument for a Self-Explanatory, Benevolent Metaphysics

    In this paper, a metaphysics is proposed that includes everything that can be represented by a well-founded multiset. It is shown that this metaphysics, apart from being self-explanatory, is also benevolent. Paradoxically, it turns out that the probability that we were born into a life other than our own is zero. More insights are gained by inducing properties from a metaphysics that is not self-explanatory. In particular, digital metaphysics is analyzed, which claims that only computable things exist. First of all, it is shown that digital metaphysics contradicts itself by leading to the conclusion that the shortest computer program that computes the world is infinitely long. This means that the Church-Turing conjecture must be false. Secondly, the applicability of Occam's razor is explained by evolution: in an evolving physics it can appear at each moment as if the world is caused by only finitely many things. Thirdly and most importantly, this metaphysics is benevolent in the sense that it organizes itself to fulfill the deepest wishes of its observers. Fourthly, universal computers with an infinite memory capacity cannot be built in the world. And finally, all the properties of the world, both good and bad, can be explained by evolutionary conservation.

    Toward better public health reporting using existing off the shelf approaches: The value of medical dictionaries in automated cancer detection using plaintext medical data

    Objectives: Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and “off the shelf” tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. Materials and methods: We evaluated the detection of cancer cases from free-text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristic (ROC) curve. Results: Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70% and 90%. The source of features and the feature subset size had no impact on the performance of a decision model. Conclusion: Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve results comparable to those built using medical dictionaries. Overall, this suggests that existing “off the shelf” approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches.
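
    A minimal sketch of the non-dictionary route the abstract describes: features drawn from the report text itself, automatic feature selection at a fixed subset size, and an off-the-shelf classifier. Assuming scikit-learn; the report snippets, labels and pipeline choices below are hypothetical placeholders, not the study's data or tooling.

    # Hedged illustration only: in-sample evaluation on toy data; the study
    # itself compared 4 feature subset sizes and 5 classification algorithms.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix, roc_auc_score
    from sklearn.pipeline import make_pipeline

    reports = [
        "infiltrating ductal carcinoma, margins involved",
        "benign fibroadipose tissue, no tumor seen",
        "adenocarcinoma identified in core biopsy",
        "chronic inflammation, negative for malignancy",
        "squamous cell carcinoma, poorly differentiated",
        "normal colonic mucosa, no dysplasia",
        "metastatic melanoma in lymph node",
        "reactive lymphoid hyperplasia, no malignancy",
    ]
    labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = cancer case, 0 = non-case

    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),   # features from the text itself
        SelectKBest(chi2, k=10),               # one fixed feature subset size
        LogisticRegression(max_iter=1000),     # one off-the-shelf classifier
    )
    model.fit(reports, labels)

    probs = model.predict_proba(reports)[:, 1]
    pred = (probs > 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
    print("sensitivity:", tp / (tp + fn))
    print("specificity:", tn / (tn + fp))
    print("AUC:", roc_auc_score(labels, probs))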

    Effectiveness of conservation agriculture (tillage vs. vegetal soil cover) to reduce water erosion in maize cultivation (Zea mays L.): An experimental study in the sub-humid uplands of Guatemala.

    Cultivated uplands in tropical latitudes are severely affected by soil water erosion. Conservation agriculture (CA) is specifically intended to control erosion. The aim of the present study is to analyse the effectiveness of CA measures in reducing erosion in maize cultivation (Zea mays L.) on andosols in the mountains of southern Guatemala. The study was conducted over a three-year period, from 2017 to 2019, on three experimental plots managed under conventional tillage (CT), reduced tillage (RT) and no-tillage (NT). The results showed different rates of eroded soil surface between the three management systems: 73.2% under CT, 41.3% under RT and 20.4% under NT. Analysis of the complete database (n = 36) showed that litter cover (ryl.p = –0.86, p < 0.001) and soil disturbance (ryp.l = 0.57, p < 0.001) were, in that order, the factors with the greatest explanatory power for the eroded surface. The segmented analysis (n = 12) showed that the management system adopted had a decisive influence on the ground cover (litter and weed cover) and, therefore, on the soil erosion. Under CT, the eroded surface was correlated only with the weed cover (ryw.l = –0.68, p < 0.05), under NT only with the litter cover (ryl.w = –0.89, p < 0.001), and under RT the erosion did not correlate with either of the vegetal layers. Three conclusions are derived from this study. First, the litter layer was the key explanatory factor of erosion. Second, this factor is highly influenced by the agricultural management system: the proportion and distribution of the litter layer in each management situation were key to explaining the different soil erosion rates between the three management systems. And finally, for the study area, soil management under NT with a dense and well-distributed litter cover is proposed.

    This work was supported by the Project ‘Transfer-monitoring-evaluation of soil erosion control measures for sustainable agricultural development in rural communities with high vulnerability to climate change in Chimaltenango (Guatemala)’ (Ref. 2020UI005), funded by the Andalusian Agency for International Cooperation (AACID), Spain, and by the Association for Welfare, Progress and Development, Guatemala.
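
    A worked sketch of the first-order partial correlations reported above (the r with a dotted subscript, e.g. ryl.p: eroded surface vs. litter cover, controlling for soil disturbance). Only the formula is standard; the data below are synthetic placeholders, not the study's measurements.

    import numpy as np

    def partial_corr(x, y, z):
        """First-order partial correlation r_xy.z from pairwise Pearson r."""
        r_xy = np.corrcoef(x, y)[0, 1]
        r_xz = np.corrcoef(x, z)[0, 1]
        r_yz = np.corrcoef(y, z)[0, 1]
        return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

    rng = np.random.default_rng(0)
    disturbance = rng.uniform(0, 1, 36)                      # p: soil disturbance
    litter = 1 - disturbance + rng.normal(0, 0.1, 36)        # l: litter cover
    eroded = (0.8 * disturbance - 0.6 * litter
              + rng.normal(0, 0.1, 36))                      # y: eroded surface

    print("r_yl.p =", partial_corr(eroded, litter, disturbance))  # litter, p fixed
    print("r_yp.l =", partial_corr(eroded, disturbance, litter))  # disturbance, l fixed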

    The bias bias

    In marketing and finance, surprisingly simple models sometimes predict more accurately than more complex, sophisticated models. Here, we address the question of when and why simple models succeed or fail by framing the forecasting problem in terms of the bias–variance dilemma. Controllable error in forecasting consists of two components, the “bias” and the “variance”. We argue that the benefits of simplicity are often overlooked because of a pervasive “bias bias”: the importance of the bias component of prediction error is inflated, and the variance component of prediction error, which reflects an oversensitivity of a model to different samples from the same population, is neglected. Using the study of cognitive heuristics, we discuss how to reduce variance by ignoring weights, attributes, and dependencies between attributes, and thus make better decisions. Bias and variance, we argue, offer a more insightful perspective on the benefits of simplicity than Occam's razor.
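
    For reference, the squared-error decomposition behind this framing. This is the standard textbook identity, not an equation taken from the paper: for data y = f(x) + ε with noise variance σ², a model's expected prediction error splits as

    \mathbb{E}\big[(y - \hat{f}(x))^2\big]
      = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
      + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
      + \sigma^2

    Simple models typically accept a larger bias term in exchange for a smaller variance term, which is why they can predict out of sample more accurately than flexible models fitted to the same limited samples.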

    Is it the end of the technology acceptance model in the era of generative artificial intelligence?

    Purpose: The technology acceptance model (TAM) is a widely used framework explaining why users accept new technologies. Still, its relevance is questioned because of evolving consumer behavior, demographics and technology. In contrast to a research paper or systematic literature review, the purpose of this critical reflection paper is to discuss TAM's relevance and limitations in hospitality and tourism research. Design/methodology/approach: This paper uses a critical reflective approach, enabling a comprehensive review and synthesis of recent academic literature on TAM. The critical evaluation encompasses its historical trajectory, evolutionary growth, identified limitations and, more specifically, its relevance in the context of hospitality and tourism research. Findings: TAM's limitations within the hospitality and tourism context revolve around its individual-centric perspective, limited scope, static nature, cultural applicability and reliance on self-reported measures. Research limitations/implications: To optimize TAM's efficacy, the authors propose several strategic recommendations. These include embedding TAM within the specific context of the industry, delving into TAM-driven artificial intelligence adoption, integrating industry-specific factors, acknowledging cultural nuances and using comprehensive research methods, such as a mixed-methods approach. It is imperative for researchers to critically assess TAM's suitability for their studies and be open to exploring alternative models or methods that can adeptly navigate the distinctive dynamics of the industry. Originality/value: This critical reflection paper prompts a profound exploration of technology adoption within the dynamic hospitality and tourism sector, makes insightful inquiries into TAM's future potential and presents recommendations.

    Machine Learning Approach for Risk-Based Inspection Screening Assessment

    Risk-based inspection (RBI) screening assessment is used to identify equipment that makes a significant contribution to the system's total risk of failure (RoF), so that the detailed RBI assessment can focus on analyzing higher-risk equipment. Due to its qualitative nature and high dependency on sound engineering judgment, screening assessment is vulnerable to human biases and errors; it is therefore subject to output variability, which threatens the integrity of the assets. This paper attempts to tackle these challenges by utilizing a machine learning approach to conduct the screening assessment. A case study using a dataset of RBI assessments for oil and gas production and processing units is provided to illustrate the development of an intelligent system, based on a machine learning model, for performing RBI screening assessment. The best-performing model achieves accuracy and precision of 92.33% and 84.58%, respectively. A comparative analysis between the performance of the intelligent system and the conventional assessment is performed to examine the benefits of applying the machine learning approach in RBI screening assessment. The result shows that the application of the machine learning approach potentially improves the quality of the conventional RBI screening assessment output by reducing output variability and increasing accuracy and precision.
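
    A minimal sketch of the kind of intelligent screening system the abstract describes: a classifier maps equipment attributes to a screening outcome so that results do not hinge on individual judgment. Assuming scikit-learn; the feature names, labeling rule and model choice below are hypothetical placeholders, not the paper's oil and gas dataset or its best-performing model.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, precision_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 500
    X = np.column_stack([
        rng.uniform(0, 30, n),    # service age, years (hypothetical feature)
        rng.uniform(0, 1, n),     # corrosion-rate index (hypothetical feature)
        rng.integers(0, 2, n),    # hazardous fluid flag (hypothetical feature)
    ])
    # Synthetic stand-in for historical engineering-judgment labels
    # (1 = significant contributor to system RoF, 0 = screened out).
    y = ((0.02 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2]
          + rng.normal(0, 0.1, n)) > 0.7).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("accuracy: ", accuracy_score(y_te, pred))
    print("precision:", precision_score(y_te, pred))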

    Computationally Modeling an Incremental Learning Account of Semantic Interference through Phonological Influence

    Computer models play a vital role in providing ways to effectively simulate complex systems and to test scientific theories and hypotheses. One major area of success for neural network models in particular has been in cognitive neuroscience, for modeling semantic interference effects in memory. When a person sees a picture of an object such as a car multiple times, the memory of that object is primed so that it can be retrieved more effectively. When a picture of a similar object, such as a truck, that shares semantic features with the primed object is then seen, the primed memory of the car interferes with the retrieval of the truck. This is known as semantic interference. A recent hypothesis by Preusse et al. (2013) puts forward that semantic interference is further increased by the sharing of phonemes between two words. In this thesis, a new phonological computer model of lexical retrieval is developed based on this hypothesis, using a two-layer feedforward artificial neural network (ANN). The new model can represent semantic interference effects through increased lexical activation by phonological features. Simulations were performed in a MATLAB environment, each using a different variant of the phonological model. The simulations tested three conditions of activating semantic and phonological features. Results demonstrated that semantic interference is significantly increased when phonological features are activated alongside semantic features versus activating semantic features alone, thus supporting the hypothesis of Preusse et al. (2013). The characteristics of the new ANN model could make it useful in studying other phenomena related to memory and learning.
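
    A minimal numpy sketch of the effect being modeled, not the thesis's MATLAB model: lexical units receive feedforward activation from a feature layer, and a primed competitor's activation rises further when phonological features are shared with the target. All weights and feature vectors are illustrative placeholders.

    import numpy as np

    def activation(x):
        return 1 / (1 + np.exp(-x))   # logistic unit at the lexical layer

    # Columns: 4 semantic features, then 4 phonological features.
    # Rows: feedforward weights into two lexical units.
    W = np.array([
        [1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0],  # target word
        [1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0],  # primed competitor: shares
    ])                                              # semantics and one phoneme

    semantic_only = np.array([1, 1, 1, 0, 0, 0, 0, 0])        # condition 1
    semantic_plus_phono = np.array([1, 1, 1, 0, 1, 1, 0, 0])  # condition 2

    for name, x in [("semantic only", semantic_only),
                    ("semantic + phonological", semantic_plus_phono)]:
        target, competitor = activation(W @ x)
        print(f"{name}: target={target:.3f} competitor={competitor:.3f}")

    In this toy setup the competitor's activation rises from about 0.88 to 0.95 once the shared phonological features are also switched on, mirroring the increased interference reported above.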