
    Another reason why the efficient market hypothesis is fuzzy

    This paper uses performance evaluation to test the validity of the efficient market hypothesis (EMH) in the hedge fund universe. The paper develops a fuzzy set based performance analysis and portfolio optimisation and compares the results with those obtained with traditional probability methods (frequentist and Bayesian models). We consider a data set of monthly investment strategy indices published by the Hedge Fund Research group. The data set spans from January 1995 to June 2012. We divide this sample period into four overlapping sub-sample periods that contain different economic market trends. To investigate the presence of managerial skills among hedge fund managers, we first distinguish between outperformance, selectivity and market timing skills. We then employ three different econometric models: frequentist, Bayesian and fuzzy regression, in order to estimate outperformance, selectivity and market timing skills using both linear and quadratic CAPM models. Persistence in performance is examined in three different ways: a contingency table, a chi-square test and a cross-sectional auto-regression technique. The findings obtained with the probabilistic methods contradict the EMH and suggest that the market is not always efficient: it is possible to earn abnormal rates of return if one exploits mispricing in the market and makes use of specific investment strategies. However, the results obtained with the fuzzy set based performance analysis support the EMH, according to which no economic agent can earn a risk-adjusted abnormal rate of return. The set of optimal investment strategies under fuzzy set theory results in a well-diversified portfolio with an expected mean return equal to that of the efficient frontier portfolio under the Markowitz mean-variance framework.
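    As a concrete illustration of the kind of estimation the probabilistic models perform, the sketch below fits a Treynor-Mazuy style quadratic CAPM by ordinary least squares to recover selectivity (alpha) and market timing (gamma). It is a minimal example on synthetic monthly excess returns; the paper's actual specifications, data and fuzzy/Bayesian counterparts differ.

```python
# Minimal sketch (not the authors' code): frequentist estimation of selectivity
# (alpha) and market timing (gamma) via a quadratic CAPM regression of the
# Treynor-Mazuy form  r_i - r_f = alpha + beta*(r_m - r_f) + gamma*(r_m - r_f)^2 + e.
# The synthetic excess returns below are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_months = 210                                        # roughly Jan 1995 - Jun 2012
mkt_excess = rng.normal(0.005, 0.04, n_months)        # market excess return
fund_excess = (0.002 + 1.1 * mkt_excess               # alpha and beta
               + 0.5 * mkt_excess**2                  # timing term
               + rng.normal(0, 0.01, n_months))       # idiosyncratic noise

X = np.column_stack([np.ones(n_months), mkt_excess, mkt_excess**2])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
alpha, beta, gamma = coef
print(f"selectivity (alpha): {alpha:.4f}")
print(f"systematic risk (beta): {beta:.3f}")
print(f"market timing (gamma): {gamma:.3f}")          # gamma > 0 suggests timing skill
```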

    Belief Change in Reasoning Agents: Axiomatizations, Semantics and Computations

    The capability of changing beliefs upon new information in a rational and efficient way is crucial for an intelligent agent. Belief change has therefore been one of the central research fields in Artificial Intelligence (AI) for over two decades. In the AI literature, two different kinds of belief change operations have been intensively investigated: belief update, which deals with situations where the new information describes changes of the world; and belief revision, which assumes the world is static. As another important research area in AI, reasoning about actions mainly studies the problem of representing and reasoning about the effects of actions. These two research fields are closely related and apply a common underlying principle, namely that an agent should change its beliefs (knowledge) as little as possible whenever an adjustment is necessary. This opens up the possibility of reusing the ideas and results of one field in the other, and vice versa. This thesis aims to develop a general framework and devise computational models that are applicable in reasoning about actions. Firstly, I propose a new framework for iterated belief revision by introducing a new postulate to the existing AGM/DP postulates, which provides general criteria for the design of iterated revision operators. Secondly, based on the new framework, a concrete iterated revision operator is devised. The semantic model of the operator gives clear intuitions and helps to show that it satisfies the desirable postulates. I also show that the computational model of the operator is almost optimal in time and space complexity. In order to deal with the belief change problem in multi-agent systems, I introduce the concept of mutual belief revision, which is concerned with information exchange among agents. A concrete mutual revision operator is devised by generalizing the iterated revision operator. Likewise, a semantic model is used to show the intuition behind, and many desirable properties of, the mutual revision operator, and the complexity of its computational model is formally analyzed. Finally, I present a belief update operator which takes into account two important problems of reasoning about actions, i.e., disjunctive updates and domain constraints. Again, the update operator is presented with both a semantic model and a computational model.
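    The toy sketch below illustrates the general idea of iterated revision over ranked possible worlds, the style of semantic model referred to above; the concrete operator used here is a simple illustrative choice and is not the operator developed in the thesis.

```python
# Toy sketch (not the thesis operator): iterated revision over a ranked set of
# possible worlds. An epistemic state is a map world -> plausibility rank
# (lower = more plausible); the belief set is what holds in all minimal-rank
# worlds. Revising by a formula makes the most plausible worlds satisfying it
# the new minimal worlds while preserving the relative order of the rest --
# one simple way to support iteration.
from itertools import product

ATOMS = ("p", "q")
WORLDS = list(product([False, True], repeat=len(ATOMS)))   # all valuations

def revise(ranks, formula):
    """Return a new ranking after revising by `formula` (a predicate on worlds)."""
    satisfying = [w for w in WORLDS if formula(dict(zip(ATOMS, w)))]
    best = min(ranks[w] for w in satisfying)
    return {w: 0 if (w in satisfying and ranks[w] == best) else ranks[w] + 1
            for w in WORLDS}

def believes(ranks, formula):
    """A formula is believed iff it holds in every most plausible world."""
    top = [w for w in WORLDS if ranks[w] == min(ranks.values())]
    return all(formula(dict(zip(ATOMS, w))) for w in top)

state = {w: 0 for w in WORLDS}                     # initially ignorant
state = revise(state, lambda v: v["p"])            # learn p
state = revise(state, lambda v: not v["q"])        # then learn not-q
print(believes(state, lambda v: v["p"] and not v["q"]))   # True
```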

    Scalable intelligent electronic catalogs

    The world today is full of information systems which make huge quantities of information available. This incredible amount of information is overwhelming Internet end-users. As a consequence, intelligent tools to identify worthwhile information are needed in order to fully assist people in finding the right information. Moreover, most systems are ultimately used not just to provide information, but also to solve problems. Encouraged by the growing popular success of the Internet and the enormous business potential of electronic commerce, e-catalogs have become consolidated as one of the most relevant types of information systems. Nearly all currently available electronic catalogs offer tools for extracting product information based on key-attribute filtering methods. The most advanced electronic catalogs are implemented as recommender systems using collaborative filtering techniques. This dissertation focuses on strategies for coping with the difficulty of building intelligent catalogs which fully support the user in the purchase decision-making process, while maintaining the scalability of the whole system. The contributions of this thesis lie in a mixed-initiative system which is inspired by observations of traditional commerce activities. Such a conversational model consists basically of a dialog between the customer and the system, where the user criticizes proposed products and the catalog suggests new products accordingly. Constraint satisfaction techniques are analyzed in order to provide a uniform framework for modeling electronic catalogs for configurable products. Within the same framework, user preferences and optimization constraints are also easily modeled. Search strategies for proposing adequate products according to the given criteria are described in detail. Another dimension of this dissertation addresses the problem of scalability, i.e., the problem of supporting hundreds or thousands of users simultaneously using intelligent electronic catalogs. Traditional wisdom would presume that, in order to provide full assistance to users in complex tasks, the business logic of the system must be complex, thus preventing scalability. SmartClient is a software architectural model that uses constraint satisfaction problems to represent solution spaces, instead of traditional models which represent solution spaces as collections of single solutions. This idea is supported by the fact that constraint solvers are extremely compact and simple while providing sophisticated business logic. Different SmartClient architecture configurations are provided for different uses and architectural requirements. In order to illustrate the use of constraint satisfaction techniques for complex electronic catalogs with the SmartClient architecture, a commercial Internet-based application for travel planning, called reality, has been successfully developed. Travel planning is a particularly appropriate domain for validating the results of this research, since travel information is dynamic, travel planning problems are combinatorial, and complex user preferences and optimization constraints must be taken into consideration.
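    A minimal sketch of the mixed-initiative dialog described above: the catalog proposes a product that is optimal under the current constraints, and each user critique is translated into an additional constraint before the next proposal. The product data and critiques are invented for illustration; SmartClient itself represents the full solution space as a constraint satisfaction problem rather than filtering an enumerated product list.

```python
# Illustrative sketch, not the SmartClient implementation: a criticize/propose
# loop in which user critiques accumulate as constraints over the catalog.
products = [
    {"name": "flight A", "price": 420, "stops": 1, "duration_h": 9.5},
    {"name": "flight B", "price": 310, "stops": 2, "duration_h": 14.0},
    {"name": "flight C", "price": 560, "stops": 0, "duration_h": 8.0},
]

def propose(constraints, prefer):
    """Return the preferred product consistent with all current constraints."""
    feasible = [p for p in products if all(c(p) for c in constraints)]
    return min(feasible, key=prefer) if feasible else None

constraints = []                                   # start unconstrained
prefer = lambda p: p["price"]                      # optimisation criterion

offer = propose(constraints, prefer)               # -> flight B (cheapest)
constraints.append(lambda p: p["stops"] <= 1)      # critique: "fewer stops"
offer = propose(constraints, prefer)               # -> flight A
constraints.append(lambda p: p["duration_h"] < 9)  # critique: "shorter trip"
offer = propose(constraints, prefer)               # -> flight C
print(offer["name"])
```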

    DIAMS revisited: Taming the variety of knowledge in fault diagnosis expert systems

    The DIAMS program, initiated in 1986, led to the development of a prototype expert system, DIAMS-1, dedicated to the Telecom 1 Attitude and Orbit Control System, and to a near-operational system, DIAMS-2, covering a whole satellite (the Telecom 2 platform and its interfaces with the payload), which was installed in the Satellite Control Center in 1993. The refinement of the knowledge representation and reasoning is now being studied, focusing on the introduction of appropriate handling of incompleteness, uncertainty and time, while keeping operational constraints in mind. For the latest generation of the tool, DIAMS-3, a new architecture has been proposed that enables the cooperative exploitation of various models and knowledge representations. Along the same lines, new solutions have been introduced that enable tighter integration of diagnostic systems in the operational environment and cooperation with other knowledge-intensive systems such as data analysis, planning or procedure management tools.

    Chapter 8 Persistent Optimism under Political Uncertainty

    This chapter examines the social dynamics of projections about the outcomes and implications of the repeated elections in Israel. Based on a combination of a panel survey and focus groups, we analyze citizens’ evolving predictions regarding the expected largest party, the next prime minister, the coalition composition, and the future of Israel more generally. Introducing a conceptual framework that breaks political projections into several constituent elements, we study what probabilities and evaluations people assign to their predictions, how they explain them, and what their implications are for political participation. We show that despite the deepening political crisis, Israeli citizens’ political optimism did not decrease during the three 2019–2020 election campaigns. Furthermore, we find an important link between intention to vote and the expected level of happiness about the predicted outcomes. Based on these findings, we argue that persistent optimism is one explanation for the higher voter turnout in each round of elections. In the epilogue we consider additional insights from the 2021 election, which saw a reversal in voters’ growing optimism and turnout, but which eventually fulfilled the hopes of the anti-Netanyahu camp for political change.

    Recent advances in directional statistics

    Mainstream statistical methodology is generally applicable to data observed in Euclidean space. There are, however, numerous contexts of considerable scientific interest in which the natural supports for the data under consideration are Riemannian manifolds like the unit circle, torus, sphere and their extensions. Typically, such data can be represented using one or more directions, and directional statistics is the branch of statistics that deals with their analysis. In this paper we provide a review of the many recent developments in the field since the publication of Mardia and Jupp (1999), still the most comprehensive text on directional statistics. Many of those developments have been stimulated by interesting applications in fields as diverse as astronomy, medicine, genetics, neurology, aeronautics, acoustics, image analysis, text mining, environmetrics, and machine learning. We begin by considering developments for the exploratory analysis of directional data before progressing to distributional models, general approaches to inference, hypothesis testing, regression, nonparametric curve estimation, methods for dimension reduction, classification and clustering, and the modelling of time series, spatial and spatio-temporal data. An overview of currently available software for analysing directional data is also provided, and potential future developments discussed.
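    As a small illustration of the exploratory summaries discussed in the review, the sketch below computes the sample mean direction and mean resultant length of a set of angles, the circular analogues of the sample mean and a concentration measure. The data are synthetic draws from a von Mises distribution; they stand in for any directional sample.

```python
# Basic exploratory summary for circular data (illustrative, not from the paper):
# mean direction and mean resultant length of a sample of angles in radians.
import numpy as np

rng = np.random.default_rng(1)
angles = rng.vonmises(mu=np.pi / 4, kappa=4.0, size=200)   # synthetic directions

C, S = np.cos(angles).mean(), np.sin(angles).mean()
mean_direction = np.arctan2(S, C)          # circular analogue of the sample mean
resultant_length = np.hypot(C, S)          # in [0, 1]; near 1 = tightly concentrated

print(f"mean direction: {np.degrees(mean_direction):.1f} degrees")
print(f"mean resultant length: {resultant_length:.3f}")
```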

    Soft computing approaches to uncertainty propagation in environmental risk management

    Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components with non-linear coupling. In dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have the characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches such as probabilistic reasoning, fuzzy logic and artificial neural networks have the characteristics of approximation and dispositionality. Although in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower cost of computation, effective communication and a high Machine Intelligence Quotient (MIQ). This thesis explores the use of different soft computing approaches to handle uncertainty in environmental risk management. The work is divided into three parts comprising five papers. In the first part of this thesis, different uncertainty propagation methods are investigated. The first methodology is a generalized fuzzy α-cut based on the concept of the transformation method. A case study of uncertainty analysis of pollutant transport in the subsurface is used to show the utility of this approach, which shows superiority over conventional methods of uncertainty modelling. A second method is proposed to manage uncertainty and variability together in risk models. This new hybrid approach, combining probability theory and fuzzy set theory, is called Fuzzy Latin Hypercube Sampling (FLHS). An important property of this method is its ability to separate randomness and imprecision, which increases the quality of information. A fuzzified statistical summary of the model results gives indices of sensitivity and uncertainty that relate the effects of variability and uncertainty of input variables to model predictions. The feasibility of the method is validated by analysing the total variance in the calculation of incremental lifetime risks due to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/F) for residents living in the surroundings of a municipal solid waste incinerator (MSWI) in the Basque Country, Spain. The second part of this thesis deals with the use of artificial intelligence techniques for generating environmental indices. The first paper focuses on the development of a Hazard Index (HI) using the persistence, bioaccumulation and toxicity properties of a large number of organic and inorganic pollutants. For deriving this index, Self-Organizing Maps (SOM) were used, providing a hazard ranking for each compound. Subsequently, an Integral Risk Index was developed taking into account the HI and the concentrations of all pollutants in soil samples collected in the target area. Finally, a risk map was produced by representing the spatial distribution of the Integral Risk Index with a Geographic Information System (GIS). The second paper improves on the first work: a new approach called the Neuro-Probabilistic HI was developed by combining SOM and Monte Carlo analysis. It considers the uncertainty associated with contaminants' characteristic values. This new index appears to be an adequate tool to be taken into account in risk assessment processes.
In both studies, the methods have been validated through their implementation in the industrial chemical and petrochemical area of Tarragona. The third part of this thesis deals with a decision-making framework for environmental risk management. In this study, an integrated fuzzy relation analysis (IFRA) model is proposed for risk assessment involving multiple criteria. The fuzzy risk-analysis model is proposed to comprehensively evaluate all risks associated with contaminated systems resulting from more than one toxic chemical. The model is an integrated view of uncertainty techniques based on multi-valued mappings, fuzzy relations and the fuzzy analytical hierarchical process. Integration of system simulation and risk analysis using the fuzzy approach allowed system modelling uncertainty and subjective risk criteria to be incorporated. It has been shown that a broad integration of fuzzy system simulation and fuzzy risk analysis is possible. In conclusion, this study has broadly demonstrated the usefulness of soft computing approaches in environmental risk analysis. The proposed methods could significantly advance the practice of risk analysis by effectively addressing the critical issue of uncertainty propagation.
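    A minimal sketch of α-cut based uncertainty propagation in the spirit of the first part of this thesis: triangular fuzzy inputs are cut at a few α levels and pushed through a toy, monotone contaminant-decay model by evaluating the interval endpoints. This is a simple vertex-method variant for illustration, not the generalized transformation method itself, and the model and parameter values are invented.

```python
# Illustrative alpha-cut propagation (vertex-method style; exact here because the
# toy model is monotone in both fuzzy inputs). Not the thesis implementation.
import math
from itertools import product

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def propagate(model, fuzzy_inputs, levels=(0.0, 0.5, 1.0)):
    """Return {alpha: (min, max)} of the model output over endpoint combinations."""
    out = {}
    for a in levels:
        cuts = [alpha_cut(tri, a) for tri in fuzzy_inputs]
        values = [model(*combo) for combo in product(*cuts)]
        out[a] = (min(values), max(values))
    return out

# Toy transport model: concentration decays with a fuzzy rate over a fuzzy time.
model = lambda rate, time: 100.0 * math.exp(-rate * time)
fuzzy_rate = (0.05, 0.10, 0.20)     # per day, triangular fuzzy number
fuzzy_time = (8.0, 10.0, 12.0)      # days, triangular fuzzy number

for a, (lo, hi) in propagate(model, [fuzzy_rate, fuzzy_time]).items():
    print(f"alpha = {a:.1f}: concentration in [{lo:.1f}, {hi:.1f}]")
```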