
    Capturing moral economic context

    Multiple economic experiments suggest that the moral context of consumption and/or production influences willingness-to-pay and willingness-to-accept. Precisely how this influence should be modeled from a theoretical perspective, however, remains understudied. The prevailing view is that moral context can be captured using an extended utility approach in which “morality” enters the utility function as any other attribute of value. However, in our view the literature does not yet suggest practical modeling strategies that yield testable hypotheses. We show herein that the state-dependent preference approach quite naturally enables modeling of the moral concerns registered in experimental settings.
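
    As a minimal sketch of the contrast, with all notation (x, m, s, u, \pi_s) assumed here rather than taken from the paper: in the extended-utility view a moral attribute m enters the utility function alongside the consumption bundle x, whereas in the state-dependent view the moral state s indexes the utility function itself.

```latex
% Sketch only; notation is illustrative, not the paper's.
% Extended utility: morality is one more attribute of value.
\[ U^{\text{ext}} = u(x, m) \]
% State-dependent preferences: the moral state s selects the utility
% function, so the same bundle x is valued differently across states.
\[ U^{\text{sd}} = \sum_{s \in S} \pi_s \, u_s(x) \]
% Testable hypotheses can then be framed as restrictions across the
% state-indexed functions u_s (e.g., u_s = u for all s under moral
% neutrality), rather than via an unobserved moral attribute m.
```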

    Distributional approach to point interactions in one-dimensional quantum mechanics

    We consider the one-dimensional quantum mechanical problem of defining interactions concentrated at a single point in the framework of the theory of distributions. The often ill-defined product which describes the interaction term in the Schrödinger and Dirac equations is replaced by a well-defined distribution satisfying some simple mathematical conditions and, in addition, the physical requirement of probability current conservation is imposed. A four-parameter family of interactions thus emerges as the most general point interaction both in the non-relativistic and in the relativistic theories (in agreement with results obtained by self-adjoint extensions). Since the interaction is given explicitly, the distributional method allows one to carry out symmetry investigations in a simple way, and it proves useful for clarifying some ambiguities related to the so-called δ′ interaction.
    Comment: Open Access link: http://journal.frontiersin.org/Journal/10.3389/fphy.2014.00023/abstrac
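
    For reference, the four-parameter family mentioned above has a standard transfer-matrix form in the non-relativistic case; the sketch below states the well-known parametrization from the self-adjoint-extension literature with which the abstract says the distributional method agrees.

```latex
% Standard four-parameter point interaction at x = 0 (non-relativistic case),
% written as a boundary condition relating the two sides of the origin.
\[
  \begin{pmatrix} \psi(0^+) \\ \psi'(0^+) \end{pmatrix}
  = e^{i\theta}
  \begin{pmatrix} a & b \\ c & d \end{pmatrix}
  \begin{pmatrix} \psi(0^-) \\ \psi'(0^-) \end{pmatrix},
  \qquad a, b, c, d \in \mathbb{R}, \quad ad - bc = 1, \quad \theta \in [0, 2\pi).
\]
% Conservation of the probability current j = (\hbar/m) \mathrm{Im}(\psi^* \psi')
% across x = 0 is what restricts the boundary condition to this form; the
% ordinary \delta interaction is the special case b = 0, a = d = 1, \theta = 0.
```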

    Evaluating Value-at-Risk Models via Quantile Regressions

    We propose an alternative backtest to evaluate the performance of Value-at-Risk (VaR) models. The methodology allows us to test the performance of many competing VaR models directly, as well as to identify periods of increased risk exposure, based on a quantile regression model (Koenker & Xiao, 2002). Quantile regressions provide a natural environment in which to investigate VaR models, since a VaR forecast is by construction a conditional quantile of the return series. A Monte Carlo simulation reveals that our proposed test can exhibit more power than other backtests presented in the literature. Finally, an empirical exercise is conducted on the daily S&P500 return series to explore the practical relevance of the methodology by evaluating five competing VaR models through four different backtests.
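
    A hedged sketch of the core idea, not the authors' exact test statistic: if a forecast v_t is the true conditional τ-quantile of the return r_t, then a quantile regression of r_t on v_t at level τ should recover an intercept near 0 and a slope near 1. The snippet below uses statsmodels' QuantReg; the function name, simulated data, and pass/fail reading are illustrative assumptions.

```python
# Illustrative VaR backtest via quantile regression (sketch, not the paper's
# exact procedure): a correct 5% VaR forecast should be the conditional 5%
# quantile of returns, i.e. intercept ~ 0 and slope ~ 1 in the regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def var_backtest(returns: pd.Series, var_forecast: pd.Series, tau: float = 0.05):
    """Regress returns on the VaR forecast at quantile tau."""
    X = sm.add_constant(var_forecast.to_numpy())   # design matrix [1, v_t]
    fit = sm.QuantReg(returns.to_numpy(), X).fit(q=tau)
    alpha, beta = fit.params                       # target: (0, 1) if correct
    return alpha, beta, fit

# Toy usage with a correctly specified Gaussian VaR (simulated data):
rng = np.random.default_rng(0)
sigma = 0.01 * (1.0 + 0.5 * np.sin(np.linspace(0.0, 20.0, 1000)) ** 2)
r = pd.Series(rng.normal(0.0, sigma))
v = pd.Series(-1.645 * sigma)                      # 5% quantile of N(0, sigma_t^2)
alpha, beta, _ = var_backtest(r, v)
print(f"intercept={alpha:.4f}, slope={beta:.4f}")  # expect roughly 0 and 1
```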

    Evaluating Value-at-Risk models via Quantile Regression

    This paper is concerned with evaluating Value-at-Risk estimates. It is well known that using only binary variables, such as whether or not there was an exception, sacrifices too much information. However, most of the specification tests (also called backtests) available in the literature, such as Christoffersen (1998) and Engle and Manganelli (2004), are based on such variables. In this paper we propose a new backtest that does not rely solely on binary variables. It is shown that the new backtest provides a sufficient condition to assess the finite-sample performance of a quantile model, whereas the existing ones do not. The proposed methodology also allows us to identify periods of increased risk exposure, based on a quantile regression model (Koenker & Xiao, 2002). Our theoretical findings are corroborated through a Monte Carlo simulation and an empirical exercise with the daily S&P500 time series.
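
    To make concrete what "only binary variables" means, the sketch below implements a classical hit-based backtest (Kupiec-style unconditional coverage); the function name and interface are illustrative. The hit series records only whether each return breached the VaR, not by how much, which is exactly the information a quantile-regression backtest can additionally exploit.

```python
# Classical binary backtest for contrast: Kupiec-style likelihood-ratio test
# that the exception frequency matches the nominal level tau (sketch only).
import numpy as np
from scipy.stats import chi2

def kupiec_test(returns: np.ndarray, var_forecast: np.ndarray, tau: float = 0.05):
    """LR test of unconditional coverage; assumes 0 < #exceptions < n."""
    hits = (returns < var_forecast).astype(int)    # 1 = exception (VaR breach)
    n, x = hits.size, int(hits.sum())
    pi_hat = x / n                                 # observed exception rate
    # Bernoulli likelihood under H0 (rate tau) vs. the MLE rate pi_hat.
    log_l0 = x * np.log(tau) + (n - x) * np.log(1.0 - tau)
    log_l1 = x * np.log(pi_hat) + (n - x) * np.log(1.0 - pi_hat)
    lr = -2.0 * (log_l0 - log_l1)                  # ~ chi2(1) under H0
    return lr, chi2.sf(lr, df=1)                   # statistic and p-value
```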

    MANAGERIAL TRAINING FOR FAMILY FARMERS: A METHODOLOGICAL PROPOSAL FOR RURAL EXTENSION

    Family-based agriculture plays an important socio-economic role in Brazilian agribusiness, and its development is considered a precondition for an economically efficient and fair society. Numerous variables influence the performance of a rural business. Several of them cannot be controlled by the farmer, but some, such as farm management, can. Problems with farm business sustainability reveal inefficiencies in farm management in general, and in family-based agriculture in particular. The rural extension course in management proposed here addresses this deficiency by providing management tools for family-based agriculture. This paper presents the methodological framework of a managerial training extension course that aggregates different support tools for family-based agriculture. The knowledge supplied is expected to contribute to the economic sustainability of the business and to the improvement of family welfare.
    Keywords: agricultural extension, managerial training, family-based agriculture.

    Knowledge Elicitation in Deep Learning Models

    Although deep learning has become a popular tool for modern problem-solving across various domains, it presents a significant challenge: interpretability. This thesis journeys through a landscape of knowledge elicitation in deep learning models, shedding light on feature visualization, saliency maps, and model distillation techniques. These techniques were applied to two deep learning architectures: convolutional neural networks (CNNs) and a black-box packaged model (Google Vision). Our investigation provided valuable insights into their effectiveness in eliciting and interpreting the encoded knowledge. While the techniques demonstrated potential, limitations were also observed, suggesting room for further development in this field. This work not only highlights the need for more transparent, explainable deep learning models; it also motivates the development of innovative techniques for extracting knowledge, with the aim of ensuring responsible deployment and emphasizing the importance of transparency and comprehension in machine learning.
    In addition to evaluating existing methods, this thesis also explores the potential of combining multiple techniques to enhance the interpretability of deep learning models. A blend of feature visualization, saliency maps, and model distillation was used in a complementary manner to extract and interpret knowledge from the chosen architectures. Experimental results highlight the utility of this combined approach, revealing a more comprehensive understanding of the models' decision-making processes. Furthermore, we propose a novel framework for systematic knowledge elicitation in deep learning that cohesively integrates these methods, showcasing the value of a holistic approach to model interpretability rather than reliance on a single method. Lastly, we discuss the ethical implications of our work. As deep learning models continue to permeate various sectors, from healthcare to finance, ensuring their decisions are explainable and justified becomes increasingly crucial. Our research underscores this importance, laying the groundwork for creating more transparent, accountable AI systems in the future.
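
    Of the three technique families surveyed, a gradient-based saliency map is the simplest to illustrate in code. The sketch below is a generic Simonyan-style example using a pretrained torchvision ResNet as a stand-in; the thesis' own architectures, data, and Google Vision pipeline are not reproduced here.

```python
# Generic gradient-saliency sketch (illustrative stand-in, not the thesis'
# models): gradient of the top class score with respect to the input pixels.
import torch
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()                  # resize / crop / normalize

def saliency_map(image: torch.Tensor) -> torch.Tensor:
    """Per-pixel saliency for a (3, H, W) image tensor in [0, 1]."""
    x = preprocess(image).unsqueeze(0).requires_grad_(True)
    scores = model(x)                              # (1, 1000) class scores
    scores[0, scores.argmax()].backward()          # d(top score) / d(input)
    return x.grad.abs().amax(dim=1).squeeze(0)     # max over color channels

# Bright pixels in the returned map are those to which the network's
# top-class score is most sensitive, i.e. a first-order explanation.
```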