
    Characterization of Er in porous Si

    The fabrication of porous-Si-based, Er-doped light-emitting devices is a promising route toward all-silicon light emitters. However, while luminescence from Er-doped porous silicon devices has been demonstrated, very little attention has been devoted to the doping process itself. We have undertaken a detailed study of this process, examining the porous silicon matrix from several points of view during and after doping. In particular, we have found that the Er doping process exhibits a threshold level which, as evidenced by cross-correlating the various techniques used, depends on the sample thickness and on the doping parameters.

    Optical properties of multilayered porous silicon

    We present a short review of some optical devices based on multilayered porous silicon, which can easily be obtained by varying the formation current during the etching process. These include Bragg reflectors and Fabry–Pérot microcavities, which can be tuned from the visible to the near infrared. Interface roughness, which is critical in the case of multilayers, is studied; it can be drastically reduced by changing the electrolyte viscosity. The high reflectivities obtained in this way are measured by cavity ring-down spectroscopy. We also discuss the problems that arise when realising thin layers, and an efficient way to precisely adjust the optical thicknesses of the thin layers constituting the multilayered structure. Finally, we present a method for calculating the emission which takes absorption into account and explains the angular dependence of the luminescence.
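
    As a concrete illustration of the design principle behind such multilayers, the following minimal transfer-matrix sketch computes the normal-incidence reflectance of a quarter-wave porous-silicon Bragg stack. The refractive indices, design wavelength, number of periods and substrate index are illustrative assumptions rather than values from the paper, and absorption is neglected for brevity.

    # Minimal transfer-matrix sketch for a porous-Si Bragg reflector at
    # normal incidence. All numerical values are illustrative assumptions.
    import numpy as np

    def layer_matrix(n, d, lam):
        """Characteristic matrix of one homogeneous layer (normal incidence)."""
        delta = 2 * np.pi * n * d / lam  # phase thickness of the layer
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])

    def bragg_reflectance(lam, n_hi=2.2, n_lo=1.5, lam0=800e-9,
                          periods=10, n_in=1.0, n_sub=3.5):
        """Reflectance of a quarter-wave (n_hi/n_lo) stack on a Si substrate."""
        d_hi, d_lo = lam0 / (4 * n_hi), lam0 / (4 * n_lo)  # quarter-wave thicknesses
        M = np.eye(2, dtype=complex)
        for _ in range(periods):
            M = M @ layer_matrix(n_hi, d_hi, lam) @ layer_matrix(n_lo, d_lo, lam)
        # Fresnel coefficient of the full stack between ambient and substrate
        num = n_in * M[0, 0] + n_in * n_sub * M[0, 1] - M[1, 0] - n_sub * M[1, 1]
        den = n_in * M[0, 0] + n_in * n_sub * M[0, 1] + M[1, 0] + n_sub * M[1, 1]
        return abs(num / den) ** 2

    wavelengths = np.linspace(600e-9, 1000e-9, 400)
    print(f"peak reflectance near 800 nm: {max(bragg_reflectance(l) for l in wavelengths):.4f}")

    At the design wavelength each layer is a quarter wave thick, so reflections from successive interfaces add in phase, which is why the peak reflectance approaches unity as the number of periods grows.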

    Using an artificial financial market for assessing the impact of Tobin-like transaction taxes

    The Tobin tax is a solution proposed by many economists for limiting speculation in foreign-exchange and stock markets and for making these markets more stable. In this paper we present a study of the effects of a transaction tax on one market and on two related markets, using an artificial financial market based on heterogeneous agents. The microstructure of the market is composed of four kinds of traders: random traders, fundamentalists, momentum traders and contrarians, each with limited resources. In each market it is possible to levy a transaction tax. In the case of two markets, each trader can choose in which market to trade, and an attraction function based on perceived profitability drives this choice. We performed extensive simulations and found that the tax actually increases volatility and decreases trading volumes. These findings are discussed in the paper.
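
    The following toy simulation sketches the kind of heterogeneous-agent mechanism described above: the four trader types push the price through their net order imbalance, and a proportional transaction tax suppresses trades whose perceived edge falls below it. All trading rules and parameter values are illustrative assumptions, not the paper's actual model, and the output is not meant to reproduce its quantitative findings.

    # Toy heterogeneous-agent market with a proportional transaction tax.
    # Trader rules and parameters are illustrative assumptions.
    import random

    def simulate(tax=0.0, steps=2000, n_per_type=50, fundamental=100.0, seed=1):
        rng = random.Random(seed)
        prices, trades = [fundamental], 0
        for _ in range(steps):
            p = prices[-1]
            last_ret = (prices[-1] - prices[-2]) / prices[-2] if len(prices) > 1 else 0.0
            imbalance = 0
            for kind in ("random", "fundamentalist", "momentum", "contrarian"):
                for _ in range(n_per_type):
                    if kind == "random":
                        edge, side = rng.uniform(0, 0.02), rng.choice((-1, 1))
                    elif kind == "fundamentalist":
                        edge, side = abs(fundamental - p) / p, 1 if p < fundamental else -1
                    elif kind == "momentum":
                        edge, side = abs(last_ret), 1 if last_ret > 0 else -1
                    else:  # contrarian trades against the last return
                        edge, side = abs(last_ret), -1 if last_ret > 0 else 1
                    if edge > tax:  # trade only if the perceived edge beats the tax
                        imbalance += side
                        trades += 1
            # price impact proportional to net order imbalance, plus small noise
            prices.append(p * (1 + 0.0005 * imbalance + rng.gauss(0, 0.001)))
        rets = [(b - a) / a for a, b in zip(prices, prices[1:])]
        mean = sum(rets) / len(rets)
        vol = (sum((r - mean) ** 2 for r in rets) / len(rets)) ** 0.5
        return vol, trades

    for tax in (0.0, 0.001, 0.005):
        vol, n = simulate(tax=tax)
        print(f"tax={tax:.3f}  return volatility={vol:.5f}  trades={n}")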

    GLocalX - From Local to Global Explanations of Black Box AI Models

    Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensembles, deep neural networks, and Support Vector Machines have consistently shown remarkable accuracy in solving complex tasks. Although accurate, AI models are often “black boxes” that we are unable to understand. Relying on such models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains are a strong motivation for trying to understand the behavior of black boxes. We propose to address this issue by providing an interpretable layer on top of black-box models, built by aggregating “local” explanations. We present GLOCALX, a “local-first” model-agnostic explanation method. Starting from local explanations expressed in the form of local decision rules, GLOCALX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models that emulate the given black box and, if possible, replace it entirely. We validate GLOCALX in a set of experiments in standard and constrained settings with limited or no access to either data or local explanations. Experiments show that GLOCALX is able to accurately emulate several models with simple and small models, reaching state-of-the-art performance against natively global solutions. Our findings show that it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. This is a key requirement for trustworthy AI, necessary for adoption in high-stakes decision-making applications.
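
    The following is a deliberately simplified sketch of the local-first aggregation idea: local decision rules with the same label are pairwise generalized (here by intersecting their conditions) and a merge is kept only if fidelity to the black box on a reference set does not drop. The rule encoding, merge operator and acceptance test are illustrative assumptions, not the published GLOCALX algorithm.

    # Simplified local-to-global rule aggregation in the spirit of GLOCALX.
    # Rule = (frozenset of (feature, op, threshold) conditions, predicted label).
    from itertools import combinations

    def satisfies(x, conditions):
        ops = {"<=": lambda a, b: a <= b, ">": lambda a, b: a > b}
        return all(ops[op](x[f], t) for f, op, t in conditions)

    def fidelity(rules, X, black_box_preds, default=0):
        """Fraction of points where the rule set agrees with the black box."""
        hits = 0
        for x, y in zip(X, black_box_preds):
            pred = next((lab for conds, lab in rules if satisfies(x, conds)), default)
            hits += (pred == y)
        return hits / len(X)

    def aggregate_sketch(rules, X, black_box_preds):
        rules, improved = list(rules), True
        while improved and len(rules) > 1:
            improved = False
            base = fidelity(rules, X, black_box_preds)
            for r1, r2 in combinations(rules, 2):
                if r1[1] != r2[1]:
                    continue                     # only merge rules with the same label
                merged = (r1[0] & r2[0], r1[1])  # generalize: keep shared conditions
                if not merged[0]:
                    continue
                candidate = [r for r in rules if r not in (r1, r2)] + [merged]
                if fidelity(candidate, X, black_box_preds) >= base:
                    rules, improved = candidate, True
                    break
        return rules

    # Tiny usage example with two features and hypothetical black-box predictions.
    X = [{"age": 25, "income": 30}, {"age": 60, "income": 80}, {"age": 40, "income": 50}]
    bb = [0, 1, 1]
    local_rules = [
        (frozenset({("age", ">", 50), ("income", ">", 60)}), 1),
        (frozenset({("age", ">", 35), ("income", ">", 60)}), 1),
        (frozenset({("age", "<=", 30)}), 0),
    ]
    print(aggregate_sketch(local_rules, X, bb))

    Here the two label-1 rules collapse into the single, more general rule income > 60, which is the sense in which local explanations are hierarchically generalized into a global one.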

    Imidazo[1,2-a]- and 1,2,4-triazolo[4,3-a]quinoxaline analogues of the antifolates methotrexate and trimetrexate

    We designed a new series of quinoxalines in which the pyrrole ring is replaced by an imidazole or a triazole ring, while keeping at positions 2, 5, 6, 7 and 8 of the quinoxaline ring the same substituents examined previously. The synthesis of these compounds and the pharmacological results concerning their activity are described.

    HANSEN: Human and AI Spoken Text Benchmark for Authorship Analysis

    Authorship analysis, also known as stylometry, has long been an essential aspect of Natural Language Processing (NLP). Likewise, the recent advancement of Large Language Models (LLMs) has made authorship analysis increasingly crucial for distinguishing between human-written and AI-generated texts. However, authorship analysis tasks have so far focused primarily on written texts, not spoken ones. We therefore introduce the largest benchmark for spoken texts: HANSEN (Human ANd ai Spoken tExt beNchmark). HANSEN encompasses the meticulous curation of existing speech datasets accompanied by transcripts, alongside the creation of novel AI-generated spoken-text datasets. Together, it comprises 17 human datasets and AI-generated spoken texts created using 3 prominent LLMs: ChatGPT, PaLM2, and Vicuna13B. To evaluate and demonstrate the utility of HANSEN, we perform Authorship Attribution (AA) and Author Verification (AV) on the human-spoken datasets and conduct human vs. AI spoken-text detection using state-of-the-art (SOTA) models. While SOTA methods, such as character n-gram or Transformer-based models, exhibit AA and AV performance on human-spoken datasets similar to that on written ones, there is much room for improvement in AI-generated spoken-text detection. The HANSEN benchmark is available at: https://huggingface.co/datasets/HANSEN-REPO/HANSEN.
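
    As an illustration of the character n-gram baselines mentioned above, the sketch below trains a TF-IDF character 2-4-gram pipeline with a linear classifier for authorship attribution. The toy transcripts and speaker labels are invented placeholders; the actual benchmark should be loaded from the Hugging Face URL above, and this is not the paper's exact experimental setup.

    # Minimal character n-gram authorship-attribution baseline.
    # Training texts and speaker labels below are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "well, you know, I think we should take it slow and see",
        "you know, honestly, I kind of feel like it works out",
        "the data clearly indicate a threshold effect in the sample",
        "our measurements indicate the effect depends on thickness",
    ]
    train_speakers = ["spk_a", "spk_a", "spk_b", "spk_b"]

    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_texts, train_speakers)
    print(model.predict(["I mean, you know, we could just see how it goes"]))

    Character n-grams capture sub-word habits such as filler words and punctuation rhythm, which is why they remain a strong stylometric baseline for transcribed speech.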

    Interactions between calliphorid dipterans and Helicodiceros muscivorus

    This article reports the experimental results of a research programme on the reproductive strategies of Helicodiceros muscivorus (L. fil.) Engler (Araceae: Aroideae). In particular, we study the role played by the odorous blend emitted by the plant as olfactory information received by the insects, and the importance of this specific biological activity in governing the behavioural choices made by the pollinating insects.