
    Sequential Sampling Equilibrium

    This paper introduces an equilibrium framework based on sequential sampling, in which players face strategic uncertainty over their opponents' behavior and acquire informative signals to resolve it. Sequential sampling equilibrium delivers a disciplined model featuring an endogenous distribution of choices, beliefs, and decision times that not only rationalizes well-known deviations from Nash equilibrium but also makes novel predictions supported by existing data. It grounds a relationship between empirical learning and strategic sophistication, and it generates stochastic choice through the randomness inherent to sampling, without relying on indifference or choice mistakes. Further, it provides a rationale for Nash equilibrium when sampling costs vanish.
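    To make the mechanism concrete, the sketch below is a toy illustration of sequential sampling (not the paper's actual model): a player is uncertain which of two actions the opponent will take, draws noisy binary signals, updates a Bayesian posterior, and stops once the belief is decisive, producing both a choice and a decision time. The signal accuracy, the fixed stopping threshold, and all names are illustrative assumptions; in the paper's framework the stopping decision would reflect sampling costs rather than a fixed threshold.

```rust
// Toy illustration of sequential sampling before a binary choice.
// Assumptions (not from the paper): two opponent actions, conditionally
// i.i.d. binary signals with accuracy `q`, and a fixed stopping threshold.

struct Lcg(u64); // minimal pseudo-random generator, for illustration only
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Sample signals about whether the opponent plays action A (true) or B,
/// updating a posterior until it is decisive enough; return the chosen
/// best response and the number of samples taken (the "decision time").
fn sequential_sample(opponent_plays_a: bool, q: f64, threshold: f64, rng: &mut Lcg) -> (bool, u32) {
    let mut posterior_a = 0.5; // uniform prior over the opponent's action
    let mut samples = 0;
    while posterior_a < threshold && posterior_a > 1.0 - threshold {
        samples += 1;
        // The signal points to the true action with probability q.
        let signal_a = if rng.next_f64() < q { opponent_plays_a } else { !opponent_plays_a };
        // Bayes update for a binary signal.
        let like_a = if signal_a { q } else { 1.0 - q };
        let like_b = if signal_a { 1.0 - q } else { q };
        posterior_a = posterior_a * like_a / (posterior_a * like_a + (1.0 - posterior_a) * like_b);
    }
    (posterior_a >= threshold, samples) // best response: match the believed action
}

fn main() {
    let mut rng = Lcg(42);
    let (choice_a, time) = sequential_sample(true, 0.7, 0.95, &mut rng);
    println!("chose A: {choice_a}, decision time: {time} samples");
}
```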

    From copper-based nanoparticles to carbon-based dots

    Nanoparticles (NPs) prepared from earth-abundant metals have attracted significant attention as potential replacements for the expensive metals used in many commercial chemical processes. In this context, copper NPs are particularly appealing, since copper (Cu) is vastly abundant and inexpensive. Cu also displays good electronic, optical, antimicrobial, and chemical properties. Yet, CuNPs are limited by their intrinsic instability under atmospheric conditions, which makes them prone to oxidation. Various efforts have been made to increase the stability of CuNPs, including investigating Cu-based NPs associated with organic structures such as polymers. This project focused on the preparation and characterization of long-term stable Cu-based NPs using different synthesis strategies and without employing harmful reagents or solvents. Based on previously reported methods, the particles were synthesized using the environmentally friendly ascorbic acid (AA) as a reducing agent. Polyamidoamine (PAMAM) dendrimers and polyvinylpyrrolidone (PVP) were used as both templates and capping agents, helping to control particle size and shape, and the reactions were carried out at room temperature and at 60°C. The samples were characterized using various techniques, including Ultraviolet-Visible Spectroscopy (UV-Vis) and Scanning Electron Microscopy (SEM) coupled with Energy-Dispersive X-ray Analysis (EDX). The results indicated the presence of Cu(0)-, Cu(I)-, and Cu(II)-based particles with diverse shapes (e.g. polyhedral, spherical, and disk-like). Moreover, other techniques such as Photoluminescence Spectroscopy (PL), X-ray Photoelectron Spectroscopy (XPS), and Transmission Electron Microscopy (TEM) indicated the presence of Carbon-based Dots (C-Dots), which are responsible for the fluorescence properties of the solution. The formation of the C-Dots is thought to result from a side reaction accompanying the reduction of the Cu ions (i.e., the oxidation of AA). Preliminary cytotoxicity evaluation using the MTT assay showed that the particles obtained using PAMAM and PVP, as well as the C-Dots, did not present significant toxicity towards HEK 293T cells at concentrations below 500, 5, and 200 µg/mL, respectively.

    Impact of social economic indicators on RSI incidence and success

    A Work Project, presented as part of the requirements for the Award of a Master's Degree in Economics from the NOVA – School of Business and Economics. In this project we study the influence of socio-economic characteristics on the percentage of beneficiaries of “Rendimento Social de Inserção” (RSI) and on the percentage of exits from the RSI program that occur due to a change in income. The results indicate that the percentage of beneficiaries tends to increase with unemployment, younger populations, and smaller families, whereas it tends to decrease with higher education levels and GDP. As for the percentage of exits from the RSI, the results show evidence that, on the one hand, exits tend to increase with higher education and, on the other hand, tend to decrease with unemployment, lower beneficiary income before entering the program, nuclear families, and Local Purchasing Power.
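    As a rough illustration of how such marginal effects could be estimated (this is not the Work Project's specification or data), the sketch below runs a single-regressor ordinary least squares fit of a hypothetical RSI beneficiary share on a hypothetical municipal unemployment rate; all numbers are made up.

```rust
// Illustrative only: ordinary least squares with a single regressor,
// relating a hypothetical municipal unemployment rate (%) to the share
// of RSI beneficiaries (%). The data values are invented for the example.

fn ols_slope_intercept(x: &[f64], y: &[f64]) -> (f64, f64) {
    let n = x.len() as f64;
    let mean_x = x.iter().sum::<f64>() / n;
    let mean_y = y.iter().sum::<f64>() / n;
    let cov: f64 = x.iter().zip(y).map(|(xi, yi)| (xi - mean_x) * (yi - mean_y)).sum();
    let var: f64 = x.iter().map(|xi| (xi - mean_x).powi(2)).sum();
    let slope = cov / var;
    (slope, mean_y - slope * mean_x)
}

fn main() {
    // Hypothetical (unemployment %, RSI beneficiary %) pairs per municipality.
    let unemployment = [5.0, 7.5, 9.0, 11.0, 13.5, 16.0];
    let rsi_share = [1.2, 1.9, 2.4, 3.1, 3.8, 4.9];
    let (slope, intercept) = ols_slope_intercept(&unemployment, &rsi_share);
    println!("estimated effect: {slope:.3} p.p. of RSI share per p.p. of unemployment");
    println!("intercept: {intercept:.3}");
}
```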

    Game Wizard

    Currently there is a growing demand for educational games and a consequent investment in this area. These games are mainly used to spread knowledge in an appealing and motivating way, in an attempt to reach different types of public. However, the ability to adapt and customize the final product to the users' ideas is limited, since creating new solutions that meet these needs is a complex process due to the specific knowledge it requires. The present study emerged from these facts. The main objective of this research work is to propose a solution that simplifies the process of creating these games and overcomes the problems presented above. For this purpose, content loading and manipulation mechanisms were used, making the solution available to everyone, regardless of the level of technical knowledge usually needed to build such games. The proposed solution comprises a game creation platform and an Android application. The game creation platform consists of a web application for uploading content (images, sounds, questions and answers) to the server and database; the content is later imported and used in the Android application. The Android application consists of a mechanism with a set of modules that are filled with the imported content, thus creating custom games and making them available to the player. In the end, this solution allows the creation of a simple and intuitive product, easy to use by anyone, that can generate games by filling in the necessary components such as images, sounds and questions.
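    A minimal sketch of how the content bundle described above might be modeled; the type and field names (GameContent, Question, QuizModule, and so on) are hypothetical and not the project's actual schema, and the real system would move this data from the web platform's database into the Android application.

```rust
// Illustrative data model for a Game Wizard-style content bundle:
// media assets plus question/answer sets that game modules are filled with.
// All names are hypothetical; the project's actual schema is not specified here.

struct MediaAsset {
    id: String,
    url: String, // location on the content server where the file was uploaded
}

struct Question {
    prompt: String,
    answers: Vec<String>,
    correct_index: usize,  // index into `answers`
    image: Option<String>, // optional MediaAsset id shown with the question
    sound: Option<String>, // optional MediaAsset id played with the question
}

struct GameContent {
    title: String,
    images: Vec<MediaAsset>,
    sounds: Vec<MediaAsset>,
    questions: Vec<Question>,
}

/// A game "module" is instantiated by filling it with imported content;
/// here a quiz module simply checks an answer against the stored index.
struct QuizModule {
    content: GameContent,
}

impl QuizModule {
    fn check_answer(&self, question: usize, chosen: usize) -> Option<bool> {
        self.content
            .questions
            .get(question)
            .map(|q| q.correct_index == chosen)
    }
}

fn main() {
    let content = GameContent {
        title: "Sample quiz".into(),
        images: vec![],
        sounds: vec![],
        questions: vec![Question {
            prompt: "2 + 2 = ?".into(),
            answers: vec!["3".into(), "4".into()],
            correct_index: 1,
            image: None,
            sound: None,
        }],
    };
    let module = QuizModule { content };
    println!("correct? {:?}", module.check_answer(0, 1));
}
```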

    Improvement of TestH: a C Library for Generating Self-Similar Series and for Estimating the Hurst Parameter

    The discovery of consistent dependencies between values in certain data series paved the way for the development of algorithms that can classify the degree of self-similarity between values and draw conclusions about the behavior of these series. This self-similarity metric is typically known as the Hurst Parameter, and it allows the behavior of a data series to be classified as persistent, anti-persistent, or purely random. This discovery was highly relevant in the field of computer networks, notably helping companies to develop equipment and infrastructure that suit their needs more efficiently. The Hurst Parameter is relevant in many other fields as well; it has been applied, for example, in the study of geological phenomena [KTC07] and in areas related to health sciences [VAJ08, HPS+12]. There are several algorithms for estimating the Hurst Parameter [Hur51, Hig88, RPGC06], and each one of them has its strengths and weaknesses. Using these algorithms is sometimes difficult, motivating the creation of tools or libraries that provide them in a more user-friendly manner. Unfortunately, and despite being an area that has been studied for decades, the available tools have limitations and do not implement all the algorithms available in the literature. The work presented in this dissertation consists of improvements to TestH, a library written in ANSI C for the study of self-similarity in time series, initially developed by Fernandes et al. [FNS+14]. These improvements materialize as the addition of algorithms to estimate the Hurst Parameter and to generate self-similar sequences. Additionally, auxiliary functions were implemented, along with code refactoring, documentation of the application programming interface, and the creation of a website for the project. This dissertation focuses mostly on the algorithms that were introduced in TestH, namely the Periodogram, the Higuchi method, the Hurst Exponent by Autocorrelation Function, and the Detrended Fluctuation Analysis estimators, as well as the Davies and Harte method for generating self-similar sequences. In order to turn TestH into a robust and trustworthy library, several tests were performed comparing the results of these implementations with the values provided by similar tools. The overall results obtained in these tests are in line with expectations, and the algorithms implemented both in TestH and in the other tools analyzed (for example, the Periodogram) returned very similar results, corroborating the belief that the methods were well implemented.
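    For context on what such estimators compute, the sketch below implements the classic rescaled-range (R/S) statistic attributed to Hurst [Hur51]: the slope of log(R/S) against the log of the window size estimates H. This is a generic illustration of the technique, not TestH's code or API.

```rust
// Compact rescaled-range (R/S) sketch of Hurst parameter estimation.
// Illustration of the general technique only; not TestH's implementation.

fn rs_statistic(block: &[f64]) -> Option<f64> {
    let n = block.len() as f64;
    let mean = block.iter().sum::<f64>() / n;
    // Range of the cumulative deviations from the block mean.
    let mut cum = 0.0;
    let (mut min, mut max) = (f64::INFINITY, f64::NEG_INFINITY);
    for &x in block {
        cum += x - mean;
        min = min.min(cum);
        max = max.max(cum);
    }
    let range = max - min;
    let std = (block.iter().map(|&x| (x - mean).powi(2)).sum::<f64>() / n).sqrt();
    if std > 0.0 { Some(range / std) } else { None }
}

/// Estimate H as the slope of log(R/S) against log(window size).
fn hurst_rs(series: &[f64], window_sizes: &[usize]) -> f64 {
    let points: Vec<(f64, f64)> = window_sizes
        .iter()
        .filter_map(|&w| {
            let ratios: Vec<f64> = series.chunks_exact(w).filter_map(rs_statistic).collect();
            if ratios.is_empty() {
                None
            } else {
                let avg = ratios.iter().sum::<f64>() / ratios.len() as f64;
                Some(((w as f64).ln(), avg.ln()))
            }
        })
        .collect();
    // Ordinary least-squares slope over the (log n, log R/S) points.
    let n = points.len() as f64;
    let mx = points.iter().map(|p| p.0).sum::<f64>() / n;
    let my = points.iter().map(|p| p.1).sum::<f64>() / n;
    let cov: f64 = points.iter().map(|p| (p.0 - mx) * (p.1 - my)).sum();
    let var: f64 = points.iter().map(|p| (p.0 - mx).powi(2)).sum();
    cov / var
}

fn main() {
    // A deterministic toy series; real use would feed measured traffic or a
    // generated self-similar sequence into the estimator.
    let series: Vec<f64> = (0..1024).map(|i| ((i as f64) * 0.7).sin() + (i % 5) as f64).collect();
    println!("estimated H ≈ {:.3}", hurst_rs(&series, &[8, 16, 32, 64, 128]));
}
```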

    Retrofitting Typestates into Rust

    As software becomes more prevalent in our lives, bugs are able to cause significant disruption. Thus, preventing them becomes a priority when trying to develop dependable systems. While reducing the possibility of their occurrence to zero is infeasible, existing approaches are able to eliminate certain subsets of bugs. Rust is a systems programming language that addresses memory-related bugs by design, eliminating bugs like use-after-free. To achieve this, Rust leverages the type system along with information about object lifetimes, allowing the compiler to keep track of objects throughout the program and to check for memory misuse. While preventing memory-related bugs goes a long way in software security, other categories of bugs remain in Rust. One of these is Application Programming Interface (API) misuse, where the developer does not respect constraints put in place by an API, resulting in the program crashing. Typestates elevate state to the type level, allowing API constraints to be enforced at compile time, relieving the developer of the burden of keeping track of the possible computation states at runtime, and preventing possible API misuse during development. While Rust does not support typestates by design, its type system is powerful enough to express and validate typestates. I propose a new macro-based approach to deal with typestates in Rust; this approach provides an embedded Domain-Specific Language (DSL) which allows developers to express typestates using only existing Rust syntax. Furthermore, Rust's macro system is leveraged to extract a state machine out of the typestate specification and then perform compile-time checks over the specification. Afterwards, we leverage Rust's type system to check protocol compliance. The DSL avoids workflow bloat by requiring nothing but a Rust compiler and the library itself.
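    As background for the approach, the sketch below shows a hand-written typestate encoding in plain Rust, without the proposed macro DSL (whose syntax is not reproduced here): states are zero-sized types, the protocol object is generic over its state, and operations are only defined for the states that permit them, so misuse is rejected at compile time. The macro-based DSL described above would generate and check this kind of encoding from a higher-level specification.

```rust
// Minimal hand-rolled typestate example in plain Rust (not the dissertation's
// macro DSL): a toy file-like handle that must be opened before reading and
// cannot be read after closing. The state lives at the type level.

use std::marker::PhantomData;

// Zero-sized types representing the states of the protocol.
struct Closed;
struct Open;

// The handle is generic over its current state.
struct Handle<State> {
    name: String,
    _state: PhantomData<State>,
}

impl Handle<Closed> {
    fn new(name: &str) -> Self {
        Handle { name: name.to_string(), _state: PhantomData }
    }
    // Opening consumes the Closed handle and returns an Open one.
    fn open(self) -> Handle<Open> {
        Handle { name: self.name, _state: PhantomData }
    }
}

impl Handle<Open> {
    // `read` only exists in the Open state.
    fn read(&self) -> String {
        format!("contents of {}", self.name)
    }
    // Closing consumes the Open handle, returning to Closed.
    fn close(self) -> Handle<Closed> {
        Handle { name: self.name, _state: PhantomData }
    }
}

fn main() {
    let file = Handle::new("data.txt").open();
    println!("{}", file.read());
    let file = file.close();
    // file.read(); // would not compile: `read` is not defined for Handle<Closed>
    let _ = file;
}
```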

    Negotiating motherhood: practices and discourses

    Processes of transition to motherhood have received a great deal of attention, resulting in a substantial body of research and literature. Overall, and considering the different directions and motivations of these studies, the resulting body of research points out the complex and diverse character of this personal experience, whether focused on a more quantitative approach intended to isolate the variables influencing psychosocial adjustment to this transition (Glade, Bean & Vira, 2005), or oriented towards a qualitative exploration of the individual experience of these women (see Nelson, 2003, for a review). Despite the knowledge that the transition to motherhood constitutes a highly challenging task with several emotional, affective and social nuances, the cultural view of this life event seems to keep emphasizing the element of self-fulfilment of the feminine nature that motherhood also carries. Several authors have highlighted the fact that motherhood, more than a mere biological event, constitutes a social phenomenon, loaded with inherited cultural and ideological images and lay theories that influence the experiences of any new mother (Johnston & Swanson, 2006; Letherby, 1994; Sévon, 2005; Woollett, 1991). In the realm of social discourses, a traditional, idealized view of motherhood as a source of significant personal fulfilment and enjoyment of intense positive emotions seemingly prevails (Leal, 2005; Solé & Parella, 2004). This narrow vision of motherhood also carries a set of beliefs and stereotypes about what is socially and culturally accepted, in contemporary Western societies, as an adequate practice of “mothering”, largely sustained by the myth of motherhood as a universal need and “natural” choice of women and by the expectation of full-time mothering (Johnston & Swanson, 2006; Fursman, 2002; Solé & Parella, 2004; Oakley, 1984). In other words, it is expected that all women long for motherhood and that they become almost exclusively devoted to their children, being present to love, educate, stimulate and care for them (Fursman, 2002). The word “motherhood”, understood as a discursive construct with deep socio-cultural roots, thus also involves a set of behavioural and attitudinal prescriptions associated with what is understood as a “good” mother, which, by opposition, exclude other behaviours and attitudes that become connected with a “bad” mother (Solé & Parella, 2004). These social and cultural discourses around the notion of intensive motherhood, presented as the major priority in women's lives, are largely based on the invention of “good” motherhood, which has strong implications for the way women live this event and reassess their life projects, limiting the possibilities of their identities and discursive practices (Breheny & Stephens, 2007).

    Why Should Central Banks Avoid the Use of the Underlying Inflation Indicator?

    This paper assesses the usefulness of the commonly used underlying inflation indicator, in light of the criteria proposed in Marques et al. (2000). Empirical evidence for a group of six countries strongly suggests that the use of underlying inflation as an indicator of trend inflation should be avoided.

    The Common national curriculum base and the challenges found in the initial training of teachers

    The present study aims to analyze how the literature perceives the National Curricular Common Base (BNCC) and how fundamental its treatment is in the initial training of teachers, since it is noticeable that many professionals are unprepared to use this document. For this purpose, a methodology of literature review and document analysis was used, based on documents, articles and books that address the BNCC and teacher training throughout Brazil. The literature suggests that these challenges result from the poor training of teachers, which should be changed, given that teachers must acquire at least a basic knowledge of the document during their training.