
    Classification of Vegetation in Aerial Imagery via Neural Network

    This thesis seeks the neural network best suited to identifying vegetation in aerial imagery. The goal is to quickly classify items in an image as highly likely to be vegetation (trees, grass, bushes, and shrubs), then interpolate those classifications to mark whole sections of the image as vegetation. The main motivation for this work came from our town's water-conservation efforts: an AI that can reliably recognize plants lets us better monitor their impact on our water resources.
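    The classification step described in this abstract can be pictured with a toy pixel-wise classifier. The sketch below is purely illustrative and assumes nothing about the thesis's actual network, data, or labels: it trains a single logistic neuron on synthetic RGB values where "vegetation" pixels are green-dominant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: RGB pixels in [0, 1].
# "Vegetation" pixels are green-dominant; "other" pixels are not.
n = 500
veg = rng.uniform([0.0, 0.5, 0.0], [0.4, 1.0, 0.4], size=(n, 3))
other = rng.uniform([0.4, 0.0, 0.3], [1.0, 0.5, 1.0], size=(n, 3))
X = np.vstack([veg, other])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-neuron classifier (logistic regression) trained by gradient descent.
w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

def is_vegetation(pixel):
    """Return True if the pixel is classified as vegetation."""
    return sigmoid(np.asarray(pixel) @ w + b) > 0.5

print(is_vegetation([0.1, 0.8, 0.1]))  # green-dominant pixel
print(is_vegetation([0.7, 0.2, 0.6]))  # magenta-ish pixel
```

    A real system would classify patches with a convolutional network rather than lone pixels, but the train/predict loop has this same shape.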

    A Questionnaire for Incoming High School ELL Students to Better Assist Them in Entering the American Educational System

    This project is designed to help teachers gain a better understanding of incoming ELL students' backgrounds in order to assist these students in the education process and smooth their transition from their native educational system to the American one. Teachers must be aware of ELL students' family situations, lives outside school, and diverse background knowledge; understand how these factors affect reading and writing comprehension; and be able to choose the most appropriate assessment and instruction.

    Human-Interpretable Explanations for Black-Box Machine Learning Models: An Application to Fraud Detection

    Machine learning (ML) is increasingly used to aid humans in making high-stakes decisions across a wide range of areas, from public policy to criminal justice, education, healthcare, and financial services. However, it is very hard for humans to grasp the rationale behind an ML model's predictions, which hinders trust in the system. The field of Explainable Artificial Intelligence (XAI) emerged to tackle this problem, aiming to research and develop methods that make these "black boxes" more interpretable, but there has been no major breakthrough yet. Additionally, the most popular explanation methods, LIME and SHAP, produce very low-level feature-attribution explanations of limited usefulness to personas without ML knowledge. This work was developed at Feedzai, a fintech company that uses ML to prevent financial crime. One of Feedzai's main products is a case-management application used by fraud analysts to review suspicious financial transactions flagged by the ML models. Fraud analysts are domain experts trained to look for suspicious evidence in transactions, but they lack ML knowledge, and consequently current XAI methods do not suit their information needs. To address this, we present JOEL, a neural-network-based framework that jointly learns a decision-making task and associated domain-knowledge explanations. JOEL is tailored to human-in-the-loop domain experts who lack deep technical ML knowledge, providing high-level insights about the model's predictions that closely resemble the experts' own reasoning. Moreover, by collecting domain feedback from a pool of certified experts (human teaching), we promote seamless, higher-quality explanations. Lastly, we resort to semantic mappings between legacy expert systems and domain taxonomies to automatically annotate a bootstrap training set, overcoming the absence of concept-based human annotations. We validate JOEL empirically on a real-world fraud-detection dataset at Feedzai, showing that JOEL can generalize the explanations learned from the bootstrap dataset and that human teaching further improves the quality of the predicted explanations.
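    The joint-learning idea in this abstract (one network predicting both the decision and a concept-level explanation) can be illustrated with a toy multi-task model. Everything below is invented for illustration, not JOEL's actual architecture or data: a shared hidden layer feeds two heads, one for the fraud decision and one for a domain concept, and both losses backpropagate into the shared representation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 4 features per transaction. The "concept" label
# (e.g. a hypothetical "suspicious amount" flag) and the final
# decision both depend on overlapping features.
n, d, h = 400, 4, 8
X = rng.normal(size=(n, d))
concept = (X[:, 0] + X[:, 1] > 0).astype(float)
decision = (X[:, 0] + X[:, 1] + 0.5 * X[:, 2] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.1, size=(d, h))  # shared layer
w_dec = np.zeros(h)                      # decision head
w_con = np.zeros(h)                      # concept/explanation head
lr = 0.5
for _ in range(3000):
    H = np.tanh(X @ W1)                  # shared representation
    g_dec = (sigmoid(H @ w_dec) - decision) / n
    g_con = (sigmoid(H @ w_con) - concept) / n
    # Joint loss: both heads' gradients flow into the shared layer.
    dH = np.outer(g_dec, w_dec) + np.outer(g_con, w_con)
    W1 -= lr * X.T @ (dH * (1 - H**2))
    w_dec -= lr * H.T @ g_dec
    w_con -= lr * H.T @ g_con

H = np.tanh(X @ W1)
acc_dec = np.mean((sigmoid(H @ w_dec) > 0.5) == decision)
acc_con = np.mean((sigmoid(H @ w_con) > 0.5) == concept)
print(f"decision acc: {acc_dec:.2f}, concept acc: {acc_con:.2f}")
```

    The concept head's output is what a fraud analyst would see: a prediction phrased in domain terms rather than raw feature attributions.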

    Type E hepatitis: State of the Art

    Hepatitis E (HE) occurs predominantly in tropical and semitropical countries in the form of sporadic cases or epidemics of variable magnitude. In industrialized countries, only imported sporadic cases of HE have been reported, with little evidence of human-to-human transmission. HE resembles hepatitis A clinically and epidemiologically but affects young adults rather than children and shows a higher mortality rate in pregnant women. The HE virus (HEV) shares many characteristics of the caliciviruses, although it is genomically distinct from this family of viruses. New diagnostic tests have been developed, based on recombinant or synthetic antigens that are analogues of HEV structural proteins. These have been applied to determine the prevalence of anti-HEV antibodies in various epidemic and nonepidemic settings. The prevalence of anti-HEV antibodies was unexpectedly low, even in endemic areas. A low but constant rate of seropositivity was observed among normal individuals permanently living in nonendemic countries of Europe and North America, while an elevated rate of anti-HEV was found in certain patient and risk groups. This situation, as well as other unresolved problems such as the possible involvement of nonhuman reservoirs, the existence of subclinical forms, and potential prevention strategies, needs further investigation.

    Anomalies in cosmic rays: New particles versus charm?

    Two anomalies have long been observed in cosmic rays at energies E ≈ 100 TeV: (1) the generation of long-flying cascades in the hadron calorimeter (the so-called Tien Shan effect); and (2) an enhancement of the direct muon yield compared with the accelerator energy region. The aim is to discuss the possibility that both anomalies have a common origin in the production and decays of the same particles. The main conclusions are the following: (1) direct muons cannot be generated by any new particles with mass exceeding 10–20 GeV; and (2) if both effects originate from charmed hadrons, then the required charm hadroproduction cross section is unexpectedly large compared with quark-gluon model predictions.