The current legislative and administrative issues of the value added tax system in Armenia
Armen Alaverdyan; Vahe Balaya
Classification of Vegetation in Aerial Imagery via Neural Network
This thesis focuses on finding the neural network best suited to identifying vegetation in aerial imagery. The goal is to quickly classify items in an image as highly likely to be vegetation (trees, grass, bushes, and shrubs), and then interpolate that data to mark sections of the image as vegetation. This has practical applications as well: the main motivation for this work came from the effort our town puts into conserving water. By creating an AI that can easily recognize plants, we can better monitor the impact they make on our water resources.
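As a point of reference for what "classifying a pixel as vegetation" means, here is a minimal sketch of a classical baseline, the Excess Green (ExG) index, which flags strongly green pixels in an RGB image. This is an illustrative rule-based baseline of the kind a trained neural network would be expected to outperform, not the model developed in the thesis; the threshold value is an assumption.

```python
# Illustrative baseline (not the thesis's model): Excess Green (ExG)
# index thresholding for flagging likely-vegetation pixels in RGB
# aerial imagery.

def excess_green(r, g, b):
    """Excess Green index 2g - r - b, computed on chromatic coordinates."""
    total = r + g + b
    if total == 0:
        return 0.0
    # Normalize to chromatic coordinates so brightness cancels out.
    r, g, b = r / total, g / total, b / total
    return 2 * g - r - b

def is_vegetation(pixel, threshold=0.1):
    """Classify one (R, G, B) pixel with 0-255 channels as vegetation.

    The 0.1 threshold is an illustrative assumption, not a tuned value.
    """
    r, g, b = (c / 255.0 for c in pixel)
    return excess_green(r, g, b) > threshold

grass = (60, 140, 50)    # strongly green pixel -> vegetation
asphalt = (120, 120, 125)  # grey pixel -> not vegetation
```

A per-pixel rule like this produces the kind of noisy mask that the thesis's interpolation step would then smooth into contiguous vegetation regions.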
A Questionnaire for Incoming High School ELL Students to Better Assist Them in Entering the American Educational System
This project is designed to help teachers gain a better understanding of incoming ELL students' backgrounds, in order to better assist these students in the education process and make the transition from their native educational system to the American educational system smoother. Teachers must be aware of ELL students' family situations, lives outside school, and diverse background knowledge; understand how these factors affect reading and writing comprehension; and be able to choose the most appropriate assessment and instruction.
Human-Interpretable Explanations for Black-Box Machine Learning Models: An Application to Fraud Detection
Machine Learning (ML) has been increasingly used to aid humans in making high-stakes decisions in a wide range of areas, from public policy to criminal justice, education, healthcare, and financial services. However, it is very hard for humans to grasp the rationale behind every ML model's prediction, which hinders trust in the system. The field of Explainable Artificial Intelligence (XAI) emerged to tackle this problem, aiming to research and develop methods to make those "black boxes" more interpretable, but there is still no major breakthrough. Additionally, the most popular explanation methods, LIME and SHAP, produce very low-level feature-attribution explanations, which are of limited usefulness to personas without any ML knowledge.
This work was developed at Feedzai, a fintech company that uses ML to prevent financial crime. One of Feedzai's main products is a case-management application used by fraud analysts to review suspicious financial transactions flagged by the ML models. Fraud analysts are domain experts trained to look for suspicious evidence in transactions, but they do not have ML knowledge; consequently, current XAI methods do not suit their information needs. To address this, we present JOEL, a neural-network-based framework to jointly learn a decision-making task and associated domain-knowledge explanations. JOEL is tailored to human-in-the-loop domain experts who lack deep technical ML knowledge, providing high-level insights about the model's predictions that closely resemble the experts' own reasoning. Moreover, by collecting domain feedback from a pool of certified experts (human teaching), we promote seamless, better-quality explanations. Lastly, we resort to semantic mappings between legacy expert systems and domain taxonomies to automatically annotate a bootstrap training set, overcoming the absence of concept-based human annotations. We validate JOEL empirically on a real-world fraud detection dataset at Feedzai. We show that JOEL can generalize the explanations from the bootstrap dataset. Furthermore, the results obtained indicate that human teaching is able to further improve the prediction quality of the explanations.
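The core idea of jointly learning a decision task and its explanations can be sketched as a multi-task objective: a shared model produces both a fraud-decision probability and probabilities for high-level explanation concepts, and training minimizes the sum of both losses. The sketch below is a hedged illustration of that objective only; the function names, the per-concept averaging, and the `alpha` weight are assumptions, not Feedzai's or JOEL's actual implementation.

```python
# Illustrative multi-task objective in the spirit of JOEL (names and
# weighting are assumptions, not the published implementation): one
# loss term for the fraud decision, one for the explanation concepts.
import math

def binary_cross_entropy(p, y):
    """Standard BCE loss for predicted probability p and label y in {0, 1}."""
    eps = 1e-9  # guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def joint_loss(decision_prob, decision_label,
               concept_probs, concept_labels, alpha=1.0):
    """Decision loss plus an alpha-weighted mean loss over explanation concepts."""
    decision_term = binary_cross_entropy(decision_prob, decision_label)
    concept_term = sum(
        binary_cross_entropy(p, y)
        for p, y in zip(concept_probs, concept_labels)
    ) / len(concept_probs)
    return decision_term + alpha * concept_term
```

Minimizing a combined objective like this pushes the shared representation to encode concepts that domain experts recognize, which is what lets the model emit high-level explanations alongside each decision.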
Type E hepatitis: State of the Art
Hepatitis E (HE) occurs predominantly in tropical and semitropical countries in the form of sporadic cases or epidemics of variable magnitude. In industrialized countries, only imported sporadic cases of HE have been reported, with little evidence of human-to-human transmission. HE resembles hepatitis A clinically and epidemiologically but affects young adults rather than children, showing a higher mortality rate in pregnant women. The HE virus (HEV) shares many characteristics with the caliciviruses, although it is genomically distinct from this family of viruses. New diagnostic tests have been developed, based on the use of recombinant or synthetic antigens that are analogues of HEV structural proteins. These have been applied to determine the prevalence of antibodies (anti-HEV) in various epidemic and nonepidemic settings. The prevalence of anti-HEV antibodies was unexpectedly low even in endemic areas. A low but constant rate of seropositivity was observed among normal individuals permanently living in nonendemic countries of Europe and North America, while an elevated rate of anti-HEV was found in certain groups of patients and risk groups. This situation, as well as other unresolved problems such as the possible involvement of nonhuman reservoirs, the existence of subclinical forms, and potential prevention strategies, needs further investigation.
Anomalies in cosmic rays: New particles versus charm?
Two anomalies have long been observed in cosmic rays at energies E ≈ 100 TeV: (1) the generation of long-flying cascades in the hadron calorimeter (the so-called Tien-Shan effect); and (2) the enhancement of the direct muon yield compared with the accelerator energy region. The aim is to discuss the possibility that both anomalies have a common origin in the production and decays of the same particles. The main conclusions are the following: (1) direct muons cannot be generated by any new particles with mass exceeding 10–20 GeV; and (2) if both effects originate from charmed hadrons, then the required charm hadroproduction cross section is unexpectedly large compared with quark-gluon model predictions.