779 research outputs found

    A Compliance Checking Framework for DNN Models

    © 2019 International Joint Conferences on Artificial Intelligence. All rights reserved. Growing awareness of the ethical use of machine learning (ML) models has created a surge in the development of fair models. Existing work in this regard assumes the presence of sensitive attributes in the data and hence can build classifiers whose decisions remain agnostic to such attributes. However, in real-world settings the end-user of the ML model is unaware of the training data; besides, building custom models is not always feasible. Moreover, a pre-trained model with high accuracy on a certain dataset cannot be assumed to be fair. Unknown biases in the training data are the true culprit behind unfair models (i.e., disparate performance for groups in the dataset). In this preliminary research, we propose a different lens for building fair models by giving the user tools to discover blind spots and biases in a pre-trained model and to augment it with corrective measures.
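One concrete way to surface the "disparate performance for groups" this abstract describes is to audit a pre-trained model's predictions per group and report the largest per-group accuracy gap. A minimal sketch, assuming predictions are already available; the function and data are illustrative, not the authors' actual tooling:

```python
def group_performance_gap(y_true, y_pred, groups):
    """Per-group accuracy of a pre-trained model, plus the largest gap.

    A large gap signals disparate performance, i.e. a potential blind
    spot or bias in the model (all names here are illustrative).
    """
    accs = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        accs[g] = correct / len(idx)
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Toy audit: group "b" is systematically mispredicted.
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
accs, gap = group_performance_gap(y_true, y_pred, groups)
```

Because the audit needs only predictions and group labels, not the training data, it fits the paper's setting of an end-user inspecting a black-box pre-trained model.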

    Learning-based Analysis on the Exploitability of Security Vulnerabilities

    The purpose of this thesis is to develop a tool that uses machine learning techniques to predict whether or not a given vulnerability will be exploited. Such a tool could help organizations such as electric utilities prioritize their security patching operations. Three different models, based on a deep neural network, a random forest, and a support vector machine respectively, are designed and implemented. Training data for these models is compiled from a variety of sources, including the National Vulnerability Database published by NIST and the Exploit Database published by Offensive Security. Extensive experiments are conducted, including testing the accuracy of each model, dynamically training the models on a rolling window of training data, and filtering the training data by various features. Of the chosen models, the deep neural network and the support vector machine show the highest accuracy (approximately 94% and 93%, respectively) and could be developed by future researchers into an effective tool for vulnerability analysis.
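The "rolling window of training data" mentioned above means retraining on a sliding slice of time-ordered vulnerability records and testing on the records that immediately follow. A minimal sketch of that split logic, independent of any particular classifier; the parameter names and toy data are illustrative, not the thesis's exact setup:

```python
def rolling_windows(records, window, step=1):
    """Yield (train, test) splits over time-ordered vulnerability records.

    Each split trains on `window` consecutive records and tests on the
    next `step` records, then slides forward, so every model is always
    evaluated on vulnerabilities newer than anything it was trained on.
    """
    for start in range(0, len(records) - window - step + 1, step):
        train = records[start:start + window]
        test = records[start + window:start + window + step]
        yield train, test

# Stand-in for ten time-ordered CVE feature rows.
cves = list(range(10))
splits = list(rolling_windows(cves, window=4, step=2))
```

Any of the three model families (DNN, random forest, SVM) could then be fitted inside the loop, one fit per window.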

    Using Write Buffers in Systolic Array Architectures to Mitigate the Number of Memory Access Produced by Row Stationary Dataflows

    New applications of deep neural networks are being designed, such as fraud detection, short-term weather precipitation forecasts, and cancer prognosis prediction. Nonetheless, their respective models are getting more complex, with an increasing number of layers. These models require millions of computations for which conventional CPU and GPU architectures take a significant amount of computational time. The data distribution of these models is well known; they mostly consist of dot-product operations between inputs and filters. Applications such as self-driving cars require fast response times and accurate predictions. Current research introduces accelerator architectures based on 2D systolic arrays, as they provide high efficiency in performing multiplication and accumulation operations. Computational and power costs define performance, and memory accesses contribute the highest cost in current architecture models. In order to enhance the performance of DNN accelerators, parallelism is extracted by breaking convolution into partial computations, at the expense of segmenting output memory accesses. This thesis explores the implementation of an accumulator microarchitecture component based on column-pipelined adder trees, with the purpose of collecting and aggregating computed output values by destination address. The results of this work showed a 3.3x and a 2.15x speedup for the Tiny-YOLO and AlexNet CNNs using a 32x64 systolic array. Through the reduction of computed values, developers will be able to explore novel data mappings to extract parallelism based on data locality.
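The core idea of the accumulator component, collecting partial convolution outputs and merging the ones destined for the same output address before write-back, can be modeled functionally in a few lines. A behavioral sketch only (the hardware is a pipelined adder tree, not a dictionary; the data is illustrative):

```python
from collections import defaultdict

def accumulate_partials(partials):
    """Aggregate partial convolution outputs by destination address.

    Each partial is an (addr, value) pair. Summing all partials that
    share an address before write-back models the accumulator's role:
    many per-partial memory accesses collapse into one access per
    output address, which is where the memory-access savings come from.
    """
    out = defaultdict(float)
    for addr, value in partials:
        out[addr] += value
    return dict(out)

# Four partial sums collapse into two memory writes.
partials = [(0, 1.0), (1, 2.0), (0, 3.0), (1, 4.0)]
result = accumulate_partials(partials)
```

In hardware, the same reduction is performed spatially: each column of the adder tree sums the partials flowing through it, so the aggregation costs pipeline stages rather than repeated memory round-trips.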

    Deep neural networks for image quality: a comparison study for identification photos

    Many online platforms allow their users to upload images to their account profile. Since a user is free to upload any image of their liking to a university or job platform, profile images have appeared that were not very professional or adequate in those contexts. Another problem associated with submitting a profile image is that even if there is some kind of control over each submitted image, this control is performed manually by someone, and that process alone can be very tedious and time-consuming, especially when there is a large influx of new users joining those platforms. Based on the international compliance standards used to validate photographs for machine-readable travel documents, there are SDKs that already perform automatic classification of the quality of those photographs; however, that classification is based on traditional computer vision algorithms. With the growing popularity and powerful performance of deep neural networks, it is interesting to examine how these would perform in this task. This dissertation proposes a deep neural network model to classify the quality of profile images, and a comparison of this model against traditional computer vision algorithms with respect to the complexity of the implementation, the quality of the classifications, and the computation time associated with the classification process. To the best of our knowledge, this dissertation is the first to study the use of deep neural networks for image quality classification.
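A representative traditional computer-vision quality check of the kind the dissertation compares against is a sharpness test: the variance of a Laplacian filter over the image, where blurry photos score low and can be rejected by a threshold. A self-contained sketch on a grayscale image given as nested lists; the choice of metric and the toy images are assumptions for illustration, since the SDKs' actual algorithms are not specified here:

```python
def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian over a grayscale image.

    A classic sharpness proxy from traditional computer vision: flat or
    blurry images have little high-frequency detail and score near zero,
    while sharp images score high.
    """
    h, w = len(img), len(img[0])
    laps = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            laps.append(4 * img[i][j]
                        - img[i - 1][j] - img[i + 1][j]
                        - img[i][j - 1] - img[i][j + 1])
    mean = sum(laps) / len(laps)
    return sum((v - mean) ** 2 for v in laps) / len(laps)

flat = [[128] * 4 for _ in range(4)]  # uniform image: no detail at all
checker = [[255 * ((i + j) % 2) for j in range(4)] for i in range(4)]
```

A deep-network classifier replaces this hand-picked metric (and the portrait-specific ones like pose or illumination checks) with features learned end-to-end, which is precisely the trade-off the dissertation evaluates.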

    Credit scoring models: Evolution from standard statistical methods to machine learning techniques

    This thesis investigates how credit scoring models have evolved over time from standard statistical techniques to advanced machine learning models, and what advancements and challenges the evolution of these models has led to.
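The "standard statistical technique" that credit scoring evolved from is logistic regression: a weighted sum of applicant features passed through a sigmoid to give a probability of default. A minimal sketch with hand-picked, illustrative weights (a real scorecard would fit them to historical repayment data):

```python
import math

def default_probability(features, weights, intercept):
    """Logistic-regression credit score: estimated probability of default.

    Logistic regression is the classic statistical baseline that later
    machine-learning credit models (random forests, gradient boosting,
    neural networks) are benchmarked against. The weights below are
    illustrative assumptions, not fitted coefficients.
    """
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Applicant features: (debt-to-income ratio, number of past delinquencies)
pd_good = default_probability([0.1, 0], [2.0, 1.5], -3.0)
pd_risky = default_probability([0.8, 3], [2.0, 1.5], -3.0)
```

The appeal of this baseline, and a recurring theme in the evolution the thesis traces, is interpretability: each coefficient has a direct meaning, whereas more accurate machine-learning models typically sacrifice that transparency.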

    Artificial intelligence in construction asset management: a review of present status, challenges and future opportunities

    The built environment is responsible for roughly 40% of global greenhouse emissions, making the sector a crucial factor in climate change and sustainability. Meanwhile, other sectors (like manufacturing) have adopted Artificial Intelligence (AI) to solve complex, non-linear problems and reduce waste, inefficiency, and pollution. Therefore, many research efforts in the Architecture, Engineering, and Construction community have recently tried introducing AI into building asset management (AM) processes. Since AM encompasses a broad set of disciplines, an overview of the various AI applications, current research gaps, and trends is needed. In this context, this study conducted the first state-of-the-art review of AI for building asset management. A total of 578 papers were analyzed with bibliometric tools to identify prominent institutions, topics, and journals. The quantitative analysis helped determine the most researched areas of AM and which AI techniques are applied to them. These areas were further investigated through an in-depth reading of the 83 most relevant studies, selected by screening the abstracts of the articles identified in the bibliometric analysis. The results reveal many applications in the Energy Management, Condition Assessment, Risk Management, and Project Management areas. Finally, the literature review identified three main trends that can be a reference point for future studies by practitioners or researchers: Digital Twin, Generative Adversarial Networks (with synthetic images) for data augmentation, and Deep Reinforcement Learning.