
    Virtual Reality Environment for Studying Change Blindness

    Change blindness is a counterintuitive phenomenon describing the inability to detect changes in the surrounding world. Change blindness has been studied with various methods, yet so far only a few studies have used virtual reality, even though experiments conducted in virtual reality have several advantages over conventional methods. The aim of this thesis is to develop a toolbox that makes it easy to prepare, reproduce and carry out change blindness experiments in virtual reality. The toolbox enables designing experiment levels by adding new objects and modifying their position and appearance. In addition, personalised experiments can be created and executed.

    Calibration of Convolutional Neural Networks

    Deep neural networks have become increasingly popular and are nowadays used in many practical applications. However, predicting the class label alone may no longer be enough, since in some domains it is also important to know how confident the model is in its output. It has recently been shown that deep neural network predictions are not well calibrated, in contrast to shallower networks; for example, deep networks tend to be over-confident. In 2017, Guo et al. published the temperature scaling method (Guo et al., 2017) and compared it to other existing confidence calibration methods. Later that year, Kull et al. published the beta calibration method (Kull et al., 2017), but it was not tested on neural networks. This thesis evaluates beta calibration on convolutional neural networks and, in order to compare the results with other calibration methods, replicates some of the results of Guo et al. It compares histogram binning, isotonic regression and temperature scaling from Guo et al. and beta calibration from Kull et al. on various state-of-the-art convolutional neural networks. In addition to the loss measures used by Guo et al., the Brier score was added to the comparison. The results were consistent with those of Guo et al.: beta calibration was slightly worse than temperature scaling for most models, but in terms of error rate it was slightly better than temperature scaling.
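    Temperature scaling, the simplest of the compared methods, divides the logits by a single scalar T fitted on held-out data. A minimal numpy sketch (all names and the toy data below are illustrative; Guo et al. fit T with LBFGS on the validation NLL, a grid search is enough for a sketch):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the labels under temperature-scaled softmax.
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels):
    # Grid-search the scalar temperature that minimises validation NLL.
    grid = np.linspace(0.5, 5.0, 200)
    return grid[int(np.argmin([nll(logits, labels, T) for T in grid]))]

# Toy over-confident model: very sharp logits, but a fraction of them point
# at the wrong class, so confidence (~0.96) exceeds accuracy (~0.83).
rng = np.random.default_rng(0)
true = rng.integers(0, 3, size=500)
noisy = np.where(rng.random(500) < 0.75, true, rng.integers(0, 3, size=500))
logits = rng.normal(size=(500, 3)) + 4.0 * np.eye(3)[noisy]
T = fit_temperature(logits, true)
print(T)  # T > 1: scaling down the logits softens the predictions
```

    Because T is a single positive scalar, the scaling never changes the predicted class, only the confidence attached to it.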

    Calibrated Perception Uncertainty Across Objects and Regions in Bird's-Eye-View

    In driving scenarios with poor visibility or occlusions, it is important that the autonomous vehicle take all uncertainties into account when making driving decisions, including the choice of a safe speed. Grid-based perception outputs, such as occupancy grids, and object-based outputs, such as lists of detected objects, must then be accompanied by well-calibrated uncertainty estimates. We highlight limitations in the state of the art and propose a more complete set of uncertainties to be reported, in particular including undetected-object-ahead probabilities. We suggest a novel way to obtain these probabilistic outputs from bird's-eye-view probabilistic semantic segmentation, using the FIERY model as an example. We demonstrate that the obtained probabilities are not calibrated out of the box and propose methods to achieve well-calibrated uncertainties.

    Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration

    Class probabilities predicted by most multiclass classifiers are uncalibrated, often tending towards over-confidence. With neural networks, calibration can be improved by temperature scaling, a method to learn a single corrective multiplicative factor for inputs to the last softmax layer. On non-neural models the existing methods apply binary calibration in a pairwise or one-vs-rest fashion. We propose a natively multiclass calibration method applicable to classifiers from any model class, derived from Dirichlet distributions and generalising the beta calibration method from binary classification. It is easily implemented with neural nets since it is equivalent to log-transforming the uncalibrated probabilities, followed by one linear layer and softmax. Experiments demonstrate improved probabilistic predictions according to multiple measures (confidence-ECE, classwise-ECE, log-loss, Brier score) across a wide range of datasets and classifiers. Parameters of the learned Dirichlet calibration map provide insights into the biases of the uncalibrated model. Comment: Accepted for presentation at NeurIPS 2019.
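    The log-transform-plus-linear-layer structure of the Dirichlet calibration map can be sketched in a few lines of numpy. The hand-set parameters W and b below are illustrative; in practice they are learned by minimising log-loss on a validation set.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def dirichlet_calibrate(probs, W, b, eps=1e-12):
    # Log-transform the uncalibrated probabilities, apply one linear
    # layer (W, b), then softmax -- the Dirichlet calibration map family.
    return softmax(np.log(probs + eps) @ W.T + b)

# With W = I and b = 0 the map is the identity on the probability simplex,
# since softmax(log p) recovers p for any distribution p.
p = np.array([[0.7, 0.2, 0.1]])
W, b = np.eye(3), np.zeros(3)
print(dirichlet_calibrate(p, W, b))  # ≈ [[0.7, 0.2, 0.1]]
```

    Choosing W = (1/T) I and b = 0 recovers a temperature-scaling-like map on the probabilities, which is one way to see that this family generalises simpler calibration methods.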

    On the Usefulness of the Fit-on-the-Test View on Evaluating Calibration of Classifiers

    Every uncalibrated classifier has a corresponding true calibration map that calibrates its confidence. Deviations of this idealistic map from the identity map reveal miscalibration. Such calibration errors can be reduced with many post-hoc calibration methods which fit some family of calibration maps on a validation dataset. In contrast, evaluation of calibration with the expected calibration error (ECE) on the test set does not explicitly involve fitting. However, as we demonstrate, ECE can still be viewed as if fitting a family of functions on the test data. This motivates the fit-on-the-test view on evaluation: first, approximate a calibration map on the test data, and second, quantify its distance from the identity. Exploiting this view allows us to unlock missed opportunities: (1) use the plethora of post-hoc calibration methods for evaluating calibration; (2) tune the number of bins in ECE with cross-validation. Furthermore, we introduce: (3) benchmarking on pseudo-real data where the true calibration map can be estimated very precisely; and (4) novel calibration and evaluation methods using new calibration map families PL and PL3.
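    A minimal sketch of confidence-ECE with equal-width bins makes the fit-on-the-test view concrete: the per-bin accuracy is exactly the value of a step-function calibration map "fitted" on the test data, and ECE is its weighted distance from the identity. Names and toy data are illustrative.

```python
import numpy as np

def ece(confidences, correct, n_bins=15):
    # Equal-width binning of confidences; each bin's accuracy is the
    # fitted step-function calibration map, and ECE is the size-weighted
    # gap between that map and the identity (bin mean confidence).
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            total += mask.sum() / n * gap
    return total

# Two bins, each holding half the samples with a 0.1 accuracy/confidence
# gap, so ECE = 0.5 * 0.1 + 0.5 * 0.1.
conf = np.array([0.9, 0.9, 0.6, 0.6])
corr = np.array([1.0, 1.0, 1.0, 0.0])
print(ece(conf, corr, n_bins=10))  # ≈ 0.1
```

    The choice of n_bins changes the fitted step function and hence the ECE estimate, which is why the abstract proposes tuning it with cross-validation rather than fixing it by convention.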