
    Dataset bias exposed in face verification

    This is the peer-reviewed version of the following article: López‐López, E., Pardo, X.M., Regueiro, C.V., Iglesias, R. and Casado, F.E. (2019), Dataset bias exposed in face verification. IET Biom., 8: 249-258, which has been published in final form at https://doi.org/10.1049/iet-bmt.2018.5224. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.

    Most facial verification methods assume that training and testing sets contain independent and identically distributed samples, although in many real applications this assumption does not hold. Whenever gathering a representative dataset in the target domain is unfeasible, one of the already available (source-domain) datasets must be chosen instead. Here, a study was performed of the differences among six public datasets and of how these differences affect the performance of the learned methods. In the considered scenario of mobile devices, the individual of interest is enrolled using a few facial images taken in the operational domain, while training impostors are drawn from one of the publicly available datasets. This work tries to shed light on the inherent differences among the datasets and on the potential harms that should be considered when they are combined for training and testing. Results indicate that a drop in performance occurs whenever training and testing are done on different datasets, compared with using the same dataset in both phases. However, the size of the drop strongly depends on the kind of features used. Besides, the representation of samples in the feature space reveals insights into the extent to which bias is an endogenous or an exogenous factor.

    This work has received financial support from the Xunta de Galicia, Consellería de Cultura, Educación e Ordenación Universitaria (Accreditation 2016–2019, EDG431G/01 and ED431G/08, and reference competitive group 2014–2017, GRC2014/030), the European Union: European Social Fund (ESF), European Regional Development Fund (ERDF) and FEDER funds (AEI/FEDER, UE), grant number TIN2017‐90135‐R. Eric López had received financial support from the Xunta de Galicia and the European Union (European Social Fund ‐ ESF).
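The enrolment-and-verification scenario the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes cosine similarity over fixed feature embeddings and a purely hypothetical acceptance threshold, neither of which is specified in the abstract.

```python
import numpy as np

def l2_normalize(x):
    """Project feature vectors onto the unit sphere (last axis holds features)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def verify(probe_embedding, enrolled_embeddings, threshold=0.5):
    """Accept a probe face if its best cosine similarity to any of the few
    enrolled images of the target individual exceeds the threshold.
    The threshold value here is illustrative, not taken from the paper."""
    sims = l2_normalize(enrolled_embeddings) @ l2_normalize(probe_embedding)
    best = float(sims.max())
    return best >= threshold, best
```

In the paper's setting, such a threshold would be tuned using impostor embeddings drawn from a public source-domain dataset; the reported cross-dataset performance drop corresponds to that tuning transferring poorly to the operational domain.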

    Economic growth and financial statement verification

    We use a proprietary data set of financial statements collected by banks to examine whether economic growth is related to the use of financial statement verification in debt financing. Exploiting the distinct economic growth and contraction patterns of the construction industry over the years 2002–2011, our estimates reveal that banks reduced their collection of unqualified audited financial statements from construction firms at nearly twice the rate of firms in other industries during the housing boom period before 2008. This reduction was most severe in the regions that experienced the most significant construction growth. These trends reversed during the subsequent housing crisis in 2008–2011, when construction activity contracted. Moreover, using bank‐ and firm‐level data, we find a strong negative (positive) relation between audited financial statements during the growth period and subsequent loan losses (construction firm survival) during the contraction period. Collectively, our results reveal that macroeconomic fluctuations produce temporal shifts in the overall level of financial statement verification, and that these shifts are related to bank loan portfolio quality and borrower performance.

    Accepted manuscript

    Bolometric technique for high-resolution broadband microwave spectroscopy of ultra-low-loss samples

    A novel low-temperature bolometric method has been devised and implemented for high-precision measurements of the microwave surface resistance of small single-crystal platelet samples having very low absorption, as a continuous function of frequency. The key to the success of this non-resonant method is the in-situ use of a normal-metal reference sample that calibrates the absolute rf field strength. The sample temperature can be controlled independently of the 1.2 K liquid-helium bath, allowing measurements of the temperature evolution of the absorption. However, the instrument's sensitivity decreases at higher temperatures, placing a limit on the useful temperature range. Using this method, the minimum detectable power at 1.3 K is 1.5 pW, corresponding to a surface resistance sensitivity of ≈1 μΩ for a typical 1 mm × 1 mm platelet sample.

    Comment: 13 pages, 12 figures, submitted to Review of Scientific Instruments

    Low-Shot Learning with Imprinted Weights

    Human vision is able to recognize novel visual categories immediately after seeing just one or a few training examples. We describe how to add a similar capability to ConvNet classifiers by directly setting the final-layer weights from novel training examples during low-shot learning. We call this process weight imprinting, as it directly sets the weights for a new category based on an appropriately scaled copy of the embedding-layer activations for that training example. The imprinting process provides a valuable complement to training with stochastic gradient descent, as it yields immediately good classification performance and an initialization for any further fine-tuning. We show how this imprinting process is related to proxy-based embeddings. However, it differs in that only a single imprinted weight vector is learned for each novel category, rather than relying on a nearest-neighbor distance to training instances as typically used with embedding methods. Our experiments show that averaging imprinted weights provides better generalization than using nearest-neighbor instance embeddings.

    Comment: CVPR 2018
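The imprinting step described above can be sketched in NumPy as follows. This is a minimal sketch under the assumption that the embedding layer is L2-normalized, as in the paper's setup; the function and variable names are illustrative, not the authors'.

```python
import numpy as np

def imprint_weights(embeddings, labels, num_classes):
    """Create one final-layer weight vector per novel class by averaging the
    normalized embeddings of its training examples and renormalizing, so each
    imprinted weight lies on the unit sphere."""
    dim = embeddings.shape[1]
    weights = np.zeros((num_classes, dim))
    for c in range(num_classes):
        class_mean = embeddings[labels == c].mean(axis=0)
        weights[c] = class_mean / np.linalg.norm(class_mean)
    return weights

def classify(embedding, weights):
    """Score a query by cosine similarity against the imprinted weight vectors."""
    scores = weights @ (embedding / np.linalg.norm(embedding))
    return int(np.argmax(scores))
```

Because each imprinted vector is a single scaled copy (or average) of embedding activations, it can serve both as an immediate classifier and as an initialization for subsequent fine-tuning with SGD, as the abstract notes.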