62 research outputs found

    Identifying the Machine Learning Family from Black-Box Models

    Full text link
    [EN] We address the novel question of determining which kind of machine learning model is behind the predictions when we interact with a black-box model. This may allow us to identify families of techniques whose models exhibit similar vulnerabilities and strengths. In our method, we first consider how an adversary can systematically query a given black-box model (oracle) to label an artificially-generated dataset. This labelled dataset is then used for training different surrogate models (each one trying to imitate the oracle's behaviour). The method has two different approaches. First, we assume that the family of the surrogate model that achieves the maximum Kappa metric against the oracle labels corresponds to the family of the oracle model. The other approach, based on machine learning, consists of learning a meta-model that is able to predict the model family of a new black-box model. We compare these two approaches experimentally, giving us insight into how explanatory and predictable our concept of family is.

    This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-17-1-0287, the EU (FEDER), the Spanish MINECO under grant TIN 2015-69175-C4-1-R, and the Generalitat Valenciana under grant PROMETEOII/2015/013. F. Martinez-Plumed was also supported by INCIBE under grant INCIBEI-2015-27345 (Ayudas para la excelencia de los equipos de investigacion avanzada en ciberseguridad). J. H-Orallo also received a Salvador de Madariaga grant (PRX17/00467) from the Spanish MECD for a research stay at the CFI, Cambridge, and a BEST grant (BEST/2017/045) from the GVA for another research stay at the CFI.

    Fabra-Boluda, R.; Ferri Ramírez, C.; Hernández-Orallo, J.; Martínez-Plumed, F.; Ramírez Quintana, M.J. (2018). Identifying the Machine Learning Family from Black-Box Models. Lecture Notes in Computer Science, 11160:55-65. https://doi.org/10.1007/978-3-030-00374-6_6
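    To make the query-and-surrogate procedure concrete, here is a minimal sketch of the first approach: label random queries with the black-box oracle, fit one surrogate per candidate family, and guess the family whose surrogate agrees best with the oracle by Cohen's kappa. It assumes scikit-learn, a uniform synthetic query distribution, and an illustrative set of candidate families; none of these specific choices are taken from the paper.

```python
# Hedged sketch (not the authors' code) of family identification via surrogates.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

def identify_family(oracle_predict, n_features, n_queries=5000, seed=0):
    """Return the candidate family whose surrogate agrees most with the oracle."""
    rng = np.random.default_rng(seed)
    # Artificially-generated query set, labelled by the black-box oracle.
    X = rng.uniform(-1.0, 1.0, size=(n_queries, n_features))
    y = oracle_predict(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)

    candidate_families = {  # illustrative families, one representative learner each
        "tree": DecisionTreeClassifier(random_state=seed),
        "knn": KNeighborsClassifier(),
        "svm": SVC(),
        "naive_bayes": GaussianNB(),
    }
    kappas = {}
    for name, surrogate in candidate_families.items():
        surrogate.fit(X_tr, y_tr)  # each surrogate tries to imitate the oracle
        kappas[name] = cohen_kappa_score(y_te, surrogate.predict(X_te))
    return max(kappas, key=kappas.get), kappas
```

    Calling, say, `identify_family(black_box.predict, n_features=10)` would return the best-agreeing family together with the full kappa profile. The paper's second, meta-learning approach would instead learn to predict the family of a new black box from behavioural features, presumably including profiles of this kind.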

    Supersymmetric mass spectra and the seesaw type-I scale

    Get PDF
    We calculate supersymmetric mass spectra with cMSSM boundary conditions and a type-I seesaw mechanism added to explain current neutrino data. Using published, estimated errors on SUSY mass observables for a combined LHC+ILC analysis, we perform a theoretical $\chi^2$ analysis to identify parameter regions where the pure cMSSM and the cMSSM plus type-I seesaw might be distinguishable with LHC+ILC data. The most important observables turn out to be the (left) smuon and selectron masses and the splitting between them. This splitting is tiny in most of the cMSSM parameter space, but can be quite sizeable for large values of the seesaw scale, $m_{SS}$. Thus, very roughly for $m_{SS} \ge 10^{14}$ GeV, hints of type-I seesaw might appear in SUSY mass measurements. Since our numerical results depend sensitively on the forecasted error bars, we discuss in some detail the accuracies that need to be achieved before a realistic analysis searching for signs of type-I seesaw in SUSY spectra can be carried out.

    Comment: 17 pages, 7 figures
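    For orientation, the theoretical $\chi^2$ described here can be sketched as a Gaussian comparison of seesaw-shifted mass observables against pure-cMSSM expectations with forecasted error bars. The sketch below is a hedged illustration, not the paper's code; the observables, central values, and uncertainties in the example are placeholders.

```python
# Hedged sketch: chi^2 over SUSY mass observables with assumed LHC+ILC errors.
import numpy as np

def chi2(observed_masses, cmssm_masses, forecast_errors):
    """chi^2 = sum_i ((O_i - O_i^cMSSM) / sigma_i)^2 over the chosen observables."""
    o = np.asarray(observed_masses, dtype=float)
    c = np.asarray(cmssm_masses, dtype=float)
    s = np.asarray(forecast_errors, dtype=float)
    return float(np.sum(((o - c) / s) ** 2))

# Placeholder numbers: left smuon and selectron masses (GeV) and assumed errors.
# A large chi^2 against the best pure-cMSSM fit would hint at a high seesaw scale.
print(chi2([312.4, 310.1], [311.0, 311.0], [0.5, 0.5]))
```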

    Constrained SUSY seesaws with a 125 GeV Higgs

    Get PDF
    Motivated by the ATLAS and CMS discovery of a Higgs-like boson with a mass around 125 GeV, and by the need to explain neutrino masses, we analyse the three canonical SUSY versions of the seesaw mechanism (type I, II and III) with CMSSM boundary conditions. In the type II and III cases, SUSY particles are lighter than in the CMSSM (or the constrained type I seesaw) for the same set of input parameters at the universality scale. Thus, to explain $m_{h^0} \simeq 125$ GeV at low energies, one is forced into regions of parameter space with very large values of $m_0$, $M_{1/2}$ or $A_0$. We compare the squark and gluino masses allowed by the ATLAS and CMS ranges for $m_{h^0}$ (extracted from the 2011-2012 data), and discuss the possibility of distinguishing seesaw models in view of future results on SUSY searches. In particular, we briefly comment on the discovery potential of LHC upgrades for the squark/gluino mass ranges required by the present Higgs mass constraints. A discrimination between different seesaw models cannot rely on the Higgs mass data alone; therefore we also take into account the MEG upper limit on BR$(\mu \to e \gamma)$ and show that, in some cases, this may help to restrict the SUSY parameter space, as well as to set complementary limits on the seesaw scale.

    Comment: 28 pages, 7 figures. v2: comments and references added. Final version to appear in JHEP
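    The combined filter alluded to here (a Higgs-mass window plus the MEG bound on BR$(\mu \to e \gamma)$) can be sketched as below. The thresholds and the point format are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: keep only parameter points compatible with an assumed Higgs
# mass window and an assumed upper limit on BR(mu -> e gamma). Placeholder numbers.
def passes_constraints(point, mh_window=(123.0, 127.0), br_limit=2.4e-12):
    mh_ok = mh_window[0] <= point["mh0"] <= mh_window[1]
    lfv_ok = point["br_mu_e_gamma"] <= br_limit
    return mh_ok and lfv_ok

points = [
    {"mh0": 125.1, "br_mu_e_gamma": 1.2e-13},  # allowed by both constraints
    {"mh0": 124.8, "br_mu_e_gamma": 8.0e-12},  # excluded by the LFV limit
]
print([p for p in points if passes_constraints(p)])
```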

    In brief

    No full text