
    A Machine Learning-oriented Survey on Tiny Machine Learning

    The emergence of Tiny Machine Learning (TinyML) has revolutionized the field of Artificial Intelligence by promoting the joint design of resource-constrained IoT hardware devices and their learning-based software architectures. TinyML plays an essential role in the fourth and fifth industrial revolutions by helping societies, economies, and individuals employ effective AI-infused computing technologies (e.g., smart cities, automotive, and medical robotics). Given its multidisciplinary nature, TinyML has been approached from many different angles; this comprehensive survey provides an up-to-date overview focused on the learning algorithms within TinyML-based solutions. The survey follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, allowing for a systematic and complete literature review. First, we examine the three workflows for implementing a TinyML-based system: ML-oriented, HW-oriented, and co-design. Second, we propose a taxonomy that covers the learning panorama under the TinyML lens, examining in detail the different families of model optimization and design as well as state-of-the-art learning techniques. Third, we present the distinct features of the hardware devices and software tools that represent the current state of the art for TinyML intelligent edge applications. Finally, we discuss challenges and future directions. (Comment: article currently under review at IEEE Access.)

    Semantically-driven automatic creation of training sets for object recognition

    In the object recognition community, much effort has been spent on devising expressive object representations and powerful learning strategies for designing effective classifiers capable of achieving high accuracy and generalization. In this scenario, the focus on training sets has historically been weak; by and large, training sets have been generated with substantial human intervention, requiring considerable time. In this paper, we present a strategy for automatic training set generation. The strategy uses semantic knowledge coming from WordNet, coupled with the statistical power provided by Google Ngram, to select a set of meaningful text strings related to the text class-label (e.g., “cat”), which are subsequently fed into the Google Images search engine, producing sets of images with high training value. Focusing on the classes of different object recognition benchmarks (PASCAL VOC 2012, Caltech-256, ImageNet, GRAZ, and OxfordPet), our approach collects novel training images compared to those obtained by querying Google Images with the simple text class-label alone. In particular, we show that the gathered images better capture the different visual facets of a concept, thus encoding the intra-class variance more successfully. As a consequence, training standard classifiers with this data yields performance not far from that obtained with classical hand-crafted training sets. In addition, our datasets generalize well and are stable; that is, they provide similar performance on diverse test datasets. The process requires no manual intervention and completes in a few hours.
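
    The query-expansion step lends itself to a short illustration. The sketch below is a minimal reconstruction, not the authors' code: it uses NLTK's WordNet interface to gather hyponym lemmas of a class label as candidate search strings, and replaces the Google Ngram ranking with a hypothetical placeholder (rank_by_corpus_frequency), since Ngram access is not a standard library call.

    # Minimal sketch: WordNet-driven query expansion for a class label.
    # Requires: pip install nltk; then nltk.download("wordnet")
    from nltk.corpus import wordnet as wn

    def rank_by_corpus_frequency(queries):
        # Hypothetical stand-in for the paper's Google Ngram scoring step.
        return sorted(queries)

    def expand_label(label, max_queries=20):
        """Collect hyponym lemmas of `label` as candidate image-search queries."""
        queries = set()
        for synset in wn.synsets(label, pos=wn.NOUN):
            for hyponym in synset.hyponyms():
                for lemma in hyponym.lemmas():
                    queries.add(lemma.name().replace("_", " "))
        return rank_by_corpus_frequency(queries)[:max_queries]

    # Each returned string (e.g. "domestic cat", "wildcat") would then be
    # submitted to an image search engine to harvest visually diverse
    # training data for the class.
    print(expand_label("cat"))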

    Erratum: The Belle II Physics Book (Progress of Theoretical and Experimental Physics 2019, 123C01; DOI: 10.1093/ptep/ptz106)


    Risk Portfolio Optimization Using the Markowitz MVO Model in Relation to Human Limitations in Predicting the Future from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, given human inability to predict the future precisely, as written in Al-Qur'an surah Luqman verse 34, they have to manage it to yield an optimal portfolio. The objective is to minimize the variance among all portfolios that attain at least a certain expected return or, alternatively, to maximize the expected return among such portfolios. This study focuses on optimizing the risk portfolio via Markowitz MVO (Mean-Variance Optimization). The theoretical framework for the analysis comprises the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio leads to a convex quadratic program: minimize the objective function xᵀQx subject to the constraints μᵀx ≥ r and Ax = b, where Q is the covariance matrix of asset returns, μ the vector of expected returns, and x the vector of portfolio weights. The outcome of this research is the optimal risk portfolio for several investments, computed with MATLAB R2007b together with a graphical analysis.
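
    The quadratic program above can be illustrated with a short numerical sketch. This is not the study's MATLAB code; it restates the same minimum-variance formulation in Python with SciPy, using hypothetical example values for the expected returns mu, the covariance matrix Q, and the target return r.

    # Minimal sketch of Markowitz mean-variance optimization (MVO).
    # mu, Q, and r below are illustrative values, not data from the study.
    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.08, 0.12, 0.10])          # expected returns (hypothetical)
    Q = np.array([[0.10, 0.02, 0.01],          # covariance matrix (hypothetical)
                  [0.02, 0.15, 0.03],
                  [0.01, 0.03, 0.12]])
    r = 0.10                                   # minimum required portfolio return

    # Minimize the variance x^T Q x subject to mu^T x >= r and sum(x) = 1.
    result = minimize(
        lambda x: x @ Q @ x,
        x0=np.full(3, 1 / 3),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * 3,               # long-only weights, for illustration
        constraints=[
            {"type": "ineq", "fun": lambda x: mu @ x - r},   # mu^T x >= r
            {"type": "eq", "fun": lambda x: x.sum() - 1.0},  # weights sum to 1
        ],
    )
    print("weights:", result.x, "variance:", result.fun)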

    The Physics of the B Factories

    This work is on the physics of the B Factories. Part A of this book contains a brief description of the SLAC and KEK B Factories as well as their detectors, BaBar and Belle, and of data-taking-related issues. Part B discusses the tools and methods used by the experiments to obtain results. The results themselves can be found in Part C.

    The Belle II Physics Book

    We present the physics program of the Belle II experiment, located at the intensity-frontier SuperKEKB e⁺e⁻ collider. Belle II collected its first collisions in 2018 and is expected to operate for the next decade, collecting 50 ab⁻¹ of collision data over its lifetime. This book is the outcome of a joint effort of Belle II collaborators and theorists through the Belle II Theory Interface Platform (B2TiP), an effort that commenced in 2014. The aim of B2TiP was to elucidate the potential impact of the Belle II program, which covers a wide scope of physics topics: B physics, charm, tau, quarkonium, electroweak precision measurements, and dark-sector searches. It is composed of nine working groups (WGs), each coordinated by a team of theorist and experimentalist conveners: semileptonic and leptonic B decays; radiative and electroweak penguins; φ₁ and φ₂ (time-dependent CP violation) measurements; φ₃ measurements; charmless hadronic B decays; charm; quarkonium(-like); tau and low-multiplicity processes; and new physics and global-fit analyses. The book highlights “golden” and “silver” channels, i.e., those that would have the highest potential impact in the field. Theorists scrutinized the role of these measurements and estimated the respective theoretical uncertainties, both as achievable now and as prospects for the future. Experimentalists investigated the expected improvements with the large dataset expected from Belle II, taking into account the improved performance of the upgraded detector. (Comment: 689 pages.)

    Measurement of nuclear modification factors of ϒ(1S), ϒ(2S), and ϒ(3S) mesons in PbPb collisions at √sNN = 5.02 TeV

    The cross sections for ϒ(1S), ϒ(2S), and ϒ(3S) production in lead-lead (PbPb) and proton-proton (pp) collisions at √sNN = 5.02 TeV have been measured using the CMS detector at the LHC. The nuclear modification factors, RAA, derived from the PbPb-to-pp ratio of yields for each state, are studied as functions of meson rapidity and transverse momentum, as well as of PbPb collision centrality. The yields of all three states are found to be significantly suppressed and are compatible with a sequential ordering of the suppression, RAA(ϒ(1S)) > RAA(ϒ(2S)) > RAA(ϒ(3S)). The suppression of ϒ(1S) is larger than that seen at √sNN = 2.76 TeV, although the two are compatible within uncertainties. The upper limit on the RAA of ϒ(3S), integrated over pT, rapidity, and centrality, is 0.096 at 95% confidence level, the strongest suppression observed for a quarkonium state in heavy-ion collisions to date.
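
    For context, the nuclear modification factor compared in this measurement is conventionally defined as the heavy-ion yield scaled by the pp cross section and the nuclear overlap; a standard form of the definition (paraphrased, not quoted from the paper) is

    R_{AA} = \frac{N_{\mathrm{PbPb}}}{\langle T_{\mathrm{AA}} \rangle \, \sigma_{pp}},

    where N_PbPb is the per-event yield measured in PbPb collisions, σ_pp the corresponding pp cross section, and ⟨T_AA⟩ the average nuclear overlap function. R_AA = 1 would indicate no nuclear modification, while the values reported here lie well below unity.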

    Electroweak production of two jets in association with a Z boson in proton-proton collisions at √s = 13 TeV

    A measurement of the electroweak (EW) production of two jets in association with a Z boson in proton-proton collisions at √s = 13 TeV is presented, based on data recorded in 2016 by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb⁻¹. The measurement is performed in the ℓℓjj final state, with ℓ denoting electrons or muons and the jets j corresponding to the quarks produced in the hard interaction. The measured cross section in a kinematic region defined by invariant masses mℓℓ > 50 GeV and mjj > 120 GeV and transverse momenta pTj > 25 GeV is σEW(ℓℓjj) = 534 ± 20 (stat) (syst) fb, in agreement with leading-order standard model predictions. The final state is also used to search for anomalous trilinear gauge couplings. No evidence for such couplings is found, and limits on anomalous trilinear gauge couplings associated with dimension-six operators are set in the framework of an effective field theory. The corresponding 95% confidence level intervals are −2.6 < cWWW/Λ² < 2.6 TeV⁻² and −8.4 < cW/Λ² < 10.1 TeV⁻². The additional jet activity of events in a signal-enriched region is also studied, and the measurements are in agreement with predictions.