15 research outputs found

    Gravity Effects on Information Filtering and Network Evolving

    Full text link
    In this paper, based on the gravity principle of classical physics, we propose a tunable gravity-based model that uses tag usage patterns to weight both the mass and the distance of network nodes. We then apply this model to the problems of information filtering and network evolution. Experimental results on two real-world data sets, \emph{Del.icio.us} and \emph{MovieLens}, show that it not only enhances algorithmic performance but also better characterizes the properties of real networks. This work may shed some light on an in-depth understanding of the effect of the gravity model.
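The gravity-model idea above can be sketched in a few lines: score a candidate link by the product of the two nodes' masses divided by a power of their distance. This is a minimal illustration assuming that general form; the function names, the tunable exponent `lam`, and the mass/distance inputs are illustrative, not the paper's exact formulation.

```python
def gravity_score(mass_u, mass_v, distance, lam=1.0):
    """Gravity-style score for a candidate link between nodes u and v.

    mass_u, mass_v: node "masses" (e.g. degrees or tag-usage counts)
    distance: a dissimilarity between the nodes (larger = farther apart)
    lam: tunable exponent controlling how strongly distance is penalized
    """
    if distance <= 0:
        raise ValueError("distance must be positive")
    return (mass_u * mass_v) / distance ** lam


def rank_candidates(target_mass, candidates, lam=1.0):
    """Rank candidate nodes for recommendation to a target node.

    candidates: iterable of (node_id, mass, distance_to_target) tuples.
    Returns node ids sorted by descending gravity score.
    """
    scored = [(gravity_score(target_mass, m, d, lam), nid)
              for nid, m, d in candidates]
    return [nid for _, nid in sorted(scored, reverse=True)]
```

Tuning `lam` trades off the two factors: a larger exponent penalizes distant nodes more heavily, while `lam = 0` ignores distance and ranks purely by mass.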

    HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions

    Full text link
    Commercial ML APIs offered by providers such as Google, Amazon, and Microsoft have dramatically simplified ML adoption in many applications. Numerous companies and academics pay to use ML APIs for tasks such as object detection, OCR, and sentiment analysis. Different ML APIs tackling the same task can have very heterogeneous performance. Moreover, the ML models underlying the APIs also evolve over time. As ML APIs rapidly become a valuable marketplace and a widespread way to consume machine learning, it is critical to systematically study and compare different APIs with each other and to characterize how APIs change over time. However, this topic is currently underexplored due to the lack of data. In this paper, we present HAPI (History of APIs), a longitudinal dataset of 1,761,417 instances of commercial ML API applications (involving APIs from Amazon, Google, IBM, Microsoft, and other providers) across diverse tasks including image tagging, speech recognition, and text mining from 2020 to 2022. Each instance consists of a query input for an API (e.g., an image or text) along with the API's output prediction/annotation and confidence scores. HAPI is the first large-scale dataset of ML API usage and is a unique resource for studying ML-as-a-service (MLaaS). As examples of the types of analyses that HAPI enables, we show that ML APIs' performance changes substantially over time--several APIs' accuracies dropped on specific benchmark datasets. Even when an API's aggregate performance stays steady, its error modes can shift across different subtypes of data between 2020 and 2022. Such changes can substantially impact entire analytics pipelines that use an ML API as a component. We further use HAPI to study commercial APIs' performance disparities across demographic subgroups over time. HAPI can stimulate more research in the growing field of MLaaS.
    Comment: Preprint, to appear in NeurIPS 202
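A record in a HAPI-style dataset pairs a query input with an API's prediction and confidence, and the abstract's headline analysis is accuracy drift across yearly snapshots. The sketch below shows one way such records could be represented and drift measured; the field names (`api`, `task`, `year`, `prediction`, `label`, `confidence`) and the `APIRecord`/`accuracy_by_year` names are assumptions for illustration, not the dataset's actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class APIRecord:
    api: str           # provider/API identifier, e.g. "google/vision"
    task: str          # e.g. "image-tagging"
    year: int          # snapshot year (2020-2022 in the paper)
    prediction: str    # API output annotation
    label: str         # ground-truth label
    confidence: float  # API-reported confidence score


def accuracy_by_year(records):
    """Per-(api, year) accuracy, to surface drift across snapshots."""
    hits = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        key = (r.api, r.year)
        total[key] += 1
        hits[key] += int(r.prediction == r.label)
    return {k: hits[k] / total[k] for k in total}
```

Comparing the resulting per-year accuracies for the same API surfaces exactly the kind of drop the abstract reports; grouping by a subpopulation field instead of `year` would surface the error-mode shifts across data subtypes.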

    Development of an airborne SAR real-time digital imaging processor

    No full text

    HAPI Explorer: Comprehension, Discovery, and Explanation on History of ML APIs

    No full text
    Machine learning prediction APIs offered by Google, Microsoft, Amazon, and many other providers have been continuously adopted in a plethora of applications, such as visual object detection, natural language comprehension, and speech recognition. Despite the importance of a systematic study and comparison of different APIs over time, this topic is currently under-explored because of the lack of data and user-friendly exploration tools. To address this issue, we present HAPI Explorer (History of API Explorer), an interactive system that offers easy access to millions of instances of commercial API applications collected over three years, prioritizes attention on user-defined instance regimes, and explains interesting patterns across different APIs, subpopulations, and time periods via visual and natural language. HAPI Explorer can facilitate further comprehension and exploitation of ML prediction APIs.

    <i>InnerS</i> vs. recommendation length for the three algorithms on <i>Del.icio.us</i> and <i>MovieLens</i>.

    No full text
    <p>The result is obtained by averaging over 50 independent realizations of random data division, and yellow lines represent the error intervals. The parameter for algorithm (III) is set to 0.001. Results on both datasets show that the gravity-model based algorithm (black) outperforms the other two baselines.</p>

    Evolutionary results of four corresponding networks.

    No full text
    <p>The columns report the size of the giant component, the clustering coefficient, the assortative coefficient, the average distance of the network, and the network heterogeneity. The last three rows present both the real value of the corresponding metric and the error interval (separated by a slash), calculated as the relative deviation |v - v_ST| / v_ST, where v is the metric value of the current model and v_ST is the corresponding value of the <i>ST</i> network. Each value is obtained by averaging over 50 independent network realizations.</p>
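The error-interval computation described in the caption (relative deviation of a model's metric from the ST-network value) can be sketched as follows; the names `v_model` and `v_st`, and the exact absolute-relative-deviation form, are assumptions reconstructed from the caption's wording.

```python
def error_interval(v_model, v_st):
    """Relative deviation of a model's metric from the ST-network value.

    v_model: metric value measured on the current model's network
    v_st: the corresponding metric value of the ST network
    """
    if v_st == 0:
        raise ValueError("ST-network metric value must be nonzero")
    return abs(v_model - v_st) / abs(v_st)
```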

    Network metrics, including <i>E</i>, <i>r</i>, <i>D</i>, and <i>H</i>, as functions of the ratio of added links.

    No full text
    <p>The result is obtained by averaging over 50 independent network realizations. The dashed line highlights the corresponding result of the <i>ST</i> network. Results from five representative metrics show that the <i>GR</i> model (blue triangle) best approximates the original <i>ST</i> network.</p>

    The common feature as a function of object mass for the two observed datasets, showing that it is positively correlated with the object mass.

    No full text

    Comparison of <i>AUC</i> results when separately considering the effects of mass and common interest, as well as the three algorithms (algorithms I, II, and III).

    No full text
    <p>The result is obtained by averaging over 50 independent realizations of random data division; the three numbers following the ± signs are the corresponding error intervals. The parameter for algorithm (III) is set to 0.001.</p>

    <i>Precision</i> vs. recommendation length for the three algorithms on <i>Del.icio.us</i> and <i>MovieLens</i>.

    No full text
    <p>The result is obtained by averaging over 50 independent realizations of random data division, and yellow lines represent the error intervals. The parameter for algorithm (III) is set to 0.001. Results on both datasets show that the gravity-model based algorithm (black) outperforms the other two baselines.</p>