
    An Analysis of Features and Tendencies in Mobile Banking Apps

    Mobile devices such as smartphones and tablets are being employed alongside personal computers, and are even replacing them in some applications. Banks are increasingly investing in mobility by enabling the mobile web and mobile app channels for online banking and by providing new mobile payment services. In this paper, the off-branch banking services offered by several Italian banks are analyzed, showing that mobile apps have surpassed the mobile web channel in the completeness of their offer, because the additional capabilities of mobile devices make advanced features and applications possible. An outlook on the near future is provided, remarking that mobile marketing and mobile recommender systems can greatly benefit from running natively on devices, making it desirable for businesses to invest in designing mobile apps.

    Combining mitigation treatments against biases in personalized rankings: Use case on item popularity

    Historical interactions leveraged by recommender systems are often non-uniformly distributed across items. Certain items therefore end up being systematically under-recommended, even though they are of interest to consumers. Existing treatments for mitigating these biases act at a single step of the pipeline (either pre-, in-, or post-processing), and it remains unanswered whether simultaneously introducing treatments throughout the pipeline leads to better mitigation. In this paper, we analyze the impact of bias treatments along the steps of the pipeline under a use case on popularity bias. Experiments show that, with small losses in accuracy, the combination of treatments leads to better trade-offs than treatments applied separately. Our findings call for treatments rooting out bias at different steps simultaneously.

    SoftNet: A Package for the Analysis of Complex Networks

    Identifying the most important nodes according to specific centrality indices is an important issue in network analysis. Node metrics based on computing functions of the adjacency matrix of a network were defined by Estrada and his collaborators in a series of papers. This paper describes a MATLAB toolbox for computing such centrality indices using efficient numerical algorithms based on the connection between the Lanczos method and Gauss-type quadrature rules.
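
    The connection between matrix functions and centrality can be illustrated with a minimal Python sketch (the toolbox itself is MATLAB). For a small symmetric adjacency matrix, Estrada's subgraph centrality diag(exp(A)) can be computed exactly from the eigendecomposition; toolboxes like the one described approximate these quantities with Lanczos and quadrature instead, which is what makes them scale to large networks.

```python
import numpy as np

def subgraph_centrality(A):
    # Estrada subgraph centrality: the diagonal of expm(A), computed exactly
    # here via the eigendecomposition A = V diag(w) V^T (A symmetric).
    # diag(expm(A))_i = sum_k V[i,k]^2 * exp(w_k)
    w, V = np.linalg.eigh(A)
    return (V ** 2) @ np.exp(w)

# toy network: a triangle (nodes 0-2) with a pendant node 3 attached to node 2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = subgraph_centrality(A)
most_central = int(np.argmax(scores))  # node 2 participates in the most closed walks
```

    Nodes 0 and 1 are structurally equivalent in this toy graph, so their centralities coincide, while node 2 ranks highest.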

    Block Gauss and anti-Gauss quadrature with application to networks

    Approximations of matrix-valued functions of the form W^T f(A) W, where A ∈ R^{m×m} is symmetric, W ∈ R^{m×k} has orthonormal columns with m large and k ≪ m, and f is a function, can be computed by applying a few steps of the symmetric block Lanczos method to A with initial block vector W. Golub and Meurant have shown that the approximants obtained in this manner may be considered block Gauss quadrature rules associated with a matrix-valued measure. This paper generalizes anti-Gauss quadrature rules, introduced by Laurie for real-valued measures, to matrix-valued measures, and shows that under suitable conditions pairs of block Gauss and block anti-Gauss rules provide upper and lower bounds for the entries of the desired matrix-valued function. Extensions to matrix-valued functions of the form W^T f(A) V, where A ∈ R^{m×m} may be nonsymmetric and the matrices V, W ∈ R^{m×k} satisfy V^T W = I_k, are also discussed. Approximations of the latter functions are computed by applying a few steps of the nonsymmetric block Lanczos method to A with initial block vectors V and W. We describe applications to the evaluation of functions of a symmetric or nonsymmetric adjacency matrix of a network. Numerical examples illustrate that a combination of block Gauss and anti-Gauss quadrature rules typically provides upper and lower bounds for such problems. We introduce some new quantities that describe properties of nodes in directed or undirected networks, and demonstrate how these and other quantities can be computed inexpensively with the quadrature rules of the present paper.
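
    For the scalar case (block size k = 1), the Gauss-quadrature view can be sketched in a few lines: a handful of Lanczos steps build a small tridiagonal matrix T whose eigenpairs define a Gauss rule estimating v^T f(A) v. This is an illustrative sketch under that simplification, not the paper's block algorithm, and the function name is ours.

```python
import numpy as np

def lanczos_quadratic_form(A, v, f, steps):
    # A few steps of symmetric Lanczos on A with starting vector v build a
    # tridiagonal T; the Gauss-rule estimate of v^T f(A) v is
    # ||v||^2 * e1^T f(T) e1 = ||v||^2 * sum_k S[0,k]^2 f(theta_k),
    # where (theta, S) are the eigenpairs of T.
    n = len(v)
    Q = np.zeros((n, steps))
    alpha = np.zeros(steps)
    beta = np.zeros(max(steps - 1, 0))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(steps):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j < steps - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    return (v @ v) * (S[0] ** 2 @ f(theta))
```

    Running the recursion for the full n steps reproduces v^T f(A) v exactly (up to roundoff); the point of the quadrature view is that far fewer steps already give tight estimates, and the paper's anti-Gauss companion rule brackets the true value from the other side.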

    Leveraging the Training Data Partitioning to Improve Events Characterization in Intrusion Detection Systems

    The ever-increasing use of services based on computer networks, even in crucial areas unthinkable until a few years ago, has made the security of these networks a crucial element for everyone, also in consideration of the increasingly sophisticated techniques and strategies available to attackers. In this context, Intrusion Detection Systems (IDSs) play a primary role, since they are responsible for analyzing and classifying each network activity as legitimate or illegitimate, allowing the necessary countermeasures to be taken at the appropriate time. However, these systems are not infallible, for several reasons: the most important are the constant evolution of attacks (e.g., zero-day attacks) and the fact that many attacks behave similarly to legitimate activities and are therefore very hard to identify. This work relies on the hypothesis that subdividing the training data used to define the IDS classification model into a certain number of partitions, in terms of events and features, can improve the characterization of network events and thus the system performance. The non-overlapping data partitions train independent classification models, and each event is classified according to a majority-voting rule. A series of experiments conducted on a benchmark real-world dataset supports the initial hypothesis, showing a performance improvement with respect to a canonical training approach.
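
    The partition-and-vote scheme can be sketched in a few lines of Python. The nearest-centroid classifier below is a toy stand-in (the paper's actual models, features, and dataset are not assumed here); the point is the pipeline: split the training events into non-overlapping partitions, train one independent model per partition, and classify by majority vote.

```python
import numpy as np

class CentroidClassifier:
    # toy per-partition model: assign each sample to the nearest class centroid
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.centroids_[None]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

def partition_vote_predict(X_train, y_train, X_test, n_parts=3, seed=0):
    # train one independent model per non-overlapping partition of the
    # training events, then label each test event by majority vote
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X_train))
    votes = []
    for part in np.array_split(idx, n_parts):
        model = CentroidClassifier().fit(X_train[part], y_train[part])
        votes.append(model.predict(X_test))
    votes = np.stack(votes)  # shape: (n_parts, n_test)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

    In the paper the partitioning also acts on features, not only on events; extending the sketch amounts to slicing columns of `X_train` per partition before fitting.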

    Influencing brain waves by evoked potentials as biometric approach: taking stock of the last six years of research

    The scientific advances of recent years have made affordable hardware devices available to anyone, capable of something unthinkable until a few years ago: reading brain waves. Through small wearable devices it is now possible to perform an electroencephalography (EEG), albeit with less potential than that offered by high-cost professional devices. Such devices enable researchers to run a huge number of experiments in many areas that were once impossible due to the high cost of the necessary hardware. Many studies in the literature explore the use of EEG data as a biometric approach for people identification, but it unfortunately presents problems mainly related to the difficulty of extracting unique and stable patterns from users, despite the adoption of sophisticated techniques. One approach to this problem is based on evoked potentials (EPs), external stimuli applied during the EEG reading, a noninvasive technique used for many years in clinical routine, in combination with other diagnostic tests, to evaluate the electrical activity of certain areas of the brain and spinal cord in order to diagnose neurological disorders. In consideration of the growing number of works in the literature that combine the EEG and EP approaches for biometric purposes, this work aims to evaluate the practical feasibility of such approaches as reliable biometric instruments for user identification by surveying the state of the art of the last six years, also providing an overview of the elements and concepts related to this research area.

    Recency, Popularity, and Diversity of Explanations in Knowledge-based Recommendation

    Modern knowledge-based recommender systems enable the end-to-end generation of textual explanations. These explanations are created from learnt paths between an already experienced product and a recommended product in a knowledge graph, for a given user. However, none of the existing studies has investigated the extent to which properties of a single explanation (e.g., the recency of interaction with the already experienced product) and of a group of explanations for a recommended list (e.g., the diversity of the explanation types) can influence the perceived explanation quality. In this paper, we summarize our previous work on conceptualizing three novel properties that model the quality of the explanations (linking interaction recency, shared entity popularity, and explanation type diversity) and proposing re-ranking approaches able to optimize for these properties. Experiments on two public data sets showed that our approaches can increase explanation quality according to the proposed properties, while preserving recommendation utility. Source code and data: https://github.com/giacoballoccu/explanation-quality-recsys

    Post Processing Recommender Systems with Knowledge Graphs for Recency, Popularity, and Diversity of Explanations

    Existing explainable recommender systems have mainly modeled relationships between recommended and already experienced products, and shaped explanation types accordingly (e.g., movie "x" starred by actress "y" recommended to a user because that user watched other movies with "y" as an actress). However, none of these systems has investigated the extent to which properties of a single explanation (e.g., the recency of interaction with that actress) and of a group of explanations for a recommended list (e.g., the diversity of the explanation types) can influence the perceived explanation quality. In this paper, we conceptualized three novel properties that model the quality of the explanations (linking interaction recency, shared entity popularity, and explanation type diversity) and proposed re-ranking approaches able to optimize for these properties. Experiments on two public data sets showed that our approaches can increase explanation quality according to the proposed properties, fairly across demographic groups, while preserving recommendation utility. The source code and data are available at https://github.com/giacoballoccu/explanation-quality-recsys
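
    A re-ranking approach that optimizes explanation type diversity can be sketched greedily: at each position, pick the candidate that best blends recommendation relevance with the novelty of its explanation type in the list built so far. The tuple layout, weight `alpha`, and function name below are illustrative, not the paper's actual formulation.

```python
from collections import Counter

def rerank_for_type_diversity(candidates, k, alpha=0.5):
    # candidates: list of (item, relevance, explanation_type) tuples.
    # Greedily build a top-k list, scoring each remaining candidate as a blend
    # of its relevance and the novelty of its explanation type so far.
    selected, used_types = [], Counter()
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(c):
            _, rel, etype = c
            novelty = 1.0 / (1 + used_types[etype])  # decays as a type repeats
            return (1 - alpha) * rel + alpha * novelty
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best)
        used_types[best[2]] += 1
    return selected

cands = [("a", 0.9, "watched"), ("b", 0.85, "watched"),
         ("c", 0.8, "watched"), ("d", 0.7, "starred")]
top3 = rerank_for_type_diversity(cands, k=3)
```

    With `alpha=0.5` the lower-relevance "starred" explanation is pulled up to second place because its type is still unused, trading a little utility for a more diverse explanation set.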

    A Region-based Training Data Segmentation Strategy to Credit Scoring

    The rating of users requesting financial services is a growing task, especially in this historical period of the COVID-19 pandemic, characterized by a dramatic increase in online activities, mainly related to e-commerce. This kind of assessment, manually performed in the past, today needs to be carried out by automatic credit scoring systems, due to the enormous number of requests to process. It follows that such systems play a crucial role for financial operators, as their effectiveness is directly related to gains and losses of money. Despite the huge investments in financial and human resources devoted to the development of such systems, the state-of-the-art solutions are transversally affected by some well-known problems that make the development of credit scoring systems a challenging task, mainly related to the imbalance and heterogeneity of the involved data, compounded by the scarcity of public datasets. The Region-based Training Data Segmentation (RTDS) strategy proposed in this work revolves around a divide-and-conquer approach, where the user classification depends on the results of several sub-classifications. In more detail, the training data is divided into regions that bound different users and features, which are used to train several classification models that lead to the final classification through a majority-voting rule. Such a strategy relies on the consideration that the independent analysis of different users and features can lead to a more accurate classification than that offered by a single evaluation model trained on the entire dataset. The validation process, carried out using three public real-world datasets with different numbers of features, samples, and degrees of data imbalance, demonstrates the effectiveness of the proposed strategy, which outperforms the canonical training approach on all the datasets.

    XRecSys: A framework for path reasoning quality in explainable recommendation

    There is increasing evidence that recommendations accompanied by explanations positively impact businesses in terms of trust, guidance, and persuasion. This advance has been made possible by traditional models representing user–product interactions augmented with external knowledge modeled as knowledge graphs. However, these models produce textual explanations on top of reasoning paths extracted from the knowledge graph without considering relevant properties of the path entities. In this paper, we present XRecSys, a Python framework for the optimization of the reasoning path selection process according to properties deemed relevant by users (e.g., the time relevance of the linking interaction or the popularity of the entity linked to the explanation). Our framework leads to higher reasoning path quality in terms of the considered properties and, consequently, to textual explanations that are more relevant for the users.
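
    Property-driven path selection reduces, at its core, to scoring candidate reasoning paths by a weighted combination of their properties. The sketch below illustrates the idea with hypothetical `recency` and `popularity` fields and weights; these names are ours and do not reflect XRecSys's actual API.

```python
def select_reasoning_path(paths, w_recency=0.5, w_popularity=0.5):
    # Pick the reasoning path with the highest weighted property score.
    # Each path is a dict with 'recency' and 'popularity' normalized to [0, 1]
    # (illustrative fields, not the framework's real schema).
    return max(paths, key=lambda p: w_recency * p["recency"]
                                    + w_popularity * p["popularity"])

paths = [
    {"explanation": "you watched X recently", "recency": 0.9, "popularity": 0.2},
    {"explanation": "Y is very popular",      "recency": 0.1, "popularity": 0.8},
]
best = select_reasoning_path(paths, w_recency=0.7, w_popularity=0.3)
```

    Shifting the weights toward popularity would flip the choice, which is exactly the kind of user-controlled trade-off the framework exposes.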