19 research outputs found

    A comparison of statistical machine learning methods in heartbeat detection and classification

    In health care, patients with heart problems require quick responsiveness in a clinical setting or in the operating theatre. Towards that end, automated classification of heartbeats is vital, as some heartbeat irregularities are time-consuming to detect. Therefore, analysis of electrocardiogram (ECG) signals is an active area of research. The methods proposed in the literature depend on the structure of a heartbeat cycle. In this paper, we use interval- and amplitude-based features together with a few samples from the ECG signal as a feature vector. We studied a variety of classification algorithms, focusing especially on a type of arrhythmia known as the ventricular ectopic beat (VEB). We compare the performance of the classifiers against algorithms proposed in the literature and make recommendations regarding features, sampling rate, and the choice of classifier to apply in a real-time clinical setting. The extensive study is based on the MIT-BIH arrhythmia database. Our main contributions are the evaluation of existing classifiers over a range of sampling rates, the recommendation of a detection methodology to employ in a practical setting, and the extension of the notion of a mixture of experts to a larger class of algorithms.
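
    To illustrate the kind of classifier comparison described above, here is a minimal, hypothetical sketch using scikit-learn. The feature matrix, labels, and choice of classifiers are placeholders rather than the paper's actual features or models; in practice the features would be the interval- and amplitude-based values plus raw ECG samples extracted per beat from the MIT-BIH recordings.

```python
# Hypothetical sketch: cross-validated comparison of off-the-shelf classifiers
# on heartbeat feature vectors. X and y are synthetic stand-ins for the
# interval/amplitude features and beat labels described in the abstract.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))      # 12 illustrative features per heartbeat
y = rng.integers(0, 2, size=1000)    # 0 = normal beat, 1 = ventricular ectopic beat

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm_rbf": SVC(kernel="rbf"),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```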

    Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes with a high computational cost, and some of the volume measurement methods based on it have low accuracy. Another approach measures the volume of objects with the Monte Carlo method, which uses random points: it only requires information about whether the random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement of irregularly shaped food products using a computer vision system, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of each food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was then performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
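
    As a concrete illustration of the Monte Carlo principle described above, the following minimal sketch estimates a volume from the fraction of random points that fall inside an object. The inside/outside test here is an analytic sphere used purely as a stand-in; in the paper's method that test would instead come from the binary silhouette images captured by the five cameras.

```python
# Minimal Monte Carlo volume sketch: sample random points in a bounding box and
# scale the box volume by the fraction of points that land inside the object.
import numpy as np

def monte_carlo_volume(inside_fn, bounds, n_samples=1_000_000, seed=0):
    """Estimate the volume of the region where inside_fn(points) is True."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    rng = np.random.default_rng(seed)
    points = rng.uniform(lo, hi, size=(n_samples, 3))
    box_volume = np.prod(hi - lo)
    inside_fraction = inside_fn(points).mean()
    return box_volume * inside_fraction

# Stand-in object: a unit sphere (the real test would query the binary images).
inside_sphere = lambda p: (p ** 2).sum(axis=1) <= 1.0

estimate = monte_carlo_volume(inside_sphere, bounds=([-1, -1, -1], [1, 1, 1]))
print(f"estimated volume: {estimate:.4f}, analytic: {4 / 3 * np.pi:.4f}")
```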

    Towards Scalable Personalization

    The ever-growing amount of online information calls for personalization. Among the various personalization systems, recommenders have become increasingly popular in recent years. Recommenders typically use collaborative filtering to suggest the most relevant items to their users. The most prominent challenges underlying personalization are scalability, privacy, and heterogeneity. Scalability is challenging given the growth of the Internet and its dynamics, both in terms of churn (i.e., users might leave/join at any time) and changes of user interests over time. Privacy is also a major concern, as users might be reluctant to expose their profiles to unknown parties (e.g., other curious users) unless they have an incentive to significantly improve their navigation experience and sufficient guarantees about their privacy. Heterogeneity poses a major technical difficulty because, to be really meaningful, the profiles of users should be extracted from a number of their navigation activities (heterogeneity of source domains) and represented in a form that is general enough to be leveraged in the context of other applications (heterogeneity of target domains). In this dissertation, we address the above-mentioned challenges. For scalability, we introduce democratization and incrementality. Our democratization approach focuses on iteratively offloading the computationally expensive tasks to the user devices (via browsers or applications). This approach achieves scalability by employing the devices of the users as additional resources, and hence the throughput of the approach (i.e., the number of updates per unit time) scales with the number of users. Our incrementality approach deals with incremental similarity metrics employing either explicit (e.g., ratings) or implicit (e.g., consumption sequences for users) feedback. This approach achieves scalability by reducing the time complexity of each update, thereby enabling higher throughput. We tackle the privacy concerns from two perspectives, i.e., anonymity from either other curious users (user-level privacy) or the service provider (system-level privacy). We strengthen the notion of differential privacy in the context of recommenders by introducing distance-based differential privacy (D2P), which prevents curious users from even guessing any category (e.g., genre) in which a user might be interested. We also briefly introduce a recommender (X-REC) which employs a uniform user sampling technique to achieve user-level privacy and an efficient homomorphic encryption scheme (X-HE) to achieve system-level privacy. We also present a heterogeneous recommender (X-MAP) which employs a novel similarity metric (X-SIM) based on paths across heterogeneous items (i.e., items from different domains). To achieve a general form for any user profile, we generate her AlterEgo profile in a target domain by employing an item-to-item mapping from a source domain (e.g., movies) to a target domain (e.g., books). Moreover, X-MAP also enables differentially private AlterEgos. While X-MAP employs user-item interactions (e.g., ratings), we also explore the possibility of heterogeneous recommendation by using content-based features of users (e.g., demography, time-varying preferences) or items (e.g., popularity, price).
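
    As a rough, hypothetical illustration of the incremental-similarity idea mentioned above (not the dissertation's actual metric), the sketch below maintains running dot products and squared norms so that each new rating updates user-user cosine similarities in time proportional to the number of co-raters of the item, rather than recomputing everything from scratch. It assumes each (user, item) rating is inserted only once.

```python
# Illustrative incremental user-user cosine similarity over explicit ratings.
from collections import defaultdict
import math

class IncrementalCosine:
    def __init__(self):
        self.ratings = defaultdict(dict)    # item -> {user: rating}
        self.sq_norm = defaultdict(float)   # user -> sum of squared ratings
        self.dot = defaultdict(float)       # (user_a, user_b) -> running dot product

    def add_rating(self, user, item, rating):
        # Update this user's norm and the dot product with every co-rater of the item.
        self.sq_norm[user] += rating ** 2
        for other, other_rating in self.ratings[item].items():
            key = tuple(sorted((user, other)))
            self.dot[key] += rating * other_rating
        self.ratings[item][user] = rating

    def similarity(self, a, b):
        denom = math.sqrt(self.sq_norm[a] * self.sq_norm[b])
        return self.dot[tuple(sorted((a, b)))] / denom if denom else 0.0

sim = IncrementalCosine()
sim.add_rating("alice", "movie_1", 4.0)
sim.add_rating("bob", "movie_1", 5.0)
sim.add_rating("alice", "movie_2", 2.0)
print(sim.similarity("alice", "bob"))   # ~0.894, updated incrementally
```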

    Big Data and Artificial Intelligence in Digital Finance

    This open access book presents how cutting-edge digital technologies like Big Data, Machine Learning, Artificial Intelligence (AI), and Blockchain are set to disrupt the financial sector. The book illustrates how recent advances in these technologies help banks, FinTechs, and financial institutions collect, process, analyze, and fully leverage the very large amounts of data that are nowadays produced and exchanged in the sector. To this end, the book also describes some of the most popular Big Data, AI, and Blockchain applications in the sector, including novel applications in the areas of Know Your Customer (KYC), Personalized Wealth Management and Asset Management, and Portfolio Risk Assessment, as well as a variety of novel Usage-based Insurance applications based on Internet-of-Things data. Most of the presented applications have been developed, deployed, and validated in real-life digital finance settings in the context of the European Commission-funded INFINITECH project, a flagship innovation initiative for Big Data and AI in digital finance. This book is ideal for researchers and practitioners in Big Data, AI, banking, and digital finance.

    Goal-driven Collaborative Filtering

    Recommender systems aim to identify interesting items (e.g. movies, books, websites) for a given user, based on their previously expressed preferences. As recommender systems grow in popularity, a notable divergence emerges between research practices and the reality of deployed systems: when recommendation algorithms are designed, they are evaluated in a relatively static context, mainly concerned with a predefined error measure. This approach disregards the fact that a recommender system exists in an environment where there are a number of factors that the system needs to satisfy; some of these factors are dynamic and can only be tackled over time. Thus, this thesis studies recommender systems from a goal-oriented point of view, where we define the recommendation goals and their associated measures and build the system accordingly. We start with the argument that a single fixed measure, used to evaluate the system’s performance, might not be able to capture the multidimensional quality of a recommender system: different contexts require different performance measures. We propose a unified error minimisation framework that flexibly covers various (directional) risk preferences. We then extend this by simultaneously optimising multiple goals, i.e., not only considering the predicted preference scores (e.g. ratings) but also dealing with additional operational or resource-related requirements such as the availability, profitability or usefulness of a recommended item. We demonstrate multiple objectives through another example where a number of requirements, namely diversity, novelty and serendipity, are optimised simultaneously. At the end of the thesis, we deal with time-dependent goals. To achieve complex goals such as keeping the recommender model up-to-date over time, we consider a number of external requirements. Generally, these requirements arise from the physical nature of the system, such as the available computational resources or storage space. Modelling such a system over time requires describing the system dynamics as a combination of the underlying recommender model and its users’ behaviour. We propose to solve this problem by applying the principles of Modern Control Theory to construct and maintain a stable and robust recommender system for dynamically evolving environments. Experiments conducted on real datasets demonstrate that all the proposed approaches are able to cope with multiple objectives in various settings. These approaches offer solutions to a variety of scenarios that recommender systems might face.
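
    As a small, hypothetical illustration of combining several recommendation goals at once, the following greedy, MMR-style re-ranking trades off predicted relevance against intra-list diversity. It is not the thesis’s actual framework; the scores, similarity function, and weight are placeholders.

```python
# Illustrative multi-objective re-ranking: greedily pick items that balance
# predicted relevance against redundancy with already selected items.
def rerank(candidates, predicted_score, item_similarity, k=10, diversity_weight=0.3):
    """Greedy selection maximising relevance minus similarity to picked items."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def objective(item):
            relevance = predicted_score[item]
            redundancy = max((item_similarity(item, s) for s in selected), default=0.0)
            return (1 - diversity_weight) * relevance - diversity_weight * redundancy
        best = max(remaining, key=objective)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with hypothetical scores and a trivial same-genre similarity.
scores = {"a": 0.9, "b": 0.85, "c": 0.4}
genre = {"a": "action", "b": "action", "c": "drama"}
sim = lambda x, y: 1.0 if genre[x] == genre[y] else 0.0
print(rerank(["a", "b", "c"], scores, sim, k=2, diversity_weight=0.5))  # ['a', 'c']
```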

    Will operators be able to survive in a constantly evolving world? Technical considerations leading to disruptive scenarios

    The telecommunications industry is going through a difficult phase because of profound technological changes, mainly driven by the development of the Internet. These changes have a major impact on the telecommunications industry as a whole and, consequently, on the future deployment of new networks, platforms, and services. The evolution of the Internet has a particularly strong impact on telecommunications operators (Telcos). In fact, the telecommunications industry is on the verge of major changes due to many factors, such as the gradual commoditization of connectivity, the dominance of web services companies (Webcos), and the growing importance of software-based solutions and the flexibility they introduce (compared to the static systems of telecom operators). This thesis develops, proposes, and compares plausible future scenarios based on solutions and approaches that will be technologically feasible and viable. The identified scenarios cover a wide range of possibilities: 1) Traditional Telco; 2) Telco as Bit Carrier; 3) Telco as Platform Provider; 4) Telco as Service Provider; 5) Telco Disappearance. For each scenario, a viable platform (from the point of view of telecom operators) is described, highlighting the enabled service portfolio and its potential benefits.