
    A Unified Surface Geometric Framework for Feature-Aware Denoising, Hole Filling and Context-Aware Completion

    Technologies for 3D data acquisition and 3D printing have developed enormously in the past few years and, consequently, the demand for 3D virtual twins of the original scanned objects has increased. In this context, feature-aware denoising, hole filling and context-aware completion are three essential (but far from trivial) tasks. In this work, they are integrated within a geometric framework and realized through a unified variational model aimed at recovering triangulated surfaces from scanned, damaged and possibly incomplete noisy observations. The underlying non-convex optimization problem incorporates two regularisation terms: a discrete approximation of the Willmore energy, forcing local sphericity and suited for the recovery of rounded features, and an approximation of the ℓ0 pseudo-norm penalty, favouring sparsity in the normal variation. The proposed numerical method solving the model is parameterization-free, avoids expensive implicit volume-based computations and is based on the efficient use of the Alternating Direction Method of Multipliers. Experiments show how the proposed framework can provide a robust and elegant solution suited for accurate restorations even in the presence of severe random noise and large damaged areas.
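    As a rough illustration only (not the authors' exact formulation), a variational model of this kind combines a data-fidelity term with the two regularisers described above; the symbols and weights α, β below are assumptions introduced for clarity.

```latex
% Illustrative form only: V are the vertex positions of the triangulated surface,
% V^0 the noisy (possibly incomplete) observation, W a discrete Willmore energy,
% and D n(V) a discrete measure of the variation of the surface normals.
\min_{V} \;\; \|V - V^{0}\|_{2}^{2}
\;+\; \alpha \, W(V)
\;+\; \beta \, \| D\, n(V) \|_{0}
```

    In practice, the non-smooth ℓ0 term is typically handled by introducing an auxiliary splitting variable, which is what makes an ADMM-style scheme applicable.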

    Executive functions and attention processes in adolescents and young adults with intellectual disability

    (1) Background: We conducted a comprehensive evaluation of executive functions (EFs) and attention processes in a group of adolescents and young adults with mild intellectual disability (ID). (2) Methods: 27 adolescents and young adults (14 females and 13 males) with ID, aged between 15.1 and 23 years (M = 17.4; SD = 2.04), were compared to a control group free of cognitive problems and individually matched for gender and age. (3) Results: As for EFs, individuals with ID were severely impaired on all subtests of the Behavioral Assessment of Dysexecutive Syndrome (BADS) battery. However, we also found appreciable individual differences, with eight individuals (approximately 30%) scoring within normal limits. On the attention tests, individuals with ID were not generally slower but presented specific deficits only on some attention tests (i.e., Choice Reaction Times, Color Naming and Color–Word Interference, and Shifting of Attention for Verbal and for Visual Targets). The role of a global factor (i.e., cognitive speed) in contributing to the group differences was modest; i.e., when present, group differences were selectively associated with specific task manipulations, not with global differences in cognitive speed. (4) Conclusions: The study confirmed large group differences in EFs; deficits in attentional processing were more specific and occurred primarily in tasks taxing the selective dimension of attention, with performance on intensive tasks almost entirely spared.

    Statistical arbitrage powered by Explainable Artificial Intelligence

    Machine learning techniques have recently become the norm for detecting patterns in financial markets. However, relying solely on machine learning algorithms for decision-making can have negative consequences, especially in a critical domain such as finance. On the other hand, it is well known that transforming data into actionable insights can pose a challenge even for seasoned practitioners, particularly in the financial world. Given these compelling reasons, this work proposes a machine learning approach powered by eXplainable Artificial Intelligence techniques integrated into a statistical arbitrage trading pipeline. Specifically, we propose three methods to discard features that are irrelevant for the prediction task. We evaluate the approaches on historical data of component stocks of the S&P 500 index, aiming to improve not only the prediction performance at the individual stock level but also the overall performance at the stock-set level. Our analysis shows that trading strategies that include such feature selection methods improve portfolio performance by providing predictive signals whose information content is sufficient and less noisy than that embedded in the whole feature set. By performing an in-depth risk-return analysis, we show that the proposed trading strategies powered by explainable AI outperform highly competitive trading strategies considered as baselines.
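    The abstract does not specify the three feature-selection methods, so the sketch below is only one hedged illustration of the general idea: discard features whose measured contribution to the prediction task is negligible before the predictions drive trading. The helper name `keep_informative_features`, the surrogate model, and the 1% threshold are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: drop features whose permutation importance is negligible,
# so a downstream predictor can be retrained on the reduced feature set.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

def keep_informative_features(X_train, y_train, X_val, y_val, threshold=0.01):
    model = GradientBoostingRegressor().fit(X_train, y_train)
    imp = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
    # Keep features whose mean importance exceeds a small fraction of the largest one.
    mask = imp.importances_mean > threshold * imp.importances_mean.max()
    return np.where(mask)[0]  # indices of features worth keeping
```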

    Ensembling and Dynamic Asset Selection for Risk-Controlled Statistical Arbitrage

    In recent years, machine learning algorithms have been successfully employed to identify hidden patterns of financial market behavior and, consequently, have become a land of opportunity for financial applications such as algorithmic trading. In this paper, we propose a statistical arbitrage trading strategy with two key elements: an ensemble of regression algorithms for asset return prediction, followed by a dynamic asset selection. More specifically, we construct an extremely heterogeneous ensemble, ensuring model diversity by using state-of-the-art machine learning algorithms, data diversity by using a feature selection process, and method diversity by using individual models for each asset as well as models that learn cross-sectionally across multiple assets. Their predictive results are then fed into a quality assurance mechanism that prunes assets with poor forecasting performance in the previous periods. We evaluate the approach on historical data of component stocks of the S&P 500 index. By performing an in-depth risk-return analysis, we show that this setup outperforms highly competitive trading strategies considered as baselines. Experimentally, we show that the dynamic asset selection enhances overall trading performance in terms of both return and risk. Moreover, the proposed approach yields superior results during both financial turmoil and massive market growth periods, and it has general applicability for any risk-balanced trading strategy aiming to exploit different asset classes.
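    As a hedged sketch of the dynamic asset selection step described above (pruning assets whose recent forecasts were poor before allocating capital), the helper below keeps only assets whose rolling prediction error stays under a cutoff. The window length, error metric and cutoff are illustrative assumptions, not the paper's values.

```python
# Illustrative sketch only: retain assets whose recent forecasting error is acceptable.
import pandas as pd

def select_assets(pred: pd.DataFrame, actual: pd.DataFrame,
                  window: int = 20, max_mae: float = 0.02) -> list:
    """pred/actual: rows = trading days, columns = assets (predicted vs realized returns)."""
    recent_mae = (pred - actual).abs().tail(window).mean()   # rolling mean absolute error
    return recent_mae[recent_mae <= max_mae].index.tolist()  # assets kept for trading
```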

    Recognition of cooking activities through air quality sensor data for supporting food journaling

    Unhealthy behaviors regarding nutrition are a global risk for health. Therefore, the healthiness of an individual's nutrition should be monitored in the medium and long term. A powerful tool for monitoring nutrition is a food diary, i.e., a daily list of the food taken by the individual, together with portion information. Unfortunately, frail people such as the elderly have a hard time filling in food diaries on a continuous basis due to forgetfulness or physical issues. Existing solutions based on mobile apps also require the user's effort and are rarely used in the long term, especially by elderly people. For these reasons, in this paper we propose a novel architecture to automatically recognize the preparation of food at home in a privacy-preserving and unobtrusive way, by means of air quality data acquired from a commercial sensor. In particular, we devised statistical features to represent the trend of several air parameters, and a deep neural network for recognizing cooking activities based on those data. We collected a large corpus of annotated sensor data gathered over a period of 8 months from different individuals in different homes, and performed extensive experiments. Moreover, we developed an initial prototype of an interactive system for acquiring food information from the user when a cooking activity is detected by the neural network. To the best of our knowledge, this is the first work that adopts air quality sensor data for cooking activity recognition.
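    As a rough, hedged illustration of the pipeline described above (statistical features summarizing the trend of each air-quality parameter, fed to a neural classifier), the sketch below uses simple window statistics and a small MLP as a stand-in for the paper's deep network. The feature choices, channel names and network size are assumptions, not the authors' exact design.

```python
# Illustrative sketch only: summarize each sensing window with simple trend statistics
# and classify it as cooking / not cooking with a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (timesteps, channels) air-quality readings, e.g. PM2.5, CO2, TVOC, humidity."""
    slope = window[-1] - window[0]  # overall trend per channel over the window
    return np.concatenate([window.mean(axis=0), window.std(axis=0), slope])

def train_classifier(windows, labels) -> MLPClassifier:
    X = np.stack([window_features(w) for w in windows])
    return MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, labels)
```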

    A P2P Platform for real-time multicast video streaming leveraging on scalable multiple descriptions to cope with bandwidth fluctuations

    In the immediate future, video distribution applications will increase their diffusion thanks to ever-increasing user capabilities and improvements in Internet access speed and performance. The target of this paper is to propose a content delivery system for real-time streaming services based on a peer-to-peer approach that exploits a multicast overlay organization of the peers to address the challenges due to bandwidth heterogeneity. To improve reliability and flexibility, video is coded using a scalable multiple description approach that allows delivery of sub-streams over multiple trees and allows rate adaptation along the trees as the available bandwidth changes. Moreover, we have deployed a new algorithm for tree-based topology management of the overlay network. In fact, tree-based overlay networks perform better than mesh-based ones in terms of end-to-end delay and ordered delivery of video flow packets. We also show with a case study that the proposed system works better than similar systems using only either multicast or multiple trees.
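    A hedged sketch of the multi-tree idea described above: each scalable description travels down its own multicast tree, and a peer subscribes to as many descriptions as its downlink can sustain, degrading gracefully when bandwidth drops. The bitrates, names and greedy policy below are illustrative assumptions, not the paper's actual protocol.

```python
# Illustrative sketch only: greedily subscribe a peer to description sub-streams
# (one per multicast tree) until its available downlink bandwidth is exhausted.
def subscribe(peer_bandwidth_kbps: float, description_rates_kbps: list) -> list:
    chosen, used = [], 0.0
    for tree_id, rate in enumerate(description_rates_kbps):
        if used + rate <= peer_bandwidth_kbps:
            chosen.append(tree_id)
            used += rate
    return chosen  # trees/descriptions this peer joins; fewer descriptions = lower but usable quality
```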

    Explainable Machine Learning Exploiting News and Domain-Specific Lexicon for Stock Market Forecasting

    In this manuscript, we propose a Machine Learning approach to tackle a binary classification problem whose goal is to predict the magnitude (high or low) of future stock price variations for individual companies of the S&P 500 index. Sets of lexicons are generated from globally published articles with the goal of identifying the words with the greatest impact on the market in a specific time interval and within a certain business sector. A feature engineering process is then carried out on the generated lexicons, and the obtained features are fed to a Decision Tree classifier. The predicted label (high or low) indicates whether the underlying company's stock price variation on the next day is higher or lower than a certain threshold. The performance evaluation, carried out through a walk-forward strategy against a set of solid baselines, shows that our approach clearly outperforms the competitors. Moreover, the devised Artificial Intelligence (AI) approach is explainable, in the sense that we analyze the white-box classifier and provide a set of explanations of the obtained results.
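    A hedged sketch of the final classification step: counting lexicon-word occurrences in a company's news for a given day and feeding the counts to a decision tree. Lexicon construction and the labeling threshold are simplified, and the helper names are illustrative, not the authors' code.

```python
# Illustrative sketch only: lexicon-based features for a next-day "high"/"low" variation label.
from sklearn.tree import DecisionTreeClassifier

def lexicon_features(article_text: str, lexicon: list) -> list:
    tokens = article_text.lower().split()
    return [tokens.count(word) for word in lexicon]  # one occurrence count per lexicon word

# X: lexicon_features per (company, day); y: 1 if the next-day move exceeds the threshold, else 0
def train(X, y) -> DecisionTreeClassifier:
    return DecisionTreeClassifier(max_depth=5).fit(X, y)
```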

    Munchausen by internet: current research and future directions.

    The Internet has revolutionized the health world, enabling self-diagnosis and online support to take place irrespective of time or location. Alongside the positive effects that Internet use can have on an individual's health, debate has intensified on how the increasing use of Web technology might have a negative impact on patients, caregivers, and practitioners. One such negative health-related behavior is Munchausen by Internet.

    Evaluation of Agreement between HRT III and iVue OCT in Glaucoma and Ocular Hypertension Patients

    Purpose. To determine the agreement between the Moorfields Regression Analysis (MRA) and Glaucoma Probability Score (GPS) of the Heidelberg Retina Tomograph (HRT III) and peripapillary nerve fiber thickness measured by iVue Optical Coherence Tomography (OCT). Methods. 72 eyes with ocular hypertension or primary open angle glaucoma (POAG) were included in the study: 54 eyes had normal visual fields (VF) and 18 had VF damage. All subjects performed achromatic 30° VF testing with the Octopus G1X program (dynamic strategy) and were imaged with HRT III and iVue OCT. Sectorial and global MRA, GPS, and OCT parameters were used for the analysis. The kappa statistic was used to assess the agreement between methods. Results. A significant agreement between iVue OCT and GPS for the inferotemporal quadrant (κ: 0.555) was found in patients with abnormal VF. A good overall agreement between GPS and MRA was found in all the eyes tested (κ: 0.511). A good agreement between iVue OCT and MRA was shown in the superonasal (κ: 0.656) and nasal (κ: 0.627) quadrants, followed by the superotemporal (κ: 0.602) and inferotemporal (κ: 0.586) sectors, in all the studied eyes. Conclusion. The highest percentages of agreement between the MRA and the iVue OCT were found per quadrant, confirming that in glaucoma, damage starts from the temporal hemiretina.
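    As a hedged illustration of the agreement analysis (not the authors' code), Cohen's kappa between two instruments' per-sector classifications can be computed as below; the binary labels are dummy illustrative data.

```python
# Illustrative sketch only: agreement between two instruments' sector classifications
# (e.g. 1 = outside normal limits, 0 = within normal limits) using Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

hrt_mra  = [1, 0, 1, 1, 0, 1]   # dummy per-sector calls from instrument A
ivue_oct = [1, 0, 0, 1, 0, 1]   # dummy per-sector calls from instrument B
kappa = cohen_kappa_score(hrt_mra, ivue_oct)
print(f"Cohen's kappa: {kappa:.3f}")
```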