
    An Optimisation-Driven Prediction Method for Automated Diagnosis and Prognosis

    This article presents a novel hybrid classification paradigm for medical diagnosis and prognosis prediction. The core mechanism of the proposed method relies on a centroid classification algorithm whose logic is exploited to formulate the classification task as a real-valued optimisation problem. A novel metaheuristic combining the algorithmic structure of Swarm Intelligence optimisers with the probabilistic search models of Estimation of Distribution Algorithms is designed to optimise such a problem, thus leading to high-accuracy predictions. This method is tested on 11 medical datasets and compared against 14 carefully selected classification algorithms. Results show that the proposed approach is competitive with, and on several occasions superior to, the state of the art.
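
    The abstract gives no implementation details, but the core idea (a nearest-centroid classifier whose prototypes are encoded as a real-valued vector and tuned by a population-based, distribution-driven search) can be sketched roughly as follows. This is a minimal illustration, assuming a plain Gaussian estimation-of-distribution loop rather than the paper's actual hybrid metaheuristic; the function names, population size and iteration counts are placeholders.

    import numpy as np

    def predict(centroids, X):
        # Assign each sample to the class of its nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

    def fitness(flat, X, y, n_classes):
        # Training accuracy of the centroid set encoded by the flat real-valued vector.
        c = flat.reshape(n_classes, X.shape[1])
        return float((predict(c, X) == y).mean())

    def eda_optimise_centroids(X, y, n_classes, pop=60, elite=15, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        dim = n_classes * X.shape[1]
        mu = np.tile(X.mean(axis=0), n_classes)            # initial search distribution
        sigma = np.tile(X.std(axis=0) + 1e-9, n_classes)
        best, best_fit = mu.copy(), fitness(mu, X, y, n_classes)
        for _ in range(iters):
            cand = rng.normal(mu, sigma, size=(pop, dim))          # sample candidates
            fits = np.array([fitness(c, X, y, n_classes) for c in cand])
            top = cand[np.argsort(fits)[-elite:]]                  # truncation selection
            mu, sigma = top.mean(axis=0), top.std(axis=0) + 1e-9   # refit the search model
            if fits.max() > best_fit:
                best_fit, best = fits.max(), cand[fits.argmax()].copy()
        return best.reshape(n_classes, X.shape[1]), best_fit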

    Doubly Stochastic Matrix Models for Estimation of Distribution Algorithms

    Problems whose solutions are represented by permutations are very prominent in combinatorial optimization. Thus, in recent decades, a number of evolutionary algorithms have been proposed to solve them, and among these, those based on probability models have received much attention. In that sense, most efforts have focused on introducing algorithms suited to problems of an ordering/ranking nature. However, when it comes to proposing probability-based evolutionary algorithms for assignment problems, the literature has not gone beyond simple and, in most cases, univariate models. In this paper, we explore the use of Doubly Stochastic Matrices (DSMs) for optimizing permutation problems of a matching/assignment nature. To that end, we explore learning and sampling methods to efficiently incorporate DSMs within evolutionary algorithms. Specifically, we adopt the framework of estimation of distribution algorithms and compare DSMs to some existing proposals for permutation problems. Preliminary experiments conducted on instances of the quadratic assignment problem validate this line of research and show that DSMs may obtain very competitive results, while computational cost issues still need to be further investigated. Comment: Preprint of the paper accepted at ACM GECCO 202
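
    As a rough illustration of the DSM idea (not the specific learning and sampling methods studied in the paper), the sketch below projects a positive matrix onto a doubly stochastic matrix via Sinkhorn-Knopp normalisation and then samples candidate assignments from it row by row; the matrix size and iteration counts are arbitrary placeholders.

    import numpy as np

    def sinkhorn(M, iters=200):
        # Alternately normalise rows and columns of a positive matrix so that it
        # converges towards a doubly stochastic matrix (Sinkhorn-Knopp).
        P = M.copy()
        for _ in range(iters):
            P /= P.sum(axis=1, keepdims=True)
            P /= P.sum(axis=0, keepdims=True)
        return P

    def sample_permutation(P, rng):
        # Draw one permutation row by row, renormalising over still-unused columns,
        # so that positions with larger DSM entries are selected more often.
        n = P.shape[0]
        perm = np.full(n, -1)
        free = np.ones(n, dtype=bool)
        for i in range(n):
            probs = P[i] * free
            probs /= probs.sum()
            j = rng.choice(n, p=probs)
            perm[i] = j
            free[j] = False
        return perm

    rng = np.random.default_rng(0)
    model = sinkhorn(rng.random((5, 5)) + 1e-3)   # toy "learned" probability model
    print(sample_permutation(model, rng))         # one candidate assignment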

    Using optimisation meta-heuristics for the roughness estimation problem in river flow analysis

    Climate change threats make it difficult to produce reliable and quick predictions for flood forecasting. This gives rise to the need for advanced methods, e.g., computational intelligence tools, to improve upon the results of flooding event simulations and, in turn, design best practices for riverbed maintenance. In this context, being able to accurately estimate the roughness coefficient, also known as Manning’s n coefficient, plays an important role when computational models are employed. In this piece of research, we propose an optimisation-based approach for the estimation of ‘n’. First, an objective function is designed for measuring the quality of ‘candidate’ Manning’s coefficients relative to specific cross-sections of a river. Second, this function is optimised to return coefficients of the highest possible quality. Five well-known metaheuristic algorithms are employed to achieve this goal: a classic Evolution Strategy, a Differential Evolution algorithm, the popular Covariance Matrix Adaptation Evolution Strategy, a classic Particle Swarm Optimisation and a Bayesian Optimisation framework. We report results on two real-world case studies based on the Italian rivers ‘Paglia’ and ‘Aniene’. A comparative analysis of the employed optimisation algorithms is performed and discussed both empirically and statistically. From the hydrodynamic point of view, the experimental results are satisfactory and are produced in significantly less computational time than classic methods. This shows the suitability of the proposed approach for optimal estimation of the roughness coefficient and, in turn, for designing optimised hydrological models.
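
    A minimal sketch of this general setup, assuming a simple sum-of-squared-errors objective built from Manning’s formula and SciPy’s Differential Evolution rather than the paper’s full hydrodynamic model and objective: the cross-section data, bounds and solver settings below are invented placeholders.

    import numpy as np
    from scipy.optimize import differential_evolution

    # Toy cross-section data: (area m^2, hydraulic radius m, slope, observed discharge m^3/s).
    SECTIONS = [
        (120.0, 2.1, 0.0008, 95.0),
        (140.0, 2.4, 0.0007, 110.0),
        (160.0, 2.6, 0.0009, 150.0),
    ]

    def manning_discharge(n, area, radius, slope):
        # Manning's formula (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2)
        return (1.0 / n) * area * radius ** (2.0 / 3.0) * np.sqrt(slope)

    def objective(x):
        # Sum of squared errors between modelled and observed discharges,
        # with one candidate Manning coefficient per cross-section.
        err = 0.0
        for n, (a, r, s, q_obs) in zip(x, SECTIONS):
            err += (manning_discharge(n, a, r, s) - q_obs) ** 2
        return err

    # Differential Evolution over physically plausible bounds for 'n'.
    bounds = [(0.01, 0.15)] * len(SECTIONS)
    result = differential_evolution(objective, bounds, seed=1, tol=1e-8)
    print(result.x)   # estimated roughness coefficient per cross-section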

    Construction status and prospects of the Hyper-Kamiokande project

    The Hyper-Kamiokande project is a 258-kton water Cherenkov detector combined with a 1.3-MW high-intensity neutrino beam from the Japan Proton Accelerator Research Complex (J-PARC). The inner detector, with a 186-kton fiducial volume, is viewed by 20-inch photomultiplier tubes (PMTs) and multi-PMT modules, and thereby provides state-of-the-art Cherenkov ring reconstruction with thresholds in the range of a few MeV. The project is expected to enable precision neutrino oscillation studies, especially of neutrino CP violation, nucleon decay searches, and low-energy neutrino astronomy. In 2020, the project was officially approved and construction of the far detector started at Kamioka. In 2021, the excavation of the access tunnel and the initial mass production of the newly developed 20-inch PMTs also started. In this paper, we present a basic overview of the project and the latest updates on its construction status; the experiment is expected to commence operation in 2027.

    Prospects for neutrino astrophysics with Hyper-Kamiokande

    Hyper-Kamiokande is a multi-purpose next-generation neutrino experiment. The detector is a two-layered cylindrical tank of ultra-pure water, with a height of 64 m and a diameter of 71 m. The inner detector will be surrounded by tens of thousands of twenty-inch photosensors and multi-PMT modules to detect the water Cherenkov radiation produced by charged particles, providing a fiducial volume of 188 kt. This detection technique was established by Kamiokande and Super-Kamiokande. As the successor of these experiments, Hyper-K will be located deep underground, 600 m below Mt. Tochibora at Kamioka in Japan, to reduce cosmic-ray backgrounds. Besides the physics program with accelerator neutrinos, atmospheric neutrinos and proton decay, neutrino astrophysics is an important research topic for Hyper-K. With its fruitful physics research programs, Hyper-K will play a critical role at the next neutrino physics frontier. It will also provide important information via astrophysical neutrino measurements, i.e., solar neutrinos, supernova burst neutrinos and supernova relic neutrinos. Here, we discuss the physics potential of Hyper-K neutrino astrophysics.

    Automatic Classification of Text Complexity

    This work introduces an automatic classification system for measuring the complexity level of a given Italian text from a linguistic point of view. The task of measuring the complexity of a text is cast as a supervised classification problem by exploiting a dataset of texts purposely produced by linguistic experts for second-language teaching and assessment purposes. The commonly adopted Common European Framework of Reference for Languages (CEFR) levels were used as the target classification classes, the texts were represented by a large set of numeric linguistic features, and an experimental comparison among ten widely used machine learning models was conducted. The results show that the proposed approach is able to obtain good prediction accuracy, and a further analysis was conducted to identify the categories of features that most influenced the predictions.
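
    As a rough sketch of the pipeline described above (not the paper's actual feature set, models or CEFR-labelled dataset), the snippet below extracts a few illustrative numeric linguistic features and compares several scikit-learn classifiers with cross-validation; the features, model choices and labels are assumptions for illustration only.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    def linguistic_features(text):
        # A handful of illustrative numeric features; the paper uses a much larger set.
        words = text.split()
        if not words:
            return [0.0, 0.0, 0.0]
        sents = [s for s in text.replace('!', '.').replace('?', '.').split('.') if s.strip()]
        return [
            len(words) / max(len(sents), 1),                  # mean sentence length
            float(np.mean([len(w) for w in words])),          # mean word length
            len(set(w.lower() for w in words)) / len(words),  # type-token ratio
        ]

    def compare_models(texts, labels):
        # texts: list of Italian texts; labels: CEFR levels such as "A2", "B1" (placeholders).
        X = np.array([linguistic_features(t) for t in texts])
        y = np.array(labels)
        models = {
            "logreg": LogisticRegression(max_iter=1000),
            "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
            "svm": SVC(),
        }
        return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}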

    Special Issue “Recent Trends in Natural Language Processing and Its Applications”

    The recent advancements in Artificial Intelligence have paved the way for remarkable achievements in tasks that have traditionally posed challenges even for humans [...]