    Cancer subtype identification pipeline: a classifusion approach

    Classification of cancer patients into treatment groups is essential for appropriate diagnosis and improved survival. Previously, a series of papers, largely published in the breast cancer domain, has leveraged Computational Intelligence (CI) developments and tools, resulting in groundbreaking advances such as the classification of cancer into newly identified classes, leading to improved treatment options. However, the current literature on the use of CI to achieve this is fragmented, making further advances challenging. This paper captures developments in this area so far, with the goal of establishing a clear, step-by-step pipeline for cancer subtype identification. Building on this pipeline, the paper identifies key potential advances in CI at the individual steps, thus providing a roadmap for future research. As such, the aim of the paper is to engage the CI community to address these research challenges and leverage the strong potential of CI in this important area. Finally, we present a small set of recent findings on the Nottingham Tenovus Primary Breast Carcinoma Series, enabling the classification of a larger number of patients into one of the identified breast cancer groups, and introduce Classifusion, a combination of the results of multiple classifiers.
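
    The abstract does not spell out how Classifusion combines classifier outputs, so the snippet below is only a generic illustration of decision-level fusion: class-membership probabilities from several hypothetical classifiers are combined by a weighted average and the fused label is taken as the arg-max. All numbers and weights are made up.

```python
# Generic decision-level fusion sketch (not the paper's Classifusion method).
import numpy as np

# Class-membership probabilities from 3 hypothetical classifiers for 4 samples
# (axis 0: classifiers, axis 1: samples, axis 2: classes).
clf_outputs = np.array([
    [[0.7, 0.3], [0.2, 0.8], [0.6, 0.4], [0.5, 0.5]],   # classifier 1
    [[0.9, 0.1], [0.4, 0.6], [0.3, 0.7], [0.6, 0.4]],   # classifier 2
    [[0.6, 0.4], [0.1, 0.9], [0.5, 0.5], [0.7, 0.3]],   # classifier 3
])

# Hypothetical per-classifier weights, e.g. cross-validated accuracies.
weights = np.array([0.80, 0.85, 0.75])

# Weighted average of the probability outputs, then arg-max per sample.
fused = np.tensordot(weights / weights.sum(), clf_outputs, axes=1)
labels = fused.argmax(axis=1)
print(fused)
print(labels)
```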

    Fuzzy Integral Driven Ensemble Classification using A Priori Fuzzy Measures

    Aggregation operators are mathematical functions that enable the fusion of information from multiple sources. Fuzzy Integrals (FIs) are widely used aggregation operators, which combine information with respect to a Fuzzy Measure (FM) that captures the worth of both the individual sources and all their possible combinations. However, FIs suffer from the potential drawback of not fusing information according to the intuitively interpretable FM, leading to non-intuitive results. The latter is particularly relevant when an FM has been defined using external information (e.g. experts). In order to address this and provide an alternative to the FI, the Recursive Average (RAV) aggregation operator was recently proposed, which enables intuitive data fusion with respect to a given FM. With an alternative fusion operator in place, in this paper we define the concept of ‘A Priori’ FMs, which are generated based on external information (e.g. classification accuracy) and thus provide an alternative to the traditional approaches of learning or manually specifying FMs. We proceed to develop one specific instance of such an a priori FM to support the decision-level fusion step in ensemble classification. We evaluate the resulting approach by contrasting the performance of the ensemble classifiers for different FMs, including the recently introduced Uriz measure and the Sugeno lambda-measure, as well as by employing both the Choquet FI and the RAV as possible fusion operators. Results are presented for 20 datasets from machine learning repositories and contextualised within the wider literature by comparing them to state-of-the-art ensemble classifiers such as Adaboost, Bagging, Random Forest and Majority Voting.
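
    As a minimal, self-contained sketch of the fusion step being discussed, the code below computes the discrete Choquet FI of a set of inputs with respect to a given FM. The FM values and inputs are illustrative only; the paper's a priori FM construction and the RAV operator are not reproduced here.

```python
# Discrete Choquet Fuzzy Integral with respect to a Fuzzy Measure (FM).
# The FM below is illustrative; it must be monotone with g(full set) = 1.
def choquet(values, fm):
    """values: dict source -> input in [0, 1]; fm: dict frozenset -> measure."""
    ordered = sorted(values, key=values.get, reverse=True)  # descending inputs
    total, prev = 0.0, 0.0
    coalition = set()
    for src in ordered:
        coalition.add(src)
        g = fm[frozenset(coalition)]
        total += values[src] * (g - prev)   # weight = increment of the FM
        prev = g
    return total

fm = {
    frozenset(): 0.0,
    frozenset('a'): 0.4, frozenset('b'): 0.3, frozenset('c'): 0.2,
    frozenset('ab'): 0.8, frozenset('ac'): 0.6, frozenset('bc'): 0.5,
    frozenset('abc'): 1.0,
}
print(choquet({'a': 0.9, 'b': 0.5, 'c': 0.3}, fm))   # -> 0.62
```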

    Monte Carlo simulations of pulse propagation in massive multichannel optical fiber communication systems

    We study the combined effect of delayed Raman response and bit pattern randomness on pulse propagation in massive multichannel optical fiber communication systems. The propagation is described by a perturbed stochastic nonlinear Schrödinger equation, which takes into account changes in pulse amplitude and frequency as well as emission of continuous radiation. We perform extensive numerical simulations with the model, and analyze the dynamics of the frequency moments, the bit-error rate, and the mutual distribution of amplitude and position. The results of our numerical simulations are in good agreement with theoretical predictions based on the adiabatic perturbation approach. Comment: Submitted to Physical Review E; 8 pages, 5 figures.
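
    For readers unfamiliar with this class of models, the snippet below is a bare-bones split-step Fourier integrator for the unperturbed cubic nonlinear Schrödinger equation in soliton units; the delayed Raman term, noise and multichannel bit-pattern effects that are the subject of the paper are deliberately left out, and all grid and step parameters are arbitrary.

```python
# Split-step Fourier sketch for i*psi_z + 0.5*psi_tt + |psi|^2 psi = 0.
import numpy as np

nt, tmax, dz, nz = 1024, 40.0, 1e-3, 2000
t = np.linspace(-tmax, tmax, nt, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])

psi = 1.0 / np.cosh(t)                            # fundamental soliton input
half_linear = np.exp(-0.5j * omega**2 * dz / 2)   # half-step dispersion factor

for _ in range(nz):
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))   # linear half step
    psi *= np.exp(1j * np.abs(psi)**2 * dz)            # nonlinear full step
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))   # linear half step

print(np.max(np.abs(psi)))   # should remain close to 1 for the soliton
```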

    Automatic eduction and statistical analysis of coherent structures in the wall region of a confined plane

    This paper describes a vortex detection algorithm used to expose and statistically characterize the coherent flow patterns observable in the velocity vector fields measured by Particle Image Velocimetry (PIV) in the impingement region of air curtains. The philosophy and the architecture of this algorithm are presented, and its strengths and weaknesses are discussed. The results of a parametric analysis performed to assess the variability of the response of our algorithm to the three user-specified parameters of our eduction scheme are reviewed. The technique is illustrated in the case of a plane turbulent impinging twin-jet with an opening ratio of 10. The corresponding jet Reynolds number, based on the initial mean flow velocity U0 and the jet width e, is 14,000. A statistical analysis of the size, shape, spatial distribution and energetic content of the coherent eddy structures detected in the impingement region of this test flow is provided. Although many questions remain open, new insights are given into the way these structures might form, organize and evolve. Relevant results provide an original picture of the plane turbulent impinging jet.
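
    The abstract does not detail the eduction algorithm or its three parameters, so the sketch below only illustrates one common family of vortex-detection criteria on PIV-like data: the swirling strength, i.e. the imaginary part of the eigenvalues of the in-plane velocity gradient tensor, thresholded to flag vortex cores. The velocity field and the threshold are synthetic.

```python
# Swirling-strength (lambda_ci) criterion on a synthetic 2D velocity field.
import numpy as np

nx = ny = 64
x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
# Synthetic field: a Gaussian vortex embedded in a uniform stream.
r2 = x**2 + y**2
u = 1.0 - y * np.exp(-r2 / 0.1)
v = x * np.exp(-r2 / 0.1)

dx = dy = 2.0 / (nx - 1)
du_dy, du_dx = np.gradient(u, dy, dx)
dv_dy, dv_dx = np.gradient(v, dy, dx)

# Imaginary part of the eigenvalues of the 2x2 velocity gradient tensor;
# positive where the local flow topology is a swirl.
trace = du_dx + dv_dy
det = du_dx * dv_dy - du_dy * dv_dx
disc = (trace / 2.0) ** 2 - det
lambda_ci = np.where(disc < 0.0, np.sqrt(-disc), 0.0)

vortex_mask = lambda_ci > 0.5 * lambda_ci.max()   # illustrative threshold
print(int(vortex_mask.sum()), "grid points flagged as vortex core")
```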

    Comparison of Fuzzy Integral-Fuzzy Measure based Ensemble Algorithms with the State-of-the-art Ensemble Algorithms

    The Fuzzy Integral (FI) is a non-linear aggregation operator which enables the fusion of information from multiple sources with respect to a Fuzzy Measure (FM) that captures the worth of both the individual sources and all their possible combinations. Based on the expected potential of the non-linear aggregation offered by the FI, its application to decision-level fusion in ensemble classifiers, i.e. fusing multiple classifier outputs into one superior decision-level output, has recently been explored. A key example of such an FI-FM ensemble classification method is the Decision-level Fuzzy Integral Multiple Kernel Learning (DeFIMKL) algorithm, which aggregates the outputs of kernel-based classifiers through the Choquet FI with respect to an FM learned through a regularised quadratic programming approach. While the approach has been validated against a number of classifiers based on multiple kernel learning, it has thus far not been compared to the state of the art in ensemble classification. Thus, this paper puts forward a detailed comparison of FI-FM based ensemble methods, specifically the DeFIMKL algorithm, with state-of-the-art ensemble methods including Adaboost, Bagging, Random Forest and Majority Voting over 20 public datasets from the UCI machine learning repository. The results on the selected datasets suggest that the FI-based ensemble classifier performs both well and efficiently, indicating that it is a viable alternative when selecting ensemble classifiers and that the non-linear fusion of decision-level outputs offered by the FI delivers on its expected potential and warrants further study.
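
    A minimal sketch of this kind of baseline comparison is given below, using scikit-learn implementations of the competing ensembles on a single public dataset with cross-validation; the DeFIMKL (FI-FM) classifier itself is not reproduced, and the dataset and hyper-parameters are illustrative rather than those used in the paper.

```python
# Cross-validated comparison of standard ensemble baselines (illustrative).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

baselines = {
    "AdaBoost": AdaBoostClassifier(n_estimators=100),
    "Bagging": BaggingClassifier(n_estimators=100),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "MajorityVote": VotingClassifier(
        estimators=[("tree", DecisionTreeClassifier()),
                    ("lr", LogisticRegression(max_iter=5000)),
                    ("rf", RandomForestClassifier(n_estimators=100))],
        voting="hard"),
}

for name, clf in baselines.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```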

    Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^{(1-ε)/2} lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When terms in these polynomials are constrained to degrees ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.
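
    As a concrete (if naive) illustration of problem (1), the snippet below expands a tiny hand-made ΠΣΠ polynomial term by term and reads off the coefficient of the multilinear monomial x1*x2*x3; the cost of this brute-force expansion grows exponentially in general, in line with the O*(2^n) and O*(3^n s(n)) bounds discussed, and it is not the algorithm developed in the paper.

```python
# Brute-force coefficient extraction for a tiny Pi-Sigma-Pi polynomial,
# represented as a product of clauses; each clause is a sum of terms
# (coefficient, exponent vector).
from collections import defaultdict

N = 3  # variables x1, x2, x3

# (x1 + x2) * (x2 + x3) * (x1 + x3); the coefficient of x1*x2*x3 is 2.
clauses = [
    [(1, (1, 0, 0)), (1, (0, 1, 0))],
    [(1, (0, 1, 0)), (1, (0, 0, 1))],
    [(1, (1, 0, 0)), (1, (0, 0, 1))],
]

expansion = defaultdict(int)
expansion[(0,) * N] = 1                  # start from the constant polynomial 1
for clause in clauses:
    nxt = defaultdict(int)
    for exp, coef in expansion.items():
        for c, term in clause:
            combined = tuple(a + b for a, b in zip(exp, term))
            nxt[combined] += coef * c
    expansion = nxt

print(expansion[(1,) * N])               # coefficient of x1*x2*x3 -> 2
```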

    Biochemical study of certain enzymes and metabolites of the carbohydrate metabolism in the skeletal muscle of the dengue virus-infected mice

    Changes in enzymes and metabolites of carbohydrate metabolism in skeletal muscle were studied in mice after intracerebral inoculation of dengue type 2 virus. Lactic dehydrogenase, aldolase, phosphoglucoisomerase, phosphoglucomutase, GOT and GPT activities were initially enhanced two- to three-fold, reaching a peak on day 5. As illness appeared in the mice, all enzyme activities fell and, in the paralytic stage on day 8, were about three times lower than in controls. Fructose-1,6-diphosphatase activity was increased on days 4 and 5 but decreased later. Acid phosphatase increased abruptly from day 6, while alkaline phosphatase activity was irregular. Creatine increased on days 4 and 5 but diminished later. Glycogen decreased from the beginning and was lowest on day 5, but levels increased later and were highest in paralysed muscles. In contrast, lactic acid accumulated in the muscles, peaked on day 5 and then declined. Dengue virus was detected in the muscles from day 2, but higher titres were seen from day 6. Changes similar to those seen in the preparalytic stage in mice may occur in human beings, causing myalgia.

    The Effect of Time Variation in the Higgs Vacuum Expectation Value on the Cosmic Microwave Background

    A time variation in the Higgs vacuum expectation value alters the electron mass and thereby changes the ionization history of the universe. This change produces a measurable imprint on the pattern of cosmic microwave background (CMB) fluctuations. The nuclear masses and nuclear binding energies, as well as the Fermi coupling constant, are also altered, with negligible impact on the CMB. We calculate the changes in the spectrum of the CMB fluctuations as a function of the change in the electron mass. We find that future CMB experiments could be sensitive to |Δm_e/m_e| ~ |ΔG_F/G_F| ~ 10^{-2} to 10^{-3}. However, we also show that a change in the electron mass is nearly, but not exactly, degenerate with a change in the fine-structure constant. If both the electron mass and the fine-structure constant are time-varying, the corresponding CMB limits are much weaker, particularly for l < 1000. Comment: 6 pages, 3 figures; Fig. 3 modified, other minor corrections.
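
    For orientation, the tree-level electroweak relations below (standard textbook results, not material from the paper itself, with the electron Yukawa coupling y_e held fixed) show why the electron mass and the Fermi constant respond to a shift in the Higgs vacuum expectation value v at the same order of magnitude.

```latex
% Tree level: m_e = y_e v / \sqrt{2}, \quad G_F = 1 / (\sqrt{2}\, v^2),
% so, to first order in \Delta v / v,
\[
  \frac{\Delta m_e}{m_e} = \frac{\Delta v}{v},
  \qquad
  \frac{\Delta G_F}{G_F} \simeq -2\,\frac{\Delta v}{v}.
\]
```

    A fractional shift in v of order 10^{-2} to 10^{-3} therefore induces fractional shifts of comparable size in both m_e and G_F, consistent with the sensitivity quoted above.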