
    Exact algorithms for the Steiner tree problem

    In this thesis, exact algorithms for the Steiner tree problem are investigated. The Dreyfus-Wagner algorithm is a well-known dynamic programming method for computing minimum Steiner trees in general weighted graphs in time O(3^k), where k is the number of terminals. First, two exact algorithms for the Steiner tree problem are presented. The first improves the running time of the Dreyfus-Wagner algorithm to O(2.684^k) by showing that the optimum Steiner tree T can be partitioned as T = T1 ∪ T2 ∪ T3 in such a way that each Ti is a minimum Steiner tree in a suitable contracted graph Gi with fewer than k/2 terminals. The second algorithm runs in time O((2 + ε)^k) for any ε > 0. Every rectilinear Steiner tree problem admits an optimal tree T which is composed of tree stars, and the currently fastest algorithm for the rectilinear Steiner tree problem proceeds by composing an optimum tree T from tree star components in the cheapest way. Fößmeier and Kaufmann showed that any problem instance with k terminals has a number of tree stars between 1.32^k and 1.38^k. We also present additional conditions on tree stars which allow us to further reduce the number of candidate components building the optimum Steiner tree to O(1.337^k).
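
    For orientation, below is a minimal sketch of the classic Dreyfus-Wagner dynamic program, the O(3^k) baseline the thesis improves on; the thesis's own improved algorithms are not reproduced here. The sketch assumes the graph is given by its metric closure, i.e. an all-pairs shortest-path matrix `dist`. Enumerating every split of every terminal subset is what produces the 3^k factor.

```python
def dreyfus_wagner(dist, terminals):
    """Minimum Steiner tree cost via the classic Dreyfus-Wagner DP.

    dist      -- metric closure of the graph: all-pairs shortest-path
                 matrix, dist[u][v] == dist[v][u], dist[v][v] == 0
    terminals -- list of k terminal vertex indices
    Runs in O(3^k * n + 2^k * n^2) time: exponential only in k.
    """
    n, k = len(dist), len(terminals)
    INF = float("inf")
    # S[D][v] = cost of a cheapest tree spanning terminal subset D (bitmask)
    # together with vertex v
    S = [[INF] * n for _ in range(1 << k)]
    for i, t in enumerate(terminals):
        for v in range(n):
            S[1 << i][v] = dist[t][v]
    for D in range(1, 1 << k):
        if D & (D - 1) == 0:
            continue                      # singletons are the base case
        # merge two subtrees for complementary subsets at a vertex u ...
        best = [INF] * n
        sub = (D - 1) & D
        while sub:
            comp = D ^ sub
            if sub < comp:                # visit each unordered split once
                for u in range(n):
                    c = S[sub][u] + S[comp][u]
                    if c < best[u]:
                        best[u] = c
            sub = (sub - 1) & D
        # ... then connect u to the root v by a shortest path
        for v in range(n):
            S[D][v] = min(best[u] + dist[u][v] for u in range(n))
    return min(S[(1 << k) - 1])

# Toy check: path 0-1-2-3 with unit edges; connecting {0, 2, 3} costs 3
W = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
print(dreyfus_wagner(W, [0, 2, 3]))       # -> 3
```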

    Application Research of Combined Forecasting Based on Induced Ordered Weighted Averaging Operator

    To address the shortcomings of the traditional weighted arithmetic combination forecasting model, we use the induced ordered weighted averaging (IOWA) operator to assign weights according to the fitting accuracy of each individual forecasting method at each time point of the sample interval. Taking the error sum of squares as the criterion, we establish a new combination forecasting model that effectively improves the precision of combination forecasting. The model is then applied to forecasting resident consumption levels.
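
    The abstract gives no formulas, so the following is only a sketch of one common way such an IOWA combination model is set up. It assumes fitting accuracy is measured as one minus the relative error at each time point and that the weights minimizing the error sum of squares are found numerically; both are assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def iowa_combination_weights(y, forecasts):
    """Fit IOWA combination weights by minimizing the error sum of squares.

    y         -- actual series, shape (T,)
    forecasts -- individual forecasts, shape (m, T), one row per method
    Returns weights w (length m) applied to accuracy-ranked forecasts.
    """
    y = np.asarray(y, float)
    F = np.asarray(forecasts, float)
    m, _ = F.shape
    # Inducing values: fitting accuracy of each method at each time point
    # (1 - relative error; an assumed, commonly used definition)
    acc = 1.0 - np.abs((y - F) / y)
    # Rank the forecasts at each time point by descending accuracy
    order = np.argsort(-acc, axis=0)
    F_ranked = np.take_along_axis(F, order, axis=0)
    E = F_ranked - y                  # error of the rank-j forecast at time t
    Q = E @ E.T                       # error cross-product matrix
    # Since the weights sum to 1, the combined error at time t is
    # sum_j w_j * E[j, t], so the error sum of squares equals w' Q w.
    res = minimize(lambda w: w @ Q @ w,
                   x0=np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
                   method="SLSQP")
    return res.x
```

    At forecast time, the fitted weights are applied to the individual forecasts after ranking them by their inducing (accuracy) values, in keeping with how the IOWA operator orders its arguments.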

    TNFR2 and Regulatory T Cells: Potential Immune Checkpoint Target in Cancer Immunotherapy

    TNF has both proinflammatory and anti-inflammatory effects. It binds to two structurally related but functionally distinct receptors, TNFR1 and TNFR2. Unlike TNFR1, which is ubiquitously expressed, TNFR2 expression is largely restricted to myeloid and lymphoid cell lineages, including a fraction of regulatory T cells (Treg). In general, TNFR1 is responsible for TNF-mediated apoptosis and cell death and mostly induces proinflammatory reactions, whereas TNFR2 mainly mediates functions related to cell survival and immune suppression. Treg play an indispensable role in maintaining immunological self-tolerance and restraining excessive immune reactions deleterious to the host. Impaired Treg-mediated immune regulation has been observed in various autoimmune diseases as well as in cancers. Therefore, Treg might provide an ideal therapeutic target for diseases in which the immune balance is impaired and could benefit from the regulation of Treg properties. TNFR2 is highly expressed on Treg in mice and in humans, and TNFR2+ Treg exhibit the most potent suppressive capacity. TNF-TNFR2 ligation promotes Treg proliferation, although its effect on Treg suppressive function remains controversial. Here, we describe in detail the TNF-mediated regulation of Treg and the potential clinical applications in cancer immunotherapy as well as in autoimmune diseases, with a focus on human Treg subsets.

    Non-specific filtering of beta-distributed data.

    Background: Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high-dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternative filter methods that utilize a variance-stabilizing transformation for Beta-distributed data and do not share this bias.
    Results: We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta-distributed data outperformed the common filter of using the standard deviation of the DNA methylation proportion, or of its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and the standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets.
    Conclusions: We found two different filter statistics that tended to prioritize features with different characteristics, each of which performed well for identifying clusters of cancer and non-cancer tissue and for identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data set, evaluating the overlap of features selected and clusters discovered.
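
    The paper's exact filter statistic is not given in the abstract. As a rough sketch, the arcsine square-root transform is a standard variance stabilizer for proportion data, and the three filter families mentioned above (raw beta SD, M-value SD, variance-stabilized SD) could be compared as follows; the choice of transform here is an assumption, not the paper's definition.

```python
import numpy as np

def filter_rankings(beta, top=1000, eps=1e-6):
    """Rank features (rows) of a methylation beta matrix under three
    non-specific filter statistics; return the top feature indices for each.

    beta -- methylation proportions in (0, 1), shape (features, samples)
    """
    b = np.clip(beta, eps, 1 - eps)
    sd_beta = b.std(axis=1)               # SD of the raw proportions
    m_values = np.log2(b / (1 - b))       # log-transformed M-values
    sd_m = m_values.std(axis=1)
    # Arcsine square-root transform: an assumed, standard variance
    # stabilizer for proportion data (the paper's statistic may differ)
    sd_vst = np.arcsin(np.sqrt(b)).std(axis=1)
    most_variable = lambda s: np.argsort(s)[::-1][:top]
    return {"sd_beta": most_variable(sd_beta),
            "sd_m": most_variable(sd_m),
            "sd_vst": most_variable(sd_vst)}
```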

    Waste walnut shell valorization to iron loaded biochar and its application to arsenic removal

    Iron-loaded biochar (ILB) was prepared from waste walnut shell by microwave pyrolysis and its application to arsenic removal was investigated. The ILB was characterized using X-ray diffraction, scanning electron microscopy and a BET surface area analyzer. Adsorption isotherms of As(V) on ILB were generated experimentally over a temperature range of 25 to 45 °C, along with adsorption kinetics at 25 °C. The adsorption isotherms were modeled using the Langmuir and Freundlich isotherm models, while the adsorption kinetics were modeled using the pseudo-first-order and pseudo-second-order kinetic models and the intra-particle diffusion model. The ILB had a surface area of 418 m²/g, with iron present in the form of hematite (Fe2O3) and magnetite (Fe3O4). The arsenic adsorption isotherm matches the Langmuir model well, with a monolayer adsorption capacity of 1.91 mg/g at 25 °C. The As(V) adsorption capacity compares well with other porous adsorbents widely reported in the literature, supporting its application as a cost-effective adsorbent.
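
    As a worked illustration of the isotherm modeling step, a Langmuir fit by nonlinear least squares might look like the following. The equilibrium data below are invented for the example (chosen to give a capacity near the reported 1.91 mg/g) and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data at 25 degC (Ce in mg/L, qe in mg/g);
# invented for illustration, not the paper's measurements
Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
qe = np.array([0.45, 0.78, 1.15, 1.55, 1.75, 1.86])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[2.0, 1.0])
print(f"qmax = {qmax:.2f} mg/g, KL = {KL:.2f} L/mg")
```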

    The Aging Kidney: Increased Susceptibility to Nephrotoxicity

    Three decades have passed since a series of studies indicated that the aging kidney is characterized by increased susceptibility to nephrotoxic injury. Data from these experimental models are strengthened by clinical data demonstrating that the aging population has an increased incidence and severity of acute kidney injury (AKI). Since then, a number of studies have focused on age-dependent alterations in pathways that predispose the kidney to acute insult. This review will focus on the mechanisms altered by aging in the kidney that may increase its susceptibility to injury, including hemodynamics, oxidative stress, apoptosis, autophagy, inflammation and decreased repair.

    Surface fitting for quasi scattered data from coordinate measuring systems

    Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer-aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems are neither gridded nor completely scattered: their distribution is scattered in physical space, but the points are stored in the order of measurement, so they are named quasi scattered data in this paper. Such data can therefore be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curves and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield a suitable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in fitting this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatter, it can be shown that the error introduced by resampling is negligible and the method is therefore feasible.
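
    A rough sketch of the three-step pipeline is given below, using SciPy's plain (non-rational) B-splines as a stand-in for NURBS and uniform-parameter resampling in place of the paper's peak- and curvature-based resampling; the final step fits a smoothing spline z = f(x, y) rather than a full parametric patch. All of these substitutions are simplifications of the method described above.

```python
import numpy as np
from scipy.interpolate import splprep, splev, bisplrep

def fit_surface_from_rows(rows, n_resample=50):
    """Three-step pipeline: per-row curve fit, resampling, surface fit.

    rows -- list of (ni, 3) arrays of measured points, one per scan row;
            ni may differ between rows (quasi scattered data).
    Returns a bivariate spline tck, evaluable with scipy.interpolate.bisplev.
    """
    u_new = np.linspace(0.0, 1.0, n_resample)
    resampled = []
    for pts in rows:
        # Step 1: fit a parametric spline curve to this row of points
        tck, _ = splprep(pts.T, s=0.0)
        # Step 2: resample the fitted curve (uniformly in parameter here)
        resampled.append(np.array(splev(u_new, tck)).T)
    grid = np.asarray(resampled)          # shape (n_rows, n_resample, 3)
    # Step 3: fit a smoothing spline surface z = f(x, y) to the now
    # row-aligned points (default smoothing)
    x, y, z = (grid[..., i].ravel() for i in range(3))
    return bisplrep(x, y, z)
```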