
    Paradox Elimination in Dempster–Shafer Combination Rule with Novel Entropy Function: Application in Decision-Level Multi-Sensor Fusion

    Multi-sensor data fusion technology is an important tool in building decision-making applications. Modified Dempster–Shafer (DS) evidence theory can handle conflicting sensor inputs and can be applied without any prior information. As a result, DS-based information fusion is very popular in decision-making applications, but the original DS theory produces counterintuitive results when combining highly conflicting evidence from multiple sensors. An effective algorithm for fusing highly conflicting information in the spatial domain is not widely reported in the literature. In this paper, a fusion algorithm is proposed that addresses these limitations of the original Dempster–Shafer (DS) framework. A novel entropy function based on Shannon entropy is proposed, which captures uncertainty better than Shannon and Deng entropy. An 8-step algorithm is developed that eliminates the inherent paradoxes of classical DS theory. Multiple examples show that the proposed method is effective in handling conflicting information in the spatial domain. Simulation results show that the proposed algorithm has a competitive convergence rate and accuracy compared with other methods in the literature.
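    The 8-step algorithm and the exact form of the novel entropy function are not reproduced in the abstract. As background, the sketch below shows the classical Dempster combination of two basic probability assignments (BPAs) together with a generic Shannon-style entropy over a BPA; the hypothesis names and masses are made up for illustration and the entropy is only a stand-in for the paper's proposed function.

```python
from itertools import product
from math import log2

def dempster_combine(m1, m2):
    """Combine two BPAs given as {frozenset_of_hypotheses: mass}."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y              # mass falling on the empty set
    k = 1.0 - conflict                     # normalisation factor
    if k <= 0.0:
        raise ValueError("totally conflicting evidence; rule undefined")
    return {s: v / k for s, v in combined.items()}

def bpa_entropy(m):
    """Shannon-style uncertainty of a BPA; a stand-in for the paper's
    novel entropy function, whose exact form is not given in the abstract."""
    return -sum(v * log2(v) for v in m.values() if v > 0.0)

# Two sensors reporting on an illustrative frame of discernment {A, B, C}.
m1 = {frozenset("A"): 0.7, frozenset("B"): 0.2, frozenset("ABC"): 0.1}
m2 = {frozenset("A"): 0.6, frozenset("C"): 0.3, frozenset("ABC"): 0.1}
fused = dempster_combine(m1, m2)
print(fused, bpa_entropy(fused))
```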

    Time-Domain Data Fusion Using Weighted Evidence and Dempster–Shafer Combination Rule: Application in Object Classification

    To apply data fusion in the time domain based on the Dempster–Shafer (DS) combination rule, an 8-step algorithm with a novel entropy function is proposed. The algorithm is applied in the time domain to achieve the sequential combination of time-domain data. Simulation results show that this method successfully captures the changes (dynamic behavior) in time-domain object classification. The method also shows better anti-disturbance ability and transition behavior compared with other methods in the literature. As an example, a convolutional neural network (CNN) is trained to classify three different types of weeds. Precision and recall from the CNN's confusion matrix are used to update the basic probability assignment (BPA), which captures the classification uncertainty. Real data of classified weeds from a single sensor are used to test the time-domain data fusion. The proposed method successfully filters noise (reducing sudden changes and producing smoother curves) and fuses conflicting information from the video feed. The performance of the algorithm can be tuned between robustness and fast response using a single parameter, the number of time steps (ts).
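    The abstract does not spell out how the BPAs are built from the CNN's precision and recall or exactly how the window of ts time steps is combined. The sketch below is one plausible reading, not the paper's method: assign the classifier's precision as mass to the predicted class, put the remainder on total ignorance, and sequentially combine the last ts BPAs with Dempster's rule. All class names and numbers are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster combination of two BPAs (no total-conflict guard here)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        if a & b:
            combined[a & b] = combined.get(a & b, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def build_bpa(label, precision, frame):
    """Illustrative discounting: `precision` mass on the predicted class,
    the rest on total ignorance (the whole frame of discernment)."""
    return {frozenset({label}): precision, frozenset(frame): 1.0 - precision}

def fuse_window(bpa_stream, ts):
    """Fuse the most recent `ts` BPAs; larger ts gives smoother, more robust
    output, smaller ts reacts faster to genuine class changes."""
    window = bpa_stream[-ts:]
    fused = window[0]
    for m in window[1:]:
        fused = dempster_combine(fused, m)
    return fused

frame = {"weed_A", "weed_B", "weed_C"}
stream = [build_bpa("weed_A", 0.90, frame),
          build_bpa("weed_A", 0.85, frame),
          build_bpa("weed_B", 0.60, frame)]   # a likely one-frame misclassification
print(fuse_window(stream, ts=3))
```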

    Improved Algorithms for the Point-Set Embeddability problem for Plane 3-Trees

    In the point-set embeddability problem, we are given a plane graph $G$ with $n$ vertices and a point set $S$ with $n$ points. The goal is to decide whether there exists a straight-line drawing of $G$ such that each vertex is represented as a distinct point of $S$, and to provide an embedding if one exists. Recently, in \cite{DBLP:conf/gd/NishatMR10}, a complete characterization of this problem on a special class of graphs known as plane 3-trees was presented, along with an efficient algorithm to solve the problem. In this paper, we use the same characterization to devise an improved algorithm for the same problem. Much of the efficiency we achieve comes from clever uses of the triangular range search technique. We also study a generalized version of the problem and present improved algorithms for this version as well.
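    The data structure used for triangular range search is not described in the abstract. Purely as an illustration of what such a query computes, the brute-force sketch below counts the points of $S$ that fall inside a query triangle; the actual technique answers these queries much faster than linear time. Coordinates are made up.

```python
def sign(p, q, r):
    """Twice the signed area of triangle pqr (orientation test)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def in_triangle(pt, a, b, c):
    d1, d2, d3 = sign(pt, a, b), sign(pt, b, c), sign(pt, c, a)
    has_neg = min(d1, d2, d3) < 0
    has_pos = max(d1, d2, d3) > 0
    return not (has_neg and has_pos)           # boundary points count as inside

def triangular_range_count(S, a, b, c):
    """Naive triangular range query: count points of S inside triangle abc."""
    return sum(in_triangle(p, a, b, c) for p in S)

S = [(1, 1), (2, 3), (4, 1), (3, 2), (5, 5)]
print(triangular_range_count(S, (0, 0), (6, 0), (3, 6)))   # -> 4
```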

    An Integer Programming Formulation of the Minimum Common String Partition problem

    We consider the problem of finding a minimum common partition of two strings (MCSP). The problem has applications in genome comparison. The MCSP problem is proved to be NP-hard. In this paper, we develop an Integer Programming (IP) formulation for the problem and implement it. The experimental results are compared with the previous state-of-the-art algorithms and are found to be promising.
    Comment: arXiv admin note: text overlap with arXiv:1401.453
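    The paper's exact IP model is not reproduced in the abstract. The sketch below shows one natural set-partitioning formulation of MCSP, not necessarily the authors' formulation: a binary variable for every matching block, constraints forcing every position of both strings to be covered exactly once, and an objective minimizing the number of chosen blocks. It is written with the PuLP modelling library purely for illustration; any IP solver would do.

```python
# One natural set-partitioning IP for MCSP; requires PuLP (pip install pulp).
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

def mcsp_ip(a, b):
    n = len(a)
    # Candidate blocks (i, j, l): a[i:i+l] equals b[j:j+l].
    blocks = [(i, j, l)
              for i in range(n) for j in range(n)
              for l in range(1, n - max(i, j) + 1)
              if a[i:i + l] == b[j:j + l]]
    prob = LpProblem("MCSP", LpMinimize)
    x = {blk: LpVariable("x_%d_%d_%d" % blk, cat=LpBinary) for blk in blocks}
    prob += lpSum(x.values())                          # minimise number of blocks used
    for p in range(n):                                 # every position of `a` covered once
        prob += lpSum(x[blk] for blk in blocks if blk[0] <= p < blk[0] + blk[2]) == 1
    for q in range(n):                                 # every position of `b` covered once
        prob += lpSum(x[blk] for blk in blocks if blk[1] <= q < blk[1] + blk[2]) == 1
    prob.solve()
    return [blk for blk in blocks if x[blk].varValue > 0.5]

# "abcab" vs "ababc": an optimal common partition has 3 blocks (ab | c | ab).
print(mcsp_ip("abcab", "ababc"))
```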

    Image-Dependent Spatial Shape-Error Concealment

    Existing spatial shape-error concealment techniques are broadly based either on parametric curves that exploit geometric information about a shape's contour, or on object shape statistics using a combination of Markov random fields and maximum a posteriori estimation. Both categories are, to some extent, able to mask errors caused by information loss, provided the shape is considered independently of the image/video. They palpably do not, however, afford the best solution in applications where shape is used as metadata to describe image and video content. This paper presents a novel image-dependent spatial shape-error concealment (ISEC) algorithm that uses both image and shape information by employing the established rubber-band contour-detecting function, with the novel enhancement of automatically determining the optimal width of the band to achieve superior error concealment. Experimental results corroborate, both qualitatively and numerically, the enhanced performance of the new ISEC strategy compared with established techniques.
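    The rubber-band contour-detecting function itself is not described in the abstract. As a loose, hypothetical illustration of the band-width idea only, the sketch below reconnects the two endpoints of a lost contour segment by tracking the strongest image gradient inside bands of several candidate half-widths and keeping the width whose path scores best; it is not the paper's ISEC algorithm.

```python
import numpy as np

def conceal_segment(grad_mag, p0, p1, widths=(1, 2, 4, 8)):
    """Illustration only, not the paper's ISEC algorithm.
    grad_mag : 2-D array of gradient magnitudes for the decoded image.
    p0, p1   : (row, col) endpoints of the lost contour segment, p0 left of p1.
    Returns the selected band half-width and the concealed contour path."""
    h = grad_mag.shape[0]
    best = None
    for w in widths:
        path, score = [], 0.0
        for c in range(p0[1], p1[1] + 1):
            t = (c - p0[1]) / max(p1[1] - p0[1], 1)
            centre = round(p0[0] + t * (p1[0] - p0[0]))    # straight "rubber band" row
            rows = np.arange(max(centre - w, 0), min(centre + w, h - 1) + 1)
            r = int(rows[np.argmax(grad_mag[rows, c])])    # strongest edge inside the band
            path.append((r, c))
            score += grad_mag[r, c]
        score /= len(path)
        if best is None or score > best[0]:
            best = (score, w, path)                        # keep the best-scoring width
    return best[1], best[2]

grad = np.random.default_rng(0).random((32, 32))           # stand-in gradient image
width, contour = conceal_segment(grad, (10, 5), (14, 20))
print(width, contour[:3])
```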

    Cash Flow Trends and Their Fundamental Drivers: A Continuing Look, Comprehensive Industry Review (Qtr 4, 2008)

    This research report is one of a series that looks at the cash flow performance of Corporate America. Our primary focus is on free cash margin, or free cash flow measured as a percentage of revenue. We also look at the drivers, or components, of free cash margin in an effort to determine the factors behind observed changes. In the current study we conduct a comprehensive review of 20 four-digit GICS non-financial industries and their 61 six-digit GICS sub-industries for a series of rolling twelve-month periods from the first quarter of 2000 through the fourth quarter of 2008. Recession notwithstanding, free cash margin held up reasonably well during the twelve months ended December 2008, owing to declining capital expenditures and reduced working capital requirements. The metric declined to 4.12%, down from a high of 5.14% reached in June 2004 and, more recently, the 4.93% level reached in December 2007 and 4.44% in September 2008. With free cash margin at 4.12%, Corporate America is generating 4.12 cents of free cash flow for every dollar of revenue. The number of industries experiencing declining free cash margin increased from our last report. For our sample as a whole, free cash margin last bottomed at 2.43% during the 2001 recession. We continue to believe that during the current recession, free cash margin will likely decline to levels at or below those found in the 2001 recession, suggesting a continuing contraction of free cash flow of 50% or more from current levels. However, a continuing focus on maintaining low working capital levels and reduced capital expenditures may leave companies better off on a cash flow basis than they were in 2001.
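    As a small worked example of the headline metric: taking free cash flow as operating cash flow minus capital expenditures (the report's precise definition may differ), free cash margin is simply free cash flow divided by revenue. The figures below are hypothetical, chosen only to reproduce the 4.12% level quoted above.

```python
def free_cash_margin(operating_cash_flow, capex, revenue):
    """Free cash flow (here: operating cash flow minus capex) as a share of revenue."""
    return (operating_cash_flow - capex) / revenue

# Hypothetical figures in $ millions, chosen so the margin lands at 4.12%.
revenue, ocf, capex = 1000.0, 120.0, 78.8
print(f"{free_cash_margin(ocf, capex, revenue):.2%}")   # -> 4.12%, i.e. 4.12 cents per revenue dollar
```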

    GreMuTRRR: A Novel Genetic Algorithm to Solve Distance Geometry Problem for Protein Structures

    Nuclear Magnetic Resonance (NMR) spectroscopy is a widely used technique to predict the native structure of proteins. However, NMR machines are only able to report approximate and partial distances between pairs of atoms. To build the protein structure, one has to solve the Euclidean distance geometry problem given the incomplete interval distance data produced by NMR machines. In this paper, we propose a new genetic algorithm for solving the Euclidean distance geometry problem for protein structure prediction given sparse NMR data. Our genetic algorithm uses a greedy mutation operator to intensify the search, a twin removal technique to diversify the population, and a random restart method to recover from stagnation. On a standard set of benchmark datasets, our algorithm significantly outperforms standard genetic algorithms.
    Comment: Accepted for publication in the 8th International Conference on Electrical and Computer Engineering (ICECE 2014)
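    The operators named above are not specified in detail in the abstract. The sketch below is a hedged reading of how a greedy mutation, twin removal, and random restart could fit together for interval distance data, with fitness taken as the total violation of the given lower/upper distance bounds. All parameter values and the toy bounds are made up and are not the paper's settings.

```python
import numpy as np

def violation(coords, bounds):
    """Total violation of the interval distance constraints.
    bounds: list of (i, j, lower, upper) sparse distance intervals."""
    err = 0.0
    for i, j, lo, hi in bounds:
        d = np.linalg.norm(coords[i] - coords[j])
        err += max(lo - d, 0.0) + max(d - hi, 0.0)
    return err

def greedy_mutate(coords, bounds, rng, step=0.5, tries=8):
    """Perturb one randomly chosen atom several times; keep the best attempt."""
    i = rng.integers(len(coords))
    best = coords
    for _ in range(tries):
        cand = coords.copy()
        cand[i] = cand[i] + rng.normal(0.0, step, 3)
        if violation(cand, bounds) < violation(best, bounds):
            best = cand
    return best

def remove_twins(pop, rng, tol=1e-3):
    """Replace near-identical individuals with random ones to keep diversity."""
    kept = []
    for k, ind in enumerate(pop):
        twin = any(np.allclose(ind, other, atol=tol) for other in pop[:k])
        kept.append(rng.normal(0.0, 5.0, ind.shape) if twin else ind)
    return kept

def ga(bounds, n_atoms, rng, pop_size=30, gens=200, restart_after=30):
    pop = [rng.normal(0.0, 5.0, (n_atoms, 3)) for _ in range(pop_size)]
    best, best_err, stall = None, float("inf"), 0
    for _ in range(gens):
        pop.sort(key=lambda c: violation(c, bounds))
        if violation(pop[0], bounds) < best_err:
            best, best_err, stall = pop[0].copy(), violation(pop[0], bounds), 0
        else:
            stall += 1
        if stall >= restart_after:                     # random restart on stagnation
            pop = [best] + [rng.normal(0.0, 5.0, (n_atoms, 3))
                            for _ in range(pop_size - 1)]
            stall = 0
        elite = pop[:pop_size // 2]
        children = [greedy_mutate(p, bounds, rng) for p in elite]
        pop = remove_twins(elite + children, rng)
    return best, best_err

rng = np.random.default_rng(1)
toy_bounds = [(0, 1, 1.4, 1.6), (1, 2, 1.4, 1.6), (0, 2, 2.3, 2.6)]  # made-up "NMR" intervals
coords, err = ga(toy_bounds, n_atoms=3, rng=rng)
print(round(err, 4))
```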

    Computing Covers Using Prefix Tables

    An \emph{indeterminate string} $x = x[1..n]$ on an alphabet $\Sigma$ is a sequence of nonempty subsets of $\Sigma$; $x$ is said to be \emph{regular} if every subset is of size one. A proper substring $u$ of regular $x$ is said to be a \emph{cover} of $x$ iff for every $i \in 1..n$, an occurrence of $u$ in $x$ includes $x[i]$. The \emph{cover array} $\gamma = \gamma[1..n]$ of $x$ is an integer array such that $\gamma[i]$ is the length of the longest cover of $x[1..i]$. Fifteen years ago a complex, though nevertheless linear-time, algorithm was proposed to compute the cover array of regular $x$ based on prior computation of the border array of $x$. In this paper we first describe a linear-time algorithm to compute the cover array of a regular string $x$ based on the prefix table of $x$. We then extend this result to indeterminate strings.
    Comment: 14 pages, 1 figure
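    The cover-array construction itself is not reproduced in the abstract. As background, the sketch below computes the prefix table of a regular string, where entry $i$ holds the length of the longest substring starting at position $i$ that is also a prefix of $x$ (0-based indexing here, whereas the paper indexes from 1).

```python
def prefix_table(x):
    """Prefix table (Z-array style) of a regular string x."""
    n = len(x)
    pi = [0] * n
    pi[0] = n                          # the whole string matches itself
    l, r = 0, 0                        # current rightmost match window [l, r)
    for i in range(1, n):
        if i < r:
            pi[i] = min(r - i, pi[i - l])
        while i + pi[i] < n and x[pi[i]] == x[i + pi[i]]:
            pi[i] += 1
        if i + pi[i] > r:
            l, r = i, i + pi[i]
    return pi

print(prefix_table("abaababa"))        # [8, 0, 1, 3, 0, 3, 0, 1]
```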