
    Ship Wake Detection in SAR Images via Sparse Regularization

    In order to analyse synthetic aperture radar (SAR) images of the sea surface, ship wake detection is essential for extracting information on the wake-generating vessels. One possibility is to assume a linear model for wakes, in which case detection approaches are based on transforms such as Radon and Hough. These express the bright (dark) lines as peak (trough) points in the transform domain. In this paper, ship wake detection is posed as an inverse problem, with the associated cost function including a sparsity-enforcing penalty, namely the generalized minimax concave (GMC) function. Despite being a non-convex regularizer, the GMC penalty keeps the overall cost function convex. The proposed solution is based on a Bayesian formulation, whereby the point estimates are recovered using maximum a posteriori (MAP) estimation. To quantify the performance of the proposed method, various types of SAR images are used, corresponding to TerraSAR-X, COSMO-SkyMed, Sentinel-1, and ALOS2. The performance of various priors in solving the proposed inverse problem is first studied by investigating the GMC along with the L1, Lp, nuclear, and total variation (TV) norms. We show that the GMC achieves the best results, and we subsequently study the merits of the corresponding method in comparison to two state-of-the-art approaches for ship wake detection. The results show that our proposed technique offers the best performance, achieving an 80% success rate. Comment: 18 pages
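The linear-wake model underlying the Radon and Hough approaches mentioned above can be illustrated with a minimal Hough transform in plain NumPy. This is only a sketch of the transform-domain idea (a bright line becomes a peak in (rho, theta) space), not the paper's GMC-regularized method; the image and line position are synthetic.

```python
import numpy as np

def hough_lines(img, n_theta=180):
    """Minimal Hough transform: vote bright pixels into (rho, theta) bins."""
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta))          # accumulator, rho offset by diag
    ys, xs = np.nonzero(img > img.mean())        # "bright" pixels
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1       # one vote per angle per pixel
    return acc, thetas, diag

# Synthetic "wake": a bright vertical line at x = 12 in a dark image.
img = np.zeros((32, 32))
img[:, 12] = 1.0

acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
# A vertical line x = 12 appears as a peak at theta = 0 deg, rho = 12.
print(theta_idx, rho_idx - diag)  # → 0 12
```

A dark wake line would instead appear as a trough, which is why the abstract speaks of peak (trough) points.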

    Reduced pattern training based on task decomposition using pattern distributor

    Task Decomposition with Pattern Distributor (PD) is a new task decomposition method for multilayered feedforward neural networks. A pattern distributor network that implements this new task decomposition method is proposed, and we develop a theoretical model to analyze its performance. A method named Reduced Pattern Training is also introduced, aiming to improve the performance of pattern distribution. Our analysis and the experimental results show that reduced pattern training significantly improves the performance of the pattern distributor network. The distributor module’s classification accuracy dominates the whole network’s performance. Two combination methods, namely Cross-talk based combination and Genetic Algorithm based combination, are presented to find a suitable grouping for the distributor module. Experimental results show that this new method can reduce training time and improve network generalization accuracy when compared to a conventional method such as constructive backpropagation or a task decomposition method such as Output Parallelism.
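The claim that the distributor module's accuracy dominates the whole network's performance follows from the two-stage structure: a pattern must first be routed to the right group, then classified within it. A small arithmetic sketch (the accuracy numbers are hypothetical, and patterns are assumed evenly spread across groups):

```python
# Two-stage classification: the distributor routes a pattern to a group, then
# that group's module classifies it; both stages must succeed, so errors compound.
def overall_accuracy(p_distributor, p_modules):
    # Assumes patterns are distributed evenly across groups (a simplification).
    return p_distributor * sum(p_modules) / len(p_modules)

acc = overall_accuracy(0.90, [0.99, 0.98])
print(round(acc, 4))  # → 0.8865, bounded above by the distributor's 0.90
```

However accurate the per-group modules are, the network cannot beat the distributor's own accuracy, which is why finding a good grouping for the distributor matters.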

    Performance Analysis of Quickreduct, Quick Relative Reduct Algorithm and a New Proposed Algorithm

    Feature selection is the process of selecting a subset of relevant features from a large dataset that satisfies method-dependent criteria, thus minimizing cardinality while ensuring that accuracy and precision are not affected, and hence approximating the original class distribution of the data from the selected features. Feature selection and feature extraction are the two problems we face when we want to select the best and most important attributes from a given dataset. Feature selection is a step in data mining performed prior to other steps, and it is very useful and effective in removing unimportant attributes so that the storage efficiency and accuracy of the dataset can be increased. From the huge pool of data available, we want to extract useful and relevant information; the problem is not the unavailability of data but its quality. Rough set theory is very useful in extracting relevant attributes and helps to increase the value of the information system we have. It works on the principle of classifying similar objects into classes with respect to some features, and those features may collectively, in short, be termed reducts.
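The notion of a reduct, a minimal attribute subset that preserves the class partition of the full attribute set, can be sketched with a brute-force search over a toy decision table. This is illustrative only: the decision table is made up, and practical algorithms such as Quickreduct grow the subset greedily using a dependency measure rather than enumerating subsets.

```python
from itertools import combinations

def partitions(rows, attrs):
    """Group row indices by their values on attrs (indiscernibility classes)."""
    groups = {}
    for i, r in enumerate(rows):
        groups.setdefault(tuple(r[a] for a in attrs), []).append(i)
    return groups

def is_consistent(rows, labels, attrs):
    """attrs classify as well as all attributes: each class has a single label."""
    return all(len({labels[i] for i in g}) == 1
               for g in partitions(rows, attrs).values())

def reduct(rows, labels):
    """Smallest attribute subset preserving the classification (brute force)."""
    n = len(rows[0])
    for k in range(1, n + 1):
        for attrs in combinations(range(n), k):
            if is_consistent(rows, labels, attrs):
                return attrs
    return tuple(range(n))

# Toy decision table: attribute 0 alone already determines the label.
rows = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
labels = [0, 0, 1, 1]
print(reduct(rows, labels))  # → (0,)
```

Dropping attributes 1 and 2 here loses nothing, which is exactly the storage and accuracy benefit the abstract describes.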

    Simple Problems: The Simplicial Gluing Structure of Pareto Sets and Pareto Fronts

    Quite a few studies on real-world applications of multi-objective optimization have reported that their Pareto sets and Pareto fronts form a topological simplex. Such a class of problems was recently named simple problems, and their Pareto sets and Pareto fronts were observed to have a gluing structure similar to the faces of a simplex. This paper gives a theoretical justification for that observation by proving the gluing structure of the Pareto sets/fronts of subproblems of a simple problem. The simplicity of standard benchmark problems is also studied. Comment: 10 pages, accepted at GECCO'17 as a poster paper (2 pages)

    PresenceSense: Zero-training Algorithm for Individual Presence Detection based on Power Monitoring

    Non-intrusive presence detection of individuals in commercial buildings is much easier to implement than intrusive methods such as passive infrared, acoustic sensors, and cameras. Individual power consumption, while providing useful feedback and motivation for energy saving, can also serve as a valuable source for presence detection. We conduct pilot experiments in an office setting to collect individual presence data via ultrasonic sensors, acceleration sensors, and WiFi access points, in addition to individual power monitoring data. PresenceSense (PS), a semi-supervised learning algorithm based on power measurements that trains itself on unlabeled data only, is proposed, analyzed, and evaluated in this study. Without any labeling effort, which is usually tedious and time consuming, PresenceSense outperforms popular models whose parameters are optimized over a large training set. The results are interpreted, and potential applications of PresenceSense to other data sources are discussed. This study has significance for space security, occupancy behavior modeling, and energy saving of plug loads. Comment: BuildSys 201
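The idea of training on unlabeled power data alone can be sketched with a minimal self-training loop: start from an unsupervised threshold and iteratively refit it on the two clusters it induces. This is a toy sketch under that interpretation, not the paper's PS algorithm, and the power readings are synthetic.

```python
import statistics

# Synthetic per-minute power draw of one desk (watts), unlabeled: low values
# suggest the occupant is away, high values suggest presence.
power = [5, 6, 5, 80, 85, 7, 90, 88, 6, 5]

thr = statistics.mean(power)              # initial unsupervised split
for _ in range(3):                        # self-training refinement
    present = [p for p in power if p > thr]
    absent = [p for p in power if p <= thr]
    thr = (statistics.mean(present) + statistics.mean(absent)) / 2

print([int(p > thr) for p in power])  # → [0, 0, 0, 1, 1, 0, 1, 1, 0, 0]
```

No label was ever supplied; the algorithm's own confident splits play the role of training data, which is the appeal of zero-training presence detection.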

    Comparing the Performances of Neural Network and Rough Set Theory to Reflect the Improvement of Prognostic in Medical Data

    In this research, I investigate and compare two Artificial Intelligence (AI) techniques, neural networks and rough sets, to determine which is the better technique for analyzing data. AI is a still-developing field that has produced intelligent systems supporting human daily life, for example in decision making. In Malaysia, it was recently introduced by a group of researchers from University Science Malaysia, who agree with researchers worldwide that AI is very helpful for taking over many tasks otherwise done by humans, especially in the medical area. For this research, I have chosen three medical datasets: Wisconsin Prognostic Breast Cancer, Parkinson’s disease, and Hepatitis Prognostic. Medical data was selected because of its popularity among AI researchers and because the prediction (target) attributes are clearly understandable. The results and findings, as well as how the experiments were carried out and the steps involved, are discussed in this paper, which closes with a conclusion and future work.