1,624 research outputs found

    Perfectly normal type-2 fuzzy interpolation B-spline curve

    In this paper, we propose a new form of type-2 fuzzy data points (T2FDPs): perfectly normal type-2 fuzzy data points (PNT2FDPs). These data points are defined using existing type-2 fuzzy set theory (T2FST) and the type-2 fuzzy number (T2FN) concept to address the problem of representing data with complex uncertainty. Along with this restructuring, we apply the fuzzification (alpha-cut operation), type-reduction and defuzzification processes to the PNT2FDPs. In addition, we use an interpolation B-spline curve function to demonstrate the PNT2FDPs. Comment: arXiv admin note: substantial text overlap with arXiv:1304.786
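
    The alpha-cut and defuzzification operations named above are standard fuzzy-set constructions. The sketch below illustrates them on ordinary (type-1) triangular fuzzy numbers and then fits a B-spline through the defuzzified points; it is an illustrative simplification only, not the paper's type-2 construction, and the data and helper names are invented for the example.

        # Hypothetical illustration (not the authors' code): an alpha-cut of a
        # triangular fuzzy number and a centroid defuzzification, two of the
        # operations the abstract names, followed by B-spline interpolation.
        import numpy as np
        from scipy.interpolate import splprep, splev

        def alpha_cut(tri, alpha):
            """Interval [l, r] of a triangular fuzzy number (a, b, c) at level alpha."""
            a, b, c = tri
            return a + alpha * (b - a), c - alpha * (c - b)

        def defuzzify(tri):
            """Centroid of a triangular fuzzy number."""
            return sum(tri) / 3.0

        # Fuzzy data points: (x, triangular fuzzy y) -- invented example values
        fuzzy_pts = [(0.0, (0.8, 1.0, 1.3)), (1.0, (1.9, 2.0, 2.2)),
                     (2.0, (1.4, 1.5, 1.7)), (3.0, (2.8, 3.0, 3.1))]

        xs = np.array([x for x, _ in fuzzy_pts])
        ys = np.array([defuzzify(f) for _, f in fuzzy_pts])

        # Cubic B-spline through the defuzzified points
        tck, _ = splprep([xs, ys], s=0, k=3)
        curve = splev(np.linspace(0, 1, 50), tck)
        print(alpha_cut((0.8, 1.0, 1.3), 0.5), curve[1][:3])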

    Fuzzy Supernova Templates I: Classification

    Modern supernova (SN) surveys are now uncovering stellar explosions at rates that far surpass what the world's spectroscopic resources can handle. In order to make full use of these SN datasets, it is necessary to use analysis methods that depend only on the survey photometry. This paper presents two methods for utilizing a set of SN light curve templates to classify SN objects. In the first case we present an updated version of the Bayesian Adaptive Template Matching program (BATM). To address some shortcomings of that strictly Bayesian approach, we introduce a method for Supernova Ontology with Fuzzy Templates (SOFT), which utilizes Fuzzy Set Theory for the definition and combination of SN light curve models. For well-sampled light curves with a modest signal-to-noise ratio (S/N > 10), the SOFT method can correctly separate thermonuclear (Type Ia) SNe from core collapse SNe with 98% accuracy. In addition, the SOFT method has the potential to classify supernovae into sub-types, providing photometric identification of very rare or peculiar explosions. The accuracy and precision of the SOFT method are verified using Monte Carlo simulations as well as real SN light curves from the Sloan Digital Sky Survey and the SuperNova Legacy Survey. In a subsequent paper the SOFT method is extended to address the problem of parameter estimation, providing estimates of redshift, distance, and host galaxy extinction without any spectroscopy. Comment: 26 pages, 12 figures. Accepted to Ap
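
    As a rough illustration of the fuzzy-template idea (not the actual SOFT algorithm, whose membership functions and combination rules are defined in the paper), the sketch below turns a per-template goodness of fit into a [0, 1] membership value and combines memberships within a class by fuzzy union; the template library and data are invented.

        # Illustrative sketch only (not the SOFT code): grade an observed light
        # curve's membership in each template with a soft score derived from
        # chi-square, then combine within a class by fuzzy union (max).
        import numpy as np

        def membership(obs_flux, obs_err, template_flux):
            """Map goodness of fit to a [0, 1] fuzzy membership value."""
            chi2 = np.sum(((obs_flux - template_flux) / obs_err) ** 2) / obs_flux.size
            return np.exp(-0.5 * chi2)   # 1 = perfect match, -> 0 for poor fits

        # Hypothetical template library: class -> list of template light curves
        templates = {
            "Ia": [np.array([1.0, 3.0, 2.0, 0.8]), np.array([1.1, 2.9, 1.9, 0.7])],
            "CC": [np.array([0.5, 1.5, 1.4, 1.0])],
        }

        obs = np.array([1.0, 2.95, 1.95, 0.75])
        err = np.full_like(obs, 0.1)

        scores = {cls: max(membership(obs, err, t) for t in tpls)
                  for cls, tpls in templates.items()}
        print(scores, "->", max(scores, key=scores.get))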

    A Nonlinear Autoregressive Exogenous (NARX) Neural Network Model for the Prediction of the Daily Direct Solar Radiation

    Solar photovoltaic (PV) energy has an important place among renewable energy sources. Several researchers have therefore been interested in its modelling and prediction, in order to improve the management of electrical systems that include PV arrays. Among the existing techniques, artificial neural networks have proved their performance in the prediction of solar radiation. However, the existing neural network models do not satisfy the requirements of certain specific situations, such as the one analyzed in this paper. The aim of this research work is to supply a racing sailboat with electricity using exclusively renewable sources. The developed solution predicts the direct solar radiation on a horizontal surface. For that purpose, a Nonlinear Autoregressive Exogenous (NARX) neural network is used. All the specific conditions of the sailboat's operation are taken into account. The results show that the best prediction performance is obtained when the training phase of the neural network is performed periodically
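
    The NARX structure mentioned above predicts the next value of a target series from lagged values of the series itself and of exogenous inputs. The sketch below shows that structure with a small multilayer perceptron standing in for the network; the lag depth, layer size and synthetic data are illustrative assumptions, not the paper's configuration.

        # Hedged sketch of the NARX idea: predict y(t) from lagged values of y
        # and of an exogenous input x, using a small MLP as the nonlinear map.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def make_narx_dataset(y, x, lags=3):
            """Rows: [y(t-1..t-lags), x(t-1..t-lags)] -> target y(t)."""
            X, T = [], []
            for t in range(lags, len(y)):
                X.append(np.concatenate([y[t - lags:t], x[t - lags:t]]))
                T.append(y[t])
            return np.array(X), np.array(T)

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, 500)             # exogenous input (invented series)
        y = np.convolve(x, [0.5, 0.3, 0.2], mode="same") + 0.05 * rng.standard_normal(500)

        X, T = make_narx_dataset(y, x, lags=3)
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:400], T[:400])
        print("test score:", model.score(X[400:], T[400:]))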

    An Adaptive Hilbert-Huang Transform System

    This thesis presents a system which can be used to generate Intrinsic Mode Functions and the associated Hilbert spectrum resulting from techniques based on the Empirical Mode Decomposition as pioneered by N. E. Huang at the end of the 20th century. Later dubbed the Hilbert-Huang Transform by NASA, the process of decomposing data manually through repetitive detrending and subtraction, followed by applying the Hilbert transform to the results, was presented as a viable alternative to the wavelet transform, which was gaining traction at the time but had shown significant limitations. In the last 20 years, the Hilbert-Huang Transform has received a lot of attention, but that attention has been minuscule relative to the amount received by wavelet transformation. This is due in part to the limitations of the Empirical Mode Decomposition and in part to the difficulty of developing a theoretical basis for the manner in which the Empirical Mode Decomposition works. While the question of theoretical foundations is an important and tricky one, this thesis presents a system that breaks many of the previously known limits on bandwidth resolution, mode mixing, and the viable decomposable frequency range relative to sampling frequency of the Empirical Mode Decomposition. Many recent innovations do not simply improve on N. E. Huang's algorithm, but rather provide new approaches with different decompositional properties. By choosing the best technique at each step, a superior total decomposition can be obtained. Using the Hilbert-Huang Transform itself as a guide during the decomposition, as suggested by R. Deering in 2005, the final HHT can show distinct improvements. The Adaptive Hilbert-Huang Transform (AHHT) System utilizes many of the properties of various Empirical Mode Decomposition techniques from the literature, includes some novel innovations on those techniques, and then manages the total decomposition in an adaptive manner. The AHHT System is demonstrated successfully on many different artificial signals, many with varying levels of noise down to -5 dB SNR, as well as on an electrocardiogram and in comparison with a surface electromyographic study which found biopotential frequency-shifting associated with the fatigue of fast-twitch muscle fibers
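
    The core of the Empirical Mode Decomposition referred to above is the sifting operation: build upper and lower envelopes through the local extrema and subtract their mean until an Intrinsic Mode Function remains. The fragment below is a deliberately simplified sketch of that pass with a fixed iteration count, and none of the stopping criteria, boundary handling or adaptive choices developed in the thesis.

        # Simplified, illustrative sifting step of Empirical Mode Decomposition,
        # the procedure the Hilbert-Huang Transform is built on.
        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        def sift_once(x, t):
            """One sifting pass: subtract the mean of the upper and lower envelopes."""
            maxima = argrelextrema(x, np.greater)[0]
            minima = argrelextrema(x, np.less)[0]
            if len(maxima) < 2 or len(minima) < 2:
                return x                      # not enough extrema to build envelopes
            upper = CubicSpline(t[maxima], x[maxima])(t)
            lower = CubicSpline(t[minima], x[minima])(t)
            return x - (upper + lower) / 2.0

        t = np.linspace(0, 1, 1000)
        signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

        candidate_imf = signal.copy()
        for _ in range(10):                   # repeat sifting a fixed number of times
            candidate_imf = sift_once(candidate_imf, t)
        residual = signal - candidate_imf     # fed back into EMD for the next IMF
        print(candidate_imf[:3], residual[:3])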

    Adaptive Segmentation Of Cardiovascular Vessels

    Coronary collateral vessels may contribute to survival after myocardial infarction by providing blood to the cardiac muscle after coronary arterial occlusion. However, these vessels are not present in all people; they can develop after infarction and, in some cases, prior to infarction for reasons not fully understood. The goal of this thesis is to investigate the segmentation of coronary collateral vessels from micro-computed tomography (microCT) images of a mouse's heart. A problem limiting the study of collateral vessels is their exceedingly small size and correspondingly low blood flow, which place the regions of interest (ROI) below the resolution of most imaging modalities. Segmentation of vessels is a challenge for all imaging modalities and organs. There is no standard algorithm or method that works for all images; therefore, a combination of multiple approaches was used to address this problem

    Determination of the high water mark and its location along a coastline

    The High Water Mark (HWM) is an important cadastral boundary that separates land and water. It is also used as a baseline to facilitate coastal hazard management, from which land and infrastructure development is offset to ensure the protection of property from storm surge and sea level rise. However, the location of the HWM is difficult to define accurately due to the ambulatory nature of water and variations in coastal morphology. Contemporary research has failed to develop an accurate method for HWM determination because continual changes in tidal levels, together with unimpeded wave runup and the erosion and accretion of shorelines, make it difficult to determine a unique position of the HWM. While traditional surveying techniques are accurate, they selectively record data at a given point in time, and surveying is expensive, not readily repeatable, and may not take into account all relevant variables such as erosion and accretion. In this research, a consistent and robust methodology is developed for the determination of the HWM over space and time. The methodology includes two main parts: determination of the HWM by integrating both water and land information, and assessment of HWM indicators in one evaluation system. It takes into account dynamic coastal processes and the effect of swash or tide probability on the HWM. The methodology is validated using two coastal case study sites in Western Australia. These sites were selected to test the robustness of the methodology in two distinctly different coastal environments
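
    To make the idea of a combined water/land evaluation system concrete, the following sketch scores candidate shoreline elevations by mixing a water-side exceedance probability with a land-side indicator and picks the best-scoring elevation. It is a purely hypothetical illustration; the indicators, weights and data are invented and do not come from the thesis.

        # Purely illustrative (not the thesis methodology): combine a water-side
        # and a land-side indicator into one score per candidate HWM elevation.
        import numpy as np

        elevations = np.linspace(0.0, 3.0, 61)      # candidate HWM elevations (m)

        # Water side: empirical exceedance probability from hypothetical levels
        water_levels = np.random.default_rng(1).normal(1.2, 0.4, 5000)   # tide + swash (m)
        p_exceed = np.array([(water_levels > z).mean() for z in elevations])

        # Land side: hypothetical indicator, e.g. proximity to a vegetation line at 1.8 m
        land_score = np.exp(-np.abs(elevations - 1.8))

        # Prefer elevations rarely exceeded by water but close to the land indicator
        score = 0.5 * (1.0 - p_exceed) + 0.5 * land_score
        print("candidate HWM elevation:", elevations[np.argmax(score)], "m")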

    Multiple 2D self organising map network for surface reconstruction of 3D unstructured data

    Surface reconstruction is a challenging task in reverse engineering because it must produce a surface that closely resembles the original object from the data obtained. The data obtained are mostly unstructured, so there is not enough information and an incorrect surface may be obtained. Therefore, the data should be reorganised by finding the correct topology with minimum surface error. Previous studies showed that the Self Organising Map (SOM) model, the conventional surface approximation approach with Non Uniform Rational B-Splines (NURBS) surfaces, and optimisation methods such as the Genetic Algorithm (GA), Differential Evolution (DE) and Particle Swarm Optimisation (PSO) are widely implemented in solving surface reconstruction. However, the model, approach and optimisation methods still suffer from unstructured data and accuracy problems. Therefore, the aims of this research are to propose a Cube SOM (CSOM) model with multiple 2D SOM networks for organising the unstructured surface data, and to propose an optimised surface approximation approach for generating the NURBS surfaces. The GA, DE and PSO methods are implemented to minimise the surface error by adjusting the NURBS control points. In order to test and validate the proposed model and approach, data from four primitive objects and one medical image are used. To evaluate the performance of the proposed model and approach, three performance measures are used: Average Quantisation Error (AQE) and Number Of Vertices (NOV) for the CSOM model, and surface error for the proposed optimised surface approximation approach. The accuracy of AQE for the CSOM model has been improved to 64% and 66% when compared to 2D and 3D SOM respectively. The NOV for the CSOM model has been reduced from 8000 to 2168 compared to 3D SOM. The accuracy of surface error for the optimised surface approximation approach has been improved to 7% compared to the conventional approach. The proposed CSOM model and optimised surface approximation approach have successfully reconstructed the surfaces of all five datasets with better performance based on the three performance measures used in the evaluation
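
    As a rough illustration of the organisation step (not the CSOM model itself, and without the NURBS fitting or GA/DE/PSO refinement), the sketch below trains a single small 2D SOM on unstructured 3D points sampled from a synthetic surface and reports an average quantisation error; grid size, learning schedule and data are arbitrary choices for the example.

        # Minimal, illustrative 2D SOM fitting a grid of nodes to unstructured 3D points.
        import numpy as np

        rng = np.random.default_rng(0)
        # Unstructured sample points on a hypothetical surface z = sin(x) * cos(y)
        pts = rng.uniform(-1, 1, (2000, 2))
        data = np.column_stack([pts, np.sin(pts[:, 0]) * np.cos(pts[:, 1])])

        rows, cols = 10, 10
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
        weights = rng.uniform(-1, 1, (rows, cols, 3))    # one 3D node per grid cell

        for epoch in range(20):
            lr, sigma = 0.5 * 0.9 ** epoch, 3.0 * 0.9 ** epoch
            for x in data:
                d = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(d), d.shape)      # best matching unit
                h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
                weights += lr * h[..., None] * (x - weights)       # pull neighbourhood toward x

        aqe = np.mean([np.min(np.linalg.norm(weights - x, axis=-1)) for x in data])
        print("average quantisation error:", aqe)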

    A study of distributed clustering of vector time series on the grid by task farming

    Traditional data mining methods were limited by the availability of computing resources such as network bandwidth, storage space and processing power. These algorithms were developed to work around this problem by looking at a small cross-section of the whole data available. However, since a major chunk of the data is kept out, the predictions were generally inaccurate and missed significant features that were part of the data. Today, with resources growing at almost the same pace as data, it is possible to rethink mining algorithms to work on distributed resources and essentially distributed data. Distributed data mining thus holds great promise. Using grid technologies, data mining can be extended to areas which were not previously looked at because of the volume of data being generated, like climate modeling, web usage, etc. An important characteristic of data today is that it is highly decentralized and mostly redundant. Data mining algorithms which can make efficient use of distributed data have to be devised. Though it is possible to bring all the data together and run traditional algorithms, this has a high overhead in terms of bandwidth usage for transmission and preprocessing steps which have to handle every format of the received data. By processing the data locally, the preprocessing stage can be made less bulky and the traditional data mining techniques would be able to work on the data efficiently. The focus of this project is to use an existing data mining technique, fuzzy c-means clustering, to work on distributed data in a simulated grid environment and to review the performance of this approach against the traditional approach
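
    For reference, fuzzy c-means alternates between updating a soft membership matrix and recomputing weighted cluster centres. The sketch below is a minimal single-node version of that loop on synthetic data; the task-farming layer that distributes the work across grid nodes in the project is not shown, and the parameters are illustrative.

        # Minimal fuzzy c-means (FCM) sketch; fuzziness m, cluster count and
        # data are illustrative, and no grid distribution is included.
        import numpy as np

        def fuzzy_c_means(X, c=3, m=2.0, iters=100, eps=1e-5, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(c), size=len(X))          # membership matrix (n x c)
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted cluster centres
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
                U_new = 1.0 / (d ** (2 / (m - 1)) *
                               np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
                if np.max(np.abs(U_new - U)) < eps:
                    return centers, U_new
                U = U_new
            return centers, U

        # Three synthetic 2D clusters
        X = np.vstack([np.random.default_rng(i).normal(loc, 0.3, (100, 2))
                       for i, loc in enumerate([(0, 0), (3, 3), (0, 3)])])
        centers, U = fuzzy_c_means(X)
        print(np.round(centers, 2))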