
    Optimization of rain gauge network in Johor using hybrid particle swarm optimization and simulated annealing

    An optimal design of a rain gauge network is important because it produces fast, accurate and essential rainfall data for designing effective and economical hydraulic structures for flood control. The use of inaccurate rainfall data may result in significant design errors in other water resources projects. The objective of this study is to determine the optimal number and location of rain gauges in Johor by using a geostatistical method integrated with a hybrid method consisting of simulated annealing and particle swarm optimization. This study also explored and compared the results of the existing methods, namely the coefficient of variation, the maximal covering location problem and the geostatistical method, with the proposed model. Methods such as the maximal covering location problem and the coefficient of variation can only provide the number of rain gauge stations, not their optimal locations. The geostatistical method, however, can provide both the optimal number of rain gauge stations and their locations through the minimum estimation variance. The integration of geostatistics with the hybrid method comprising simulated annealing and particle swarm optimization is successful in providing the optimum number and location of the stations. In order to identify the effect of rain gauge locations on rainfall data, this study considered repositioning the existing rain gauges to new locations to improve their effectiveness and reduce the error. The analysis used the density of the rain gauges, daily rainfall data from 1977 to 2008, the latitude and longitude of the rain gauge locations, elevation, humidity, wind speed, temperature and solar radiation to determine the new optimal design for the rain gauge network. The minimum value of the estimated variance produced by the proposed method indicates that the method is successful in determining the optimal rain gauge network from the existing 84 rain gauges in Johor. Relocating all 84 rain gauge stations gives better results in terms of the estimated variance value, but it is not necessary to relocate all of the stations because of the high costs involved; the location of each station therefore also influences the result. In this study, hybrid simulated annealing and particle swarm optimization successfully determined the optimal rain gauge network in Johor. In conclusion, this study has shown that a well-designed rain gauge network provides essential input for the effective planning, design and management of water resources projects such as flood frequency analysis and forecasting.
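
    For illustration, a minimal sketch of the hybrid particle swarm / simulated annealing idea is given below. The objective f is a hypothetical placeholder for the kriging estimation variance of a candidate gauge configuration, and all constants are illustrative; this is not the study's implementation.

```python
# A minimal sketch of a hybrid particle swarm / simulated annealing search.
# The objective f is a hypothetical stand-in for the kriging estimation
# variance of a candidate gauge configuration; all constants are illustrative.
import numpy as np

def hybrid_pso_sa(f, dim, n_particles=30, iters=200, t0=1.0, cooling=0.95, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest = x.copy()                                     # personal best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()           # global best position
    temp = t0
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        delta = vals - pbest_val
        # SA-style Metropolis rule on the personal bests: improvements are
        # always kept, worse positions are kept with probability exp(-delta/T).
        accept = rng.random(n_particles) < np.exp(-np.maximum(delta, 0.0) / temp)
        pbest[accept] = x[accept]
        pbest_val[accept] = vals[accept]
        if pbest_val.min() < f(gbest):
            gbest = pbest[np.argmin(pbest_val)].copy()
        temp *= cooling                                  # geometric cooling schedule
    return gbest, f(gbest)

# Toy usage: minimise a simple quadratic as a stand-in objective.
best, value = hybrid_pso_sa(lambda p: float(np.sum(p ** 2)), dim=5)
```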

    Comparison of semivariogram models in rain gauge network design

    The well-known geostatistics method (variance-reduction method) is commonly used to determine the optimal rain gauge network. The main problem in the geostatistics method is to determine the best semivariogram model to be used in estimating the variance. An optimal choice of the semivariogram model is an important point for a good data evaluation process. Three different semivariogram models, namely the Spherical, Gaussian and Exponential models, are used and their performances are compared in this study. A cross validation technique is applied to compute the errors of the semivariograms. Rainfall data for the period 1975–2008 from the existing 84 rain gauge stations covering the state of Johor are used in this study. The results show that the exponential model is the best semivariogram model, and it is chosen to determine the optimal number and location of rain gauge stations.
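
    For reference, the three isotropic semivariogram models compared here are commonly written as follows (standard textbook forms; the nugget c0, partial sill c and range a are assumed notation, as the abstract does not give the formulas).

```latex
% Standard isotropic semivariogram models as functions of lag distance h.
% The nugget c_0, partial sill c and range a are assumed notation.
\[
\gamma_{\mathrm{sph}}(h) =
\begin{cases}
c_0 + c\left(\frac{3h}{2a} - \frac{h^3}{2a^3}\right), & 0 < h \le a,\\
c_0 + c, & h > a,
\end{cases}
\qquad
\gamma_{\mathrm{exp}}(h) = c_0 + c\left(1 - e^{-h/a}\right),
\qquad
\gamma_{\mathrm{gau}}(h) = c_0 + c\left(1 - e^{-h^2/a^2}\right).
\]
```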

    Rainfall network optimization in Johor

    This paper presents a method for establishing an optimal network design of rain gauge stations for the estimation of areal rainfall in Johor. The main problem in this study is minimizing an objective function to determine the optimal number and location of the rain gauge stations. The well-known geostatistics method (variance-reduction method) is used in combination with simulated annealing as the optimization algorithm. Rainfall data during the monsoon season (November to February) for 1975–2008 from the existing 84 rain gauge stations covering all of Johor were used in this study. The results show that the combination of the geostatistics method with simulated annealing successfully determined the optimal number and location of rain gauge stations.
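
    A minimal sketch of simulated annealing over station subsets is shown below, assuming a generic cost(subset) in place of the variance-reduction objective; it illustrates only the Metropolis acceptance and cooling steps, not the authors' implementation.

```python
# A minimal sketch of simulated annealing over subsets of k stations.
# cost(subset) is a hypothetical stand-in for the kriging variance objective.
import math
import random

def anneal_subset(cost, n_stations, k, iters=5000, t0=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    current = set(rng.sample(range(n_stations), k))    # initial subset of k gauges
    cur_cost = cost(current)
    best, best_cost = set(current), cur_cost
    temp = t0
    for _ in range(iters):
        # Propose a neighbour: swap one selected station for an unselected one.
        out_st = rng.choice(sorted(current))
        in_st = rng.choice([s for s in range(n_stations) if s not in current])
        candidate = (current - {out_st}) | {in_st}
        cand_cost = cost(candidate)
        # Metropolis acceptance rule: always accept improvements, sometimes worse moves.
        if cand_cost < cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / temp):
            current, cur_cost = candidate, cand_cost
            if cur_cost < best_cost:
                best, best_cost = set(current), cur_cost
        temp *= cooling                                 # geometric cooling schedule
    return best, best_cost

# Toy usage with a dummy cost that prefers stations with small indices.
subset, value = anneal_subset(lambda s: sum(s), n_stations=84, k=20)
```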

    Clustering of rainfall data using k-means algorithm

    Clustering algorithms in data mining are methods for extracting useful information from given data. They can precisely analyze the volume of data produced by modern applications. The main goal of clustering is to categorize data into clusters according to similarities, traits and behavior. This study aims to describe regional cluster patterns of rainfall based on maximum daily rainfall in Johor, Malaysia. The k-means algorithm is used to obtain optimal rainfall clusters. This clustering is expected to serve as an analysis tool for decision making to assist hydrologists in water research problems.
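
    A minimal sketch of the k-means step is shown below, assuming station-wise rainfall features in a NumPy array; the synthetic data and the choice of k = 3 are illustrative, not taken from the study.

```python
# A minimal sketch of k-means clustering of station-wise rainfall features.
# The synthetic array and k = 3 are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.gamma(shape=2.0, scale=30.0, size=(84, 4))   # synthetic stand-in data

scaled = StandardScaler().fit_transform(features)           # put features on one scale
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
labels = kmeans.labels_                                      # cluster index per station
print(np.bincount(labels))                                   # number of stations per cluster
```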

    Parameter estimation of Stochastic Logistic Model: Levenberg-Marquardt Method

    In this paper, we estimate the drift and diffusion parameters of stochastic logistic models for the growth of Clostridium acetobutylicum P262 using the Levenberg-Marquardt optimization method for nonlinear least squares. The parameters are estimated for five different substrates. The solution of the deterministic models has been approximated using the fourth-order Runge-Kutta method, and the stochastic differential equations have been solved with the Milstein numerical scheme. Small values of the mean square errors (MSE) of the stochastic models indicate good fits. Therefore, the use of stochastic models is shown to be appropriate for modelling the cell growth of Clostridium acetobutylicum P262.
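
    As an illustration of the estimation step, the sketch below fits a deterministic logistic growth curve by Levenberg-Marquardt nonlinear least squares on synthetic data; the parameter values are hypothetical and the stochastic (Milstein) part is not reproduced.

```python
# A minimal sketch of Levenberg-Marquardt nonlinear least squares fitting of a
# logistic growth curve x(t) = K / (1 + ((K - x0)/x0) * exp(-r t)).
# The synthetic data and parameter values are illustrative, not the study's.
import numpy as np
from scipy.optimize import least_squares

def logistic(t, x0, r, K):
    return K / (1.0 + ((K - x0) / x0) * np.exp(-r * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 40)
data = logistic(t, 0.1, 0.6, 5.0) + rng.normal(0.0, 0.05, t.size)   # noisy observations

residuals = lambda p: logistic(t, *p) - data
fit = least_squares(residuals, x0=[0.15, 0.5, 4.5], method="lm")     # Levenberg-Marquardt
mse = np.mean(fit.fun ** 2)                                           # mean square error of the fit
print(fit.x, mse)
```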

    Graph coloring program of exam scheduling modeling based on bitwise coloring algorithm using python

    Graph coloring is the process of assigning colors to the vertices of a graph in such a way that no two adjacent vertices have the same color. The chromatic number of a graph G is the smallest number of colors needed to color it. Graph coloring has a wide range of applications and is commonly used to solve scheduling problems. In this article, the researchers design an algorithm and implement it as a Python program to solve graph coloring and to visualize variations of exam scheduling at Binus University as graphs, based on a bitwise graph coloring algorithm. The researchers develop a graph coloring algorithm by representing sets of graph vertices as binary numbers; bitwise operations make this algorithm run very fast. The algorithm constructed by the researchers is a modification of the algorithm of Komosko et al. (2015) and is the key result of this research. The researchers offer an alternative method for producing the final semester exam schedule and tested the program on the courses, and the students enrolled in them, of the TI-Stat-Math study program at Binus University. The results show that, from the program created and the simulations performed, 8 schedule slots are generated in about 0.675 seconds.
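
    The general idea of coloring with bit masks can be sketched as follows; this is a plain greedy coloring that uses bitwise operations, not the paper's modified Komosko-style algorithm.

```python
# A minimal sketch of greedy graph coloring with bit masks: each vertex builds
# a mask of colors already used by its neighbors and takes the lowest free bit.
# Generic illustration only, not the paper's algorithm.
def greedy_bitwise_coloring(adjacency):
    """adjacency: dict mapping vertex -> iterable of neighboring vertices."""
    color = {}
    for v in adjacency:                       # vertices in insertion order
        used = 0
        for u in adjacency[v]:
            if u in color:
                used |= 1 << color[u]         # mark the neighbor's color as taken
        c = 0
        while used >> c & 1:                  # lowest zero bit = smallest free color
            c += 1
        color[v] = c
    return color

# Example: two exams sharing a student cannot be placed in the same slot.
conflicts = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
print(greedy_bitwise_coloring(conflicts))     # {'A': 0, 'B': 1, 'C': 1}
```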

    Implementation of a logistic map to calculate the bits required for digital image steganography using the least significant bit (LSB) method

    The LSB method in steganography usually uses only the last bit, or the same few last bits, for every pixel. This is very easy to break by using a bitwise left-shift operation so that the last bit becomes the leading bit (MSB). Some techniques combine steganography and cryptography as two separate processes. In this study, a new technique is proposed to perform steganography and cryptography together. The random sequence obtained from a logistic map is used to determine the number of bits used in the LSB method. Testing was then carried out on several grayscale images. The results show that the hidden images cannot be easily recovered, and the level of sensitivity is very small, reaching 10⁻¹⁵.
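
    A minimal sketch of the idea is given below, assuming a logistic map x_{n+1} = r x_n (1 - x_n) whose iterates choose how many least significant bits of each pixel carry message bits; the key (r, x0) and the 1-3 bit range are illustrative assumptions, not the paper's exact scheme.

```python
# A minimal sketch: a logistic map drives how many LSBs of each pixel carry
# message bits. The key (r, x0) and the 1-3 bit range are illustrative.
import numpy as np

def logistic_sequence(n, r=3.99, x0=0.4567):
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(1 + int(x * 3))            # 1, 2 or 3 LSBs for this pixel
    return seq

def embed(pixels, bits, r=3.99, x0=0.4567):
    """Embed a list of 0/1 message bits into a flat uint8 pixel array."""
    out, idx = pixels.copy(), 0
    for i, k in enumerate(logistic_sequence(len(pixels), r, x0)):
        if idx >= len(bits):
            break
        chunk = bits[idx:idx + k]
        value = int("".join(map(str, chunk)), 2)
        out[i] = (int(out[i]) & ~((1 << len(chunk)) - 1)) | value   # rewrite the k LSBs
        idx += len(chunk)
    return out

cover = np.arange(64, dtype=np.uint8)          # tiny synthetic "image"
stego = embed(cover, [1, 0, 1, 1, 0, 0, 1, 0])
```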

    A comparison study between Doane’s and Freedman-Diaconis’ binning rule in characterizing potential water resources availability

    One of the primary constraints on the development and management of water resources is the spatial and temporal uncertainty of rainfall, because the stability and reliability of water supply are dynamically associated with this uncertainty. This spatial and temporal uncertainty can be assessed using the intensity entropy (IE) and the apportionment entropy (AE). The main objective of this study is to investigate the implications of using Doane's and Freedman-Diaconis' binning rules in characterizing potential water resources availability (PWRA), where the PWRA is assessed via a scatter diagram of the standardized intensity entropy (IE') against the standardized apportionment entropy (AE'). To pursue this objective, daily rainfall data recorded from January 2008 to December 2016 at four rainfall monitoring stations located in the coastal region of Kuantan District, Pahang, are analyzed. The results illustrate that Doane's binning rule is more appropriate than Freedman-Diaconis' binning rule, because the resulting PWRA characteristics under Doane's rule are more consistent with the practical climate: the study region experiences a water-poor zone with a small amount and high uncertainty of rainfall during the Southwest Monsoon, and abundant and perennial rainfall during the Northeast Monsoon. Furthermore, Doane's binning rule is more advantageous than Freedman-Diaconis' binning rule in terms of computational cost and time.
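
    A minimal sketch comparing the two binning rules on a skewed, rainfall-like sample is shown below, using NumPy's built-in 'doane' and 'fd' histogram estimators; the synthetic data stand in for the daily rainfall series analysed in the study.

```python
# A minimal sketch comparing Doane's and Freedman-Diaconis' binning rules on a
# synthetic, rainfall-like (right-skewed) sample using NumPy's estimators.
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.8, scale=12.0, size=3000)           # skewed stand-in sample

# Doane: k = 1 + log2(n) + log2(1 + |g1| / sigma_g1), with g1 the sample skewness.
doane_edges = np.histogram_bin_edges(rain, bins="doane")
# Freedman-Diaconis: bin width h = 2 * IQR * n^(-1/3).
fd_edges = np.histogram_bin_edges(rain, bins="fd")

print(len(doane_edges) - 1, "Doane bins")
print(len(fd_edges) - 1, "Freedman-Diaconis bins")
```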

    Advancing machine learning for identifying cardiovascular disease via granular computing

    Machine learning for cardiovascular disease (CVD) has broad applications in healthcare, automatically identifying hidden patterns in vast amounts of data without human intervention. Machine learning models can support drug selection for early-stage cardiovascular illness. The integration of granular computing, specifically z-numbers, with machine learning algorithms is proposed for CVD identification. Granular computing enables the handling of unpredictable and imprecise situations, akin to human cognitive abilities. Machine learning algorithms such as naïve Bayes, k-nearest neighbors, random forest, and gradient boosting are commonly used in constructing these models. Experimental findings indicate that incorporating granular computing into machine learning models enhances the ability to represent uncertainty and improves accuracy in CVD detection.
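
    A minimal sketch comparing the four base classifiers named above by cross-validated accuracy on a synthetic binary dataset is given below; the z-number / granular computing layer itself is not reproduced here.

```python
# A minimal sketch: compare the four base classifiers from the abstract by
# 5-fold cross-validated accuracy on a synthetic binary classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
models = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)              # fold-wise accuracies
    print(f"{name}: {scores.mean():.3f}")
```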

    Stochastic modeling of the C. Acetobutylicum and solvent productions in fermentation

    The recent decade has seen great progress in the use of stochastic models for biological processes. Researchers now realise that stochastic models have important roles to play in biological processes, especially in the analysis of population dynamics, and this progress encourages many researchers to develop new methods and techniques to improve stochastic models. In a recent study, logistic equations were used to model the cell growth of C. acetobutylicum, while the Luedeking-Piret equation incorporating the logistic equation was used to model the formation of solvent. However, it was found that the Luedeking-Piret equation is not adequate for modelling the production of acetone and butanol. In this study, a stochastic power law logistic model is considered to model the cell growth of C. acetobutylicum and the solvent production in five different yeast cultures. In order to solve the SDEs, the simulated maximum likelihood estimation method and the Euler-Maruyama approximation method have been used. Finally, the stochastic and deterministic models are compared using the root mean square errors of the growth and solvent production models. The stochastic models have smaller root mean square errors, showing that the stochastic power law logistic models describe the growth of C. acetobutylicum and the solvent production in fermentation better than their deterministic counterparts.
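
    A minimal sketch of an Euler-Maruyama simulation of a stochastic power-law (theta-)logistic model of the form dX = r X (1 - (X/K)^theta) dt + sigma X dW is given below; the exact drift and diffusion terms and all parameter values are illustrative assumptions, not the study's fitted model.

```python
# A minimal sketch of Euler-Maruyama for a stochastic power-law logistic model
# dX = r X (1 - (X/K)^theta) dt + sigma X dW. All values are illustrative.
import numpy as np

def euler_maruyama(x0, r, K, theta, sigma, T=24.0, n=2400, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = r * x[i] * (1.0 - (x[i] / K) ** theta)
        diffusion = sigma * x[i]
        dw = rng.normal(0.0, np.sqrt(dt))                    # Brownian increment
        x[i + 1] = max(x[i] + drift * dt + diffusion * dw, 1e-9)   # keep the state positive
    return x

path = euler_maruyama(x0=0.05, r=0.5, K=6.0, theta=1.2, sigma=0.05)
det = euler_maruyama(x0=0.05, r=0.5, K=6.0, theta=1.2, sigma=0.0)  # noise-free reference
rmse = np.sqrt(np.mean((path - det) ** 2))                    # RMSE against the noise-free path
print(rmse)
```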