62 research outputs found

    Uncertainty of temperature measured by thermocouple

    Reliability of data is important for researchers to verify their research results. For temperature measurements involving a thermocouple, the uncertainty needs to be determined before the reliability of the data can be decided. In this research, four error sources were proposed to have contributed to the uncertainty of temperature measured by a thermocouple: the resolution limit of the data acquisition device, the error in temperature measurement based on voltage measurement, the reference junction compensation error, and data fluctuation. Experiments were carried out to obtain the reference junction compensation error and the data fluctuation using a HIOKI data logger (LR8400-20). A procedure to obtain the uncertainty of the measured temperature, including the reference junction compensation uncertainty, is proposed. The uncertainty was obtained by combining all the error values with the root-sum-square equation. The uncertainty for a type K thermocouple obtained from this research was 0.42 °C.
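    The root-sum-square combination described above can be sketched as follows. This is a minimal illustration: the four numeric component values are hypothetical placeholders, since the abstract reports only the final combined figure of 0.42 °C.

```python
import math

def combined_uncertainty(errors):
    """Root-sum-square combination of independent error components (°C)."""
    return math.sqrt(sum(e ** 2 for e in errors))

# Hypothetical values for the four sources named in the abstract:
# resolution limit, voltage-to-temperature conversion error,
# reference-junction compensation error, and data fluctuation.
errors = [0.05, 0.20, 0.30, 0.15]
print(f"combined uncertainty: {combined_uncertainty(errors):.2f} °C")
```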

    A comparison study between Doane’s and Freedman-Diaconis’ binning rule in characterizing potential water resources availability

    One of the primary constraints for the development and management of water resources is the spatial and temporal uncertainty of rainfall, because the stability and reliability of the water supply are dynamically associated with this uncertainty. However, this spatial and temporal uncertainty can be assessed using the intensity entropy (IE) and the apportionment entropy (AE). The main objective of this study is to investigate the implications of using Doane's and Freedman-Diaconis' binning rules in characterizing potential water resources availability (PWRA), where PWRA is assessed via a scatter diagram of the standardized intensity entropy (IE') against the standardized apportionment entropy (AE'). To pursue this objective, daily rainfall data recorded from January 2008 to December 2016 at four rainfall monitoring stations located in the coastal region of Kuantan District, Pahang, are analyzed. The analysis results illustrate that Doane's binning rule is more appropriate than Freedman-Diaconis' binning rule, because the resulting PWRA characteristics under Doane's rule are relatively consistent with the observed climate: the study region is a poor-in-water zone with low rainfall amounts and high rainfall uncertainty during the Southwest Monsoon, while rainfall is abundant and perennial during the Northeast Monsoon. Furthermore, Doane's binning rule is more advantageous than Freedman-Diaconis' rule in terms of computational cost and time.
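    Both binning rules are available as built-in estimators in NumPy, so the comparison can be sketched directly. The synthetic log-normal sample below is a stand-in for daily rainfall depths; the study's actual station records are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
rain = rng.lognormal(mean=1.0, sigma=0.8, size=500)  # synthetic "rainfall"

# Bin edges under each rule.
doane_edges = np.histogram_bin_edges(rain, bins="doane")
fd_edges = np.histogram_bin_edges(rain, bins="fd")

# Shannon entropy of the binned distribution, the basic ingredient of the
# intensity/apportionment entropy measures (IE, AE) named in the abstract.
counts, _ = np.histogram(rain, bins=doane_edges)
p = counts / counts.sum()
H = -np.sum(p[p > 0] * np.log(p[p > 0]))

print("Doane bins:", len(doane_edges) - 1)
print("Freedman-Diaconis bins:", len(fd_edges) - 1)
print("entropy under Doane binning:", round(float(H), 3))
```

    Because the entropy value depends on the number of bins, the choice of rule directly shifts where a station lands on the IE' versus AE' diagram, which is why the two rules can characterize PWRA differently.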

    Principal component analysis on meteorological data in UTM KL

    The high usage of fossil fuels to produce energy for the increasing energy demand has been the primary culprit behind global warming. Renewable energies such as solar energy can be a solution to prevent the situation from worsening. Solar energy can be harnessed using available systems such as solar thermal cogeneration systems. However, for such a system to function smoothly and continuously, knowledge of the solar radiation intensity several minutes in advance is required. Though various solar radiation forecast models exist, most of them require high computational time. In this research, principal component analysis was applied to the meteorological data collected at Universiti Teknologi Malaysia Kuala Lumpur to reduce the dimension of the data. The dominant factors obtained from the analysis are expected to be useful for the development of a solar radiation forecast model.
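    Dimension reduction by principal component analysis can be sketched via the singular value decomposition of a standardized data matrix. The random matrix below is a placeholder (rows = time samples, columns = meteorological variables such as temperature, humidity, or wind speed); the UTM KL dataset itself is not reproduced here.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its leading principal components via SVD."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S ** 2 / np.sum(S ** 2)         # variance ratio per component
    return Xc @ Vt[:n_components].T, explained[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                   # 200 samples, 6 variables
scores, ratio = pca(X, n_components=2)
print("reduced shape:", scores.shape)
```

    On real meteorological data the leading components typically capture most of the variance, so a forecast model fed the component scores instead of all raw variables can be much cheaper to train and run.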

    A comparative effectiveness of hierarchical and nonhierarchical regionalisation algorithms in regionalising the homogeneous rainfall regions

    Descriptive data mining has been widely applied in hydrology in the form of regionalisation algorithms that identify statistically homogeneous rainfall regions. However, previous studies employed regionalisation algorithms, namely agglomerative hierarchical and non-hierarchical regionalisation algorithms, that require post-processing techniques to validate and interpret the analysis results. The main objective of this study is to investigate the effectiveness of automated agglomerative hierarchical and non-hierarchical regionalisation algorithms in identifying homogeneous rainfall regions based on a new statistically significant difference regionalised feature set. To pursue this objective, this study collected 20 historical monthly rainfall time series from the rain gauge stations located in the Kuantan district. In practice, these 20 rain gauge stations can be categorised into two statistically homogeneous rainfall regions with distinct spatial and temporal variability in rainfall amounts. The results of the analysis show that the Forgy K-means non-hierarchical (FKNH), Hartigan-Wong K-means non-hierarchical (HKNH), and Lloyd K-means non-hierarchical (LKNH) regionalisation algorithms are superior to the other automated agglomerative hierarchical and non-hierarchical regionalisation algorithms, yielding the highest regionalisation accuracy. Based on the regionalisation results yielded in this study, the reliability and accuracy of assessing the risk of extreme hydro-meteorological events for the Kuantan district can be improved. In particular, regional quantile estimates can be more accurate than at-site quantile estimates when an appropriate statistical distribution is used.
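    Lloyd's K-means, one of the three non-hierarchical algorithms named above (FKNH, HKNH, and LKNH differ mainly in initialization and update details), can be sketched as follows. The two synthetic point clouds stand in for the regionalised station features; the study's actual feature set is not reproduced here.

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's K-means: alternate nearest-centre assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centre
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each centre to the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# two well-separated synthetic "regions"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
labels, centers = lloyd_kmeans(X, k=2)
print(labels)
```

    With k = 2 this mirrors the study's setting of partitioning the 20 stations into two homogeneous regions; the automation lies in needing no post-processing dendrogram cut.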


    Velocity analysis on moving objects detection using multi-scale histogram of oriented gradient

    An autonomous car is a distinctive specimen of today's technology: an automatic system in which most of the duties that humans undertake in a car can be performed automatically, with minimal human supervision, for road safety. Detection of moving vehicles, however, is prone to errors that can result in undesirable situations such as minor collisions. Moving vehicle identification is currently done using, for example, high-speed cameras or LiDAR, whereas self-driving cars are built with deep learning, which requires much larger datasets. As a result, there is room for improvement in moving vehicle detection models. This research intends to create a moving car recognition model that uses multi-scale feature-based detection to improve accuracy, while also determining the maximum speed at which the model can detect moving objects. The methodology was to build a lab-scale model to guide video and image capture, with the speed of the toy vehicles measured by an Arduino Uno before testing the car recognition model. According to the data, the Multi-Scale Histogram of Oriented Gradient can recognise more objects than the Histogram of Oriented Gradient, with higher object identification accuracy and precision.
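    The multi-scale idea can be sketched as computing the same orientation histogram over an image pyramid, so that vehicles of different apparent sizes produce comparable features. This is a simplified HOG (no cells, blocks, or normalization) on a toy pattern, not the study's detector.

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Gradient-magnitude-weighted histogram of unsigned edge orientations."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist

def multi_scale_hog(img, scales=(1, 2, 4), n_bins=9):
    """Concatenate orientation histograms over a downsampling pyramid."""
    feats = [orientation_histogram(img[::s, ::s], n_bins) for s in scales]
    return np.concatenate(feats)

img = np.add.outer(np.arange(32), np.arange(32)) % 16   # toy test pattern
feat = multi_scale_hog(img)
print("feature length:", feat.size)   # 3 scales x 9 bins = 27
```

    A classifier trained on such concatenated features sees both coarse and fine gradient structure, which is the intuition behind the multi-scale variant detecting more objects than single-scale HOG.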

    Numerical Model for NAPL Migration in Double-Porosity Subsurface Systems

    The double-porosity concept has been successfully applied by many researchers to simulate fluid flow in oil reservoirs over the past few decades. These oil reservoirs were typically considered to be made of fractured or fissured rock, hence the use of the double-porosity concept. Nonetheless, double-porosity may also exist in soil, either through soil aggregation or through features such as wormholes, cracks and root holes. These attributes, which cause the occurrence of double-porosity in soil, are also known as secondary porosity features and are akin to reservoir rock fractures or fissures. In the case of groundwater contamination, the occurrence of double-porosity in soil is highly influential, since immiscible fluids have been found to flow preferentially through the secondary porosity features. Therefore, a numerical model for non-aqueous phase liquid (NAPL) migration in double-porosity groundwater systems was developed. This model was modified from the conventional double-porosity model applied in the petroleum industry. The difference is that while standard double-porosity models usually simulate the fluid flows in both continua making up the double-porosity medium, the model presented here focuses on the secondary porosity features in the soil, making it more pertinent in the context of groundwater contamination. In the modified model, the phase saturations and relative permeabilities are expressed as functions of the capillary pressures. The resulting nonlinear governing partial differential equations are solved numerically: the problem is discretised spatially using Galerkin's weighted-residual finite element method, whereas a fully implicit scheme is used for temporal discretisation. The developed model has been verified against similar works in the open literature, and the preferential flow of NAPL through the secondary porosity features was validated.
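    The abstract's closure relations, saturation and relative permeability as functions of capillary pressure, are commonly parameterised in Brooks-Corey form; a sketch under that assumption follows. The entry pressure and pore-size index values below are illustrative, not the thesis's calibrated parameters.

```python
def effective_saturation(pc, pd=1000.0, lam=2.0):
    """Brooks-Corey effective wetting-phase saturation from capillary pressure pc (Pa).

    pd is the entry (displacement) pressure, lam the pore-size index;
    both values here are illustrative placeholders.
    """
    return 1.0 if pc <= pd else (pd / pc) ** lam

def rel_perm_wetting(se, lam=2.0):
    """Burdine relative permeability of the wetting phase from effective saturation."""
    return se ** ((2.0 + 3.0 * lam) / lam)

se = effective_saturation(2000.0)          # pc at twice the entry pressure
print("Se =", se)                          # -> 0.25
print("krw =", rel_perm_wetting(se))       # -> 0.25**4
```

    Evaluating these functions at each node at every Newton iteration is what makes the governing equations nonlinear, hence the fully implicit temporal scheme mentioned above.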

    Dynamic topological description of brainstorm during epileptic seizure

    Electroencephalography is one of the most useful and favoured instruments for diagnosing various brain disorders, especially epilepsy, due to its non-invasive character and its ability to provide a wealth of information about brain function. At present, many quantitative methods for extracting "hidden" information, which cannot be seen by the naked eye, from an electroencephalogram have been invented by scientists around the world. Among these, Flat Electroencephalography (Flat EEG) is a novel method developed by the Fuzzy Research Group (FRG), UTM, intended to localise the epileptic foci of epilepsy patients. The emergence of this invention has led to several Flat EEG based research developments (e.g., Non Polar CEEG and Fuzzy Neighborhood Clustering on Flat EEG). The method has previously been verified via comparison with substantial clinical results. In this thesis, however, the theoretical foundation of the method is justified via the construction of a dynamic mathematical transformation called a topological conjugacy, whereby an isomorphism between the dynamics of an epileptic seizure and Flat EEG is established. Firstly, these two dynamic events are composed into sets of points. Then, they are forced to be strictly linearly ordered and composed into topological spaces. Subsequently, an isomorphism is constructed between the corresponding mathematical structures to show that their properties are preserved and that the two are topologically conjugate. The constructed topological conjugacy is generalised into a class of dynamical systems, within which the flow of Flat EEG is shown to be structurally stable. Additionally, topological properties of the epileptic seizure event and Flat EEG have also been established.
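    Topological conjugacy, the central tool above, can be illustrated with a classical textbook example (this is not the thesis's Flat EEG construction): the tent map and the logistic map f(x) = 4x(1-x) are conjugate via the homeomorphism h(x) = sin²(πx/2), meaning h ∘ T = f ∘ h, so the two systems share the same dynamics up to a continuous change of coordinates.

```python
import math

def f(x):
    """Logistic map f(x) = 4x(1-x)."""
    return 4.0 * x * (1.0 - x)

def tent(x):
    """Tent map on [0, 1]."""
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def h(x):
    """Conjugacy h(x) = sin^2(pi*x/2), a homeomorphism of [0, 1]."""
    return math.sin(math.pi * x / 2.0) ** 2

# Check the conjugacy identity h(T(x)) == f(h(x)) at sample points.
for x in (0.1, 0.3, 0.7, 0.9):
    assert abs(h(tent(x)) - f(h(x))) < 1e-12
print("h o T == f o h on sample points")
```

    In the thesis, the same identity is established between the seizure dynamics and the Flat EEG flow, which is what licenses transferring dynamical properties from one to the other.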

    Structural Stability of Flat Electroencephalography

    Flat Electroencephalography is a way of viewing electroencephalography signals on the first component of the Fuzzy Topographic Topological Mapping (FTTM), a model designed to solve the neuromagnetic inverse problem. This novel method is well known for its ability to preserve the orientation and magnitude of EEG sensors and signals. However, this preservation causes Flat EEG to contain unwanted signals captured from the surroundings during recording; consequently, its accuracy in depicting the actual electrical activity inside the brain is affected. The presence of artifacts poses a serious problem if they are large. Thus, this study investigates the persistence of Flat EEG against surrounding "noises" from a dynamical viewpoint by means of structural stability. It will be shown that Flat EEG in the presence of "noises" may still reflect the actual electrical activity inside the brain, provided the contaminated Flat EEG falls within a certain class of dynamical systems.
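    The notion of persistence under "noise" can be illustrated with a toy contracting map (this is not the Flat EEG flow itself): a small perturbation shifts the attracting fixed point only slightly, leaving the qualitative long-run behaviour unchanged. The noise level below is an arbitrary illustrative value.

```python
def iterate(g, x0, n=100):
    """Apply the map g to x0 n times and return the result."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

g = lambda x: 0.5 * x               # attracting fixed point at 0, |g'| = 0.5 < 1
g_noisy = lambda x: 0.5 * x + 0.01  # perturbed map; fixed point shifts to 0.02

print(iterate(g, 1.0))        # converges toward 0
print(iterate(g_noisy, 1.0))  # converges toward 0.02, near the unperturbed one
```

    Structural stability formalises this: within a suitable class of systems, small perturbations yield topologically equivalent dynamics, which is the sense in which a contaminated Flat EEG can still reflect the underlying brain activity.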
