34 research outputs found

    Segmentation Techniques based on Image Quality and Edge Detection Algorithms

    Segmentation is one of the fundamental tasks in digital image processing and analysis. Segmentation highlights parts of the image that share common features; such areas are called Regions of Interest (ROI). The choice of segmentation algorithm depends on the nature of the source images, and there is no single, universal method that can always be applied. When choosing a segmentation algorithm for a particular image, it is important to test multiple methods and choose the one that gives the best results. This paper presents a comparison of several segmentation algorithms on different source images. The comparison was performed using standard image quality assessment metrics such as Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM). The results of this work can help in the selection of an edge detection algorithm and serve as preparation for image segmentation.
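    As a rough illustration of the quality metrics the paper compares, here is a minimal NumPy sketch; the exact formulas and parameters the authors used are not specified, and the SSIM constants below are the common textbook defaults (a full SSIM would average this over local windows):

```python
import numpy as np

def mse(a, b):
    """Mean Squared Error between two same-sized grayscale images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; `peak` is the maximum pixel value."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10 * np.log10(peak ** 2 / err)

def ssim_global(a, b, peak=255.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM with the usual stabilizing constants."""
    a = a.astype(np.float64); b = b.astype(np.float64)
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```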

    Improvement of the Accuracy of Prediction Using Unsupervised Discretization Method: Educational Data Set Case Study

    This paper presents a comparison of the efficacy of unsupervised and supervised discretization methods for educational data from a blended learning environment. A Naïve Bayes classifier was trained on each discretized data set, and a comparative analysis of the prediction models was conducted. The research goal was to transform numeric features into maximally independent discrete values with minimal loss of information and a reduction of the classification error. The proposed unsupervised discretization method was based on the histogram distribution and the application of an oversampling technique. The main contribution of this research is the improvement of prediction accuracy using the unsupervised discretization method, which reduces the effect of ignoring the class feature for the educational data set.
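    A minimal sketch of histogram-based unsupervised discretization; the paper's exact binning rule and its oversampling step are not specified, so NumPy's automatic histogram bin selection is used here as a stand-in:

```python
import numpy as np

def histogram_discretize(values, bins="auto"):
    """Map a numeric feature to integer bin labels using histogram bin edges."""
    edges = np.histogram_bin_edges(values, bins=bins)
    # digitize against the interior edges so labels run 0 .. n_bins - 1
    labels = np.digitize(values, edges[1:-1])
    return labels, edges

# Example: discretize a synthetic "exam score" feature (hypothetical data).
scores = np.random.default_rng(0).normal(70, 12, size=200)
labels, edges = histogram_discretize(scores)
print(len(edges) - 1, "bins; first labels:", labels[:10])
```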

    QODA – Methodology and Legislative Background for Assessment of Open Government Datasets Quality

    In the last few years, many open government data portals have emerged around the world. These portals publish open government datasets which can be accessed and used by everyone for their own needs. In this paper, we propose a methodology named QODA (Quality of Open government DAtasets) for assessing the quality of published datasets from two aspects. The first is the assessment of the quality of the open government datasets themselves, and the second is the assessment of the quality features of the platforms that contribute to the publication of quality datasets. The methodology provides step-by-step guidance for dataset analysis and for summarizing the results. The research presented in this paper shows that open government dataset quality depends on the data provider as well as on the proper definition of the metadata behind the datasets. Our findings result in recommendations to open government data (OGD) publishers to constantly supervise the use of published datasets, with the aim of keeping the information on OGD portals timely and accurate, with special attention to quality features.
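    As an illustration of the kind of metadata-driven quality check such a methodology implies, here is a sketch of a completeness score for a catalog record; the actual QODA criteria are defined in the paper, and the required fields below are hypothetical examples:

```python
# Hypothetical metadata completeness check for an open-data catalog record.
REQUIRED_FIELDS = ["title", "description", "license", "publisher", "modified", "format"]

def metadata_completeness(record: dict) -> float:
    """Fraction of required metadata fields that are present and non-empty."""
    present = sum(1 for field in REQUIRED_FIELDS if record.get(field))
    return present / len(REQUIRED_FIELDS)

record = {"title": "Air quality 2023", "license": "CC-BY-4.0", "publisher": "City of X"}
print(f"completeness: {metadata_completeness(record):.0%}")  # 50%
```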

    A Comparison of Query Execution Speeds for Large Amounts of Data Using Various DBMS Engines Executing on Selected RAM and CPU Configurations

    In modern economies, most important business decisions are based on detailed analysis of available data. In order to obtain a rapid response from analytical tools, data should be pre-aggregated over the dimensions that are of most interest to each business. Sometimes, however, important decisions may require analysis of business data over seemingly less important dimensions which have not been pre-aggregated during the ETL process. On these occasions, an ad-hoc "online" aggregation is performed whose execution time depends on overall DBMS performance. This paper describes how the performance of several commercial and non-commercial DBMSs was tested by running queries designed for data analysis using ad-hoc aggregations over large volumes of data. Each DBMS was installed on a separate virtual machine and was run on several computers, with two different amounts of RAM allocated for each test. The recorded query execution times demonstrated that, as expected, column-oriented databases outperformed classical row-oriented database systems.
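    A minimal sketch of the kind of measurement involved; the paper's actual DBMS engines, schema, and queries are not given, so sqlite3 (itself a row-oriented engine) and the table below are stand-ins:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("EU", f"p{i % 50}", i * 0.1) for i in range(100_000)])

# Ad-hoc aggregation over a dimension that was not pre-aggregated in ETL.
query = "SELECT product, SUM(amount) FROM sales GROUP BY product"
start = time.perf_counter()
rows = conn.execute(query).fetchall()
elapsed = time.perf_counter() - start
print(f"{len(rows)} groups in {elapsed * 1000:.1f} ms")
```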

    Average Bit Error Rate at Signal Transmission with OOK Modulation Scheme in Different FSO Channels

    In this paper, the Average Bit Error Rate (ABER) of a signal in a Free Space Optical (FSO) system modulated with the On-Off Keying (OOK) scheme is calculated and analysed. The ABER is determined for an atmospheric channel modelled with the Gamma-Gamma, Log-Normal, K, and I-K distributions. The results are presented both analytically and graphically for different lengths of the FSO link and different strengths of atmospheric turbulence. The quality of the received signal, based on the ABER, was analysed for weak, moderate, and strong atmospheric turbulence, different lengths of the transmission section, and different Signal-to-Noise Ratio (SNR) values. The operation of the FSO system in the observed environment was simulated, and the transmission quality was analysed based on the Bit Error Rate and the Q factor.
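    As a rough numerical illustration for one of the channel models, here is a Monte Carlo sketch of the average BER for OOK over Log-Normal turbulence. The conditional-BER form 0.5·erfc(I·√SNR / (2√2)) is one common textbook convention, not necessarily the one used in the paper, and the log-irradiance variance value is an assumption:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)

def aber_ook_lognormal(snr_db, sigma2=0.3, n=200_000):
    """Monte Carlo average BER of OOK over a Log-Normal FSO channel.

    Irradiance I = exp(X), X ~ N(-sigma2/2, sigma2), so that E[I] = 1.
    Conditional BER: 0.5 * erfc(I * sqrt(SNR) / (2 * sqrt(2))).
    """
    snr = 10 ** (snr_db / 10)
    irradiance = np.exp(rng.normal(-sigma2 / 2, np.sqrt(sigma2), n))
    return np.mean(0.5 * erfc(irradiance * np.sqrt(snr) / (2 * np.sqrt(2))))

for snr_db in (10, 20, 30):
    print(snr_db, "dB ->", aber_ook_lognormal(snr_db))
```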

    Mobile ad-hoc networks: MANET

    As of 2018, there are 7.6 billion people in the world and 8.6 billion mobile devices. Just as mobile phones have completely changed the meaning of the term "being available", a similar change awaits laptop users, so it is only a matter of time before a new way of using notebooks changes habits and makes life easier. Modern users are increasingly accustomed to the availability of the network (the Internet), where almost any necessary piece of information can be found, and they integrate it ever more into their lives: for example, to quickly find out where to buy a book by a particular author or on a specific topic, which pharmacies are open, or how to find a street. Today, wireless Internet access is available almost everywhere, at airports, restaurants, and hotels, but such access relies on previously installed infrastructure, such as a wireless access point through which a device connects to the Internet, communicates with another person, and exchanges data. Accessing the Internet this way while on the go, via a laptop computer, would require an infrastructure like the GSM network. This motivated the development of a different network model: ad-hoc networks, specifically Mobile Ad-hoc Networks (MANETs).

    Comparison of Data Mining Algorithms, Inverted Index Search and Suffix Tree Clustering Search

    New documents are created every day, and the number of digital documents in the world is growing exponentially. Search engines do a great job of making these documents easily available to the world's population. Data mining works with large data sets and delivers data to the end user; it comprises many different techniques and algorithms that allow faster and better search over large amounts of data. Clustering is one of the techniques used in a data mining process; it groups data according to features, or any property the data have in common, so the search process is faster and the user gets better results. An inverted index, on the other hand, is a structure that also provides fast search, but it does not create clusters or groups of similar data; instead, it processes all data in a document and measures the occurrence of specific terms in the document. The goal of this paper is to compare these two algorithms. The authors created applications that use the two algorithms and tested them on the same corpus of documents. For both algorithms, the authors present improvements that provide faster search and better search results.
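    A minimal sketch of the inverted-index idea the abstract describes; the authors' own implementation, tokenization, and scoring are not specified, and whitespace tokenization below is a simplification:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to {doc_id: occurrence count} across the corpus."""
    index = defaultdict(lambda: defaultdict(int))
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term][doc_id] += 1
    return index

docs = ["data mining mines data", "suffix tree clustering", "inverted index search"]
index = build_inverted_index(docs)
print(dict(index["data"]))  # {0: 2} -> term frequency per document
```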

    On the macrodiversity reception in the correlated Gamma shadowed Nakagami-m fading

    In this paper, an analysis of macrodiversity reception using selection combining (SC) in the presence of correlated Gamma shadowing is presented. At each microlevel, maximal ratio combining (MRC) with correlated input branches is considered in order to mitigate the effects of Nakagami-m short-term fading. First, novel closed-form expressions are derived for the second-order statistical measures of the system: the level crossing rate (LCR) and the average fade duration (AFD). Capitalizing on these expressions, the influence of correlation at the macrolevel (shadowing correlation) on the system characteristics is analysed through their derivatives. The analysis presented in this paper could find application in the design of macrodiversity systems.
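    As an illustration of the two second-order statistics the paper derives in closed form, here is a sketch that estimates LCR and AFD empirically from a sampled fading envelope; the crudely filtered Rayleigh-like envelope below is a simple stand-in, not the paper's correlated Gamma-shadowed Nakagami-m model:

```python
import numpy as np

def lcr_afd(envelope, threshold, fs):
    """Empirical level crossing rate (crossings/s) and average fade duration (s).

    LCR counts upward crossings of `threshold`; AFD is the total time spent
    below the threshold divided by the number of fades. `fs` is the sampling
    rate in Hz.
    """
    below = envelope < threshold
    up_crossings = np.count_nonzero(below[:-1] & ~below[1:])  # leaving a fade
    duration = len(envelope) / fs
    lcr = up_crossings / duration
    afd = below.mean() * duration / max(up_crossings, 1)
    return lcr, afd

# Stand-in envelope: low-pass-filtered complex Gaussian -> Rayleigh-like fading.
rng = np.random.default_rng(2)
g = rng.normal(size=(2, 20_000))
kernel = np.ones(50) / 50                      # crude moving-average low-pass
env = np.hypot(np.convolve(g[0], kernel, "same"), np.convolve(g[1], kernel, "same"))
print(lcr_afd(env, np.median(env), fs=1000.0))
```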