Textured Rényi Entropy for Image Thresholding
This paper introduces Textured Rényi Entropy for image thresholding based on a novel combination mechanism.
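The abstract does not detail the combination mechanism, so as general background only, here is a minimal sketch of classical Rényi-entropy threshold selection (not the textured variant the paper proposes); the `alpha` value and the 256-bin grayscale histogram are assumptions.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of a normalized probability vector (alpha != 1)."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_threshold(image, alpha=2.0):
    """Pick the gray level that maximizes the summed Renyi entropies
    of the background and foreground histogram segments."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        p_bg, p_fg = p[:t].sum(), p[t:].sum()
        if p_bg == 0 or p_fg == 0:
            continue
        score = renyi_entropy(p[:t] / p_bg, alpha) + renyi_entropy(p[t:] / p_fg, alpha)
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```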
Semantic Lexical Alignment for Domain-Specific Ontologies
Domain-specific ontologies encode reusable domain vocabulary and represent established domain semantics.
Multimodal Semantics Integration Using Ontologies Enhanced by Ontology Extraction and Cross-Modality Disambiguation
The increasing amount of multimodal data, such as text documents, annotated images, and web pages, has necessitated the development of effective techniques for their manipulation. The ineffectiveness of low-level image and textual features is one of the main issues, as these features are commonly insufficient for effective data manipulation. Therefore, obtaining sufficient and significant information from the multimodal data, and then using this information in the proper manner, is paramount in data manipulation tasks. This thesis proposes a multimodal semantics integration (MSI) process to extract and integrate the semantics from the image and text modalities, and to use these semantics for manipulation tasks. The proposed process first extracts a textual representation from the textual and image modalities, then maps the representation to concepts in a condensed knowledge source using a semantic-based alignment sub-process.
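As a rough illustration of what a semantic-based alignment sub-process can look like, the sketch below maps extracted terms to concept labels of a hypothetical condensed knowledge source by lexical similarity; the `concepts` dictionary, the similarity measure, and the threshold are placeholders, not the thesis's actual components.

```python
from difflib import SequenceMatcher

# Hypothetical condensed knowledge source: concept label -> concept id
concepts = {"beach": "C1", "sea water": "C2", "palm tree": "C3", "sand": "C4"}

def align(terms, concepts, min_sim=0.7):
    """Map each extracted term to the lexically closest concept label."""
    mapping = {}
    for term in terms:
        best_label, best_sim = None, 0.0
        for label in concepts:
            sim = SequenceMatcher(None, term.lower(), label.lower()).ratio()
            if sim > best_sim:
                best_label, best_sim = label, sim
        if best_sim >= min_sim:
            mapping[term] = concepts[best_label]
    return mapping

print(align(["beaches", "palm trees", "water"], concepts))
```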
Visual Domain Ontology using OWL Lite for Semantic Image Processing
In this paper, a visual domain ontology (VDO) is constructed using the OWL-Lite language. The VDO passes through two execution phases, namely the construction and inferring phases. In the construction phase, OWL classes are initialized, with reference to annotated scenes, and connected by hierarchical, spatial, and content-based relationships (the presence or absence of some objects depends on other objects). In the inferring phase, the VDO is used to infer knowledge about an unknown scene. This paper aims to use a standard language, namely OWL, to represent non-standard visual knowledge; to facilitate straightforward ontology enrichment; and to define the rules for inferring based on the constructed ontology. OWL standardizes the constructed knowledge and facilitates advanced inferring because it is built on top of first-order logic and description logic. The VDO thus allows efficient representation of and reasoning over complex visual knowledge. In addition to representation, the VDO enables easy extension, sharing, and reuse of the represented visual knowledge.
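A minimal sketch of a construction phase in this spirit, using the rdflib Python library: the namespace, class names, and the `contains`/`above` properties are hypothetical stand-ins for the paper's annotated-scene classes, and the content-based dependency is encoded as a standard OWL restriction.

```python
from rdflib import Graph, Namespace, BNode, RDF, RDFS, OWL

VDO = Namespace("http://example.org/vdo#")   # hypothetical namespace
g = Graph()
g.bind("vdo", VDO)

# Construction phase: initialize OWL classes with reference to annotated scenes
for name in ("Scene", "BeachScene", "Sea", "Sand"):
    g.add((VDO[name], RDF.type, OWL.Class))

# Hierarchical relationship
g.add((VDO.BeachScene, RDFS.subClassOf, VDO.Scene))

# Spatial / content-based relationships as object properties
for prop in ("contains", "above"):
    g.add((VDO[prop], RDF.type, OWL.ObjectProperty))

# Content-based dependency encoded as an OWL restriction:
# every BeachScene contains some Sea
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, VDO.contains))
g.add((restriction, OWL.someValuesFrom, VDO.Sea))
g.add((VDO.BeachScene, RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))
```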
An innovative approach for enhancing capacity utilization in point-to-point voice over internet protocol calls
Voice over internet protocol (VoIP) calls are increasingly transported over computer-based networks due to several factors, such as low call rates. However, point-to-point (P-P) calls, as a division of VoIP, suffer from a capacity utilization issue. The main reason is the large packet header, especially when compared to the small P-P call packet payload. Therefore, this research article introduces a method, named voice segment compaction (VSC), to address the overhead of the large P-P call packet header. The VSC method employs the unneeded P-P call packet header elements to carry part of the voice packet payload. This, in turn, reduces the size of the transmitted voice payload and improves network capacity utilization. The preliminary results demonstrated the value of the introduced VSC method, with network capacity utilization improved by up to 38.33%.
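To make the capacity-utilization argument concrete, the following back-of-the-envelope sketch compares per-packet utilization with and without reusing header bytes for payload; the byte counts are illustrative assumptions, not the paper's figures, and the exact header fields VSC reuses are not modeled.

```python
def utilization(payload, header):
    """Fraction of each transmitted packet that is useful voice payload."""
    return payload / (payload + header)

def compacted_utilization(payload, header, reusable):
    """Utilization if `reusable` header bytes carry payload instead,
    shrinking the explicit payload field by the same amount."""
    carried_in_header = min(reusable, payload)
    return payload / (payload - carried_in_header + header)

# Illustrative numbers only: a 20-byte voice frame behind a 40-byte
# IP/UDP/RTP header, with 12 header bytes assumed reusable.
base = utilization(20, 40)
vsc = compacted_utilization(20, 40, 12)
print(f"baseline {base:.2%}, with compaction {vsc:.2%}, "
      f"gain {(vsc / base - 1):.2%}")
```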
A four-state Markov model for modelling bursty traffic and benchmarking of random early detection
Active Queue Management (AQM) techniques are crucial for managing packet transmission efficiently, maintaining network performance, and preventing congestion in routers. However, achieving these objectives demands precise traffic modeling and simulations under extreme and unstable conditions. Internet traffic has distinct characteristics, such as aggregation, burstiness, and correlation. This paper presents an innovative approach for modeling internet traffic, addressing the limitations of conventional modeling and of conventional AQM method development, which are primarily designed to stabilize network traffic. The proposed model leverages the power of multiple Markov Modulated Bernoulli Processes (MMBPs) to tackle the challenges of traffic modeling and AQM development. Multiple states with varying probabilities are used to model packet arrivals, thus capturing the burstiness inherent in internet traffic. Yet the overall arrival probability is kept identical, irrespective of the number of states (one, two, or four), by solving linear equations with multiple variables. Random Early Detection (RED) was used as a case-study method with different packet arrival probabilities based on MMBPs with one, two, and four states. The results showed that the proposed model influences the outcomes of AQM methods. Furthermore, it was found that RED might not effectively address network burstiness due to its relatively slow reaction time. As a result, it can be concluded that RED performs optimally only with a single-state model.
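A minimal sketch of a four-state MMBP arrival source in this spirit: the transition matrix and per-state Bernoulli probabilities are illustrative choices, not the paper's calibrated values, picked so that the stationary average arrival probability works out to 0.5, mirroring the idea of keeping the overall probability fixed across models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state MMBP: state transition matrix and per-state Bernoulli
# arrival probabilities (illustrative values only).
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.05, 0.90, 0.05, 0.00],
              [0.00, 0.05, 0.90, 0.05],
              [0.00, 0.00, 0.10, 0.90]])
arrival_p = np.array([0.10, 0.35, 0.65, 0.90])

# Stationary distribution of the chain (left eigenvector for eigenvalue 1);
# its weighted average of arrival_p is the overall arrival probability (0.5 here).
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print("overall arrival probability:", pi @ arrival_p)

def simulate(slots=100_000):
    """Generate a bursty 0/1 arrival sequence by modulating the Bernoulli
    probability with the current Markov state."""
    state, arrivals = 0, np.zeros(slots, dtype=int)
    for t in range(slots):
        arrivals[t] = rng.random() < arrival_p[state]
        state = rng.choice(4, p=P[state])
    return arrivals

print("empirical arrival rate:", simulate().mean())
```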
TOPSIS-based Regression Algorithms Evaluation
This paper developed a multi-criteria decision-making approach using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to benchmark regression alternatives. Regression is used in diverse fields to predict consumer behavior, analyze business profitability, assess risk, analyze automobile engine performance, predict biological system behavior, and analyze weather data. Each of these applications has its own set of concerns, resulting in the use of different metrics, or of similar measures but with diverse preferences. Multi-criteria decision-making analyzes, compares, and ranks a set of alternatives using mathematical and logical processes over a complicated and contradictory set of criteria. The developed approach set the weights, which are the core of the evaluation process, to various values to mimic regression's use in multiple applications with different concerns and distinct datasets. The alternative judgment identified positive and negative ideal alternatives in the alternative space, and the compared regression alternatives were scored and ranked based on their distances from these ideal alternatives. The results showed that different preferences led to varying algorithm rankings, but the top-ranked algorithms could still be distinguished for a specific dataset. Across three datasets, namely Combined Cycle Power Plant, Real Estate, and Concrete, Voting with multiple k-means-based classifiers was the top-ranked alternative in the Combined Cycle Power Plant and Real Estate datasets, whereas Decision Stump was the top-ranked in the Concrete dataset.
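A compact sketch of the standard TOPSIS procedure the evaluation builds on (vector normalization, weighting, ideal and anti-ideal solutions, and closeness-based ranking); the example scores, weights, and criteria are placeholders, not the paper's datasets or metrics.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).

    matrix  : raw scores, shape (n_alternatives, n_criteria)
    weights : criteria weights summing to 1
    benefit : True for criteria to maximize, False for criteria to minimize
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion, then apply the weights
    v = weights * m / np.linalg.norm(m, axis=0)
    # Positive and negative ideal solutions per criterion
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # closeness coefficient, higher is better

# Illustrative example: 3 regression algorithms scored on RMSE (minimize)
# and R^2 (maximize); the weights stand in for application preferences.
scores = [[4.2, 0.91], [3.8, 0.88], [5.0, 0.93]]
print(topsis(scores, np.array([0.6, 0.4]), np.array([False, True])))
```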
Restaurant Recommendations Based on Multi-Criteria Recommendation Algorithm
Recent years have witnessed a rapid explosion of online information sources about restaurants, and the selection of an appropriate restaurant has become a tedious and time-consuming task. A number of online platforms allow users to share their experiences by rating restaurants based on more than one criterion, such as food, service, and value. For online users who do not have enough information about suitable restaurants, ratings can be decisive factors when choosing a restaurant. Thus, personalized systems such as recommender systems are needed to infer the preferences of each user and then satisfy those preferences. Specifically, multi-criteria recommender systems can utilize the multi-criteria ratings of users to learn their preferences and suggest the most suitable restaurants for them to explore. Accordingly, this paper proposes an effective multi-criteria recommender algorithm for personalized restaurant recommendations. The proposed Hybrid User-Item based Multi-Criteria Collaborative Filtering algorithm exploits users' and items' implicit similarities to eliminate the sparseness of rating information. The experimental results based on three real-world datasets demonstrated the validity of the proposed algorithm concerning prediction accuracy, ranking performance, and prediction coverage, particularly when dealing with extremely sparse datasets, in comparison with other baseline CF-based recommendation algorithms.
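As a simplified illustration of multi-criteria collaborative filtering, the sketch below predicts an overall rating from per-criterion, user-based neighbor predictions; the item-based half and the hybrid weighting of the proposed algorithm are omitted, and the similarity measure and the tiny rating tensor are assumptions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity over co-rated entries (0 marks a missing rating)."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-12))

def predict(ratings, user, item, k=2):
    """ratings has shape (criteria, users, items); predict the overall rating
    of `item` for `user` by averaging per-criterion user-based predictions."""
    n_criteria, n_users, _ = ratings.shape
    # User-user similarity averaged across the criteria
    sims = np.array([np.mean([cosine(ratings[c, user], ratings[c, u])
                              for c in range(n_criteria)])
                     for u in range(n_users)])
    sims[user] = -1.0                       # exclude the target user
    neighbours = np.argsort(sims)[-k:]      # k most similar users
    preds = []
    for c in range(n_criteria):
        r = ratings[c, neighbours, item]
        w = np.clip(sims[neighbours], 1e-6, None)
        if (r > 0).any():
            preds.append(np.average(r[r > 0], weights=w[r > 0]))
    return float(np.mean(preds)) if preds else None

# Tiny demo: 2 criteria (food, service), 3 users, 3 restaurants, 0 = unrated
R = np.array([[[5, 3, 0], [4, 3, 5], [5, 0, 4]],
              [[4, 2, 0], [4, 2, 5], [4, 0, 3]]], dtype=float)
print(predict(R, user=0, item=2))
```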
A Trust-Based Recommender System for Personalized Restaurants Recommendation
Several online restaurant applications, such as TripAdvisor and Yelp, provide potential consumers with reviews and ratings based on previous customers' experiences. These reviews and ratings are considered the most important factors that determine the customer's choice of restaurants. However, the selection of a restaurant among many unknown choices is still an arduous and time-consuming task, particularly for tourists and travellers. Recommender systems utilize the ratings provided by users to assist them in selecting the best option from many options based on their preferences. In this paper, we propose a trust-based recommendation model for helping consumers select suitable restaurants in accordance with their preferences. The proposed model utilizes multi-criteria ratings of restaurants and implicit trust relationships among consumers to produce personalized restaurant recommendations. The experimental results based on a real-world restaurant dataset demonstrated the superiority of the proposed model, in terms of prediction accuracy and coverage, in overcoming the sparsity and new user problems when compared to other baseline CF-based recommendation algorithms.
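One common way to derive implicit trust from ratings, shown here only as a sketch and not necessarily the paper's formulation, is to count how often two users' ratings of the same restaurant agree within a tolerance.

```python
import numpy as np

def implicit_trust(ru, rv, tol=1.0):
    """Trust of user u in user v: the share of co-rated items on which
    v's rating falls within `tol` of u's rating (0 marks a missing rating)."""
    mask = (ru > 0) & (rv > 0)
    if not mask.any():
        return 0.0
    return float(np.mean(np.abs(ru[mask] - rv[mask]) <= tol))

# Illustrative 1-5 ratings over five restaurants
u = np.array([5, 0, 3, 4, 0])
v = np.array([4, 2, 0, 4, 5])
print(implicit_trust(u, v))   # two co-rated items, both within 1 star -> 1.0
```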
Fuzzy-Based Active Queue Management Using Precise Fuzzy Modeling and Genetic Algorithm
Active Queue Management (AQM) methods significantly impact network performance, as they manage the router queue and facilitate the traffic flow through the network. This paper presents a novel fuzzy-based AQM method developed with a computationally efficient precise fuzzy model optimized using the Genetic Algorithm. The proposed method focuses on the concept of symmetry as a means to achieve a more balanced and equitable distribution of resources and to avoid the bandwidth waste that results from unnecessary packet dropping. The proposed method calculates the dropping probability of each packet using a precise fuzzy model that was created and tuned in advance, based on the previous dropping probability value and the queue length. The tuning process is implemented as an optimization problem formulated over the b0, b1, and b2 variables of the precise rules, with an objective function that maximizes the performance results in terms of loss, dropping, and delay. To prove the efficiency of the developed method, the simulation was not limited to the common Bernoulli process; instead, the Markov-modulated Bernoulli process was used to mimic the bursty nature of the traffic. The simulation was conducted on a machine running 64-bit Windows 10 with an Intel Core i7 2.0 GHz processor and 16 GB of RAM, using the Java programming language in the Apache NetBeans Integrated Development Environment (IDE) 11.2. The results showed that the proposed method outperformed the existing methods in terms of computational complexity, packet loss, dropping, and delay. In low-congestion networks, the proposed method maintained no packet loss and dropped 22% of the packets with an average delay of 7.57, compared to the best existing method, LRED, which dropped 21% of the packets with a delay of 10.74, and FCRED, which dropped 21% of the packets with a delay of 16.54. In highly congested networks, the proposed method also maintained no packet loss and dropped 48% of the packets with an average delay of 16.23, compared to LRED, which dropped 47% of the packets with a delay of 28.04, and FCRED, which dropped 46% of the packets with a delay of 40.23.
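A minimal sketch of a Takagi-Sugeno style "precise" fuzzy rule evaluation consistent with the description above: each rule's consequent is the linear form b0 + b1*queue + b2*previous dropping probability. The queue range (0 to 100 packets), the membership functions, and the (b0, b1, b2) values are placeholders that, in the paper, a Genetic Algorithm would tune; the GA loop itself is not reproduced here.

```python
def low(q):    return max(min(1.0, (50 - q) / 50), 0.0)   # 1 at q = 0, 0 at q >= 50
def medium(q): return max(1 - abs(q - 50) / 50, 0.0)      # triangle peaking at q = 50
def high(q):   return max(min(1.0, (q - 50) / 50), 0.0)   # 0 at q <= 50, 1 at q >= 100

# Each rule: (queue-length membership, (b0, b1, b2)). The consequent is the
# "precise" linear form Dp = b0 + b1*queue + b2*dp_prev; the (b0, b1, b2)
# triples below are placeholders that a Genetic Algorithm would tune.
RULES = [
    (low,    (0.00, 0.0005, 0.10)),
    (medium, (0.02, 0.0020, 0.30)),
    (high,   (0.10, 0.0050, 0.50)),
]

def drop_probability(queue, dp_prev):
    """Weighted average of the rule consequents (Takagi-Sugeno style)."""
    num = den = 0.0
    for mf, (b0, b1, b2) in RULES:
        w = mf(queue)
        num += w * (b0 + b1 * queue + b2 * dp_prev)
        den += w
    return min(max(num / den, 0.0), 1.0) if den > 0 else dp_prev

print(drop_probability(queue=70, dp_prev=0.15))   # ~0.33 for these placeholders
```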