2,422 research outputs found

    Revisiting the Evolution and Application of Assignment Problem: A Brief Overview

    Get PDF
    The assignment problem (AP) is a challenging combinatorial problem that can model many real-life situations. This paper provides a brief review of recent developments in the literature, covering the meaning of the assignment problem and its solution techniques, and surveys research studies on the different types of assignment problems arising in present-day real-life situations, in order to capture the variations among assignment techniques. Keywords: Assignment problem, Quadratic Assignment, Vehicle Routing, Exact Algorithm, Bound, Heuristic
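
    As a concrete illustration of the classical (linear) assignment problem discussed in this abstract, the minimal sketch below solves a small cost-matrix instance with SciPy's Hungarian-method implementation; the cost values are invented for the example and do not come from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: cost[i][j] = cost of assigning worker i to task j
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

# linear_sum_assignment implements the Hungarian (Kuhn-Munkres) method
row_ind, col_ind = linear_sum_assignment(cost)

for worker, task in zip(row_ind, col_ind):
    print(f"worker {worker} -> task {task} (cost {cost[worker, task]})")
print("total cost:", cost[row_ind, col_ind].sum())
```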

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Full text link
    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures

    Energy sink-holes avoidance method based on fuzzy system in wireless sensor networks

    Get PDF
    Using a mobile sink for data gathering significantly extends the lifetime of wireless sensor networks (WSNs). In recent years, a variety of efficient rendezvous-point-based sink mobility approaches have been proposed for avoiding the energy sink-hole problem near the sink, diminishing buffer overflow of sensors, and reducing data latency. Considerable research has also addressed the energy-hole problem using controllable sink mobility methods; however, further improvements are still possible in this type of mobility management. In this paper, a well-rounded strategy involving an energy-efficient routing protocol along with a controllable sink mobility method is proposed to eliminate the energy sink-hole problem. The proposal combines fuzzy A-star as a routing protocol, which mitigates energy consumption during data forwarding, with a novel sink mobility method that adopts a grid partitioning system and a fuzzy system taking into account the average residual energy, sensor density, average traffic load, and source angles to determine the optimal next location of the mobile sink. Across diverse performance metrics, the empirical analysis of the proposed work showed outstanding results compared with the fuzzy A-star protocol with a static sink
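
    The abstract describes a fuzzy A-star routing protocol over a grid-partitioned field. The sketch below shows plain A* search over a grid in which the per-hop cost is scaled by a placeholder `energy_weight` function standing in for the paper's fuzzy inference; the grid dimensions, blocked cells, and cost function are assumptions for illustration, not the authors' implementation.

```python
import heapq

def energy_weight(cell):
    # Placeholder for a fuzzy-inference score (residual energy, traffic load, etc.);
    # here simply a fixed unit cost per cell, chosen for illustration.
    return 1.0

def a_star(grid_w, grid_h, start, goal, blocked=frozenset()):
    """A* over a 4-connected grid; Manhattan distance is the admissible heuristic."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        _, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h) or nxt in blocked:
                continue
            ng = g + energy_weight(nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

print(a_star(5, 5, (0, 0), (4, 4), blocked={(2, 2), (2, 3)}))
```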

    Distance modulation competitive co-evolution method to find initial configuration independent cellular automata rules

    Get PDF
    IEEE International Conference on Systems, Man, and Cybernetics, Tokyo, 12-15 October 1999. One of the main problems in example-based machine learning methods is over-adaptation: the method adapts exactly to the training examples and loses its capability to generalize. A common remedy is to use large sets of examples, but for most problems achieving generalized solutions would require an almost infinite example set, which makes the method useless in practice. In this paper, a way to overcome this problem is proposed, based on ideas from biological competitive evolution. Evolution is produced as a result of a competition between sets of solutions and sets of examples, each trying to beat the other. This mechanism allows the generation of generalized solutions using small example sets
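
    To make the competitive co-evolution idea concrete, the toy sketch below co-evolves a population of candidate rules against a population of test cases, scoring each side by how often it beats the other. The bit-string encoding and the `solves` interaction are invented for illustration and do not reproduce the paper's cellular-automata setup.

```python
import random

BITS = 16
random.seed(0)

def solves(rule, case):
    # Invented interaction: a rule "solves" a case when they agree on most bits.
    agree = BITS - bin(rule ^ case).count("1")
    return agree > BITS // 2

def evolve(pop, scores, mutate_p=0.1):
    # Keep the better half of the population, then add mutated copies of them.
    ranked = [x for _, x in sorted(zip(scores, pop), reverse=True)]
    parents = ranked[: len(pop) // 2]
    children = []
    for p in parents:
        child = p
        for b in range(BITS):
            if random.random() < mutate_p:
                child ^= 1 << b
        children.append(child)
    return parents + children

rules = [random.getrandbits(BITS) for _ in range(20)]
cases = [random.getrandbits(BITS) for _ in range(20)]

for generation in range(30):
    # Rules are rewarded for solving cases; cases are rewarded for defeating rules.
    rule_scores = [sum(solves(r, c) for c in cases) for r in rules]
    case_scores = [sum(not solves(r, c) for r in rules) for c in cases]
    rules = evolve(rules, rule_scores)
    cases = evolve(cases, case_scores)

best = max(sum(solves(r, c) for c in cases) for r in rules)
print("best rule solves", best, "of", len(cases), "current cases")
```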

    Unifying Sparsest Cut, Cluster Deletion, and Modularity Clustering Objectives with Correlation Clustering

    Get PDF
    Graph clustering, or community detection, is the task of identifying groups of closely related objects in a large network. In this paper we introduce a new community-detection framework called LambdaCC that is based on a specially weighted version of correlation clustering. A key component in our methodology is a clustering resolution parameter, λ, which implicitly controls the size and structure of clusters formed by our framework. We show that, by increasing this parameter, our objective effectively interpolates between two different strategies in graph clustering: finding a sparse cut and forming dense subgraphs. Our methodology unifies and generalizes a number of other important clustering quality functions including modularity, sparsest cut, and cluster deletion, and places them all within the context of an optimization problem that has been well studied from the perspective of approximation algorithms. Our approach is particularly relevant in the regime of finding dense clusters, as it leads to a 2-approximation for the cluster deletion problem. We use our approach to cluster several graphs, including large collaboration networks and social networks
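
    A minimal sketch of how a λ-weighted correlation-clustering disagreement objective of this kind can be evaluated for a candidate clustering. The weighting used here (1 − λ for each edge cut between clusters, λ for each co-clustered non-edge) follows the standard unweighted formulation and is an assumption for illustration, as is the toy graph.

```python
from itertools import combinations

def lambda_cc_cost(nodes, edges, clustering, lam):
    """Disagreement cost: (1 - lam) for every edge cut between clusters,
    lam for every non-adjacent pair placed in the same cluster."""
    edge_set = {frozenset(e) for e in edges}
    cost = 0.0
    for u, v in combinations(nodes, 2):
        same = clustering[u] == clustering[v]
        if frozenset((u, v)) in edge_set:
            if not same:
                cost += 1.0 - lam
        elif same:
            cost += lam
    return cost

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]

# Small lambda favours coarse clusters; large lambda favours small, dense clusters.
for lam in (0.1, 0.5, 0.9):
    one_cluster = {n: 0 for n in nodes}
    triangle_plus_singleton = {"a": 0, "b": 0, "c": 0, "d": 1}
    print(lam,
          lambda_cc_cost(nodes, edges, one_cluster, lam),
          lambda_cc_cost(nodes, edges, triangle_plus_singleton, lam))
```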

    A Hybrid Segmentation Pattern of Partial Transmission in Computer Networks to Reduce the Complexity Level

    Get PDF
    Partial transmission sequence (PTS) is a well-known scheme in orthogonal frequency division multiplexing (OFDM) for suppressing the medium-to-high peak-to-average power ratio (PAPR) problem. PTS divides the input data block into sub-blocks that are weighted by phase factors and then recombined. Although PTS can reduce high PAPR, its computational complexity (CC) level limits its practical applicability. There are three main partitioning schemes in PTS: interleaving (IL-PTS), pseudo-random (PR-PTS), and adjacent (Ad-PTS). In this paper, another algorithm called the Hybrid Pseudo-Random and Interleaving Cosine Wave Shape (H-PRC-PTS) is presented, which combines the PR-PTS scheme with the interleaving cosine wave shape scheme (S-IL-C-PTS) proposed in previous work. The results show that the proposed algorithm reduces PAPR as effectively as the PR-PTS scheme while significantly reducing the CC level
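
    As background for the scheme above, the sketch below computes the PAPR of an OFDM symbol and applies a plain PTS search: the data block is split into adjacent sub-blocks, each weighted by a phase factor from {±1, ±j}, and the combination with the lowest PAPR is kept. The parameters (64 subcarriers, 4 sub-blocks, exhaustive phase search, QPSK) are illustrative assumptions; this is a generic PTS baseline, not the proposed H-PRC-PTS algorithm.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, M = 64, 4                       # subcarriers and sub-blocks (illustrative values)
phases = [1, -1, 1j, -1j]          # candidate phase factors

def papr_db(x):
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Random QPSK symbols on N subcarriers
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)

# Adjacent partitioning into M sub-blocks, each transformed to the time domain
sub_time = []
for m in range(M):
    Xm = np.zeros(N, dtype=complex)
    Xm[m * (N // M):(m + 1) * (N // M)] = X[m * (N // M):(m + 1) * (N // M)]
    sub_time.append(np.fft.ifft(Xm))

original = np.fft.ifft(X)

# Exhaustive search over phase-factor combinations (the source of PTS complexity)
best_papr = min(
    papr_db(sum(b * s for b, s in zip(combo, sub_time)))
    for combo in product(phases, repeat=M)
)
print(f"PAPR without PTS: {papr_db(original):.2f} dB, with PTS: {best_papr:.2f} dB")
```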

    Reinforcement Learning: A Survey

    Full text link
    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. Comment: See http://www.jair.org/ for any accompanying file
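
    To illustrate the exploration-exploitation trade-off and learning from delayed reinforcement discussed in this survey, here is a minimal tabular Q-learning sketch with an ε-greedy policy on a toy chain environment; the environment and hyperparameters are invented for the example.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 6, (0, 1)           # chain of states; actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy chain: reaching the right end pays 1, every other move pays 0."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    # Greedy action with random tie-breaking
    best = max(Q[state])
    return random.choice([a for a in ACTIONS if Q[state][a] == best])

for episode in range(500):
    state = 0
    for _ in range(100):                          # cap episode length
        if random.random() < epsilon:
            action = random.choice(ACTIONS)       # explore
        else:
            action = greedy(state)                # exploit
        nxt, reward, done = step(state, action)
        # Q-learning update: bootstrap from the best value of the next state
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt
        if done:
            break

print("greedy action per state:",
      [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
```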

    IoMT innovations in diabetes management: Predictive models using wearable data

    Get PDF
    Diabetes Mellitus (DM) represents a metabolic disorder characterized by consistently elevated blood glucose levels due to inadequate pancreatic insulin production. Type 1 DM (DM1) constitutes the insulin-dependent manifestation from disease onset. Effective DM1 management necessitates daily blood glucose monitoring, pattern recognition, and cognitive prediction of future glycemic levels to ascertain the requisite exogenous insulin dosage. Nevertheless, this methodology may prove imprecise and perilous. The advent of groundbreaking developments in information and communication technologies (ICT), encompassing Big Data, the Internet of Medical Things (IoMT), Cloud Computing, and Machine Learning algorithms (ML), has facilitated continuous DM1 management monitoring. This investigation concentrates on IoMT-based methodologies for the continuous observation of DM1 management, thereby enabling comprehensive characterization of diabetic individuals. Integrating machine learning techniques with wearable technology may yield dependable models for forecasting short-term blood glucose concentrations. The objective of this research is to devise precise person-specific short-term prediction models, utilizing an array of features. To accomplish this, inventive modeling strategies were employed on an extensive dataset comprising glycaemia-related biological attributes gathered from a large-scale passive monitoring initiative involving 40 DM1 patients. The models produced via the Random Forest approach can predict glucose levels within a 30-minute horizon with an average error of 18.60 mg/dL for six-hour data, and 26.21 mg/dL for a 45-minute prediction horizon. These findings have also been corroborated with data from 10 Type 2 DM patients as a proof of concept, thereby demonstrating the potential of IoMT-based methodologies for continuous DM monitoring and management. Funding for open access charge: Universidad de Málaga / CBUA. Plan Andaluz de Investigación, Desarrollo e Innovación (PAIDI), Junta de Andalucía, Spain. María Campo-Valera is grateful for the postdoctoral program Margarita Salas – Spanish Ministry of Universities (financed by European Union – NextGenerationEU). The authors would like to acknowledge project PID2022-137461NB-C32 financed by MCIN/AEI/10.13039/501100011033/FEDER ("Una manera de hacer Europa"), EU
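
    A minimal sketch of the kind of person-specific short-horizon forecasting described above: past glucose readings are windowed into lag features and a Random Forest regressor is trained to predict the value 30 minutes ahead. The synthetic data, the 5-minute sampling interval, and the window sizes are assumptions for illustration, not the study's dataset or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic CGM-like series sampled every 5 minutes (values in mg/dL)
t = np.arange(2000)
glucose = 120 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 8, size=t.size)

LAGS = 12      # past hour of readings (12 x 5 min) used as features
HORIZON = 6    # predict 30 minutes ahead (6 x 5 min)

X = np.array([glucose[i - LAGS:i] for i in range(LAGS, len(glucose) - HORIZON)])
y = np.array([glucose[i + HORIZON] for i in range(LAGS, len(glucose) - HORIZON)])

split = int(0.8 * len(X))                      # chronological split: no look-ahead
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print(f"MAE at 30-minute horizon: {mean_absolute_error(y[split:], pred):.2f} mg/dL")
```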