
    Kinetic Gas Molecule Optimization based Cluster Head Selection Algorithm for minimizing the Energy Consumption in WSN

    As the number of low-cost, low-power sensor nodes increases, so does the size of a wireless sensor network (WSN). Through self-organization, the sensor nodes connect to one another to form a wireless network. Sensor devices are extremely difficult to recharge in unfavourable conditions. Moreover, network longevity, coverage area, scheduling, and data aggregation are the major issues in WSNs, and the success of data aggregation is demonstrated by its ability to extend the life of the network and by the dependability and scalability of the sensor nodes' data transmissions. As a result, clustering methods are considered ideal for making the most efficient use of resources while requiring less energy. All sensor nodes in a cluster communicate with each other via a cluster head (CH) node. In this setting, the primary responsibility of any clustering algorithm is to select the ideal CH subject to a variety of constraints, such as minimising energy consumption and delay. In this paper, Kinetic Gas Molecule Optimization (KGMO) is used to create a new model for CH selection that improves network lifetime and energy. Gas molecule agents move through a search space in pursuit of an optimal solution, with characteristics such as energy, distance, and delay serving as objective functions. On average, the KGMO algorithm yields a 20% increase in network life expectancy and a 19.84% increase in energy stability compared to the traditional Bacterial Foraging Optimization Algorithm (BFO).
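    To make the CH-selection idea concrete, the sketch below shows how gas-molecule agents could search a 2-D field for a cluster-head position that minimises a weighted combination of energy, distance, and delay terms. It is a minimal illustration under assumed values: the network layout, the fitness weights, the cooling schedule, and all constants are invented here and are not the paper's exact KGMO formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative WSN: node coordinates, residual energies, and a sink location
# (all synthetic; the paper's network model is not reproduced here).
nodes = rng.uniform(0, 100, size=(50, 2))          # 50 sensor nodes in a 100x100 field
energy = rng.uniform(0.2, 1.0, size=50)            # residual energy per node
sink = np.array([50.0, 50.0])                      # base station location

def fitness(ch_xy):
    """Weighted cost of placing a cluster head at ch_xy (lower is better).
    Combines mean node-to-CH distance, CH-to-sink distance (a proxy for delay),
    and a penalty for being far from energy-rich nodes. Weights are assumptions."""
    d_nodes = np.linalg.norm(nodes - ch_xy, axis=1)
    d_sink = np.linalg.norm(ch_xy - sink)
    energy_term = np.sum(d_nodes * (1.0 - energy)) / len(nodes)
    return 0.5 * d_nodes.mean() + 0.3 * d_sink + 0.2 * energy_term

# KGMO-style search: gas-molecule agents with positions, velocities, and a
# temperature that cools over iterations (constants are illustrative).
n_agents, n_iter = 20, 100
pos = rng.uniform(0, 100, size=(n_agents, 2))
vel = rng.uniform(-1, 1, size=(n_agents, 2))
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for t in range(n_iter):
    temp = 0.95 ** t                                # cooling schedule (assumed)
    r1, r2 = rng.random((n_agents, 1)), rng.random((n_agents, 1))
    vel = temp * 0.9 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

# The node nearest the optimised position is elected cluster head.
ch_index = int(np.linalg.norm(nodes - gbest, axis=1).argmin())
print("Elected CH node:", ch_index, "fitness:", fitness(nodes[ch_index]))
```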

    Methodology for the development of data clustering techniques using machine learning

    Context: Today, the use of large amounts of data acquired from various electronic, optical, or other measurement devices raises the problem of extracting the information of interest from the acquired samples. Correctly grouping the data is necessary to obtain relevant, accurate information that evidences the physical phenomenon under study. Methodology: This work presents the development and evolution of a five-stage methodology for building data clustering techniques using machine learning and artificial intelligence. The methodology consists of five phases, called analysis, design, development, evaluation, and distribution, uses open-source standards, and is grounded in the unified languages for the interpretation of software in engineering. Results: The methodology was validated by creating two data analysis methods, with an average development time of 20 weeks, obtaining precision values 40% and 29% higher than the classic k-means and fuzzy c-means clustering algorithms. Additionally, a massive experimentation methodology based on automated unit tests grouped, labelled, and validated 3.6 million samples, accumulated over 100 runs on groups of 900 samples, in approximately 2 hours. Conclusions: The results show that the methodology guides the systematic development of data clustering techniques for specific problems on databases of samples with quantitative attributes, such as the channel parameters of a communication system or the segmentation of images using the RGB values of the pixels. Even when both software and hardware are developed, execution is more versatile than in purely theoretical applications.
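    As a point of reference for one of the target problems mentioned above (image segmentation from the RGB values of pixels), the following sketch applies the classic k-means baseline that the methodology is compared against. The image data is synthetic and scikit-learn's KMeans stands in for any concrete implementation; the methodology's own clustering technique is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic "image" of RGB pixels standing in for a real acquisition.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)

# Flatten to one sample per pixel, each with quantitative attributes R, G, B.
pixels = image.reshape(-1, 3).astype(float)

# Group the pixels into k colour clusters.
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

# Rebuild a segmented image where every pixel takes its cluster-centre colour.
segmented = km.cluster_centers_[km.labels_].reshape(image.shape).astype(np.uint8)
print("cluster sizes:", np.bincount(km.labels_))
```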

    Identification of Clear Text Data Obfuscated Within Active File Slack

    Text on a hard drive can be obfuscated by utilizing the slack space of files. Text can be inserted into the area between the end of the file's data and the end of the New Technology File System (NTFS) cluster (the smallest unit of drive space allocated to a file) in which the file is stored; data placed there is hidden from traditional methods of viewing. If the hard drive is large, how does a digital forensics expert know where to look for text that has been obfuscated? Searching through a large hard drive could take a substantial amount of time that the expert may not be able to justify. And if the expert lacks the knowledge to properly search a hard drive for obfuscated clear text using data carving concepts, how will that text be located and identified? To address this, an algorithm was proposed and tested, which successfully identified clear text data in slack space with an average identification rate of 99.31%. This algorithm is a reliable form of slack space analysis that can be used in conjunction with other data extraction methods to see the full scope of evidence on a drive.
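    A minimal sketch of the general idea (not the paper's exact algorithm) is shown below: compute the slack region between the logical end of a file and the end of its last allocated cluster, then report runs of printable ASCII found there. The cluster size, minimum run length, and the raw cluster bytes are assumptions; in practice the slack bytes must come from a forensic image or raw device read, since ordinary file APIs never return them.

```python
import re

CLUSTER_SIZE = 4096          # typical NTFS cluster size; an assumption here
MIN_RUN = 6                  # shortest printable run worth reporting

def slack_region(file_size: int, cluster_size: int = CLUSTER_SIZE) -> tuple[int, int]:
    """Return (start, end) byte offsets of the file slack: the gap between the
    logical end of the file and the end of its last allocated cluster."""
    allocated = -(-file_size // cluster_size) * cluster_size   # round up to cluster boundary
    return file_size, allocated

def printable_runs(raw: bytes, min_run: int = MIN_RUN) -> list[str]:
    """Extract runs of printable ASCII at least min_run bytes long."""
    pattern = rb"[ -~\t\r\n]{%d,}" % min_run
    return [m.group().decode("ascii", "replace") for m in re.finditer(pattern, raw)]

# Usage sketch: 'cluster_bytes' would come from a raw read of the file's clusters
# (e.g. from a forensic image); here they are fabricated for illustration.
file_size = 10_000
start, end = slack_region(file_size)
cluster_bytes = b"\x00" * file_size + b"hidden note in slack " * 4 + b"\x00" * 200
candidates = printable_runs(cluster_bytes[start:end])
print(f"slack spans bytes {start}..{end}; candidate clear text: {candidates}")
```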

    Financial Risks of Russian Oil Companies in Conditions of Volatility of Global Oil Prices

    The development of scientific approaches to assessing and diagnosing the financial risks of the oil industry in the Russian Federation becomes a high-priority task under high volatility of oil prices in the world energy market and a continuing sanctions regime. The article shows the main threats to the financial stability of oil companies in Russia. Using cluster analysis, a system of indicators is proposed that determines the level of financial risk of Russian oil companies. Based on the method of expert assessments and fuzzy sets, a classification of financial risk levels for the oil industry is proposed. The integrated financial risk level of the oil industry was calculated, and scenarios of its development for 2018–2020 were forecast by means of regression modeling. A system of measures to improve the stability of oil companies and prevent functional financial risks is put forward. The practical implementation of the research results will provide a basis for timely diagnosis of financial risks and the development of preventive measures to neutralize them in the Russian oil industry.
    Keywords: Oil Industry, Oil Companies, Financial Risks, Oil Prices, Financial Stability of Oil Industry
    JEL Classifications: Q43; Q41; G32; L52
    DOI: https://doi.org/10.32479/ijeep.735
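    Purely as an illustration of the fuzzy-set style of risk-level classification mentioned above, the sketch below assigns a qualitative level to a normalised integrated risk score using triangular membership functions. The level names, thresholds, and the example score are invented here and do not reproduce the article's indicator system or expert weights.

```python
# Illustrative only: triangular fuzzy membership functions for risk levels.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Membership of x in a triangular fuzzy set with support [a, c] and peak at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical risk levels defined over a normalised integrated risk score in [0, 1].
LEVELS = {
    "low":      (0.0, 0.0, 0.35),
    "moderate": (0.2, 0.5, 0.8),
    "high":     (0.65, 1.0, 1.0),
}

def classify(score: float) -> str:
    """Assign the risk level with the highest membership degree for the given score."""
    memberships = {name: triangular(score, *abc) for name, abc in LEVELS.items()}
    return max(memberships, key=memberships.get)

print(classify(0.42))   # -> 'moderate' under this illustrative configuration
```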

    A Hybrid Chimp Optimization Algorithm and Generalized Normal Distribution Algorithm with Opposition-Based Learning Strategy for Solving Data Clustering Problems

    Full text link
    This paper is concerned with data clustering based on the connectivity principle, which categorizes similar and dissimilar data into different groups. Although classical clustering algorithms such as K-means are efficient techniques, they often become trapped in local optima and converge slowly on high-dimensional problems. To address these issues, many successful meta-heuristic optimization algorithms and intelligence-based methods have been introduced to attain the optimal solution in a reasonable time; they are designed to escape local optima by allowing flexible movements or random behaviors. In this study, we conceptualize a powerful approach using three main components: the Chimp Optimization Algorithm (ChOA), the Generalized Normal Distribution Algorithm (GNDA), and the Opposition-Based Learning (OBL) method. Firstly, two versions of ChOA with two different independent-groups strategies and seven chaotic maps, entitled ChOA(I) and ChOA(II), are presented to achieve the best possible result for data clustering. Secondly, a novel combination of the ChOA and GNDA algorithms with the OBL strategy is devised to address the major shortcomings of the original algorithms. Lastly, the proposed ChOAGNDA method is a Selective Opposition (SO) algorithm based on ChOA and GNDA that can be used to tackle large and complex real-world optimization problems, particularly data clustering applications. The results are evaluated against seven popular meta-heuristic optimization algorithms and eight recent state-of-the-art clustering techniques. Experimental results show that the proposed work significantly outperforms existing methods in minimizing the Sum of Intra-Cluster Distances (SICD), obtaining the lowest Error Rate (ER), accelerating convergence, and finding optimal cluster centers.
    Comment: 48 pages, 14 tables, 12 figures
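    Two of the ingredients named above are easy to show in isolation: the Opposition-Based Learning step (the opposite of a point x in [a, b] is a + b - x) and the Sum of Intra-Cluster Distances objective. The sketch below uses OBL to seed a population of candidate cluster-centre sets and ranks them by SICD on toy data; it illustrates those two components only, not the full ChOAGNDA algorithm, and all sizes and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sicd(centers: np.ndarray, data: np.ndarray) -> float:
    """Sum of Intra-Cluster Distances: each point's distance to its nearest centre."""
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)  # shape (n, k)
    return float(d.min(axis=1).sum())

def obl_init(n_agents: int, k: int, data: np.ndarray) -> np.ndarray:
    """Opposition-Based Learning initialisation: generate random candidate
    centre sets, add their opposites (a + b - x), and keep the best half by SICD."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    base = rng.uniform(lo, hi, size=(n_agents, k, data.shape[1]))
    opposite = lo + hi - base                       # element-wise opposite points
    pool = np.concatenate([base, opposite])
    scores = np.array([sicd(c, data) for c in pool])
    return pool[np.argsort(scores)[:n_agents]]

# Toy data: three Gaussian blobs; candidate solutions are sets of k = 3 centres.
data = np.vstack([rng.normal(m, 0.3, size=(60, 2)) for m in ([0, 0], [3, 3], [0, 4])])
population = obl_init(n_agents=10, k=3, data=data)
print("best initial SICD:", sicd(population[0], data))
```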

    Use of high-resolution LiDAR elevation data for digital mapping of soil parent material

    Knowledge of the earth's morphology is essential to the understanding of many geomorphic and hydrologic processes. Recent advances in remote sensing have significantly improved our ability to assess the earth's surface. Among them, LiDAR elevation data permit the production of high-resolution digital elevation models (DEMs) over large areas. LiDAR is a major technological advance because it allows geoscientists to visualize the earth's morphology in high detail, even resolving low-relief landforms in forested areas where the surface is obstructed by vegetation cover. Such an advance calls for the development of new approaches to realize the scientific potential of this spatial data. In this context, the present work develops two digital mapping approaches that use LiDAR elevation data to assess the earth's subsurface composition.
    The first approach uses the location of low-relief beach ridges observed on LiDAR-derived DEMs to map the extent of a large and regionally important paleo-sea, the Champlain Sea. This approach allowed us to accurately map the 65,000 km2 area once inundated by sea water; the resulting model supports the assessment of the distribution of marine and littoral sediments in the St. Lawrence Lowlands. The second approach uses the relationship between field-acquired samples of soil parent material (SPM) and LiDAR-derived topographic attributes to map SPM at high resolution and at a regional scale on the Canadian Shield. To do so, we used a novel approach combining object-based image analysis (OBIA) with a classification tree algorithm, producing a fine-resolution map of SPM over 185 km2 in a heterogeneous post-glaciation Precambrian Shield setting. The knowledge obtained from producing these two models allowed us to conceptualize the subsurface composition at the limit between the St. Lawrence Lowlands and the Canadian Shield. This insight provides researchers and resource managers with a more detailed understanding of the geomorphology of the area and contributes to improving our capacity to grasp ecosystem services and to predict environmental hazards related to subsurface processes.
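    To illustrate the classification-tree step of the second approach, the sketch below trains scikit-learn's DecisionTreeClassifier to predict a parent-material class from a few LiDAR-style topographic attributes. The attributes (slope, curvature, wetness index), the class labels, and the synthetic data are assumptions for illustration; the study's actual OBIA segmentation and attribute set are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)

# Synthetic stand-in for LiDAR-derived topographic attributes computed per
# object/segment (slope, curvature, topographic wetness index).
n = 600
X = np.column_stack([
    rng.uniform(0, 35, n),      # slope (degrees)
    rng.normal(0, 1, n),        # curvature
    rng.uniform(2, 14, n),      # topographic wetness index
])

# Hypothetical parent-material labels loosely tied to the attributes so the
# tree has structure to learn (wet -> organic, steep -> till, else sand).
y = np.where(X[:, 2] > 10, "organic",
             np.where(X[:, 0] > 20, "till", "glaciofluvial sand"))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(tree.score(X_test, y_test), 3))
```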