
    Current Studies and Applications of Krill Herd and Gravitational Search Algorithms in Healthcare

    Full text link
    Nature-Inspired Computing (NIC) is a relatively young field that seeks new ways of computing by studying how natural phenomena operate and applying those insights to complicated problems in many contexts. This has led to ground-breaking research in a variety of domains, including artificial immune systems, neural networks, swarm intelligence, and evolutionary computing. NIC techniques are used in biology, physics, engineering, economics, and management. Meta-heuristic algorithms are successful, efficient, and resilient in real-world classification, optimization, forecasting, and clustering tasks, as well as in engineering and science problems. Two active NIC paradigms are the Gravitational Search Algorithm and the Krill Herd algorithm. This publication gives a global and historical review of research on using the Krill Herd Algorithm (KH) and the Gravitational Search Algorithm (GSA) in medicine and healthcare. Comprehensive surveys have been conducted on other nature-inspired algorithms, as well as on KH and GSA in general; however, no survey of KH and GSA in the healthcare field has been undertaken. As a result, this work conducts a thorough review of KH and GSA to assist researchers in applying them in diverse domains or hybridizing them with other popular algorithms. The various versions of the KH and GSA algorithms and their applications in healthcare are thoroughly reviewed, with an in-depth examination of both algorithms in terms of application, modification, and hybridization. The goal of the study is to offer a perspective on GSA and KH, particularly for academics interested in investigating the capabilities and performance of these algorithms in the healthcare and medical domains. Comment: 35 pages
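For readers unfamiliar with the two algorithms the survey centres on, the following is a minimal, self-contained sketch of the standard Gravitational Search Algorithm update rules (fitness-derived masses, a decaying gravitational constant, and force-driven velocity updates). The function name, parameter values, and sphere-function test objective are illustrative assumptions, not taken from any of the surveyed healthcare applications.

```python
import numpy as np

def gsa_minimize(objective, dim, n_agents=30, iters=200,
                 lower=-10.0, upper=10.0, g0=100.0, alpha=20.0):
    """Minimal Gravitational Search Algorithm sketch (minimization)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lower, upper, (n_agents, dim))   # agent positions
    v = np.zeros_like(x)                             # agent velocities

    for t in range(iters):
        fit = np.array([objective(a) for a in x])
        best, worst = fit.min(), fit.max()
        # Normalized gravitational masses (better fitness -> larger mass).
        m = (fit - worst) / (best - worst + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)          # decaying gravity constant

        # Only the Kbest heaviest agents exert force; K shrinks over time.
        k = max(1, int(n_agents * (1 - t / iters)))
        kbest = np.argsort(-M)[:k]

        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in kbest:
                if i == j:
                    continue
                diff = x[j] - x[i]
                dist = np.linalg.norm(diff) + 1e-12
                # F_ij = G * M_i * M_j / R_ij * (x_j - x_i); a_i = F_i / M_i,
                # so M_i cancels and only M_j remains.
                acc[i] += rng.random(dim) * G * M[j] * diff / dist
        v = rng.random((n_agents, dim)) * v + acc
        x = np.clip(x + v, lower, upper)

    fit = np.array([objective(a) for a in x])
    return x[fit.argmin()], fit.min()

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = gsa_minimize(lambda a: float(np.sum(a**2)), dim=5)
```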

    Lung Disease Classification using Dense Alex Net Framework with Contrast Normalisation and Five-Fold Geometric Transformation

    Get PDF
    Lung disease is one of the leading causes of death worldwide. Most cases of lung disease are detected only when the disease has reached an advanced stage. Therefore, the development of systems and methods that can diagnose it quickly and at an early stage plays a vital role. Detecting differences among lung cancers currently requires an accurate diagnosis of the cancer type; however, improving the accuracy and reducing the training time of the diagnosis remains a challenge. In this study, we have developed an automated classification scheme for lung cancer presented in histopathological images using a Dense Alex Net framework. The proposed methodology comprises several phases, including pre-processing, contrast normalization, data augmentation, and classification. Initially, the pre-processing step is applied to diminish the noisy content present in the image. Following pre-processing, contrast normalization is used to maintain the same illumination factor among the histopathological lung images. Afterwards, a data augmentation phase is carried out to further enlarge the dataset and avoid over-fitting problems. Finally, the Dense Alex Net is utilized for classification; it comprises five convolutional layers, one multi-scale convolution layer, and three fully connected layers. In evaluation experiments, the proposed approach was trained using our original database to provide rich and meaningful features. The accuracy attained by the proposed methodology is 93%, which is the highest among the compared existing algorithms
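The abstract specifies only the layer counts (five convolutional layers, one multi-scale convolution layer, and three fully connected layers), so the sketch below is one plausible PyTorch rendering of such a network. All channel counts, kernel sizes, the parallel 1x1/3x3/5x5 multi-scale branch, and the three-class output are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DenseAlexNetSketch(nn.Module):
    """Rough sketch of the described architecture: five convolutional layers,
    one multi-scale convolution layer, and three fully connected layers."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(                 # five conv layers
            nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2),
        )
        # Multi-scale convolution: parallel 1x1 / 3x3 / 5x5 branches, concatenated.
        self.multi_scale = nn.ModuleList([
            nn.Conv2d(256, 85, k, padding=k // 2) for k in (1, 3, 5)
        ])
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(               # three fully connected layers
            nn.Linear(255 * 6 * 6, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = torch.cat([branch(x) for branch in self.multi_scale], dim=1)
        x = self.pool(x).flatten(1)
        return self.classifier(x)

logits = DenseAlexNetSketch()(torch.randn(1, 3, 224, 224))  # -> shape (1, 3)
```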

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Get PDF
    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. The decision-making process involves choosing between several options, and our desire to make the "better" decision drives the choice. An objective function or performance index describes how the goodness of each alternative is assessed, and the theory and methods of optimization are concerned with picking the best one. There are two types of optimization methods: deterministic and stochastic. The former is the traditional approach, which works well for small and linear problems but struggles to address most real-world problems, which are highly dimensional, nonlinear, and complex in nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle these types of challenges and are more common nowadays. This study proposes two stochastic, robust, swarm-based metaheuristic optimization methods. Both are hybrid algorithms formulated by combining the Particle Swarm Optimization and Salp Swarm Optimization algorithms. These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling across multiple fog environments. Many computing environments, such as fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments because they occupy the fog's resources and keep them busy. Thus, the fog environments generally have fewer resources available during these types of attacks, which affects the scheduling of submitted Internet of Things (IoT) workflows. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing both the number of workflows that miss their deadlines and the number of tasks that are offloaded to the cloud. Hence, this study proposes a hybrid optimization algorithm as a solution for the workflow scheduling issue across various fog computing locations. The proposed algorithm combines the Salp Swarm Algorithm (SSA) and Particle Swarm Optimization (PSO). To deal with the effects of DDoS attacks on fog computing locations, two discrete-time Markov chain models were used: one calculates the average network bandwidth available in each fog, while the other determines the average number of virtual machines available in every fog. DDoS attacks are addressed at various levels, and the approach predicts the influence of a DDoS attack on fog environments. Based on the simulation results, the proposed method can significantly reduce the number of offloaded tasks transferred to cloud data centers and can also decrease the number of workflows with missed deadlines. Moreover, the significance of green fog computing is growing, since energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. Efficient scheduling methods have the potential to reduce energy usage by allocating tasks to the most appropriate resources, considering the energy efficiency of each individual resource. To address these challenges, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to enhance the energy efficiency of processors.
The experimental findings demonstrate that the proposed method, combined with the DVFS technique, yields improved outcomes, most notably reduced energy consumption. Consequently, this approach emerges as a more environmentally friendly and sustainable solution for fog computing environments
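The abstract does not state how PSO and SSA are combined, so the following is only a generic sketch of one common hybridization pattern: half the population follows PSO velocity updates while the other half follows the Salp Swarm leader/follower rules, with a shared global best. The encoding comment, parameter values, and test objective are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def hybrid_pso_ssa(objective, dim, n=40, iters=300, lb=0.0, ub=1.0):
    """Illustrative hybrid: half the swarm moves with PSO velocity updates,
    half with Salp Swarm leader/follower updates; both share the global best."""
    rng = np.random.default_rng(1)
    x = rng.uniform(lb, ub, (n, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(a) for a in x])
    g = pbest[pbest_f.argmin()].copy()                  # global best / food source

    for t in range(iters):
        c1 = 2 * np.exp(-(4 * (t + 1) / iters) ** 2)    # SSA exploration coefficient
        for i in range(n):
            if i < n // 2:                               # PSO half
                v[i] = (0.7 * v[i]
                        + 1.5 * rng.random(dim) * (pbest[i] - x[i])
                        + 1.5 * rng.random(dim) * (g - x[i]))
                x[i] = x[i] + v[i]
            elif i == n // 2:                            # SSA leader follows the food source
                sign = np.where(rng.random(dim) < 0.5, -1.0, 1.0)
                x[i] = g + sign * c1 * (rng.random(dim) * (ub - lb) + lb)
            else:                                        # SSA followers average with predecessor
                x[i] = (x[i] + x[i - 1]) / 2.0
            x[i] = np.clip(x[i], lb, ub)
            f = objective(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), f
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy use: a workflow schedule could be encoded as a vector of task-to-fog
# priorities in [0, 1] and decoded inside `objective` (makespan/energy cost).
best, cost = hybrid_pso_ssa(lambda a: float(np.sum((a - 0.3) ** 2)), dim=10)
```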

    An Evaluation of Tools, Parameters, and Objectives in Building Facade Optimization Research

    Get PDF
    This research paper presents an analysis of building facade optimization studies. The shift toward simulation-based design methods empowers architects to conduct detailed environmental performance simulations prior to construction, enabling design adjustments based on simulation outcomes. Various quantitative methods have emerged for assessing environmental factors, including daylight availability, glare mitigation, and thermal comfort. Moreover, combining simulation tools with optimization algorithms has enhanced the design process, facilitating the generation of multiple solutions aligned with specific performance criteria. To gain an overall perspective on the present state of building facade optimization, a comprehensive review of related peer-reviewed papers was conducted. This review encompasses an evaluation of building types, geographical locations, design parameters, optimization objectives, and the simulation and optimization tools employed in each study. The primary aim is to identify frequently addressed optimization objectives in building performance research and critical parameters within the building facade. The results of this analysis hold significant implications for professionals within the fields of building science and design. By identifying commonly explored optimization objectives, such as maximizing daylighting, controlling glare, and enhancing thermal comfort, this research provides valuable insights for future research endeavors and design methodologies. Furthermore, recognizing pivotal factors within the building facade, such as architectural form, wall composition, insulation materials, glazing specifications, and shading strategies, contributes to a more profound understanding of the key determinants influencing building performance

    Biomimicry green façade: integrating nature into building façades for enhanced building envelope efficiency

    Get PDF
    Incorporating natural elements into the design of building façades, such as green façades, has emerged as a promising strategy for achieving sustainable and energy-efficient buildings. Biomimicry has become a key inspiration for the development of innovative green façade systems. However, there is still progress to be made in maximising their aesthetic and structural performance, and the application of advanced and generative design methods is imperative for optimising green façade architecture. This research aims to present a generative design-based prototype of a biomimicry green façade substrate with photosynthetic microorganisms to enhance building façade efficiency. The concept of green façades offers numerous advantages, as it can be adapted to a wide range of building structures and implemented in various climates. To achieve this, Rhino and Grasshopper were utilized to design the generative and parametric substrate, optimizing the architectural form using a genetic algorithm. Consequently, a bio-façade prototype was developed, determining the optimal number and shape of coral envelopes to maintain cyanobacteria within a generative and parametric façade. Furthermore, the photosynthetic microorganism façade acted as an adaptive façade, effectively improving visual and thermal comfort, daylighting, and Indoor Environmental Quality performance
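In the described workflow the genetic algorithm runs inside Rhino/Grasshopper against environmental simulations; the stand-alone sketch below only illustrates the general genetic-algorithm loop over hypothetical façade parameters (module count and a shape factor), with a placeholder fitness function standing in for the daylight and thermal simulations.

```python
import numpy as np

# Minimal genetic-algorithm sketch over façade parameters. The fitness function
# is a placeholder; in the described workflow it would be a Rhino/Grasshopper
# environmental simulation (daylight, thermal comfort) rather than this formula.
rng = np.random.default_rng(2)

def fitness(ind):
    n_modules, shape = ind                  # hypothetical design variables
    return -((n_modules - 24) ** 2 + 10 * (shape - 0.6) ** 2)  # higher is better

def evolve(pop_size=30, generations=60):
    pop = np.column_stack([rng.integers(4, 60, pop_size),       # module count
                           rng.uniform(0.1, 1.0, pop_size)])    # shape factor
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(-scores)[:pop_size // 2]]       # truncation selection
        pairs = parents[rng.integers(0, len(parents), (pop_size - len(parents), 2)).T]
        children = np.column_stack([pairs[0][:, 0], pairs[1][:, 1]])  # crossover
        children[:, 0] = np.clip(children[:, 0] + rng.integers(-2, 3, len(children)), 4, 60)
        children[:, 1] = np.clip(children[:, 1] + rng.normal(0, 0.05, len(children)), 0.1, 1.0)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

best_modules, best_shape = evolve()
```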

    Algorithm Selection in Multimodal Medical Image Registration

    Get PDF
    Medical image acquisition technology has improved significantly over the last several decades, and clinicians now rely on medical images to diagnose illnesses, determine treatment protocols, and plan surgery. Researchers divide medical images into two types: functional and anatomical. Anatomical imaging, such as magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, and other systems, enables medical personnel to examine the body internally with great accuracy, thereby avoiding the risks associated with exploratory surgery. Functional (or physiological) imaging systems include single-photon emission computed tomography (SPECT), positron emission tomography (PET), and other methods, which detect or evaluate variations in absorption, blood flow, metabolism, and regional chemical composition. Notably, one of these imaging modalities alone cannot usually supply doctors with adequate information; data obtained from several images of the same subject generally provide complementary information via a process called medical image registration. Image registration may be defined as the process of geometrically mapping one image's coordinate system to the coordinate system of another image acquired from a different perspective and with a different sensor. Registration plays a crucial role in medical image assessment because it helps clinicians observe the developing trend of a disease and take appropriate measures accordingly. Medical image registration (MIR) has several applications: radiation therapy, tumour diagnosis and recognition, template atlas application, and surgical guidance systems. There are two types of registration: manual registration and computer-based registration. In manual registration, the radiologist or physician completes all registration tasks interactively with visual feedback provided by the computer system, which can lead to serious problems. For instance, investigations conducted by two experts are not identical, and registration correctness depends on the user's assessment of the relationship between anatomical features. Furthermore, it may take a long time for the user to achieve proper alignment, and the outcomes vary from user to user. As a result, the outcomes of manual alignment are doubtful and unreliable. The second approach is computer-based multimodal medical image registration, which targets various medical images and an array of application types. Automatic registration matches standard recognized characteristics or voxels in pre- and intra-operative imaging without user input, and registration of multimodal images is the initial step in integrating data from several images. Automatic image registration has emerged to improve the reliability, robustness, accuracy, and processing time of manual image registration. While such registration algorithms offer advantages when applied to some medical images, their use with others is accompanied by disadvantages. No registration technique can outperform all others on every input dataset, owing to the variability of medical imaging and the diverse demands of applications. Given the many available algorithms, choosing the one that adapts best to the task is therefore vital.
The essential factor is to choose which method is most appropriate for the situation. The Algorithm Selection Problem has emerged in numerous research disciplines, including medical diagnosis, machine learning, optimization, and computation, where choosing the most suitable strategy for a particular problem seeks to minimize these issues. This study delivers a universal and practical framework for choosing a multimodal registration algorithm. Its primary goal is to introduce a generic structure for constructing a medical image registration system capable of selecting the best registration process from a range of registration algorithms for the various datasets used. Three strategies were constructed to examine the framework. The first transforms the algorithm selection problem into a classification problem. The second investigates the effect of various parameters, such as optimization control points, on the optimal selection. The third establishes a framework for choosing the optimal registration algorithm for a given dataset based on two primary criteria: registration algorithm applicability and performance measures. This approach relies on machine learning methods and artificial neural networks to determine which candidate is most promising. Several experiments and scenarios have been conducted, and the results reveal that the proposed framework strategy achieves the best performance, including high accuracy, reliability, robustness, and efficiency, with low processing time
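As a rough illustration of the first strategy (casting algorithm selection as classification), the sketch below trains a classifier on per-image-pair feature vectors labelled with the best-performing registration algorithm. The feature set, the candidate algorithm names, and the random training data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each image pair is described by a feature vector and labeled with the index
# of the registration algorithm that performed best on it (determined offline
# by running all candidates). Algorithm names below are hypothetical.
ALGORITHMS = ["mutual_information_rigid", "demons_deformable", "bspline_ffd"]

def extract_features(fixed, moving):
    """Toy feature vector for an image pair (replace with real descriptors)."""
    return np.array([fixed.mean(), fixed.std(), moving.mean(), moving.std(),
                     np.abs(fixed.mean() - moving.mean())])

# Placeholder training data standing in for features/labels gathered offline.
rng = np.random.default_rng(3)
X_train = rng.random((200, 5))
y_train = rng.integers(0, len(ALGORITHMS), 200)

selector = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def select_algorithm(fixed, moving):
    idx = selector.predict(extract_features(fixed, moving)[None, :])[0]
    return ALGORITHMS[idx]

print(select_algorithm(rng.random((64, 64)), rng.random((64, 64))))
```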

    Multicasting Model for Efficient Data Transmission in VANET

    Get PDF
    VANETs (Vehicular Ad hoc Networks) are networks made up of a number of vehicular nodes that are free to enter and leave the network. Among the routing protocols used in such networks, the Location Aided Routing (LAR) protocol is the most frequently utilized. In LAR, route request packets are flooded from the source node across many paths using a broadcasting strategy, and vehicles that have a direct path to the destination send route reply packets back to the source. The route from source to destination is determined by the least number of hops and the sequence number. This research study uses a multicasting approach to construct a path from the source node to the destination node. Within this multicasting strategy, root nodes are selected from the network for data routing, and the path between the source and the destination is chosen through a root node. The suggested approach is implemented in NS2, and several parametric values are computed to produce analytical findings
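The abstract only outlines the idea of routing multicast traffic through selected root nodes; the toy sketch below illustrates one way to interpret it, choosing the candidate root that minimizes total hop count from the source to the multicast destinations over an invented topology. Node names and the cost rule are assumptions, and the actual work was implemented in NS2.

```python
from collections import deque

# Invented vehicular topology: adjacency lists keyed by node name.
topology = {
    "S":  ["V1", "V2"],
    "V1": ["S", "V3", "R1"],
    "V2": ["S", "R2"],
    "R1": ["V1", "D1", "D2"],
    "R2": ["V2", "D2"],
    "V3": ["V1", "D1"],
    "D1": ["R1", "V3"], "D2": ["R1", "R2"],
}

def hops(src, dst):
    """Breadth-first search hop count between two vehicles."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in topology[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def pick_root(source, destinations, candidates):
    # Cost of a root = hops(source -> root) + sum of hops(root -> each destination).
    return min(candidates,
               key=lambda r: hops(source, r) + sum(hops(r, d) for d in destinations))

print(pick_root("S", ["D1", "D2"], candidates=["R1", "R2"]))  # -> "R1"
```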

    Algorithm selection in structural optimization

    Get PDF
    Thesis (Ph.D.), Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 153-162).
    Structural optimization is largely unused as a practical design tool, despite an extensive academic literature which demonstrates its potential to dramatically improve design processes and outcomes. Many factors inhibit optimization's application. Among them is the requirement for engineers, who generally lack the requisite expertise, to choose an optimization algorithm for a given problem. A suitable choice of algorithm improves the resulting design and reduces computational cost, yet the field of optimization does little to guide engineers in selecting from an overwhelming number of options. The goal of this dissertation is to aid, and ultimately to automate, algorithm selection, thus enhancing optimization's applicability in real-world design. The initial chapters examine the extent of the problem by reviewing relevant literature and by performing a short, empirical study of algorithm performance variation. We then specify hundreds of bridge design problems by methodically varying problem characteristics, and solve each of them with eight commonly used nonlinear optimization algorithms. The resulting, extensive data set is used to address the algorithm selection problem. The results are first interpreted from an engineering perspective to ensure their validity as solutions to realistic problems. Algorithm performance trends are then analyzed, showing that no single algorithm outperforms the others on every problem. Those that achieve the best solutions are often computationally expensive, and those that converge quickly often arrive at poor solutions. Some problem features, such as the numbers of design variables and constraints, the structural type, and the nature of the objective function, correlate with algorithm performance. This knowledge and the generated data set are then used to develop techniques for automatic selection of optimization algorithms, based on a range of supervised learning methods. Compared to a set of current, manual selection strategies, these techniques select the best algorithm almost twice as often, lead to better-quality solutions and reduced computational cost, and, on a randomly chosen set of mass minimization problems, reduce average material use by 9.4%. The dissertation concludes by outlining future research on algorithm selection, on integrating these techniques in design software, and on adapting structural optimization to the realities of design. Keywords: algorithm selection, structural optimization, structural design, machine learning. By Rory Clune. Ph.D.
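The core empirical observation, that no single algorithm dominates and that solution quality trades off against computational cost, can be reproduced in miniature by running several candidate solvers on one constrained problem and recording quality and cost. The toy problem and the three SciPy methods below are illustrative stand-ins, not the eight algorithms or the bridge design problems used in the dissertation.

```python
import time
import numpy as np
from scipy.optimize import minimize, rosen

# Toy constrained problem standing in for a structural sizing task. The point
# is the comparison loop: run several candidate algorithms on the same problem
# and record objective value, function evaluations, and wall-clock time.
constraints = [{"type": "ineq", "fun": lambda x: 4.0 - np.sum(x**2)}]  # ||x||^2 <= 4
x0 = np.full(4, 0.5)

for method in ("SLSQP", "trust-constr", "COBYLA"):
    t0 = time.perf_counter()
    res = minimize(rosen, x0, method=method, constraints=constraints)
    print(f"{method:12s} f={res.fun:.4f} nfev={res.nfev} "
          f"time={time.perf_counter() - t0:.3f}s")
```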

    An Analysis Review: Optimal Trajectory for 6-DOF-based Intelligent Controller in Biomedical Application

    Get PDF
    With technological advancements, robots have begun to be utilized in numerous sectors, including industry, agriculture, and medicine. Optimizing the path planning of robot manipulators is a fundamental aspect of robot research with promising future prospects. Precise manipulator trajectories can enhance the efficacy of a variety of robot tasks, such as workshop operations, crop harvesting, and medical procedures. Trajectory planning for robot manipulators is one of the fundamental robot technologies, and manipulator trajectory accuracy can be enhanced by the design of their controllers. However, the majority of controllers devised to date have been incapable of effectively resolving the nonlinearity and uncertainty issues of high-degree-of-freedom manipulators; overcoming these issues is essential for enhancing their tracking performance. Developing practical path-planning algorithms to efficiently complete robot functions in autonomous robotics is critical. In addition, designing a collision-free path within the physical limitations of the robot is a very challenging task due to the complex environment surrounding the dynamics and kinematics of robots with different degrees of freedom (DoF) and/or multiple arms. This paper examines the advantages and disadvantages of current robot motion planning methods with respect to completeness, scalability, safety, stability, smoothness, accuracy, optimization, and efficiency
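As a small concrete example of the joint-space trajectory planning the review discusses, the sketch below generates a cubic polynomial trajectory for a 6-DOF arm with zero start and end velocities. The joint values and timing are invented, and a real planner would additionally enforce joint limits, velocity/acceleration bounds, and collision constraints along the path.

```python
import numpy as np

def cubic_trajectory(q0, qf, T, n_samples=100):
    """Cubic joint-space trajectory from q0 to qf over time T,
    with zero velocity at both endpoints."""
    q0, qf = np.asarray(q0, float), np.asarray(qf, float)
    t = np.linspace(0.0, T, n_samples)[:, None]
    s = t / T
    # q(t) = q0 + (qf - q0) * (3 s^2 - 2 s^3): zero boundary velocity.
    q = q0 + (qf - q0) * (3 * s**2 - 2 * s**3)
    qd = (qf - q0) * (6 * s - 6 * s**2) / T          # joint velocities
    return t.ravel(), q, qd

# Illustrative 6-DOF start and goal configurations (radians).
t, q, qd = cubic_trajectory(q0=[0, -0.5, 0.3, 0, 1.0, 0],
                            qf=[1.2, 0.4, -0.6, 0.8, 0.2, 1.5], T=2.0)
print(q.shape)   # (100, 6): one column of joint angles per DOF
```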