
    An optimized deep learning model for optical character recognition applications

    Convolutional neural networks (CNNs) are among the most widely used deep learning architectures across a broad range of applications. In recent years, the continuing extension of CNNs into increasingly complicated domains has made their training more difficult, so researchers have adopted optimized hybrid algorithms to address this problem. In this work, a novel approach based on a chaotic black hole algorithm was developed to train CNNs and optimize their performance by avoiding entrapment in local minima. The logistic chaotic map was used to initialize the population instead of the uniform distribution. The proposed training algorithm was developed on a benchmark problem for optical character recognition applications, and the method was evaluated in terms of computational accuracy, convergence analysis, and cost.
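    The chaotic initialization described above can be illustrated with a short sketch. The snippet below seeds a population with the logistic map x(k+1) = r·x(k)·(1 − x(k)) instead of uniform sampling; the function name, bounds, and parameter values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def logistic_map_init(pop_size, dim, lower, upper, r=4.0, x0=0.7):
    """Seed a population with the logistic chaotic map instead of
    uniform sampling (illustrative stand-in for the paper's scheme)."""
    vals = np.empty((pop_size, dim))
    x = x0
    for i in range(pop_size):
        for j in range(dim):
            x = r * x * (1.0 - x)           # chaotic iteration, stays in (0, 1)
            vals[i, j] = x
    return lower + vals * (upper - lower)   # scale into the search bounds

population = logistic_map_init(pop_size=30, dim=10, lower=-5.0, upper=5.0)
```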

    RERS-CC: Robotic facial recognition system for improving the accuracy of human face identification using HRI

    BACKGROUND: Human-Computer Interaction (HCI) is incorporated into a variety of applications for input processing and response actions. Facial recognition systems in workplaces and security systems help to improve the detection and classification of humans based on the vision captured by the input system. OBJECTIVES: In this manuscript, the Robotic Facial Recognition System using a Compound Classifier (RERS-CC) is introduced to improve the recognition rate of human faces. The process is differentiated into classification, detection, and recognition phases that employ principal component analysis-based learning. In this learning process, the errors in image processing, based on the different extracted features, are used for error classification and accuracy improvement. RESULTS: The performance of the proposed RERS-CC is validated experimentally on an input image dataset in MATLAB. The results show that the proposed method improves detection and recognition accuracy with fewer errors and less processing time. CONCLUSION: The input image is processed with knowledge of the features and errors observed at different orientations and time instances. With the help of dataset matching and similarity-index verification, the proposed method identifies human faces precisely, with improved true positives and recognition rate.
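    The compound classifier itself is not detailed in the abstract, but the principal component analysis-based learning step it mentions can be sketched with standard tooling. The snippet below is a generic eigenface-style pipeline (scikit-learn PCA followed by a nearest-neighbour classifier) on random stand-in data; it is not the authors' RERS-CC implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: rows are flattened grayscale face images, y are identities.
rng = np.random.default_rng(42)
X = rng.random((100, 64 * 64))
y = rng.integers(0, 5, size=100)

pca = PCA(n_components=40, whiten=True)       # project faces onto eigenfaces
X_reduced = pca.fit_transform(X)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_reduced, y)
print(clf.score(X_reduced, y))                # training accuracy on stand-in data
```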

    DependData: data collection dependability through three-layer decision-making in BSNs for healthcare monitoring

    Recently, there have been extensive studies on applying security and privacy protocols in Body Sensor Networks (BSNs) for patient healthcare monitoring (BSN-Health). Although these protocols provide adequate security for data packets, the collected data may still be compromised at the time of acquisition, before aggregation and storage, in severely resource-constrained BSNs. This can render data collection frameworks meaningless or undependable, i.e., an undependable BSN-Health. We study data dependability concerns in BSN-Health and propose a data dependability verification framework named DependData, which verifies data dependability through decision-making in three layers. The first decision-making (1-DM) layer verifies signal-level data locally at each health sensor to guarantee that the collected signals, ready for processing and transmission, are dependable, so that undependable processing and transmission in the BSN are avoided. The second decision-making (2-DM) layer verifies data before aggregation at each local aggregator (such as a clusterhead) to guarantee that the data received for aggregation is dependable, so that undependable data aggregation is avoided. The third decision-making (3-DM) layer verifies stored data before it is presented to a remote healthcare data user to guarantee that the data available at the owner's end (such as a smartphone) is dependable, so that undependable information viewing is avoided. Finally, we evaluate the performance of DependData through simulations of 1-DM, 2-DM, and 3-DM and show that up to 92% of data dependability concerns can be detected across the three layers. To the best of our knowledge, DependData is the first framework to address data dependability alongside the current substantial body of security and privacy protocols. We believe the three-layer decision-making framework will attract a wide range of applications in the future.
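    The abstract does not give the per-layer decision rules, so the sketch below only illustrates the shape of a three-layer verification chain with hypothetical checks (range test at the sensor, spread test at the aggregator, completeness test at the owner end); the thresholds and criteria are assumptions, not DependData's actual rules.

```python
import numpy as np

def dm1_signal_ok(signal, lo=0.0, hi=1.0):
    """1-DM: sensor-local check that raw samples stay in a plausible range."""
    return bool(np.all((signal >= lo) & (signal <= hi)))

def dm2_preaggregation_ok(readings, max_spread=0.2):
    """2-DM: clusterhead check that readings to be aggregated agree."""
    return float(np.std(readings)) < max_spread

def dm3_storage_ok(stored_records, expected_count):
    """3-DM: owner-end check that the stored data set is complete."""
    return len(stored_records) == expected_count

signal = np.clip(np.random.default_rng(1).normal(0.5, 0.1, 100), 0.0, 1.0)
dependable = (dm1_signal_ok(signal)
              and dm2_preaggregation_ok(signal[:10])
              and dm3_storage_ok(list(signal), 100))
print(dependable)
```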

    Intraperitoneal drain placement and outcomes after elective colorectal surgery: international matched, prospective, cohort study

    Despite current guidelines, intraperitoneal drain placement after elective colorectal surgery remains widespread. Drains were not associated with earlier detection of intraperitoneal collections, but were associated with prolonged hospital stay and an increased risk of surgical-site infections. Background: Many surgeons routinely place intraperitoneal drains after elective colorectal surgery. However, enhanced recovery after surgery guidelines recommend against their routine use owing to a lack of clear clinical benefit. This study aimed to describe international variation in intraperitoneal drain placement and the safety of this practice. Methods: COMPASS (COMPlicAted intra-abdominal collectionS after colorectal Surgery) was a prospective, international cohort study that enrolled consecutive adults undergoing elective colorectal surgery (February to March 2020). The primary outcome was the rate of intraperitoneal drain placement. Secondary outcomes included: rate of and time to diagnosis of postoperative intraperitoneal collections; rate of surgical-site infections (SSIs); time to discharge; and 30-day major postoperative complications (Clavien-Dindo grade III or higher). After propensity score matching, multivariable logistic regression and Cox proportional hazards regression were used to estimate the independent association of the secondary outcomes with drain placement. Results: Overall, 1805 patients from 22 countries were included (798 women, 44.2 per cent; median age 67.0 years). The drain insertion rate was 51.9 per cent (937 patients). After matching, drains were not associated with reduced rates (odds ratio (OR) 1.33, 95 per cent c.i. 0.79 to 2.23; P = 0.287) or earlier detection (hazard ratio (HR) 0.87, 0.33 to 2.31; P = 0.780) of collections. Although not associated with worse major postoperative complications (OR 1.09, 0.68 to 1.75; P = 0.709), drains were associated with delayed hospital discharge (HR 0.58, 0.52 to 0.66; P < 0.001) and an increased risk of SSIs (OR 2.47, 1.50 to 4.05; P < 0.001). Conclusion: Intraperitoneal drain placement after elective colorectal surgery is not associated with earlier detection of postoperative collections, but prolongs hospital stay and increases SSI risk.
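    For readers unfamiliar with the matching step, the snippet below shows a generic propensity-score matching workflow (logistic regression for the score, then 1:1 nearest-neighbour matching) on synthetic data; the covariates and sample are placeholders, not the COMPASS dataset or the study's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Synthetic stand-ins: X = patient covariates, treated = drain placed (1/0).
rng = np.random.default_rng(0)
X = rng.normal(size=(1805, 5))
treated = rng.integers(0, 2, size=1805)

# Step 1: propensity score = P(drain placement | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour matching on the score (with replacement).
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, pos = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched_controls = c_idx[pos.ravel()]
# Outcome models (logistic or Cox regression) are then fitted on the
# matched sample of treated patients and matched_controls.
```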

    Solving large-scale problems using multi-swarm particle swarm approach

    Several metaheuristics have been proposed previously, and several improvements have been implemented as well. Most of these methods were inspired either by nature or by the behavior of certain swarms, such as birds, ants, bees, or even bats. In metaheuristics, two key components, exploration and exploitation, are significant, and their interaction can substantially affect the efficiency of a metaheuristic. However, there is no rule on how to balance these important components. In this paper, a new balancing mechanism based on a multi-swarm approach is proposed for balancing exploration and exploitation in metaheuristics. The new approach is inspired by the concept of groups of people controlled by their leaders: the leaders of the groups communicate in a meeting room where the overall best leader makes the final decisions. The proposed approach was applied to Particle Swarm Optimization (PSO) to balance exploration and exploitation, yielding multi-swarm cooperative PSO (MPSO). The approach aims to scale the PSO algorithm up to large-scale optimization tasks of up to 1000 real-valued variables. In the simulation part, several benchmark functions were evaluated with different numbers of dimensions (100, 500, and 1000). The proposed algorithm was assessed in terms of performance efficiency and compared to standard PSO (SPSO) and a master-slave PSO algorithm. The results showed that the proposed PSO algorithm outperformed the other algorithms in terms of the optimal solutions found and convergence.
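    As a hedged sketch of the mechanism described above, the Python snippet below runs several cooperating swarms: each swarm follows its own leader (its best particle), and a "meeting room" step lets the overall best leader pull all particles. The coefficients and swarm layout are illustrative assumptions, not the MPSO of the paper.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def mpso(f, dim=100, n_swarms=4, swarm_size=20, iters=300,
         lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.0, c3=1.0, seed=0):
    """Minimal multi-swarm PSO sketch with a 'meeting room' of leaders."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, (n_swarms, swarm_size, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(f, 2, pos)
    for _ in range(iters):
        # Each swarm's leader is its best particle so far.
        leaders = pbest[np.arange(n_swarms), pbest_val.argmin(axis=1)]
        # "Meeting room": the overall best leader makes the final decision.
        best_leader = leaders[pbest_val.min(axis=1).argmin()]
        r1, r2, r3 = (rng.random(pos.shape) for _ in range(3))
        vel = (w * vel
               + c1 * r1 * (pbest - pos)                  # own memory (exploitation)
               + c2 * r2 * (leaders[:, None, :] - pos)    # follow the swarm leader
               + c3 * r3 * (best_leader - pos))           # pull from the meeting room
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(f, 2, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    return pbest_val.min()

print(mpso(sphere, dim=100))
```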

    A new algorithm for normal and large-scale optimization problems: Nomadic People Optimizer

    Metaheuristic algorithms have received much attention recently for solving different optimization and engineering problems. Most of these methods were inspired by nature or the behavior of certain swarms, such as birds, ants, bees, or even bats, while others were inspired by a specific social behavior, such as colonies or political ideologies. These algorithms face an important issue: balancing the global search (exploration) and local search (exploitation) capabilities. In this research, a novel swarm-based metaheuristic algorithm that depends on the behavior of nomadic people was developed, called the "Nomadic People Optimizer (NPO)". The proposed algorithm simulates the nature of these people in their movement and search for sources of life (such as water or grass for grazing), and how they have lived for hundreds of years, continuously migrating to the most comfortable and suitable places to live. The algorithm was designed primarily around a multi-swarm approach, consisting of several clans, with each clan looking for the best place, in other words, the best solution, depending on the position of its leader. The algorithm is validated on 36 unconstrained benchmark functions. For comparison, six well-established nature-inspired algorithms are run to evaluate the robustness of the NPO algorithm. The proposed and benchmark algorithms are also tested on large-scale optimization problems associated with high-dimensional variability. The attained results demonstrate a remarkable performance for the NPO algorithm. In addition, the achieved results evidence high convergence potential, fewer iterations, and less time required to find the current best solution.
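    A loose sketch of the clan-based idea is given below: each clan's families search around their leader, leadership transfers to any better family, and clans migrate toward the overall best leader. The update rules and coefficients are assumptions for illustration only, not the published NPO.

```python
import numpy as np

def npo_sketch(f, dim=30, n_clans=5, clan_size=10, iters=300,
               lo=-10.0, hi=10.0, seed=1):
    """Crude clan-based search: families scatter around each clan leader,
    leadership passes to any better family, and all clans drift toward
    the overall best leader (a stand-in for NPO's periodic migration)."""
    rng = np.random.default_rng(seed)
    leaders = rng.uniform(lo, hi, (n_clans, dim))
    for t in range(iters):
        radius = 0.1 * (hi - lo) * (1.0 - t / iters)   # shrinking search area
        for c in range(n_clans):
            families = np.clip(
                leaders[c] + rng.normal(0.0, radius, (clan_size, dim)), lo, hi)
            best = families[np.apply_along_axis(f, 1, families).argmin()]
            if f(best) < f(leaders[c]):
                leaders[c] = best                      # leadership transfer
        best_leader = leaders[np.apply_along_axis(f, 1, leaders).argmin()]
        leaders += 0.1 * (best_leader - leaders)       # migration toward the best
    return np.apply_along_axis(f, 1, leaders).min()

print(npo_sketch(lambda x: float(np.sum(x ** 2))))
```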

    An enhanced version of black hole algorithm via levy flight for optimization and data clustering problems

    The process of retrieving useful information from a dataset, known as data clustering, is an important and commonly applied data mining technique. Recently, nature-inspired algorithms have been proposed and utilized for solving optimization problems in general, and the data clustering problem in particular. The Black Hole (BH) optimization algorithm has been highlighted as a solution for data clustering problems; it is a population-based metaheuristic that emulates the phenomenon of black holes in the universe, in which every solution moving within the search space represents an individual star. The original BH has shown superior performance when applied to benchmark datasets, but it lacks exploration capabilities on some datasets. Addressing this exploration issue, this paper introduces Levy flight into the BH algorithm, resulting in a novel data clustering method, Levy Flight Black Hole (LBH). In LBH, the movement of each star depends mainly on the step size generated by the Levy distribution: the star explores an area far from the current black hole when the step size is large, and vice versa. The performance of LBH in terms of finding the best solutions, avoiding entrapment in local optima, and convergence rate has been evaluated on several unimodal and multimodal numerical optimization problems. Additionally, LBH was tested on six real datasets from the UCI machine learning repository. The experimental outcomes indicate the designed algorithm's suitability for data clustering, displaying effectiveness and robustness.
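    The Levy-distributed step size is the core addition in LBH. A common way to draw such steps is Mantegna's algorithm, sketched below; the star-update line is an illustrative assumption about how the step modulates movement toward the black hole, not the paper's exact update rule.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# A star moves toward the black hole with a Levy-sized step (assumed form):
rng = np.random.default_rng(0)
star, black_hole = rng.uniform(-5, 5, 10), np.zeros(10)
star = star + levy_step(10, rng=rng) * (black_hole - star)
```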

    The Capacity of the Hybridizing Wavelet Transformation Approach With Data-Driven Models for Modeling Monthly-Scale Streamflow

    Hybrid models that combine wavelet transformation (WT) as a pre-processing tool with data-driven models (DDMs) as modeling approaches have been widely investigated for forecasting streamflow. The WT approach is applied to the original time series as a decomposing process prior to DDM modeling, intended to eliminate redundant patterns or information and thereby dramatically increase model performance. In this study, three experiments were implemented to forecast streamflow data: stand-alone data-driven modeling; hindcast WT decomposition with the sub-series fed into an extreme learning machine (ELM); and the extreme gradient boosting (XGB) model. The WT method was applied in two forms: discrete and continuous (DWT and CWT). In this paper, a new hybrid model is proposed based on an integrative prediction scheme in which XGB is used as an input selection tool for the important attributes of the prediction matrix, which are then supplied to the ELM as the predictive model. The monthly streamflow, upstream flow, rainfall, temperature, and potential evapotranspiration of a basin numbered 1805, located in the southeast of Turkey, are used for the development of the model. The modeling results show that applying the WT method improved the performance of the hindcast experiment based on the CWT form, with a minimum root mean square error (RMSE = 4.910 m³/s). On the contrary, WT deteriorated the forecasting performance, and the stand-alone models exhibited better performance. WT increased the performance of the hindcast experiment owing to the inclusion of future information caused by convolution of the time series; the forecast experiment, however, deteriorated owing to the border effect at the end of the time series. Hence, WT was found not to be a useful pre-processing technique for forecasting streamflow.
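    To make the pipeline concrete, the sketch below decomposes a stand-in streamflow series with a DWT (via PyWavelets) and fits a minimal extreme learning machine on lagged values; the variable names, lag count, and wavelet choice are assumptions, and the XGB-based input selection step is omitted.

```python
import numpy as np
import pywt  # PyWavelets

# Illustrative monthly streamflow series (stand-in for the basin data).
rng = np.random.default_rng(0)
flow = np.abs(rng.normal(50.0, 10.0, 360))

# DWT decomposition into approximation and detail sub-series. Note that
# decomposing the full series at once is exactly what leaks future
# information into a hindcast, as the abstract cautions.
coeffs = pywt.wavedec(flow, 'db4', level=3)   # [cA3, cD3, cD2, cD1]

def elm_fit(X, y, n_hidden=50, seed=1):
    """Minimal ELM: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                    # random nonlinear feature map
    return W, b, np.linalg.pinv(H) @ y

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# One-step-ahead forecasting from 12 lagged flow values.
lags = 12
X = np.array([flow[t - lags:t] for t in range(lags, len(flow))])
y = flow[lags:]
W, b, beta = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```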

    Optimized parameter estimation of a PEMFC model based on improved Grass Fibrous Root Optimization Algorithm

    This paper presents a new optimal methodology for parameter identification of a 50 kW polymer electrolyte membrane fuel cell (PEMFC) based on an economical-functional model. The objective of the study is the optimal estimation of the system parameters such that the total cost needed for the stack construction is minimized. The total cost here is the sum of the fuel cell stack cost and that of its auxiliaries, considering the air and hydrogen stoichiometric coefficients, system pressure, current density, and system temperature. For solving the minimization problem, a newly modified version of the Grass Fibrous Root Optimization Algorithm (MGRA) is presented. The final results are compared with several well-known algorithms to indicate the efficiency of the approach, and the reliability of the system with respect to different parameters is demonstrated through sensitivity analysis.
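    Since the MGRA itself is not specified in the abstract, the sketch below uses a stock global optimizer (SciPy's differential evolution) as a stand-in to illustrate the same parameter-estimation setup: minimize a total-cost objective over the five listed operating parameters. The cost function, target values, and bounds are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import differential_evolution

def stack_cost(params):
    """Toy surrogate for the economical-functional total-cost model over
    (air stoich., H2 stoich., pressure, current density, temperature);
    replace with the paper's PEMFC cost model."""
    target = np.array([2.0, 1.5, 1.2, 0.6, 353.0])   # arbitrary illustrative optimum
    return float(np.sum(((params - target) / target) ** 2))

bounds = [(1.5, 3.0),      # air stoichiometric coefficient
          (1.1, 2.0),      # hydrogen stoichiometric coefficient
          (1.0, 3.0),      # system pressure (atm)
          (0.2, 1.2),      # current density (A/cm^2)
          (323.0, 363.0)]  # stack temperature (K)

result = differential_evolution(stack_cost, bounds, seed=0)
print(result.x, result.fun)
```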