
    Milling of Inconel 718: an experimental and integrated modeling approach for surface roughness

    Inconel 718, a hard-to-cut superalloy, is reputed for its poor machining performance due to its low thermal conductivity. Consequently, the surface quality of machined parts suffers. The surface roughness must fall within stringent limits to ensure the functional performance of components used in aerospace and bioimplant applications. One feasible way to enhance its machinability is adequate dissipation of heat from the machining zone through an efficient and eco-friendly cooling environment. With this perspective, an experimental and integrated green response surface machining-based evolutionary optimization (G-RSM-EO) approach is presented in this investigation. The results are compared with two baseline techniques: the traditional flooded approach with Hocut WS 8065 mineral oil, and the dry green approach. A Box-Behnken response surface methodology (RSM) is employed to design the milling tests, considering three control parameters: cutting speed (vs), feed per flute (fz), and axial depth of cut (ap). The parametric analysis is then carried out through surface plots, and an analysis of variance (ANOVA) is presented to assess the effects of these control parameters. Afterwards, a multiple regression model is developed to identify the parametric relevance of vs, fz, and ap, with surface roughness (SR) as the response attribute. A residual analysis is performed to validate the statistical adequacy of the predicted model. Lastly, the surface roughness regression model is used as the objective function of a particle swarm optimization (PSO) model to minimize the surface roughness of the machined parts. The optimized SR results are compared with those of the widely employed genetic algorithm (GA) and the RSM-based desirability function (DF) approach. Confirmatory machining tests showed that the integrated optimization approach with PSO, an evolutionary technique, is more effective than GA and DF with respect to accuracy (0.05% error), adequacy, and processing time (3.19 min). Furthermore, the study reveals that the Mecagreen 450 biodegradable-oil-enriched flooded strategy significantly improves the milling of Inconel 718 in terms of eco-sustainability and productivity, i.e., a 42.9% reduction in cutting-fluid consumption cost and a 73.5% improvement in surface quality compared to the traditional flooded approach and the dry green approach. Moreover, the G-RSM-EO approach presents a sustainable alternative by achieving an Ra of 0.3942 μm, finer than that of a post-finishing operation, to produce close-tolerance, reliable components for the aerospace industry.
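
    To make the optimization step concrete, the following is a minimal sketch of how a fitted RSM regression model can serve as the objective function of a PSO search over vs, fz, and ap. The quadratic coefficients, parameter bounds, and swarm settings below are illustrative assumptions, not the values fitted or reported in the paper.

```python
import numpy as np

# Hypothetical second-order RSM model of surface roughness Ra(vs, fz, ap);
# the coefficients are placeholders, not the paper's fitted regression.
def ra_model(x):
    vs, fz, ap = x  # cutting speed, feed per flute, axial depth of cut
    return (0.8 - 0.004 * vs + 6.0 * fz + 0.15 * ap
            + 1.0e-5 * vs**2 + 20.0 * fz**2 + 0.05 * ap**2
            - 0.01 * vs * fz)

# Assumed parameter bounds for illustration only
lb = np.array([40.0, 0.05, 0.2])
ub = np.array([80.0, 0.15, 0.8])

def pso_minimize(f, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))        # positions
    v = np.zeros_like(x)                                     # velocities
    pbest = x.copy()                                         # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()                     # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                           # keep particles in bounds
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

best_x, best_ra = pso_minimize(ra_model, lb, ub)
print("optimal (vs, fz, ap):", best_x, "predicted Ra:", best_ra)
```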

    Optimal Service Provisioning in IoT Fog-based Environment for QoS-aware Delay-sensitive Application

    This paper addresses the escalating challenges posed by ever-increasing data volume and velocity and the demand for low-latency applications, driven by the proliferation of smart devices and Internet of Things (IoT) applications. To mitigate service delay and enhance Quality of Service (QoS), we introduce a hybrid of Particle Swarm Optimization (PSO) and Chemical Reaction Optimization (CRO) to improve service delay in FogPlan, an offline framework that prioritizes QoS and enables dynamic fog service deployment. The method optimizes fog service allocation based on the incoming traffic to each fog node, formulating it as an Integer Non-Linear Programming (INLP) problem that accounts for various service attributes and costs. Our proposed algorithm aims to minimize service delay and QoS degradation. An evaluation using real MAWI Working Group traffic data demonstrates a substantial 29.34% reduction in service delay, a 66.02% decrease in service costs, and a 50.15% reduction in delay violations compared to the FogPlan framework.
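
    To illustrate the kind of delay-plus-cost placement objective such an INLP formulation can use, here is a minimal sketch with a binary fog-placement matrix and a random-search baseline standing in for the PSO/CRO hybrid. All numbers (delays, costs, capacities) are hypothetical placeholders and this is not FogPlan's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
F, S = 5, 8                                          # fog nodes, services (toy sizes)
traffic = rng.uniform(1.0, 10.0, size=(F, S))        # requests/s for service s at node f
fog_delay, cloud_delay = 5.0, 80.0                   # assumed access delays (ms)
deploy_cost = rng.uniform(1.0, 3.0, size=(F, S))     # cost of hosting service s on node f
capacity = np.full(F, 4)                             # max services per fog node (assumed)

def objective(x, alpha=1.0, beta=0.5):
    """x is a binary F x S matrix: x[f, s] = 1 if service s is deployed on node f.
    Requests for a deployed service are served at the fog node, otherwise at the cloud."""
    served_delay = np.where(x == 1, fog_delay, cloud_delay)
    total_delay = np.sum(traffic * served_delay)
    total_cost = np.sum(deploy_cost * x)
    penalty = 1e6 * np.sum(np.maximum(x.sum(axis=1) - capacity, 0))  # capacity violations
    return alpha * total_delay + beta * total_cost + penalty

# Simple random search over binary placements (a stand-in for the metaheuristic).
best_x, best_f = None, np.inf
for _ in range(2000):
    cand = (rng.random((F, S)) < 0.4).astype(int)
    f_val = objective(cand)
    if f_val < best_f:
        best_x, best_f = cand, f_val
print("best objective found:", round(best_f, 2))
```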

    Applications of Nature-Inspired Algorithms for Dimension Reduction: Enabling Efficient Data Analytics

    In [1], we explored the theoretical aspects of feature selection and evolutionary algorithms. In this chapter, we focus on optimization algorithms for enhancing the data analytics process, i.e., we explore applications of nature-inspired algorithms in data science. Feature selection optimization is a hybrid approach that leverages feature selection techniques and evolutionary algorithms to optimize the selected feature subset. Prior works solve this problem iteratively to converge to an optimal feature subset. Feature selection optimization is a domain-agnostic approach. Data scientists mainly attempt to find advanced ways to analyze data with high computational efficiency and low time complexity, leading to efficient data analytics. As the data generated, measured, and sensed from various sources increases, the analysis, manipulation, and visualization of data grow exponentially. For such large-scale data sets, the curse of dimensionality (CoD) is one of the NP-hard problems in data science. Hence, several efforts have focused on leveraging evolutionary algorithms (EAs) to address the complex issues in large-scale data analytics problems. Dimension reduction, together with EAs, lends itself to tackling the CoD and solving complex problems efficiently in terms of time complexity. In this chapter, we first provide a brief overview of previous studies that focused on solving the CoD using a feature extraction optimization process. We then discuss practical examples of research studies that have successfully tackled application domains such as image processing, sentiment analysis, network traffic/anomaly analysis, credit score analysis, and the analysis of other benchmark functions/data sets.
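
    As a concrete illustration of the feature selection optimization idea described above, the sketch below wraps a k-nearest-neighbour classifier in a simple binary genetic algorithm; the synthetic data set, classifier, and GA settings are assumptions chosen for illustration, not the chapter's own experiments.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=6, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of the selected feature subset, lightly penalised by its size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.002 * mask.sum()

pop_size, n_gen, n_feat = 20, 30, X.shape[1]
pop = (rng.random((pop_size, n_feat)) < 0.5).astype(int)   # binary feature masks
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argmax(scores)].copy()                   # elitism: keep the best mask
    # binary tournament selection
    idx = [max(rng.choice(pop_size, 2, replace=False), key=lambda i: scores[i])
           for _ in range(pop_size)]
    parents = pop[idx]
    # one-point crossover on consecutive pairs
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = int(rng.integers(1, n_feat))
        children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
    # bit-flip mutation
    flip = rng.random(children.shape) < 0.02
    children = np.where(flip, 1 - children, children)
    children[0] = elite
    pop = children

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))
```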

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, in contrast, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.

    A Review of the Family of Artificial Fish Swarm Algorithms: Recent Advances and Applications

    The Artificial Fish Swarm Algorithm (AFSA) is inspired by the ecological behaviors of fish schooling in nature, viz., the preying, swarming, following, and random behaviors. Owing to a number of salient properties, which include flexibility, fast convergence, and insensitivity to the initial parameter settings, the family of AFSA has emerged as an effective Swarm Intelligence (SI) methodology that has been widely applied to solve real-world optimization problems. Since its introduction in 2002, many improved and hybrid AFSA models have been developed to tackle continuous, binary, and combinatorial optimization problems. This paper aims to present a concise review of the family of AFSA, encompassing the original AFSA and its improvements, its continuous, binary, discrete, and hybrid models, as well as the associated applications. A comprehensive survey of the AFSA from its introduction to 2012 can be found in [1]. As such, we focus on a total of 123 articles published in high-quality journals since 2013. We also discuss possible AFSA enhancements and highlight future research directions for the family of AFSA-based models.
    Comment: 37 pages, 3 figures
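
    To make the preying and random behaviors concrete, the following is a minimal sketch of one AFSA-style iteration on a toy continuous objective (the swarming and following behaviors are omitted for brevity). The visual range, step size, and try_number values are illustrative assumptions rather than settings recommended by the surveyed works.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy objective to minimise
    return float(np.sum(x ** 2))

dim, n_fish, iters = 5, 20, 100
visual, step, try_number = 1.0, 0.3, 5
fish = rng.uniform(-5.0, 5.0, size=(n_fish, dim))

def prey(xi):
    """Preying behavior: probe random points within the visual range and move a step
    toward a better one; fall back to a random move if no improvement is found."""
    fi = sphere(xi)
    for _ in range(try_number):
        xj = xi + visual * rng.uniform(-1.0, 1.0, dim)
        if sphere(xj) < fi:
            direction = (xj - xi) / (np.linalg.norm(xj - xi) + 1e-12)
            return xi + step * rng.random() * direction
    return xi + step * rng.uniform(-1.0, 1.0, dim)   # random behavior

best = min(fish, key=sphere)
for _ in range(iters):
    fish = np.array([prey(x) for x in fish])
    cand = min(fish, key=sphere)
    if sphere(cand) < sphere(best):
        best = cand.copy()

print("best value found:", round(sphere(best), 6))
```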

    Metaheuristic algorithms for damage identification in real sized structures

    107 p. National Technical University of Athens -- Master's thesis, Interdisciplinary-Interdepartmental Postgraduate Studies Programme (D.P.M.S.).
    The scope of this thesis is to apply metaheuristic algorithms for damage identification in civil engineering structures that are realistic with respect to size, member response, and eigenvalue approximation (the case of a two-storey steel frame building is examined, approximating the eigenvalues via substructuring), as well as to review some of the basic theories and assumptions involved. Two techniques for damage identification are proposed. Damage identification is an inverse problem in which one may expect multiple solutions. A discrete-value algorithm is proposed in order to control the maximum number of damaged elements in the search. When the size and/or number of damages increases, the existing methods (mainly sensitivity methods derived from first-order perturbation theory) produce more damages than the ones assumed. A technique using the null space of the sensitivity matrix (which is considered a function of the damage factors) is proposed so that one can track the multiple solutions and find scenarios with fewer damaged elements.
    Author: Σταύρος Ε. Χατζηελευθερίο
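
    The null-space idea described above can be sketched as follows: given a linearised sensitivity relation S·d ≈ r between damage factors d and measured residuals r, any solution is a particular solution plus a combination of null-space vectors of S, and one can search over that combination for sparser damage scenarios. The matrices, thresholds, and random search below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# Hypothetical underdetermined sensitivity relation: 6 measured quantities, 12 elements.
S = rng.normal(size=(6, 12))
d_true = np.zeros(12)
d_true[[2, 7]] = 0.3                     # "true" damage in two elements (illustrative)
r = S @ d_true                           # residual vector implied by that damage

# Minimum-norm particular solution and a basis of the null space of S.
d0, *_ = np.linalg.lstsq(S, r, rcond=None)
N = null_space(S)                        # columns span {z : S z = 0}

def n_damaged(d, tol=0.05):
    return int(np.sum(np.abs(d) > tol))

# Random search over null-space coefficients for a scenario with fewer damaged elements,
# keeping the damage factors within loose illustrative bounds.
best_d, best_count = d0, n_damaged(d0)
for _ in range(20000):
    z = rng.normal(scale=0.5, size=N.shape[1])
    d = d0 + N @ z                       # still satisfies S d = r
    if np.all(d > -0.05) and np.all(d < 1.0) and n_damaged(d) < best_count:
        best_d, best_count = d, n_damaged(d)

print("damaged elements in minimum-norm solution:", n_damaged(d0))
print("damaged elements in sparsest scenario found:", best_count)
```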

    A Survey on Soft Subspace Clustering

    Subspace clustering (SC) is a promising clustering technology that identifies clusters based on their associations with subspaces in high-dimensional spaces. SC can be classified into hard subspace clustering (HSC) and soft subspace clustering (SSC). While HSC algorithms have been extensively studied and well accepted by the scientific community, SSC algorithms are relatively new but have been gaining more attention in recent years due to their better adaptability. In this paper, a comprehensive survey of existing SSC algorithms and recent developments is presented. The SSC algorithms are classified systematically into three main categories, namely conventional SSC (CSSC), independent SSC (ISSC), and extended SSC (XSSC). The characteristics of these algorithms are highlighted, and the potential future development of SSC is also discussed.
    Comment: This paper has been published in Information Sciences Journal in 201
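
    For intuition about what "soft" subspace clustering means in practice, here is a minimal sketch of one classic entropy-weighted k-means-style scheme, in which each cluster learns its own feature weights. The synthetic data, weight-regularisation parameter gamma, and iteration counts are assumptions for illustration; this is one well-known SSC flavour, not the survey's own algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two clusters that differ only in the first two of six features,
# so the learned per-cluster weights should concentrate on those dimensions.
X = np.vstack([
    np.hstack([rng.normal(0, 0.3, (100, 2)), rng.normal(0, 2.0, (100, 4))]),
    np.hstack([rng.normal(4, 0.3, (100, 2)), rng.normal(0, 2.0, (100, 4))]),
])
k, gamma, n_iter = 2, 1.0, 20
n, d = X.shape

centers = X[rng.choice(n, k, replace=False)]
weights = np.full((k, d), 1.0 / d)                 # soft per-cluster feature weights

for _ in range(n_iter):
    # assign each point to the cluster with the smallest weighted distance
    dist = np.array([[np.sum(weights[c] * (x - centers[c]) ** 2) for c in range(k)]
                     for x in X])
    labels = dist.argmin(axis=1)
    for c in range(k):
        pts = X[labels == c]
        if len(pts) == 0:
            continue
        centers[c] = pts.mean(axis=0)
        # entropy-regularised weight update: smaller within-cluster dispersion -> larger weight
        D = np.sum((pts - centers[c]) ** 2, axis=0)
        e = np.exp(-(D - D.min()) / gamma)         # shift by D.min() for numerical stability
        weights[c] = e / e.sum()

print("per-cluster feature weights:\n", np.round(weights, 3))
```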

    Adequate model complexity and data resolution for effective constraint of simulation models by 4D seismic data

    4D seismic data bears valuable spatial information about production-related changes in the reservoir. It is a challenging task, though, to make simulation models honour it. A strict spatial tie to seismic data requires adequate model complexity in order to assimilate the details of the seismic signature. On the other hand, not all the details in the seismic signal are critical or even relevant to the flow characteristics of the simulation model, so fitting them may compromise the predictive capability of the models. So, how complex should a model be to take advantage of the information in seismic data, and which details should be matched? This work aims to show how choices of parameterisation affect the efficiency of assimilating spatial information from the seismic data. Also, the level of detail at which the seismic signal carries useful information for the simulation model is demonstrated in light of the limited detectability of events on the seismic map and modelling errors. The problem of optimal model complexity is investigated in the context of choosing a model parameterisation that allows effective assimilation of the spatial information in the seismic map. In this study, a model parameterisation scheme based on deterministic objects derived from seismic interpretation creates bias in model predictions, which results in a poor fit to historic data. The key to rectifying the bias was found to be increasing the flexibility of the parameterisation, either by increasing the number of parameters or by using a scheme that does not impose prior information incompatible with the data (pilot points in this case). Using history-matching experiments with a combined dataset of production and seismic data, a level of match of the seismic maps is identified that results in an optimal constraint on the simulation models. Better-constrained models were identified by the quality of their forecasts and the closeness of their pressure and saturation states to the truth case. The results indicate that a significant amount of detail in the seismic maps does not contribute to a constructive constraint by the seismic data, which is caused by two factors: first, smaller details are a specific response of the system that generated the observed data and as such are not relevant to the flow characteristics of the model; second, the resolution of the seismic map itself is limited by the seismic bandwidth and noise. The results suggest that the notion of a good match for 4D seismic maps, commonly equated with a visually close match, is not universally applicable.
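
    As a rough illustration of matching seismic maps only at a detectable level of detail, the sketch below low-pass filters observed and simulated 4D attribute maps to a common resolution before combining them with a production misfit into a single history-matching objective. The maps, smoothing length, and weights are hypothetical placeholders and are not taken from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Hypothetical observed/simulated 4D seismic attribute maps (e.g. amplitude change)
# and observed/simulated production rate histories. Purely synthetic placeholders.
obs_map = gaussian_filter(rng.normal(size=(64, 64)), 3) + 0.2 * rng.normal(size=(64, 64))
sim_map = gaussian_filter(rng.normal(size=(64, 64)), 3)
obs_rates = 100.0 + 5.0 * rng.normal(size=50)
sim_rates = 100.0 + 5.0 * rng.normal(size=50)

def combined_misfit(sim_map, obs_map, sim_rates, obs_rates,
                    sigma_detectable=2.0, w_seis=1.0, w_prod=1.0):
    """Weighted least-squares misfit; the seismic maps are smoothed so that details
    below the assumed detectability limit do not drive the match."""
    obs_s = gaussian_filter(obs_map, sigma_detectable)
    sim_s = gaussian_filter(sim_map, sigma_detectable)
    seis_term = np.mean((sim_s - obs_s) ** 2)
    prod_term = np.mean((sim_rates - obs_rates) ** 2)
    return w_seis * seis_term + w_prod * prod_term

print("combined misfit:", round(combined_misfit(sim_map, obs_map, sim_rates, obs_rates), 3))
```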