177 research outputs found
Energy-based numerical models for assessment of soil liquefaction
This study presents promising variants of genetic programming (GP), namely linear genetic programming (LGP) and multi expression programming (MEP), to evaluate the liquefaction resistance of sandy soils. Generalized LGP- and MEP-based relationships were developed between the strain energy density required to trigger liquefaction (capacity energy) and the factors affecting the liquefaction characteristics of sands. The correlations were established based on well-established and widely dispersed experimental results obtained from the literature. To verify the applicability of the derived models, they were employed to estimate the capacity energy values for the portion of the test results not included in the analysis. The external validation of the models was verified using statistical criteria recommended by researchers. Sensitivity and parametric analyses were performed for further verification of the correlations. The results indicate that the proposed correlations effectively capture the liquefaction resistance of a number of sandy soils. The developed correlations provide significantly better prediction performance than the models found in the literature. Furthermore, the best LGP and MEP models outperform the optimal traditional GP model. The verification phases confirm the efficiency of the derived correlations for general application to the assessment of strain energy at the onset of liquefaction.
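As a hedged illustration of how such an energy-based correlation is typically verified, the minimal sketch below compares hypothetical measured capacity energies against a placeholder correlation and reports the correlation coefficient and RMSE; the input parameters, data values, and formula are invented for illustration and are not the paper's LGP/MEP models.

```python
import numpy as np

# Hypothetical laboratory records: mean effective confining stress (kPa),
# relative density (%), fines content (%), and measured capacity energy (J/m^3).
# These values are illustrative only, not data from the paper.
sigma_c = np.array([50.0, 100.0, 150.0, 200.0])
D_r     = np.array([40.0, 55.0, 60.0, 70.0])
FC      = np.array([5.0, 10.0, 15.0, 20.0])
W_meas  = np.array([900.0, 2100.0, 3200.0, 5800.0])

def capacity_energy(sigma_c, D_r, FC):
    """Placeholder correlation for the capacity energy; the real LGP/MEP
    expressions in the paper are different and are not reproduced here."""
    return 10 ** (1.5 + 0.9 * np.log10(sigma_c) + 0.012 * D_r - 0.005 * FC)

W_pred = capacity_energy(sigma_c, D_r, FC)

# Verification statistics of the kind used in the paper's validation phase.
r = np.corrcoef(W_meas, W_pred)[0, 1]                       # correlation coefficient
rmse = np.sqrt(np.mean((np.log10(W_meas) - np.log10(W_pred)) ** 2))
print(f"R = {r:.3f}, RMSE(log10 W) = {rmse:.3f}")
```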
Text-Based Product Matching -- Semi-Supervised Clustering Approach
Matching identical products present in multiple product feeds constitutes a crucial element of many e-commerce tasks, such as comparing product offerings, dynamic price optimization, and selecting an assortment personalized for the client. It corresponds to the well-known machine learning task of entity matching, with its own specific challenges, such as omnipresent unstructured data and inaccurate or inconsistent product descriptions. This paper presents a new approach to product matching based on semi-supervised clustering. We study the properties of this method by experimenting with the IDEC algorithm on a real-world dataset using predominantly textual features and fuzzy string matching, with more standard approaches as a point of reference. Encouraging results show that unsupervised matching, enriched with a small annotated sample of product links, could be a viable alternative to the dominant supervised strategy, which requires extensive manual data labeling.
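A minimal sketch of pairing fuzzy string similarity with clustering is shown below; IDEC itself learns a deep embedding and refines soft cluster assignments jointly, which is not reproduced here. The offer titles, the similarity threshold, and the greedy single-link grouping are illustrative assumptions using only the Python standard library.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative product titles from two hypothetical feeds (not the paper's data).
offers = [
    "Apple iPhone 13 128GB Midnight",
    "iPhone 13 128 GB - Midnight (Apple)",
    "Samsung Galaxy S21 5G 128GB Gray",
    "Galaxy S21 5G 128GB Phantom Gray Samsung",
]

def fuzzy_sim(a: str, b: str) -> float:
    """Character-level fuzzy similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Greedy single-link grouping: two offers join the same cluster when their
# similarity exceeds a threshold. IDEC would instead learn an embedding and
# refine cluster assignments during training.
threshold = 0.55
clusters = {i: {i} for i in range(len(offers))}
for i, j in combinations(range(len(offers)), 2):
    if fuzzy_sim(offers[i], offers[j]) >= threshold:
        merged = clusters[i] | clusters[j]
        for k in merged:
            clusters[k] = merged

print({frozenset(c) for c in clusters.values()})
```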
An introduction of Krill Herd algorithm for engineering optimization
A new metaheuristic optimization algorithm, called Krill Herd (KH), was recently proposed by Gandomi and Alavi (2012). In this study, KH is introduced for solving engineering optimization problems. To further verify its performance, KH is applied to six design problems reported in the literature, and its results are compared with those of various algorithms representative of the state of the art in the area. The comparisons show that the results obtained by KH are better than the best solutions obtained by the existing methods.
First published online: 25 Aug 201
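The sketch below is a heavily simplified, illustrative krill-herd-style update on a toy objective: individuals drift toward the current best solution with a shrinking random diffusion term. It omits the foraging motion, inertia terms, and genetic operators of the full algorithm and is not the paper's implementation or one of its six design problems.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Simple unconstrained test objective; the paper's constrained engineering
    # design problems are not reproduced here.
    return np.sum(x ** 2, axis=-1)

dim, n_krill, iters = 5, 20, 200
X = rng.uniform(-5, 5, size=(n_krill, dim))   # krill positions
dt, n_max, d_max = 0.5, 0.02, 0.01            # step scale, induced-motion and diffusion limits

for t in range(iters):
    f = sphere(X)
    best = X[np.argmin(f)]
    # Motion induced by the best individual (a stand-in for the
    # neighbour-induced movement term of the full algorithm).
    induced = n_max * (best - X)
    # Physical (random) diffusion, shrinking over the iterations.
    diffusion = d_max * (1 - t / iters) * rng.uniform(-1, 1, size=X.shape)
    X = X + dt * (induced + diffusion)

print("best objective:", sphere(X).min())
```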
Seismic Failure Probability and Vulnerability Assessment of Steel-Concrete Composite Structures
Building collapse in earthquakes has caused huge losses, both in human and economic terms. To assess the risk posed by the use of composite members, this paper investigates the seismic failure probability and vulnerability of steel-concrete composite structures composed of rectangular concrete-filled steel tube (RCFT) columns and steel beams. To enable numerical simulation of RCFT structures, detailed component models are developed in the OpenSEES finite element analysis package, and the proposed procedure is validated through comparisons with available experimental results. Seismic fragility and vulnerability curves of RCFT structures are created through nonlinear dynamic analysis, using an appropriate suite of ground motions, for seismic loss assessment. These curves are developed for three-, six-, and nine-story prototypes of the RCFT structure. Fragility curves are an appropriate tool for representing seismic failure probabilities, and vulnerability curves express the probability of exceeding a loss level as a function of ground motion intensity.
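A common way to build fragility curves of this kind, sketched below under assumed data, is to fit a lognormal cumulative distribution to the fraction of ground motions causing collapse at each intensity level by maximum likelihood. The intensity levels and collapse counts below are invented and do not reflect the paper's RCFT results.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical results of nonlinear dynamic analyses: at each spectral
# acceleration level, n_total records were run and n_fail caused collapse.
im      = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])   # Sa(T1) in g
n_total = np.array([20, 20, 20, 20, 20, 20])
n_fail  = np.array([0, 2, 6, 11, 16, 19])

def neg_log_likelihood(params):
    """Binomial likelihood of the observed failures under a lognormal
    fragility with median theta and logarithmic standard deviation beta."""
    theta, beta = params
    p = stats.norm.cdf(np.log(im / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(stats.binom.logpmf(n_fail, n_total, p))

res = optimize.minimize(neg_log_likelihood, x0=[0.7, 0.4],
                        bounds=[(1e-3, None), (1e-3, None)])
theta, beta = res.x
print(f"fragility median = {theta:.2f} g, dispersion = {beta:.2f}")
```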
GNBG-Generated Test Suite for Box-Constrained Numerical Global Optimization
This document introduces a set of 24 box-constrained numerical global
optimization problem instances, systematically constructed using the
Generalized Numerical Benchmark Generator (GNBG). These instances cover a broad
spectrum of problem features, including varying degrees of modality,
ruggedness, symmetry, conditioning, variable interaction structures, basin
linearity, and deceptiveness. Purposefully designed, this test suite offers
varying difficulty levels and problem characteristics, facilitating rigorous
evaluation and comparative analysis of optimization algorithms. By presenting
these problems, we aim to provide researchers with a structured platform to
assess the strengths and weaknesses of their algorithms against challenges with
known, controlled characteristics. For reproducibility, the MATLAB source code
for this test suite is publicly available.
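A minimal evaluation harness of the kind such a suite supports is sketched below; the problem interface (objective, bounds, known optimum) and the stand-in test functions are assumptions for illustration and do not use the GNBG MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in problem instances: (objective, lower bound, upper bound, known optimum).
# GNBG instances come from a configurable parametric baseline; two classic
# functions are used here purely to exercise the evaluation loop.
problems = {
    "sphere":    (lambda x: np.sum(x ** 2), -100.0, 100.0, 0.0),
    "rastrigin": (lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10),
                  -5.12, 5.12, 0.0),
}

def random_search(f, lb, ub, dim=10, budget=5000):
    """Trivial baseline optimizer used only to demonstrate the harness."""
    best = np.inf
    for _ in range(budget):
        best = min(best, f(rng.uniform(lb, ub, dim)))
    return best

for name, (f, lb, ub, f_star) in problems.items():
    errors = [random_search(f, lb, ub) - f_star for _ in range(5)]  # 5 independent runs
    print(f"{name}: mean error {np.mean(errors):.3e} +/- {np.std(errors):.3e}")
```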
GNBG: A Generalized and Configurable Benchmark Generator for Continuous Numerical Optimization
As optimization challenges continue to evolve, so too must our tools and
understanding. To effectively assess, validate, and compare optimization
algorithms, it is crucial to use a benchmark test suite that encompasses a
diverse range of problem instances with various characteristics. Traditional
benchmark suites often consist of numerous fixed test functions, making it
challenging to align these with specific research objectives, such as the
systematic evaluation of algorithms under controllable conditions. This paper
introduces the Generalized Numerical Benchmark Generator (GNBG) for
single-objective, box-constrained, continuous numerical optimization. Unlike
existing approaches that rely on multiple baseline functions and
transformations, GNBG utilizes a single, parametric, and configurable baseline
function. This design allows for control over various problem characteristics.
Researchers using GNBG can generate instances that cover a broad array of
morphological features, from unimodal to highly multimodal functions, various
local optima patterns, and symmetric to highly asymmetric structures. The
generated problems can also vary in separability, variable interaction
structures, dimensionality, conditioning, and basin shapes. These customizable
features enable the systematic evaluation and comparison of optimization
algorithms, allowing researchers to probe their strengths and weaknesses under
diverse and controllable conditions.
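To make the idea of a single configurable baseline concrete, the toy function below exposes parameters for conditioning, variable interaction, and ruggedness; it is an illustrative stand-in, not the GNBG baseline definition.

```python
import numpy as np

def baseline(x, H=None, mu=0.0, omega=0.0, amplitude=0.0):
    """Toy parametric test function, NOT the GNBG baseline: an ellipsoidal
    bowl centred at mu whose conditioning and variable interactions are set
    by H, with a cosine perturbation that controls ruggedness/modality."""
    z = x - mu
    H = np.eye(len(x)) if H is None else H
    smooth = z @ H @ z                                    # quadratic component
    rugged = amplitude * np.sum(1 - np.cos(omega * z))    # adds local optima
    return smooth + rugged

x = np.array([0.3, -0.2, 0.1])
print(baseline(x))                                        # unimodal, well-conditioned
print(baseline(x, H=np.diag([1.0, 100.0, 1e4]),           # ill-conditioned
               omega=6 * np.pi, amplitude=10.0))          # rugged / multimodal
```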
Gene expression programming approach to cost estimation formulation for utility projects
This article utilizes the gene expression programming (GEP) technique to develop a prediction model that automates estimation of the construction cost of water and sewer replacement/rehabilitation projects. The database used to develop the model was compiled from 210 actual water and sewer projects obtained from the City of San Diego, California, USA. To verify the predictive capability of the GEP model, it was used to estimate the cost of projects that were not included in the modelling process. A sensitivity analysis and professional experience were employed to determine the contributions of the qualitative factors and quantifiable parameters affecting the cost estimate. The proposed model, with a correlation coefficient of 0.8467, is adequately capable of estimating the cost of water and sewer replacement/rehabilitation projects. The GEP-based design equation can easily be used for predesign purposes to help allocate budgets and limited available resources effectively.
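The verification statistic quoted above can be reproduced in a few lines once predicted and actual costs are available; the cost values below are invented for illustration and are not the San Diego project data.

```python
import numpy as np

# Hypothetical actual vs. GEP-predicted project costs in USD (illustrative only;
# the 210-project database is not reproduced here).
actual    = np.array([120_000, 250_000, 415_000, 600_000, 880_000])
predicted = np.array([140_000, 230_000, 450_000, 560_000, 910_000])

r = np.corrcoef(actual, predicted)[0, 1]                 # Pearson correlation coefficient
mape = np.mean(np.abs(predicted - actual) / actual) * 100
print(f"R = {r:.4f}, MAPE = {mape:.1f}%")
```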
Risk analysis of BOT contracts using soft computing
Build-Operate-Transfer (BOT) contracts have been widely implemented in developing countries facing budget constraints. Analysing the expected variability in project viability requires extensive risk analysis. An objective analysis of various risk variables and their influence on a BOT project evaluation requires the study and integration of many scenarios into the concession terms, which is complicated and time-consuming. If the process of negotiating the financial parameters and uncertainties of a BOT project could be automated, this would be a milestone in objective decision-making from various stakeholders’ points of view. A soft computing model lets the user incorporate as many scenarios as can be provided, so extensive risk analysis can be performed easily, leading to more accurate and dependable results. In this research, an artificial neural network model with a correlation coefficient of 0.9064 has been used to model the relationship between important project parameters and risk variables. This information was extracted from sensitivity analysis and Monte Carlo simulation results obtained from conventional spreadsheet data. The resulting consensus would yield fair contractual agreements for both the government and the concession company.
First published online: 01 Jul 201
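A rough sketch of the general workflow described (Monte Carlo sampling of risk variables feeding a small neural-network surrogate of project viability) is given below; the cash-flow model, variable distributions, and network size are assumptions, not the study's spreadsheet model or its trained network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 2000

# Monte Carlo samples of illustrative BOT risk variables (not the study's data):
# construction cost overrun, demand level, and discount rate.
overrun = rng.normal(1.0, 0.15, n)                        # cost multiplier
demand  = rng.lognormal(mean=0.0, sigma=0.25, size=n)     # demand multiplier
rate    = rng.uniform(0.04, 0.12, n)                      # discount rate

def npv(overrun, demand, rate, capex=100.0, annual_revenue=18.0, years=20):
    """Toy concession cash-flow model used only to generate training targets."""
    t = np.arange(1, years + 1)
    discounted = (annual_revenue * demand[:, None]) / (1 + rate[:, None]) ** t
    return discounted.sum(axis=1) - capex * overrun

X = np.column_stack([overrun, demand, rate])
y = npv(overrun, demand, rate)

# Small ANN surrogate relating risk variables to project viability.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)
print("surrogate R^2 on training samples:", model.score(X, y))
```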
A quantum-inspired sensor consolidation measurement approach for cyber-physical systems.
Cyber-Physical System (CPS) devices interconnect over a common platform to gather data from industrial applications. Maintaining immense volumes of data and making instant decisions by selecting a feasible node that meets latency constraints is challenging. To address this issue, we design a quantum-inspired online node consolidation (QONC) algorithm based on a time-sensitive measurement reinforcement system that decides on a feasible node, ensures reliable service, and deploys the node at the appropriate position for accurate data computation and communication. We design an angular node-position analysis method to localize the node through rotation and T-gates, mitigating latency and enhancing system performance. We formalize the estimation and selection of the feasible node based on quantum-formalized node parameters (node contiguity, node optimal knack rate, node heterogeneity, and probability of fusion variance error ratio). We design a fitness function to assess the probability of node fitness before selection. The simulation results show that our approach achieves an effective performance index, reducing the average error ratio from 0.17-0.22, increasing the average coverage ratio from 29% to 42%, and improving the qualitative execution frequency of services. Moreover, the proposed model achieves 74.3% offloading reduction accuracy and a 70.2% service reliability rate compared to state-of-the-art approaches. Our system is scalable and efficient under numerous simulation frameworks.
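At its simplest, selecting a node by a fitness score over its parameters can be sketched as below; the attribute names, weights, and values are illustrative assumptions and do not reproduce the quantum formalization used by QONC.

```python
import numpy as np

# Illustrative candidate nodes with normalized attributes in [0, 1]
# (contiguity, knack rate, heterogeneity, fusion-error probability).
# These values and weights are assumptions, not the QONC parameters.
nodes = {
    "n1": dict(contiguity=0.8, knack=0.6, heterogeneity=0.4, fusion_err=0.10),
    "n2": dict(contiguity=0.5, knack=0.9, heterogeneity=0.7, fusion_err=0.05),
    "n3": dict(contiguity=0.9, knack=0.4, heterogeneity=0.3, fusion_err=0.20),
}
weights = dict(contiguity=0.3, knack=0.3, heterogeneity=0.2, fusion_err=0.2)

def fitness(attrs):
    """Higher is better; the fusion-error probability counts against a node."""
    return (weights["contiguity"] * attrs["contiguity"]
            + weights["knack"] * attrs["knack"]
            + weights["heterogeneity"] * attrs["heterogeneity"]
            - weights["fusion_err"] * attrs["fusion_err"])

best = max(nodes, key=lambda k: fitness(nodes[k]))
print("selected node:", best, "fitness:", round(fitness(nodes[best]), 3))
```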
A Comprehensive Bibliometric Analysis on Social Network Anonymization: Current Approaches and Future Directions
In recent decades, social network anonymization has become a crucial research
field due to its pivotal role in preserving users' privacy. However, the high
diversity of approaches introduced in relevant studies poses a challenge to
gaining a profound understanding of the field. In response to this, the current
study presents an exhaustive and well-structured bibliometric analysis of the
social network anonymization field. To begin our research, related studies from
the period of 2007-2022 were collected from the Scopus database and then
pre-processed. Following this, VOSviewer was used to visualize the network
of authors' keywords. Subsequently, extensive statistical and network analyses
were performed to identify the most prominent keywords and trending topics.
Additionally, the application of co-word analysis through SciMAT and the
Alluvial diagram allowed us to explore the themes of social network
anonymization and scrutinize their evolution over time. These analyses
culminated in an innovative taxonomy of the existing approaches and
anticipation of potential trends in this domain. To the best of our knowledge,
this is the first bibliometric analysis in the social network anonymization
field, which offers a deeper understanding of the current state and an
insightful roadmap for future research in this domain.
Comment: 73 pages, 28 figures
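The co-word analysis mentioned above rests on counting keyword co-occurrences across papers; a minimal sketch with invented keyword lists is given below, whereas VOSviewer and SciMAT build such maps from records exported from Scopus.

```python
from collections import Counter
from itertools import combinations

# Author-keyword lists of a few hypothetical papers (illustrative only).
papers = [
    ["social network", "anonymization", "k-anonymity"],
    ["anonymization", "differential privacy", "graph"],
    ["social network", "graph", "k-anonymity"],
]

keyword_freq = Counter(k for kws in papers for k in kws)
cooccurrence = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooccurrence[(a, b)] += 1          # edge weight in the co-word network

print(keyword_freq.most_common(3))
print(cooccurrence.most_common(3))
```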