Assessing the Reliability of Artificial Intelligence Systems: Challenges, Metrics, and Future Directions
Purpose: As artificial intelligence (AI) systems become integral to diverse applications, ensuring their reliability is of paramount importance. This paper explores the multifaceted landscape of AI reliability, encompassing challenges, evaluation metrics, and prospective advancements.
Methodology: This paper employs a comprehensive literature review approach to assess the existing body of knowledge on the reliability of AI systems. The review aims to synthesize insights into the challenges faced in evaluating AI reliability, the metrics used for assessment, and the potential future directions in this critical research domain.
Findings: In this paper, challenges in AI reliability assessment, including explainability, data quality, and susceptibility to adversarial attacks, are scrutinized. Metrics for evaluating AI reliability, such as robustness, accuracy, precision, and explainability, are also elucidated. In addition, case studies illustrate instances where AI reliability has been successfully assessed or has fallen short, offering valuable insights.
Originality/value: This paper sheds light on the complexities surrounding the assessment of AI reliability and contributes to the ongoing discourse by providing a comprehensive examination of its challenges, metrics, and future trajectories.
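As a minimal, hypothetical illustration of the kind of point metrics surveyed above (not code from the paper), the sketch below computes accuracy, precision, and a crude perturbation-based robustness score for a generic classifier; the `model.predict` interface and the Gaussian noise scale are assumptions.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return float(np.mean(y_true == y_pred))

def precision(y_true, y_pred, positive=1):
    """Of all samples predicted positive, the fraction that truly are positive."""
    predicted_pos = y_pred == positive
    if predicted_pos.sum() == 0:
        return 0.0
    return float(np.mean(y_true[predicted_pos] == positive))

def robustness(model, X, y, noise_scale=0.05, trials=10, seed=None):
    """Share of originally correct predictions that survive small Gaussian
    input perturbations; a crude proxy for robustness, used here for
    illustration only."""
    rng = np.random.default_rng(seed)
    base_correct = model.predict(X) == y
    survived = np.ones(len(X), dtype=bool)
    for _ in range(trials):
        X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        survived &= (model.predict(X_noisy) == y)
    return float(np.mean(survived[base_correct])) if base_correct.any() else 0.0
```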
Markowitz-based cardinality constrained portfolio selection using Asexual Reproduction Optimization (ARO)
The Markowitz-based portfolio selection problem becomes NP-hard when cardinality constraints are considered. In this case, exact methods such as quadratic programming may not solve the problem efficiently, so many researchers have turned to heuristic and metaheuristic approaches. This work applies Asexual Reproduction Optimization (ARO), a model-free metaheuristic inspired by asexual reproduction, to the portfolio optimization problem with a cardinality constraint, which fixes the number of different assets held, and a bounding constraint, which limits the proportion of the fund invested in each asset. This is the first time this relatively new metaheuristic has been applied to portfolio optimization, and we show that ARO yields better-quality solutions than several well-known metaheuristics reported in the literature. To validate the proposed algorithm, we measured the deviation of the obtained results from the standard efficient frontier. We report computational results on a set of publicly available benchmark test problems derived from five main market indices containing 31, 85, 89, 98, and 225 assets, and use them to compare the efficiency of the proposed method against existing metaheuristic solutions. The experimental results indicate that ARO outperforms the Genetic Algorithm (GA), Tabu Search (TS), Simulated Annealing (SA), and Particle Swarm Optimization (PSO) on most test problems. In terms of the obtained error, ARO reduces the average error on these test problems by approximately 20 percent of the minimum average error achieved by the above-mentioned algorithms.
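To make the cardinality-constrained formulation above concrete, the sketch below evaluates a weighted Markowitz objective and repairs a candidate weight vector so that it holds exactly k assets within given bounds. This is a generic illustration with assumed bound values, not the authors' ARO implementation or their exact repair scheme.

```python
import numpy as np

def repair(raw_weights, k, lo=0.01, hi=1.0):
    """Heuristically project a raw weight vector onto the constraints:
    hold only the k largest weights, clip each held weight to [lo, hi],
    and renormalise so the weights sum to one (renormalisation may move
    a weight slightly past hi; a full repair would iterate)."""
    w = np.zeros_like(raw_weights, dtype=float)
    held = np.argsort(raw_weights)[-k:]        # indices of the k assets to hold
    w[held] = np.clip(raw_weights[held], lo, hi)
    return w / w.sum()

def objective(weights, mu, cov, risk_aversion=0.5):
    """Weighted Markowitz objective (lower is better): trade portfolio
    variance off against expected return."""
    return risk_aversion * (weights @ cov @ weights) \
        - (1.0 - risk_aversion) * (mu @ weights)
```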
Credit Card Fraud Detection Using Asexual Reproduction Optimization
As the number of credit card users has increased, detecting fraud in this domain has become a vital issue. Previous literature has applied various supervised and unsupervised machine learning methods to find an effective fraud detection system. However, some of these methods require an enormous amount of time to achieve reasonable accuracy. In this paper, an Asexual Reproduction Optimization (ARO) approach, a supervised method, is employed to detect credit card fraud. ARO refers to a kind of reproduction in which a single parent produces offspring. By applying this method and sampling only from the majority class, the effectiveness of the classification is increased. A comparison with Artificial Immune Systems (AIS), one of the best methods implemented on current datasets, shows that the proposed method remarkably reduces the required training time while increasing recall, which is important in fraud detection problems. The obtained results show that ARO achieves the best cost in a short time, and consequently it can be considered a real-time fraud detection system.
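Since the comparison above hinges on recall, the short sketch below shows how recall is computed and why it is the emphasised metric in fraud detection; the labels are made up for illustration and are not from the credit card dataset used in the paper.

```python
import numpy as np

def recall(y_true, y_pred, fraud_label=1):
    """Recall = detected frauds / all actual frauds. A missed fraud (false
    negative) is usually far costlier than a false alarm, which is why
    recall matters more than raw accuracy here."""
    actual_fraud = y_true == fraud_label
    if actual_fraud.sum() == 0:
        return 0.0
    return float(np.mean(y_pred[actual_fraud] == fraud_label))

# Illustrative labels only: 1 = fraud, 0 = legitimate.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([0, 0, 1, 0, 0, 1, 0, 1, 1, 0])
print(recall(y_true, y_pred))   # 0.75: three of the four frauds are caught
```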
Startup’s critical failure factors dynamic modeling using FCM
The emergence of startups and their influence on a country's economic growth has become a significant concern for governments. The failure of these ventures leads to substantial depletion of financial resources and workforce, with detrimental effects on a country's economic climate. At various stages of a startup's lifecycle, numerous factors can affect its growth and lead to failure. Scholars have primarily directed their attention toward studying the successes of these ventures, and a review of previous research on critical failure factors (CFFs) reveals a dearth of studies that comprehensively investigate all failure factors and their interdependent influences. This study investigates and categorizes the failure factors across the stages of a startup's life cycle to provide deeper insight into how they might interact and reinforce one another. Drawing on expert perspectives, the authors construct fuzzy cognitive maps (FCMs) to visualize the CFFs within entrepreneurial ventures and examine these factors' influence across the four growth stages of a venture. The primary aim is to construct a model that captures the complexities and uncertainties surrounding startup failure, unveiling the concealed interconnections among CFFs. The FCM model enables entrepreneurs to anticipate potential failures under diverse scenarios based on the dynamic behavior of these factors, and equips entrepreneurs and decision-makers with a comprehensive understanding of the collective influence exerted by various factors on the failure of entrepreneurial ventures.
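For readers unfamiliar with FCMs, the sketch below shows the standard synchronous update rule that such maps iterate; the three-concept weight matrix is hypothetical and does not reproduce the experts' map elicited in the study.

```python
import numpy as np

def fcm_step(state, weights, lam=1.0):
    """One synchronous FCM update: each concept adds the weighted activations
    of its causes and is squashed back into (0, 1) by a sigmoid.
    weights[j, i] is the causal influence of concept j on concept i."""
    return 1.0 / (1.0 + np.exp(-lam * (state + state @ weights)))

def simulate(state, weights, steps=100, tol=1e-5):
    """Iterate until the concept activations settle or the step budget runs out."""
    for _ in range(steps):
        new_state = fcm_step(state, weights)
        if np.max(np.abs(new_state - state)) < tol:
            break
        state = new_state
    return state

# Hypothetical 3-concept map: 0 = funding shortage, 1 = team conflict,
# 2 = failure risk (illustrative weights, not the elicited ones).
W = np.array([[0.0, 0.3, 0.6],
              [0.2, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
print(simulate(np.array([0.8, 0.4, 0.1]), W))
```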
Global burden of chronic respiratory diseases and risk factors, 1990–2019: an update from the Global Burden of Disease Study 2019
Background: Updated data on chronic respiratory diseases (CRDs) are vital to their prevention, control, and treatment on the path to achieving the third UN Sustainable Development Goal (SDG), a one-third reduction in premature mortality from non-communicable diseases by 2030. We provide global, regional, and national estimates of the burden of CRDs and their attributable risks from 1990 to 2019. Methods: Using data from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, we estimated mortality, years lived with disability, years of life lost, disability-adjusted life years (DALYs), prevalence, and incidence of CRDs, i.e. chronic obstructive pulmonary disease (COPD), asthma, pneumoconiosis, interstitial lung disease and pulmonary sarcoidosis, and other CRDs, from 1990 to 2019 by sex, age, region, and Socio-demographic Index (SDI) in 204 countries and territories. Deaths and DALYs from CRDs attributable to each risk factor were estimated according to relative risks, risk exposure, and the theoretical minimum risk exposure level. Findings: In 2019, CRDs were the third leading cause of death, responsible for 4.0 million deaths (95% uncertainty interval 3.6–4.3), with a prevalence of 454.6 million cases (417.4–499.1) globally. While total deaths and prevalence of CRDs increased by 28.5% and 39.8%, the age-standardised rates dropped by 41.7% and 16.9%, respectively, from 1990 to 2019. COPD, with 212.3 million (200.4–225.1) prevalent cases, was the primary cause of deaths from CRDs, accounting for 3.3 million (2.9–3.6) deaths. With 262.4 million (224.1–309.5) prevalent cases, asthma had the highest prevalence among CRDs. The age-standardised rates of all burden measures of COPD, asthma, and pneumoconiosis decreased globally from 1990 to 2019. Nevertheless, the age-standardised incidence and prevalence rates of interstitial lung disease and pulmonary sarcoidosis increased throughout this period. Low and low-middle SDI countries had the highest age-standardised death and DALY rates, while the high SDI quintile had the highest prevalence rate of CRDs. The highest deaths and DALYs from CRDs were attributable to smoking globally, followed by air pollution and occupational risks. Non-optimal temperature and high body-mass index were additional risk factors for COPD and asthma, respectively. Interpretation: Although the age-standardised prevalence, death, and DALY rates of CRDs have decreased, these diseases still cause a substantial burden and many deaths worldwide. The high death and DALY rates in low and low-middle SDI countries highlight the urgent need for improved preventive, diagnostic, and therapeutic measures. Global strategies for tobacco control, enhanced air quality, reduced occupational hazards, and clean cooking fuels are crucial steps in reducing the burden of CRDs, especially in low- and lower-middle-income countries.
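The attributable-burden step described in the Methods follows the comparative risk assessment logic of a population attributable fraction (PAF); the sketch below shows the standard categorical PAF formula with purely illustrative exposure shares and relative risks, not GBD 2019 inputs.

```python
import numpy as np

def attributable_deaths(total_deaths, exposure_share, relative_risk, tmrel_share):
    """Categorical PAF: (sum(p_i * RR_i) - sum(p*_i * RR_i)) / sum(p_i * RR_i),
    where p is the observed exposure distribution and p* the theoretical
    minimum risk (counterfactual) distribution."""
    observed = np.sum(exposure_share * relative_risk)
    counterfactual = np.sum(tmrel_share * relative_risk)
    paf = (observed - counterfactual) / observed
    return paf * total_deaths

# Illustrative smoking categories: never / former / current (made-up values).
rr = np.array([1.0, 1.8, 3.0])
p = np.array([0.55, 0.20, 0.25])
p_star = np.array([1.0, 0.0, 0.0])          # counterfactual: no exposure
print(attributable_deaths(4.0e6, p, rr, p_star))
```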
Regional and Optimal Estimation of Total Electron Content using Pseudorange Observations
Total Electron Content (TEC) is one of the most important quantities for monitoring the variable structure of the ionosphere, and the Global Positioning System (GPS) is a useful and affordable instrument for estimating TEC through ground receivers. In this research, Vertical Total Electron Content (VTEC) is calculated at a GPS station using code observations, and an approach is introduced for precise, local modeling of this quantity. A geometry-free combination of the P1 and P2 observables is formed, and the TEC for every satellite in an epoch is obtained from this combination together with the Differential Code Biases (DCBs) of the satellite and receiver. The resulting parameter represents the total electron content along the signal propagation path through the ionosphere. A mapping function is then used to transform TEC to VTEC; among the available mapping functions, of which the geometric and empirical mapping functions are common examples, the geometric mapping function is adopted to increase precision and reduce systematic errors. After the VTEC is calculated for every satellite, the VTEC in the zenith direction of the station must be obtained. For this purpose, a weighting function related inversely to the satellite elevation angle is used, which provides an optimal and precise formula for calculating the VTEC in the zenith direction of the station. To assess the accuracy of the calculations, all results are compared with the VTEC grids of the International GNSS Service (IGS), and conclusions are drawn for each interpolation method, such as the weighted average, normal average, and nearest vertex. In other words, the IGS ionospheric products are taken as the accurate and precise reference VTEC. Since IGS VTECs are produced on a grid, calculating the VTEC at a specific point requires mathematical interpolation, such as a weighted average of the VTECs at the vertices surrounding the point. The results show that the VTEC calculated with the proposed approach agrees well with the weighted average of the VTECs around the Ankara grid station; further results are illustrated in diagrams. In addition, the periodic behavior of the ionosphere at different times is modeled, and the method is improved for optimal estimation of VTEC at various times. The main limitation is the local nature of this method, which is applicable only within one cell of the IGS ionospheric grid.
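A compact sketch of the pipeline described above, using the standard GPS constants: slant TEC from the geometry-free P2 - P1 combination corrected for DCBs, projection to vertical with a single-layer geometric mapping function, and an elevation-dependent weighted average at the station. The sin^2(elevation) weighting and the 450 km shell height are assumptions for illustration, not the paper's proposed weighting function.

```python
import numpy as np

C = 299_792_458.0                  # speed of light, m/s
F1, F2 = 1.57542e9, 1.2276e9       # GPS L1 and L2 carrier frequencies, Hz
RE, H = 6_371e3, 450e3             # Earth radius and assumed shell height, m

def slant_tec(p1, p2, dcb_sat, dcb_rec):
    """Slant TEC in TECU from the geometry-free code combination P2 - P1,
    corrected for satellite and receiver differential code biases (seconds)."""
    p4 = p2 - p1 - C * (dcb_sat + dcb_rec)
    stec = p4 * F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))   # electrons/m^2
    return stec / 1e16                                      # -> TECU

def to_vertical(stec, elevation):
    """Single-layer geometric mapping: project slant TEC to the vertical."""
    z = np.pi / 2 - elevation                  # zenith angle at the receiver
    sin_zp = RE / (RE + H) * np.sin(z)         # zenith angle at the pierce point
    return stec * np.sqrt(1.0 - sin_zp**2)

def station_vtec(stecs, elevations):
    """Weighted station VTEC; sin^2(elevation) weights are an assumption here."""
    elev = np.asarray(elevations)
    vtec = to_vertical(np.asarray(stecs), elev)
    w = np.sin(elev) ** 2
    return np.sum(w * vtec) / np.sum(w)
```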
Dynamic optimization of acetylene hydrogenation reactors with considering catalyst deactivation
Ethylene is a very important material in the petrochemical industries, whose chief application is producing polymers. Ethylene is usually produced by the steam cracking of naphtha or ethane, and a small amount of acetylene is generated in this process. The amount of acetylene in the product stream must not exceed 1 ppm, because acetylene is harmful to polymerization catalysts in downstream units. The acetylene hydrogenation unit is designed to remove acetylene in industrial plants; in this unit, removing acetylene down to 1 ppm in the product stream while preserving ethylene selectivity is of great importance. In this paper, the dynamic optimization of the acetylene hydrogenation reactors of the Marun petrochemical complex, taking catalyst deactivation into account, is presented. The differential evolution (DE) method is used as a powerful tool for determining a dynamic optimal temperature profile that maximizes ethylene selectivity over a period of 720 operating days. The optimal results are then compared with the case in which the inlet temperatures of the reactors are kept constant at 55 °C and the case in which they increase linearly from 55 to 90 °C. The results show that when the inlet temperatures are kept at 55 °C, the outlet acetylene exceeds 1 ppm but the best selectivity is achieved; with a linear increase in the inlet temperatures, the outlet acetylene stays below 1 ppm but the selectivity decreases. The optimal temperature profile maximizes the selectivity while keeping the outlet acetylene below 1 ppm.
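The sketch below illustrates the kind of dynamic optimization described above: differential evolution (via SciPy) searches a discretised inlet-temperature profile, with a penalty whenever outlet acetylene exceeds 1 ppm. The reactor model is a crude toy surrogate standing in for the plant simulation with catalyst deactivation, and the segment count and penalty weight are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

N_SEGMENTS = 12                      # 720 days split into 60-day segments (assumed)

def reactor_model(temps_c):
    """Toy surrogate, not the Marun plant model: higher inlet temperature
    converts more acetylene but erodes ethylene selectivity, and catalyst
    deactivation raises acetylene slip in later segments."""
    t = np.asarray(temps_c)
    deactivation = np.linspace(0.0, 1.0, t.size)
    acetylene_ppm = 5.0 * np.exp(-(t - 55.0) / 10.0) * (1.0 + deactivation)
    selectivity = float(np.mean(1.0 - 0.004 * (t - 55.0)))
    return selectivity, acetylene_ppm

def cost(temps_c, penalty=1e3):
    """DE minimises, so negate selectivity and penalise any 1 ppm violation."""
    selectivity, acetylene_ppm = reactor_model(temps_c)
    violation = np.maximum(acetylene_ppm - 1.0, 0.0).sum()
    return -selectivity + penalty * violation

bounds = [(55.0, 90.0)] * N_SEGMENTS          # inlet temperature limits, deg C
result = differential_evolution(cost, bounds, seed=1)
print(np.round(result.x, 1))                  # optimised temperature profile
```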
Using of anabolic steroids and its association with mental health and body image concept in men referring to sports clubs in Iran
Background: Anabolic steroid abuse driven by a distorted body image can impose irreversible side effects on athletes' physical health as well as their mental health. The purpose of this study was to determine the prevalence of anabolic steroid use and its association with mental health and body image perception in men attending sports clubs. Methods: 192 athletes were recruited from Shahrekord sports clubs using stratified multi-stage sampling. Data were collected with a demographic questionnaire, the Body Image Concern Inventory (BICI), and the General Health Questionnaire (GHQ). All statistical analyses were conducted using SPSS software. Results: In total, 32.8% of the subjects had experience of using the drug in the past, the present, or both, and 25.5% of all subjects were using the drug at the time of data collection. Significant differences were observed between drug users and non-users in desired weight, the Littleton score, and the GHQ scales of anxiety, depression, and social aspects (p<0.05). The prevalence of anabolic steroid use was significantly higher among professional athletes. The routes of administration were injection (40.8%), oral and injection (34.7%), and oral (24.5%). 46.9% of the participants used steroids to gain muscle strength and 6.1% used them to increase their total body strength. The most commonly used drug was Dianabol. Conclusion: Although exercise should improve mental health, body image concerns, consumption of anabolic steroids, and the competitive atmosphere among athletes cause disturbances in mental and physical health.