
    Cortical Characteristics of Listening Effort


    The impact of anti-tumor necrosis factor alpha drugs on lipid profile of patients with rheumatoid arthritis or seronegative spondyloarthropathy

    Background: Inflammatory arthritis is associated with abnormal lipoprotein levels, and inflammation is considered the cause. The use of biologic drugs in the treatment of inflammatory arthritis, especially rheumatoid arthritis (RA) and seronegative spondyloarthropathy, has recently increased, and studies of the effect of these drugs on the lipid profile have reported conflicting results. This study evaluated the impact of anti-tumor necrosis factor (TNF) alpha drugs on the lipid profile of patients with RA or seronegative spondyloarthropathy. Methods: In this cross-sectional descriptive study, 50 patients with RA or seronegative spondyloarthropathy who were candidates for anti-TNF alpha treatment were included. After written consent was obtained, a checklist was completed for each patient covering demographic information (age, sex, height, weight, place of residence, level of education, and type of disease) and baseline lipid profile results, including total cholesterol, triglycerides (TG), low-density lipoprotein (LDL), and high-density lipoprotein (HDL). The patients were then treated, and the lipid profile was re-evaluated 3, 6, and 9 months after starting the treatment regimen; results were recorded in the checklists. The data were analyzed in the Statistical Package for the Social Sciences (SPSS) version 24. Results: The mean age of the patients was 46.38±14.33 years; 54% were female, 54% had RA, and 62% were treated with Sinora. Serum triglyceride levels increased significantly over the study period, whereas the trends in serum cholesterol, HDL, and LDL levels were not statistically significant. However, serum LDL measured at the ninth month was significantly higher than at baseline. Conclusions: There was a significant relationship between lipid profile changes and anti-TNF alpha use. Although decreased inflammation may contribute, other mechanisms may also be involved in the dyslipidemia.
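    A minimal sketch of the kind of repeated-measures trend analysis the abstract reports (the study used SPSS; here the non-parametric Friedman test stands in, and the triglyceride values are invented placeholders, not the study's data):

```python
from scipy import stats

# Rows: patients; columns: TG (mg/dL) at baseline, 3, 6, and 9 months.
# All numbers are hypothetical, for illustration only.
tg = [
    [150, 162, 171, 180],
    [130, 135, 149, 155],
    [175, 172, 190, 198],
    [142, 151, 158, 166],
    [160, 158, 173, 181],
]

# Friedman test: non-parametric check for a change across the four time points.
baseline, m3, m6, m9 = zip(*tg)
stat, p = stats.friedmanchisquare(baseline, m3, m6, m9)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```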

    Comparison of urinary and plasma ketone using urinary nitroprusside strip in patients with diabetic ketoacidosis

    Background: Diabetic ketoacidosis is one of the most serious acute complications of diabetes and a medical emergency that remains a common cause of death in patients with diabetes. Prompt diagnosis and therapeutic intervention play an important role in reducing complications and mortality. The aim of this study was to compare urinary and plasma ketones using a urinary nitroprusside strip in patients with diabetic ketoacidosis. Methods: In this cross-sectional study, 38 patients with diabetic ketoacidosis admitted to the emergency department of Imam Khomeini hospital in Ardabil city during 2017 and 2018 were included. To test for plasma ketones, 2 cc of venous blood was taken and transferred to the laboratory for plasma isolation; the resulting plasma was examined with a urine dipstick and the discoloration recorded. This was repeated at 0, 6, and 12 hours. All patients were treated according to the diabetic ketoacidosis protocol, and urinary ketones, pH, bicarbonate, and base excess (BE) were measured routinely. Results: Serum ketones were positive in all patients, and 34 patients had positive urinary ketones. Serum ketone levels correlated significantly with blood pH at baseline and with bicarbonate and base excess at all three time points. Urinary ketones, by contrast, correlated significantly with blood pH at baseline and at 12 hours, with bicarbonate at baseline, and with base excess at 12 hours. Conclusions: The results showed that examination of plasma ketones with a dipstick can be a useful, rapid, and accurate clinical test for the diagnosis of diabetic ketoacidosis in patients with diabetes.
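    Because dipstick ketone readings are ordinal grades (negative to +++), the correlations the abstract reports can be sketched with a rank correlation. The values below are invented placeholders, not the study's measurements:

```python
from scipy import stats

# Hypothetical ordinal dipstick grades (0 = negative ... 3 = +++) and blood pH.
ketone_grade = [3, 2, 3, 1, 2, 3, 2, 1]
blood_ph     = [7.05, 7.18, 7.02, 7.28, 7.15, 7.00, 7.20, 7.30]

# Spearman's rho handles the ordinal grades without assuming normality.
rho, p = stats.spearmanr(ketone_grade, blood_ph)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```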

    Frequency of pes anserine bursitis in patients with knee osteoarthritis

    Background: Knee pain is very common in the elderly and is often attributed to osteoarthritis (OA), but pes anserine bursitis can also cause knee pain, especially in OA patients. The aim of this study was to determine the frequency of pes anserine bursitis in patients with knee osteoarthritis. Methods: This cross-sectional descriptive study included 245 patients with definite knee osteoarthritis referred to Imam Khomeini Hospital in Ardabil city from September 2020 to December 2021. All patients underwent ultrasonography to diagnose pes anserine bursitis. Osteoarthritis was divided into four grades based on the Kellgren-Lawrence radiographic classification, and the severity of knee pain was determined with the Visual Analogue Scale (VAS). The necessary data were collected by checklist and then analysed in SPSS version 22. Results: In total, 175 of the 245 patients (71%) were diagnosed with pes anserine bursitis. There was a significant relationship between age, gender, BMI, and the stage of OA and the risk of pes anserine bursitis. Conclusions: The high prevalence of pes anserine bursitis in the patients studied shows the importance of this condition and the need for physicians to consider it as a cause of knee pain. Since clinical examination has considerable diagnostic power in detecting pes anserine bursitis, special attention should be paid to it. Attention to controllable risk factors common in the country, such as obesity and toilet habits, may also allow prevention.
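    A hedged sketch of the association test implied by the reported relationship between osteoarthritis stage and bursitis risk; the contingency-table counts are invented placeholders, not the study's data:

```python
from scipy import stats

# Hypothetical counts of bursitis by Kellgren-Lawrence grade.
#                 KL-1  KL-2  KL-3  KL-4
bursitis_yes = [   20,   45,   60,   50]
bursitis_no  = [   25,   25,   12,    8]

# Chi-square test of independence between OA grade and bursitis status.
chi2, p, dof, expected = stats.chi2_contingency([bursitis_yes, bursitis_no])
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```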

    The Effect of the Intervertebral Disc Nucleus Pulposus on Finite Element Modeling of the Spine

    BACKGROUND: The spine of an adult is made up of five regions: 7 cervical vertebrae, 12 thoracic vertebrae, 5 lumbar vertebrae, the sacrum, and the coccyx. Selecting appropriate assumptions for modeling and biological analysis of the spine components has a significant impact on the accuracy of biomechanical simulation results for different modes. METHODS: In the present study, a biomechanical analysis of the spine was performed using finite element simulation. The dimensional characteristics of an individual's spine components were obtained, and the spine was then modeled with the intervertebral discs treated in two modes, one-piece and two-piece (annulus and nucleus sections), giving two separate models. Gravity due to the weight of the spine (gravitational acceleration of 9800 mm/s² in the millimetre-based unit system) was applied to the model, and the stress, displacement, and change of angle between the vertebrae were obtained. RESULTS: The maximum displacement, stress, and change of angle between the vertebrae were 0.254, 0.197, and -0.083, respectively, for the model with one-piece discs, and 0.399, 0.205, and 0.021, respectively, for the model with two-piece discs. CONCLUSION: The results show no significant difference between the one-piece and two-piece disc assumptions when examining stress, but the two-piece disc model is more appropriate when examining displacement and the change of angle between the vertebrae.
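    The gravity load quoted above makes sense as an acceleration in a millimetre-based unit system (mm, N, MPa, tonne), where g ≈ 9.8 m/s² = 9800 mm/s². A quick consistency check, with an illustrative density and volume that are assumptions, not the study's values:

```python
# Consistent mm-N-MPa-tonne unit check for a gravity body load.
# Density and volume below are illustrative assumptions, not the study's data.
density_t_mm3 = 1.0e-9    # ~1000 kg/m^3 (soft-tissue-like) in tonne/mm^3
volume_mm3 = 40_000.0     # a roughly vertebra-sized 40 cm^3 block
g_mm_s2 = 9800.0          # g = 9.8 m/s^2 expressed in mm/s^2

# tonne/mm^3 * mm^3 * mm/s^2 = tonne*mm/s^2 = 1 N, so the result is in newtons.
weight_N = density_t_mm3 * volume_mm3 * g_mm_s2
print(f"self-weight body force ~ {weight_N:.3f} N")   # ~0.392 N, i.e. ~40 g
```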

    Investigating the Relationship between Manager and Shareholder Using Game Theory: Applying Accounting Conservatism and Financial Reporting Quality

    The decision-making of managers in today's organizations is crucial due to the increased complexity of internal and external factors and increased competition among organizations. Game theory attempts to model the mathematics of strategic situations, which arise when the success of one side of the game depends on the strategies selected by the other side. The present study aims at finding a strategy that balances and maximizes the interests of managers and shareholders by applying the strategic characteristics of accounting information and accounting conservatism. The statistical population of the study included 132 companies listed on the Tehran Stock Exchange over a period of seven years (2012-2018). The data analysis method is inferential; SPSS software was used to prepare the data and estimate the models, and a pooled data model was used to test the research hypotheses. The results show that the strategy combinations manager low reporting quality-shareholder low reporting quality (m1, s1), manager high reporting quality-shareholder high reporting quality (m2, s2), manager low conservatism-shareholder low conservatism (m3, s3), and manager high conservatism-shareholder high conservatism (m4, s4) were selected as pure Nash equilibria. The results show that game theory plays a major role in the relationship between managers and shareholders, and finding the equilibrium points of the game can play an effective role in the decisions of the parties (managers and shareholders), informing each party of the strategy with the highest utility for it.
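    A minimal sketch of how such pure-strategy Nash equilibria can be located in a two-player game. The payoff numbers are invented placeholders (the study estimates its payoffs from Tehran Stock Exchange data); with these assumed payoffs, the matched low-low and high-high profiles emerge as the equilibria, mirroring the pattern reported above:

```python
# Strategies: 0 = low reporting quality / conservatism, 1 = high.
# Payoffs are hypothetical, chosen only to illustrate the computation.
manager_payoff = [
    [3, 1],   # manager plays low: payoff vs shareholder low / high
    [2, 4],   # manager plays high
]
shareholder_payoff = [
    [3, 2],
    [1, 4],
]

def pure_nash(pay_m, pay_s):
    """Return all (manager, shareholder) strategy pairs where neither side
    gains by deviating unilaterally."""
    eq = []
    for m in range(2):
        for s in range(2):
            best_m = all(pay_m[m][s] >= pay_m[m2][s] for m2 in range(2))
            best_s = all(pay_s[m][s] >= pay_s[m][s2] for s2 in range(2))
            if best_m and best_s:
                eq.append((m, s))
    return eq

print(pure_nash(manager_payoff, shareholder_payoff))   # [(0, 0), (1, 1)]
```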

    Green synthesis of ZnO nanoparticles using the aqueous extract of Euphorbia petiolata and study of its stability and antibacterial properties

    In this study, the biosynthesis, identification, stability, and antibacterial activity of zinc oxide (ZnO) nanoparticles (NPs) were investigated for the first time using an aqueous extract of Euphorbia petiolata Banks as a reducing and stabilizing medium in a simple and green method. Interaction of the plant extract with an aqueous zinc nitrate solution, followed by oxidation via annealing, efficiently reduced the Zn ions and formed zinc oxide nanoparticles. The stability, purity, and crystalline nature of the green-synthesized nanoparticles were demonstrated using UV-vis spectroscopy, EDS, and XRD, respectively. A scheme of the possible mechanism leading to the formation of the NPs is also illustrated. Moreover, the green-synthesized nanoparticles showed efficient antibacterial activity against Escherichia coli compared with the plant extract and with chloramphenicol as positive control.

    ‘Optimulation’ in Chemical Reaction Engineering: The Oxidative Coupling of Methane as a Case Study

    The optimization of reacting systems, including chemical, biological, and macromolecular reactions, is of great importance from both theoretical and practical standpoints. Several classical deterministic and stochastic modeling and simulation approaches have been routinely used to understand and control reacting systems from lab to industrial scales, but almost all tackle the same problem: how to predict reaction outputs from a given set of reaction input variables. The development and application of an effective and versatile mathematical tool capable of connecting preset desired reaction outputs to the corresponding inputs has long been the ideal goal for experts in the field. Hence, there is a clear need to predict a priori the optimum reaction conditions in a computationally demanding multi-variable space, both to keep chemical and biological reactions in optimal conditions and to satisfy preset targets. As a novel and powerful solution, we introduce a robust computational tool capable of simultaneously simulating and optimizing, i.e. 'optimulating', intricate chemical, biological, and macromolecular reactions via the amalgamation of the kinetic Monte Carlo (KMC) simulation approach and the multi-objective version of genetic algorithms (NSGA-II). The synergistic interplay of KMC and NSGA-II for the optimulation of the oxidative coupling of methane (OCM), a challenging chemical reaction engineering system, clearly demonstrates the capabilities of the proposed method. The proposed hybrid technique can address a variety of unsolved optimization questions in chemical, biological, and macromolecular reaction engineering.
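    A toy, self-contained illustration of the optimulation loop under stated assumptions: a Gillespie-style KMC model of two competing methane channels, searched by a small Pareto-archive evolutionary loop. The two-reaction network, the Arrhenius constants, and the simplified optimizer (a stand-in for the paper's full OCM mechanism and for NSGA-II) are all illustrative, not the authors' actual model:

```python
import math
import random

def kmc_toy_ocm(temperature, n_ch4=400, t_end=200.0, rng=None):
    """Gillespie simulation of two competing channels:
    coupling  : 2 CH4 -> C2H6   (desired)
    combustion:   CH4 -> CO2    (undesired)
    Returns (conversion, selectivity to C2H6) at time t_end."""
    rng = rng or random.Random(0)
    k_couple = 2e-5 * math.exp(-4000.0 / temperature)  # assumed Arrhenius forms
    k_burn = 2e-1 * math.exp(-9000.0 / temperature)
    ch4, c2h6, t = n_ch4, 0, 0.0
    while ch4 > 1:
        r_couple = k_couple * ch4 * (ch4 - 1)
        r_burn = k_burn * ch4
        total = r_couple + r_burn
        t += -math.log(1.0 - rng.random()) / total     # Gillespie time step
        if t > t_end:
            break
        if rng.random() * total < r_couple:
            ch4 -= 2
            c2h6 += 1
        else:
            ch4 -= 1
    converted = n_ch4 - ch4
    selectivity = 2 * c2h6 / converted if converted else 0.0
    return converted / n_ch4, selectivity

def dominates(a, b):
    """Pareto dominance when both objectives are maximized."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def optimulate(generations=25, pop_size=12, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(900.0, 1300.0) for _ in range(pop_size)]  # temperatures, K
    archive = []  # non-dominated (temperature, (conversion, selectivity)) pairs
    for _ in range(generations):
        for temp in pop:
            obj = kmc_toy_ocm(temp, rng=rng)
            if not any(dominates(o, obj) for _, o in archive):
                archive = [(t, o) for t, o in archive if not dominates(obj, o)]
                archive.append((temp, obj))
        # next generation: mutate Pareto survivors within the search bounds
        pop = [min(1300.0, max(900.0, rng.choice(archive)[0] + rng.gauss(0.0, 25.0)))
               for _ in range(pop_size)]
    return sorted(archive)

for temp, (conv, sel) in optimulate():
    print(f"T = {temp:6.1f} K  conversion = {conv:.2f}  C2 selectivity = {sel:.2f}")
```

    The loop returns a small Pareto front trading methane conversion against C2 selectivity, which is the kind of preset-target-to-input mapping the optimulation idea formalizes.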

    Polymerization Data Mining: A Perspective

    This is the peer-reviewed version of: Mohammadi, Y. and Penlidis, A. (2019), Polymerization Data Mining: A Perspective, Adv. Theory Simul., 2: 1800144, https://doi.org/10.1002/adts.201800144.

    'Data mining' is widely promoted by data scientists as the most accepted and powerful approach to handling the information explosion. It is defined as the extraction of interesting patterns and knowledge from huge amounts of data, where 'interesting' means non-trivial, implicit, previously unknown, and potentially useful. Data mining projects generally comprise three essential steps: data pre-processing, processing, and post-processing. Pre-processing covers data cleaning, integration, transformation, and dimensionality reduction. Processing, the heart of every data mining project, applies powerful modeling and optimization techniques and yields knowledge discovery, the main outcome of data mining. Post-processing interprets, visualizes, and presents the processed outputs. The main functions of data mining are generalization, pattern discovery, classification, clustering, outlier analysis, time and ordering analysis (sequential patterns, trends, and evolution), and structure/network analysis. Data mining lies at the confluence of multiple disciplines, including statistics, visualization technology, high-performance computing, database technology, algorithm design, machine learning, and pattern recognition, and it has a wide variety of applications. Its rise is mostly due to (1) the tremendous amount of data being generated ('big data'), (2) the high dimensionality of data, (3) the high complexity of data, and (4) the emergence of new and sophisticated applications. Today, data mining is applied across a vast range of domains, such as web page analysis, market basket analysis, fraud and intrusion detection, banking, telecommunications, customer relationship management, bioinformatics, educational technology, software engineering, criminal investigation, medical and health systems, text analysis, voice recognition, social and information networks, and the analysis of large amounts of unstructured information in the oil and gas industry. Polymerization data mining, as in other disciplines, can be considered the measurement, collection, analysis, and reporting of data about polymerization systems for the purpose of understanding, controlling, and optimizing macromolecular reactions and the environments in which they occur. In effect, polymerization data mining is the effective and intelligent processing and analysis of the massive datasets frequently generated by polymerization systems. In general, in macromolecular reaction engineering projects, several polymerization recipes are first predefined using experimental design techniques; the polymerization processes are then performed separately for each recipe; and the produced macromolecules are analyzed with available experimental techniques to determine their micromolecular characteristics and final properties. The microstructure and architecture of the synthesized chains are precisely quantified by well-defined micromolecular indices, either as averages or as distributional properties, while the final properties, including chemical, physical, thermal, mechanical, optical, and/or biological properties, determine the suitability of the produced macromolecules for different applications. Understanding the intricate interrelationships between polymerization recipe, microstructure, and ultimately polymer properties is the key to tailor-making complex macromolecules. Hence, the ultimate goal of polymerization data mining is to 'crack' the complexity of the recipe-architecture-property interrelationships via masterful processing of the collected data.
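    The three steps above can be made concrete with a short sketch. The dataset is synthetic, and the column names and model choice are assumptions for illustration; the perspective does not prescribe a specific toolchain:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# 1) Pre-processing: assemble and clean a recipe -> property table. A real
# project would load measured runs; here 300 synthetic runs stand in.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "monomer_conc": rng.uniform(0.5, 3.0, 300),       # mol/L
    "initiator_conc": rng.uniform(0.001, 0.02, 300),  # mol/L
    "temperature": rng.uniform(330.0, 370.0, 300),    # K
})
# Toy 'measured' weight-average molar mass (kg/mol) with noise; the
# [M]/sqrt([I]) dependence echoes classical free-radical kinetics.
df["Mw"] = (200 * df.monomer_conc / np.sqrt(df.initiator_conc)
            - 2.0 * (df.temperature - 330) + rng.normal(0, 20, 300))
df = df.dropna()

# 2) Processing: learn the recipe -> property relationship.
X, y = df[["monomer_conc", "initiator_conc", "temperature"]], df["Mw"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# 3) Post-processing: report accuracy and which recipe variables matter most.
print("R^2 on held-out runs:", round(r2_score(y_test, model.predict(X_test)), 2))
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```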

    Generative Design and IEQ Performance Optimization of School Buildings Based on a Parametric Algorithm

    This research examines the potential of generative and optimization algorithms in the early design stage of a school building in Tabriz to achieve better indoor environmental quality (IEQ). It also investigates the compatibility of evolutionary optimization tools combined with a parametric model for simulating building comfort performance and reaching an optimized design. The process includes four steps: defining the parametric building model; defining its material and construction properties; simulating thermal comfort, visual comfort, and carbon dioxide concentration; and optimizing and choosing the best result. The adaptive PMV model is used for thermal comfort, imageless daylight glare probability for visual comfort, and CO2 concentration for IAQ assessment. The options introduced by the algorithm were found to perform better than the design prototype. The results show that the samples are acceptable with respect to carbon dioxide concentration, while thermal and visual comfort need further investigation. Among the variables studied, the window-to-wall ratio (WWR) of the southern wall had the most significant impact on IEQ performance. Based on the optimization results, thermal comfort changed in the range of 10%, visual comfort in the range of 30%, and CO2 concentration in the range of 0.19%.
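    The IAQ leg of the assessment rests on indoor CO2 concentration, which is commonly screened with the steady-state ventilation mass balance C_in = C_out + 10^6 * G/Q. A minimal sketch, with occupancy and airflow numbers that are illustrative assumptions rather than the paper's classroom model:

```python
def steady_state_co2_ppm(occupants, gen_per_person_ls=0.005,
                         outdoor_ppm=420.0, ventilation_ls=160.0):
    """Steady-state indoor CO2 (ppm) for a ventilated room.
    gen_per_person_ls: CO2 generation per person in L/s (~0.005 for seated work)
    ventilation_ls:    outdoor-air supply in L/s.
    All defaults are illustrative assumptions."""
    generation_ls = occupants * gen_per_person_ls
    return outdoor_ppm + 1e6 * generation_ls / ventilation_ls

# e.g. a classroom of 25 pupils with 160 L/s of outdoor air -> ~1200 ppm
print(f"{steady_state_co2_ppm(25):.0f} ppm")
```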