32 research outputs found
A study on lactate dehydrogenase levels in hypertensive disorders of pregnancy and its correlation with feto-maternal outcome
Background: Hypertensive disorders of pregnancy are one of the most common medical disorders seen during pregnancy. Lactate dehydrogenase (LDH) is an intracellular enzyme. The objectives of this study were to compare lactate dehydrogenase levels between women with hypertensive disorders of pregnancy and normal pregnant women, to correlate lactate dehydrogenase levels with complications of hypertensive disorders of pregnancy, and to assess the role of lactate dehydrogenase as an early predictor of hypertensive disorders of pregnancy.
Methods: The study was conducted in the department of obstetrics and gynecology at JJ group of hospitals, Mumbai, India over a duration of 18 months from January 2020 to June 2021, with a sample size of 500 antenatal patients. Detailed clinical and obstetric history, clinical examination findings, and investigations were recorded. LDH levels were measured at 12-16 weeks of pregnancy and at the time of delivery.
Results: In our study, the incidence of hypertensive disorders of pregnancy was 10.2%. There was no association between LDH levels at 12-16 weeks of gestation and the development of hypertensive disorders of pregnancy. There was an association between lactate dehydrogenase levels at the time of delivery and the severity of hypertensive disorders. Higher serum LDH levels were associated with an increased incidence of maternal and fetal complications such as placental abruption, HELLP syndrome, IUGR, IUFD, prematurity and oligohydramnios.
Conclusions: Hypertensive disorders of pregnancy are among the medical conditions affecting pregnancy. Lactate dehydrogenase levels at 12-16 weeks of gestation are not an early predictor of hypertensive disorders of pregnancy. Serum lactate dehydrogenase levels at the time of delivery help predict the severity of disease, adverse outcomes and complications of hypertensive disorders of pregnancy. Hence, lactate dehydrogenase acts as a prognostic factor in hypertensive disorders of pregnancy.
A study on fetomaternal outcome of hypertensive disorders of pregnancy
Background: Hypertensive disorders of pregnancy are one of the most common medical disorders seen during pregnancy. Early diagnosis of hypertensive disorders in pregnancy through regular antenatal checkups can help in proper management, thus decreasing the maternal and fetal complications related to them. Ensuring timely and effective care requires appropriate use of evidence-based clinical and nonclinical interventions, strengthened health infrastructure, and motivated and competent health care providers. The objective of this study was to study the feto-maternal outcome of hypertensive disorders of pregnancy and the complications related to them.
Methods: The study was conducted in the department of obstetrics and gynecology at JJ Group of hospitals, Mumbai, India over a duration of 18 months from January 2020 to June 2021, with a sample size of 500 antenatal patients. Detailed clinical and obstetric history, clinical examination findings, and investigations were recorded.
Results: In our study, the incidence of hypertensive disorders of pregnancy was 10.2%, being most common in the 21-25 years age group (45.1%) and in primigravida patients (47.1%). The most common type of hypertensive disorder was non-severe preeclampsia, with an incidence of 74.50%. The most common complication was oligohydramnios (11.76%), followed by preterm delivery (9.80%) and IUGR (9.80%). The most common drug used in the management of hypertensive disorders was labetalol. The most common neonatal complication in the PIH group was low birth weight, followed by fetal distress (19.6%), prematurity (9.8%) and IUGR (9.8%).
Conclusions: Hypertensive disorders of pregnancy are among the medical conditions affecting pregnancy and are more prevalent in younger and nulliparous mothers. Early diagnosis and appropriate, timely management of hypertensive disorders in pregnant women can prevent maternal and fetal complications and improve the outcome of pregnancy. These women should be monitored carefully to prevent maternal morbidity and mortality.
Automated Questions Unique Arrangement (A.Q.U.A)
With the world digitizing and moving at a fast pace, framing questions for examinations or learning is a time-consuming process and requires a lot of critical thinking. Questions set in examinations, for instance at the school and college level, are similar to those in previous years' papers and contain repeated questions with little or no paraphrasing or modification. Educators spend a significant amount of time preparing question papers in order to come up with creative brainstorming questions. Automation has become a vital aspect of life, and new technologies emerge every day to minimize manual work and make everything automated with just a click. In the present pandemic scenario, education is internet based and exams are conducted online. Most examinations are based on multiple choice questions, and these questions are, in most cases, taken from popular quizzing websites. This practice makes it easier for students to find the correct answer without even studying the subject and increases malpractice. We propose an automatic solution to the problem of framing questions that will save time and energy and also promote proper learning with our model "A.Q.U.A – Automated Questions Unique Arrangement". It is a machine learning model that uses transformers for natural language processing, generating meaningful and understandable questions from a given context. A.Q.U.A will be of great use in online assessments, school and university level exams, as well as competitive exams. It will also help students and learners take practice tests on a topic and evaluate their knowledge of it.
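As a minimal sketch of the kind of transformer-based question generation described above, the snippet below uses the Hugging Face transformers library with a publicly available T5 question-generation checkpoint. The checkpoint name, prompt format, and generation settings are illustrative assumptions and not the authors' A.Q.U.A implementation:

```python
# Illustrative transformer-based question generation (not the A.Q.U.A model).
# Assumes the Hugging Face "transformers" library and a public T5 checkpoint
# fine-tuned for question generation; any seq2seq QG checkpoint could be used.
from transformers import pipeline

qg = pipeline("text2text-generation", model="valhalla/t5-small-qg-hl")

# This checkpoint expects the answer span to be marked with <hl> tokens.
prompt = ("generate question: Transformers are neural network architectures "
          "that rely on <hl> self-attention <hl> to process sequences.")

result = qg(prompt, max_length=64)
print(result[0]["generated_text"])
```

In practice, a full pipeline of this kind would split a study text into sentences, pick candidate answer spans, and generate one question per span, which is the workflow the abstract's model automates.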
Modelling & Analyzing View Growth Pattern of YouTube Videos inculcating the impact of Subscribers, Word of Mouth and Recommendation Systems
YouTube, one of the prominent online video-sharing platforms, plays a pivotal role in modern media consumption, making it crucial to understand and predict the view-count dynamics of its videos. The viewership of YouTube videos can be influenced by three distinct sources: subscribers, word-of-mouth, and recommendation systems. This paper presents a comprehensive modelling framework that takes into account the view-count obtained through these three sources, assuming that any single view can be attributed to only one of these sources at a given time. We investigate the interplay among these sources in shaping YouTube video view-count dynamics, proposing a novel approach to model and analyse their impact on video popularity. Additionally, the VIKOR multi-criteria decision-making method is employed to validate and rank our proposed models. This study's findings deepen our understanding of the intricate mechanisms within the YouTube ecosystem, offering insights for predicting and managing video viewership.
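The VIKOR procedure used for model ranking follows standard steps: normalize the criterion values, compute the group utility S and individual regret R for each alternative, and combine them into the compromise index Q. The sketch below is a generic VIKOR implementation on illustrative criterion weights and model scores, not the paper's actual models or data:

```python
# Generic VIKOR multi-criteria ranking (illustrative data, not the paper's).
import numpy as np

# Rows: candidate view-growth models; columns: criteria (e.g., R^2, MSE).
scores = np.array([
    [0.92, 0.031],
    [0.89, 0.027],
    [0.95, 0.040],
])
weights = np.array([0.5, 0.5])        # assumed equal criterion weights
benefit = np.array([True, False])     # R^2: higher is better; MSE: lower is better

best = np.where(benefit, scores.max(axis=0), scores.min(axis=0))
worst = np.where(benefit, scores.min(axis=0), scores.max(axis=0))

# Weighted normalized distance from the ideal value for each criterion.
d = weights * (best - scores) / (best - worst)
S = d.sum(axis=1)   # group utility
R = d.max(axis=1)   # individual regret

v = 0.5             # weight of the "majority of criteria" strategy
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))

# Lower Q indicates a better compromise ranking.
for rank, idx in enumerate(np.argsort(Q), start=1):
    print(f"rank {rank}: model {idx}, Q = {Q[idx]:.3f}")
```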
Resource Allocation Modeling Framework to Refactor Software Design Smells
Studying design flaws in software systems has created ample opportunities for researchers. These design flaws, i.e., code smells, hinder the quality of the software in many ways. Once detected, the segment of the software found to be affected by such a flaw has to be passed through refactoring steps in order to remove it. To better understand this process, the authors model the smell detection mechanism using an NHPP (non-homogeneous Poisson process) modeling framework. Further, the authors investigate the amount of resources/effort that should be allotted to the various code smell categories, developing an optimization problem for this purpose and validating it on a real-life smell data set from an open-source software system. The obtained results lie within an acceptable range and support the applicability of the model.
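For context, an NHPP detection model is characterized by a mean value function m(t) giving the expected cumulative number of detections by time t. The sketch below fits the classic Goel-Okumoto exponential form m(t) = a(1 - e^(-bt)) to synthetic cumulative smell counts; both the choice of mean value function and the data are illustrative assumptions, not the authors' model or data set:

```python
# Fitting an NHPP mean value function to cumulative smell counts (illustrative).
# m(t) = a * (1 - exp(-b * t)) is the Goel-Okumoto form, shown here only as an
# example of the NHPP framework; the paper's actual model may differ.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Expected cumulative number of smells detected by time t."""
    return a * (1.0 - np.exp(-b * t))

# Synthetic observation times (e.g., weeks) and cumulative smell counts.
t_obs = np.arange(1, 13)
m_obs = np.array([8, 15, 21, 26, 30, 33, 36, 38, 40, 41, 42, 43])

(a_hat, b_hat), _ = curve_fit(mean_value, t_obs, m_obs, p0=(50.0, 0.1))
print(f"estimated total smells a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")

# Expected smells still undetected after the observation window; such estimates
# could feed a resource-allocation decision across smell categories.
remaining = a_hat - mean_value(t_obs[-1], a_hat, b_hat)
print(f"expected undetected smells: {remaining:.1f}")
```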
Standards Development and Innovative Products. When should Standards be Prepared?
Usually, innovative products come to market with no standardization of their characteristics, resulting in frequent incompatibilities among similar products from different manufacturers. It is therefore important to have standards available to industry as soon as possible. However, if standards are developed too early, the products may ultimately not succeed in the market and the standardization effort is wasted, or the technology may continue evolving before it achieves some stability. If standards are developed too late, their impact and usefulness for industry are probably much more limited. Based on the adoption curve for innovative products, we formulate a proposal about the right moment to develop new standards for such products. To facilitate comparisons, we define an adequate common metric scale. The proposal is verified by the analysis of several cases of innovative products, for which we review the adoption process and the start of the standardization activity.
Aronov, J.; Carrión García, A.; Papic, L.; Galkina, N.; Aggrawal, D.; Anand, A. (2019). Standards Development and Innovative Products. When should Standards be Prepared? International Journal of Mathematical, Engineering and Management Sciences, 4(5):1081-1093. https://doi.org/10.33889/IJMEMS.2019.4.5-086
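As a hedged illustration of locating a reference moment on an adoption curve (the paper's own common metric scale is not reproduced here), the sketch below uses the standard Bass diffusion model and computes the time of peak adoption rate, one natural landmark on such a curve; the parameter values are purely illustrative:

```python
# Bass diffusion curve and its peak-adoption time (illustrative parameters only).
# This is a generic adoption-curve example, not the paper's proposed metric.
import numpy as np

p, q, m = 0.03, 0.38, 1_000_000   # assumed innovation/imitation coefficients, market size

def cumulative_adopters(t):
    """Bass model cumulative adoptions at time t (years)."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Time of peak adoption rate: t* = ln(q/p) / (p + q).
t_peak = np.log(q / p) / (p + q)
share_at_peak = cumulative_adopters(t_peak) / m

print(f"peak adoption rate at t = {t_peak:.2f} years, "
      f"cumulative market share = {share_at_peak:.1%}")
```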
Exploitability prediction of software vulnerabilities
The number of security failures discovered and disclosed publicly is increasing at a pace like never before, yet only a small fraction of the vulnerabilities encountered in the operational phase are exploited in the wild. It is difficult to find vulnerabilities during the early stages of the software development cycle, as security aspects are often not known adequately. To counter these security implications, firms usually provide patches so that these security flaws are not exploited. It is a daunting task for a security manager to prioritize patches for vulnerabilities that are likely to be exploitable. This paper fills this gap by applying different machine learning techniques to classify vulnerabilities based on previous exploit history. Our work indicates that various vulnerability characteristics such as severity, vulnerability type, software configurations, and vulnerability scoring parameters are important features to consider when judging an exploit. Using such methods, it is possible to predict exploit-prone vulnerabilities with an accuracy >85%. Finally, with this experiment, we conclude that a supervised machine learning approach can be a useful technique for predicting exploit-prone vulnerabilities.
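To illustrate the kind of supervised classification described above, the sketch below trains a random forest on synthetic vulnerability records with CVSS-like features and a binary exploited label; the feature set, data, and model choice are assumptions for illustration, not the paper's actual pipeline:

```python
# Supervised exploit-prediction sketch on synthetic data (illustrative only).
# The study's real features, data set, and chosen classifiers may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000

# Synthetic CVSS-like features: severity score, access complexity,
# authentication requirement, and a coarse vulnerability-type code.
X = np.column_stack([
    rng.uniform(0, 10, n),     # base severity score
    rng.integers(0, 3, n),     # access complexity (0=low, 2=high)
    rng.integers(0, 2, n),     # authentication required (0/1)
    rng.integers(0, 5, n),     # vulnerability type category
])
# Synthetic label: higher severity and lower complexity -> more likely exploited.
logits = 0.6 * X[:, 0] - 1.5 * X[:, 1] - 1.0 * X[:, 2] - 3.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```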
Studying Multi-Stage Diffusion Dynamics using Epidemic Modeling Framework
The buying process has always carried a two-fold perspective: on one hand it matters to individuals, and on the other hand it is equally important for firms to deliver exactly what the customer needs and wants. Within this process, awareness along with positive motivation towards the product plays an equally significant role in strategizing a company's plans. Plenty of models have been proposed, and many more are in the pipeline, that address the connectivity of these processes and their impact on final adoption. In the current work, these processes are studied through an analogy drawn from the epidemic modelling framework. Furthermore, an approximation method, the fourth-order Runge-Kutta method, is utilized to obtain an approximate solution, since no closed-form solution is available. The proposed modelling framework is validated on real-life data sets, and the results confirm the presence of the various stages under consideration.
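As a hedged sketch of this approach, the code below integrates a simple multi-stage, epidemic-style diffusion system (unaware → aware → motivated → adopter) with a hand-written fourth-order Runge-Kutta step; the compartment structure, rate parameters, and equations are illustrative assumptions, not the paper's actual model:

```python
# RK4 integration of an illustrative multi-stage diffusion model.
# Stages: unaware (U) -> aware (A) -> motivated (M) -> adopter (D).
# Structure and parameters are assumptions, not the paper's model.
import numpy as np

alpha, beta, gamma, N = 0.4, 0.3, 0.5, 1.0   # stage rates, normalized population

def deriv(state):
    U, A, M, D = state
    dU = -alpha * U * (A + M + D) / N   # awareness spreads by contact
    dA = -dU - beta * A                 # aware individuals become motivated
    dM = beta * A - gamma * M           # motivated individuals adopt
    dD = gamma * M
    return np.array([dU, dA, dM, dD])

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.99, 0.01, 0.0, 0.0])   # almost everyone starts unaware
h, T = 0.1, 30.0
for _ in range(int(T / h)):
    state = rk4_step(state, h)

print(f"final adopter fraction after {T:.0f} time units: {state[3]:.3f}")
```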