
    Predicting Software Reliability Using Ant Colony Optimization Technique with Travelling Salesman Problem for Software Process – A Literature Survey

    Computer software has become an essential foundation in many domains, including medicine and engineering. With such widespread use of software, its reliability and quality must be assured. Measuring reliability directly requires waiting until the software has been implemented, tested, and used for a certain period of time; to avoid this lengthy and costly process, several software metrics have been proposed in the literature, and they have proved to be a good means of estimating software reliability. For this purpose, software reliability prediction models are built. Software reliability, one of the key software quality attributes, is defined as the probability that the software will operate without failure for a specified period of time in a specified environment. Estimating reliability in the early phases of the software development life cycle saves considerable money and time, since it prevents large expenditures on fixing defects after the software has been deployed to the client; at the same time, reliability prediction is very challenging in those early phases. Software reliability estimation has therefore become an important research area, as every organization aims to produce reliable, high-quality, defect-free software. Many software reliability growth models exist for assessing or predicting the reliability of software, and these models help in developing robust, fault-tolerant systems. However, building accurate reliability prediction models is difficult due to frequent changes in data in the software engineering domain; as a result, models built on one dataset show a significant decrease in accuracy when applied to new data. The main aim of this paper is to introduce an approach that optimizes the accuracy of software reliability prediction models when used with raw data. An Ant Colony Optimization Technique (ACOT) is proposed to predict software reliability based on data collected from the literature. An ant colony system combined with a Travelling Salesman Problem (TSP) formulation has been used, modified with different algorithms and extra functionality, in an attempt to achieve better software reliability results with new data for the software process. The collective behavior of the ant colony framework, a colony of cooperating artificial ants, yields very promising results. Keywords: Software Reliability, Reliability Predictive Models, Bio-inspired Computing, Ant Colony Optimization Technique, Ant Colony
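
    The abstract does not include the algorithm itself. As a rough illustration of the ant colony mechanics it builds on, here is a minimal ACO solver for a small TSP instance; all parameters (ALPHA, BETA, RHO, Q) and the toy distance matrix are illustrative assumptions, not the authors' reliability-prediction setup.

```python
import random

# Minimal Ant Colony Optimization for a tiny TSP instance (illustrative only).
DIST = [  # symmetric pairwise distances between 4 "cities"
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
N = len(DIST)
ALPHA, BETA = 1.0, 2.0   # pheromone vs. heuristic influence
RHO = 0.5                # evaporation rate
Q = 100.0                # pheromone deposit constant

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour(pheromone):
    """One ant builds a tour, choosing each next city with probability
    proportional to pheromone^ALPHA * (1/distance)^BETA."""
    tour = [random.randrange(N)]
    while len(tour) < N:
        cur = tour[-1]
        choices = [c for c in range(N) if c not in tour]
        weights = [(pheromone[cur][c] ** ALPHA) * ((1.0 / DIST[cur][c]) ** BETA)
                   for c in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

def aco(n_ants=10, n_iters=50):
    pheromone = [[1.0] * N for _ in range(N)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = [build_tour(pheromone) for _ in range(n_ants)]
        # evaporate, then deposit pheromone inversely proportional to tour length
        for i in range(N):
            for j in range(N):
                pheromone[i][j] *= (1.0 - RHO)
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(N):
                a, b = tour[i], tour[(i + 1) % N]
                pheromone[a][b] += Q / length
                pheromone[b][a] += Q / length
    return best_tour, best_len

print(aco())  # prints the best tour found and its length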

    The Childbirth Services Aspect That Influence Patient Satisfaction

    Pregnant women have high expectations of care providers' attitudes in order to be satisfied with childbirth. This study aims to analyze the relationship between childbirth services and birth satisfaction at Roemani Muhammadiyah Semarang hospital. We used a cross-sectional design and collected the data through interviews conducted from July to August 2016. A total of 79 women who fulfilled the inclusion and exclusion criteria were enrolled in the study. Childbirth services and patient satisfaction were measured using survey instruments whose reliability and validity we had assessed previously. Linear regression was applied. The majority of the patients (94.9%) were 21-40 years old, 67.1% were university graduates, 40.5% were housewives, 62% were multigravida, and 57% delivered by cesarean section. There were significant effects of interpersonal relationship (p=0.0001), patient decision choice (p=0.001), and breastfeeding management (p=0.021) on birth satisfaction. The strongest predictor of birth satisfaction was interpersonal relationship.
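
    The study reports a linear regression with several predictors of satisfaction. A minimal sketch of what such an analysis looks like with statsmodels follows; the file name and column names are hypothetical stand-ins, not the authors' data.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical sketch of the kind of regression the study reports.
# "childbirth_survey.csv" and all column names are assumptions.
df = pd.read_csv("childbirth_survey.csv")
predictors = ["interpersonal_relationship", "decision_choice", "breastfeeding_mgmt"]
X = sm.add_constant(df[predictors])              # add intercept term
model = sm.OLS(df["birth_satisfaction"], X).fit()
print(model.summary())                           # coefficients with p-values
```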

    Zero resource speech synthesis using transcripts derived from perceptual acoustic units

    Zero-resource speech synthesis is the task of building vocabulary-independent speech synthesis systems where transcriptions are not available for the training data. It is therefore necessary to convert the training data into a sequence of fundamental acoustic units that can be used for synthesis at test time. This paper attempts to discover and model perceptual acoustic units consisting of steady-state and transient regions in speech. The transients roughly correspond to CV and VC units, while the steady states correspond to sonorants and fricatives. The speech signal is first preprocessed by segmenting it into CVC-like units using a short-term energy-like contour. These CVC segments are clustered using a connected-components-based graph clustering technique. The clustered CVC segments are initialized such that the onsets (CV) and decays (VC) correspond to transients and the rhymes correspond to steady states. Following this initialization, the units are allowed to re-organize over the continuous speech into a final set of acoustic units (AUs) in an HMM-GMM framework. The AU sequences thus obtained are used to train synthesis models. The performance of the proposed approach is evaluated on the Zerospeech 2019 challenge database. Subjective and objective scores show that reasonably good quality synthesis with low-bit-rate encoding can be achieved using the proposed AUs.
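
    The first preprocessing step, segmentation by a short-term energy contour, can be sketched in a few lines of numpy. The frame length, hop size, and the low-energy-valley boundary heuristic below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def short_term_energy(signal, frame_len=400, hop=160):
    """Frame-wise short-term energy (assumed 25 ms frames / 10 ms hop at 16 kHz)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    return np.array([np.sum(f.astype(float) ** 2) for f in frames])

def candidate_boundaries(energy, hop=160):
    """Local minima of the energy contour as rough CVC segment boundaries."""
    valleys = [i for i in range(1, len(energy) - 1)
               if energy[i] < energy[i - 1] and energy[i] < energy[i + 1]]
    return [v * hop for v in valleys]  # convert back to sample indices

# toy usage with a synthetic signal standing in for 1 s of 16 kHz speech
sig = np.random.randn(16000)
print(candidate_boundaries(short_term_energy(sig))[:5])
```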

    A study of comparison of liquid-based cytology versus conventional pap smear for evaluation of cervical cytology at a tertiary healthcare hospital

    Background: Cervical cancer is the second-highest cause of cancer-related mortality in women, and the only sign of this cancer in its early stages is the shedding of abnormal cells. Clinical signs of the disease appear only after the cancer has reached advanced stages. Conversely, in its precancerous stages this cancer is completely curable, and screening with the conventional Papanicolaou (CP) smear has reduced mortality by 70%, although it is also associated with a significant number of false-negative cases (20-50%). In 1996, the liquid-based cytology (LBC) method was developed to overcome the disadvantages of the previous method, with the expectation of higher sensitivity, faster sample preparation, and a decreased rate of inadequate smears. Methods: This descriptive-analytic study was conducted at the Department of Pathology and Department of Obstetrics and Gynaecology, Government Medical College, Datia, over a period of 11 months from April 2018 to February 2019. The study included a total of 80 subjects; the total number of Pap smears examined (both LBC and CPS) was 160. Results: There were statistically significant differences in satisfactory/unsatisfactory rate, smear cellularity, background clarity, and detection of endocervical cells between liquid-based cytology and conventional Pap smear findings (p<0.05). The diagnostic efficacy, i.e. sensitivity and specificity, of LBC was greater than that of CPS for the evaluation of cervical cytology. Conclusions: The results show that LBC may improve sample quality, reduce the number of unsatisfactory smears, and increase diagnostic efficacy.
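
    Since the comparison hinges on sensitivity and specificity, here is how those are computed from a 2x2 confusion matrix; the counts below are made up for illustration and are not the study's 80-subject data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); Specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only, not the study's actual results.
sens, spec = sensitivity_specificity(tp=18, fn=2, tn=55, fp=5)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.90, 0.92
```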

    Potential drug-drug interactions among hospitalized cardiac patients

    Background: Drug-drug interactions (DDIs) are a major cause for concern in patients with cardiovascular disorders due to multiple co-existing conditions and the wide range of drugs they receive. The objective of our study was to identify potential drug-drug interactions among hospitalized cardiac patients and the risk factors associated with these interactions. Methods: After obtaining approval from the Institutional Ethics Committee, a prospective observational study was carried out among 367 hospitalized cardiac patients at the Sri Jayadeva Institute of Cardiovascular Sciences and Research, Mysuru. Cardiac patients prescribed at least 2 drugs and having a hospital stay of more than 24 hours were enrolled into the study. The prescriptions were analysed for potential DDIs using the MEDSCAPE multidrug interaction checker tool. Descriptive statistics, Student's t test, ANOVA, and the Pearson correlation coefficient were used to analyse the results. Results: The incidence of potential DDIs was 98%, with 360 prescriptions having at least one potential DDI. A total of 38 potentially interacting drug pairs were identified, the majority of significant grade while only 3 were serious. The majority of interactions were pharmacodynamic (76.3%) in nature. Aspirin/clopidogrel (71.1%) and pantoprazole/clopidogrel (69.8%) were the most common interacting pairs. The drugs most commonly involved were aspirin, clopidogrel, heparin, pantoprazole and ramipril. Age, female gender, polypharmacy, prolonged hospital stay, stay in the ICU and diabetes mellitus were the risk factors associated with potential DDIs. Conclusions: Proper therapeutic planning, routine monitoring of cardiac in-patients and use of online DDI databases will help avoid potentially hazardous consequences in cardiac in-patients.
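
    The core of such a screening step is checking every drug pair in a prescription against an interaction database. A minimal sketch follows; the tiny hand-made interaction table is illustrative only, whereas the study used the MEDSCAPE checker's curated database.

```python
from itertools import combinations

# Tiny illustrative interaction table; real checkers hold far larger,
# curated databases. Severity labels here are assumptions.
KNOWN_INTERACTIONS = {
    frozenset({"aspirin", "clopidogrel"}): "significant",
    frozenset({"pantoprazole", "clopidogrel"}): "significant",
    frozenset({"heparin", "aspirin"}): "serious",
}

def screen(prescription):
    """Return every known interacting pair in one prescription."""
    return [(a, b, KNOWN_INTERACTIONS[frozenset({a, b})])
            for a, b in combinations(prescription, 2)
            if frozenset({a, b}) in KNOWN_INTERACTIONS]

print(screen(["aspirin", "clopidogrel", "pantoprazole", "ramipril"]))
```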

    OBJECT DETECTION BASED ON SPECTRAL ANALYSIS USING SOBEL AND ROBERTS EDGE DETECTION ALGORITHM

    Aim: This paper proposes a novel object detection (OD) approach based on a thorough examination of an image's details and its approximate density map. Results: Our proposed OD approach is divided into two phases. First, knowledge about the spatial distribution of objects, obtained from a density map, is used to compute initial object positions. Then, with the aid of these initial position estimates, a saliency map that provides object boundaries is used to calculate precise bounding boxes, an approach inspired by human attention to detail. The scale variance of objects induced by uncertain perspective is a common problem in object density map estimation, so a new method for estimating the prior focus map for any image is proposed. The Sobel and Roberts edge detection algorithms are used in this study. The proposed approach is based on sparse defocus dictionary learning on a newly constructed dataset; the focus power is determined by the number of non-zero coefficients of the dictionary atoms. Conclusion: The algorithm's output can capture spatial features and pick the threshold type in a variety of ways. HIGHLIGHTS: Object detection based on spectral analysis using the Sobel and Roberts edge detection algorithms proved to be effective when compared with existing methodologies.
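
    The Sobel and Roberts operators themselves are standard convolution kernels. A minimal numpy/scipy sketch of applying both to a grayscale image is shown below; the surrounding density-map and saliency-map pipeline from the paper is not reproduced, and the random image is a stand-in.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Sobel and Roberts kernels for horizontal/vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T
ROBERTS_X = np.array([[1, 0], [0, -1]], float)
ROBERTS_Y = np.array([[0, 1], [-1, 0]], float)

def edge_magnitude(img, kx, ky):
    """Gradient magnitude from a pair of directional kernels."""
    gx, gy = convolve(img, kx), convolve(img, ky)
    return np.hypot(gx, gy)

img = np.random.rand(64, 64)  # stand-in for a grayscale image
sobel_edges = edge_magnitude(img, SOBEL_X, SOBEL_Y)
roberts_edges = edge_magnitude(img, ROBERTS_X, ROBERTS_Y)
print(sobel_edges.shape, roberts_edges.shape)
```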

    Adaptive Optimized Discriminative Learning based Image Deblurring using Deep CNN

    Image degradation is a major problem in many image processing applications: blurring degrades image quality and reduces bandwidth. Blur in an image arises from variations in atmospheric turbulence, focal length, camera settings, etc., and the blur types include Gaussian blur, motion blur, and out-of-focus blur. Noise alongside blur further corrupts the captured image. Many techniques have evolved to deblur the degraded image. The leading approaches to restoring degraded images are based either on discriminative learning models or on optimization models, and each has its own advantages and disadvantages: discriminative learning is faster but restricted to a specific task, whereas optimization models are flexible but consume more time. Suitably integrating optimization models with discriminative learning results in effective image restoration. In this paper, a set of effective and fast Convolutional Neural Networks (CNNs) is employed to deblur Gaussian-, motion-, and out-of-focus-blurred images, integrated with optimization models to further suppress noise effects. The proposed methods work efficiently for low-level vision applications.
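
    The abstract does not specify the network architecture, so as a sketch of what a small discriminative deblurring CNN can look like, here is a minimal residual model in PyTorch; the depth, width, and residual design are common choices assumed for illustration, not the paper's actual network.

```python
import torch
import torch.nn as nn

class TinyDeblurCNN(nn.Module):
    """Minimal residual CNN: predicts the blur residual and subtracts it.
    Depth/width are assumptions; the paper's architecture is not given."""
    def __init__(self, channels=3, width=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, blurred):
        return blurred - self.body(blurred)  # residual learning

model = TinyDeblurCNN()
x = torch.randn(1, 3, 64, 64)  # stand-in blurred image batch
print(model(x).shape)          # torch.Size([1, 3, 64, 64])
```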

    IMPLEMENTING A DISTURBANCE FINDING PROCESS BY FEATURE SELECTION ALGORITHM

    In this report, a supervised filter-based feature selection algorithm is proposed, namely Flexible Mutual Information Feature Selection (FMIFS). FMIFS is an improvement over MIFS and MMIFS: it suggests a modification to Battiti's criterion to reduce the redundancy among features, and it eliminates the redundancy parameter required in MIFS and MMIFS. Existing solutions are not sufficient to protect internet applications and computer systems from the threats of ever-evolving cyber attack techniques such as DoS attacks and malware. Current network traffic data, which are often huge in size, present a major challenge to IDSs. This mutual-information-based feature selection algorithm can handle linearly and nonlinearly dependent data features, and it analytically selects the optimal features for classification; its effectiveness is evaluated in the case of network intrusion detection. Redundant and irrelevant features in data have long been a problem in network traffic classification: they not only slow down the overall classification process but also prevent a classifier from making accurate decisions, especially when dealing with big data. The evaluation results show that our feature selection algorithm contributes more critical features for LSSVM-IDS to achieve better accuracy and lower computational cost compared with state-of-the-art methods.
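
    FMIFS itself modifies Battiti's MIFS criterion; as a rough sketch of that family of methods, here is a greedy MIFS-style forward selection loop using scikit-learn's mutual information estimators. The beta penalty parameter and the toy data are illustrative assumptions, and note that removing the need for beta is precisely FMIFS's contribution, which this simplified sketch does not reproduce.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

# Greedy MIFS-style selection (after Battiti): score each candidate feature
# by its MI with the class label, penalized by its MI with already-selected
# features. Illustrative sketch, not the paper's FMIFS algorithm.
def mifs(X, y, k=3, beta=0.5):
    relevance = mutual_info_classif(X, y, random_state=0)  # I(f; y) per feature
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        def score(f):
            penalty = sum(mutual_info_score(X[:, f], X[:, s]) for s in selected)
            return relevance[f] - beta * penalty
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

X = np.random.randint(0, 4, size=(200, 6))  # toy discrete features
y = (X[:, 0] + X[:, 2] > 3).astype(int)     # label depends on features 0 and 2
print(mifs(X, y))
```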