41 research outputs found

    Quality assessment of conventional X-ray diagnostic equipment by measuring X-ray exposure and tube output parameters in Great Khorasan Province, Iran

    Introduction: Regular implementation of a quality control (QC) program in diagnostic X-ray facilities may affect both image quality and patient radiation dose through changes in exposure parameters. Therefore, this study aimed to investigate the status of randomly selected conventional radiographic X-ray devices installed in radiology centers of Great Khorasan Province, Iran, in order to produce the data needed to formulate QC policies, which are essential to ensure diagnostic accuracy while minimizing radiation dose. Material and Methods: This cross-sectional study was performed using a calibrated Piranha multi-purpose detector to measure QC parameters, in order to unify X-ray imaging practices according to international guidelines. The QC parameters included voltage accuracy, voltage reproducibility, exposure time accuracy, exposure time reproducibility, tube output linearity with time and milliampere (mA), and tube output reproducibility. Data analysis was performed based on the type of X-ray generator, which has not been reported in previous studies. Results: The results showed that high-frequency X-ray generators were more advantageous than alternating-current generators, owing to their better efficiency, accuracy, linearity, and reproducibility. Conclusion: The survey revealed that the QC program was not conducted at regular intervals in some of the investigated radiology centers, mostly because of inadequate enforcement of QC programs by national regulatory authorities.
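
The abstract names the QC parameters but not their formulas. The sketch below uses the conventional definitions (percentage error for accuracy, coefficient of variation for reproducibility, and the coefficient of linearity for tube output); the exact formulas and tolerance limits applied in the study are assumptions here:

```python
def accuracy_pct(set_value, measured):
    """Percentage deviation of a measured value (e.g. kVp or exposure time)
    from the console setting."""
    return abs(set_value - measured) / set_value * 100.0

def reproducibility_cv(measurements):
    """Coefficient of variation (%) over repeated exposures at fixed
    settings; small values mean good reproducibility."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((m - mean) ** 2 for m in measurements) / (n - 1)
    return (var ** 0.5) / mean * 100.0

def linearity_coefficient(outputs_mGy_per_mAs):
    """Coefficient of linearity of tube output across mA (or time) stations:
    (max - min) / (max + min) of the normalized outputs (mGy/mAs)."""
    hi, lo = max(outputs_mGy_per_mAs), min(outputs_mGy_per_mAs)
    return (hi - lo) / (hi + lo)
```

Each metric is compared against a tolerance (for example, a few percent for kVp accuracy in common international guidelines) to decide whether a device passes the test.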

    Scattered gamma rays


    Tumor dose enhancement by gold nanoparticles in a 6 MV photon beam: a Monte Carlo study on the size effect of nanoparticles

    In this study, after benchmarking a Monte Carlo (MC) simulation of a 6 MV linac, the simulation model was used to estimate tumor dose enhancement by gold nanoparticles (GNPs). The 6 MV photon mode of a Siemens Primus linac was simulated, and the percent depth dose and dose profile values obtained from the simulations were compared with the corresponding measured values. Dose enhancement for various sizes and concentrations of GNPs was studied for two cases, with and without the presence of a flattening filter in the beam’s path. Tumor dose enhancement was 1–5% with the flattening filter and 3–10% without it. The maximum dose enhancement was observed when 200 nm GNPs were used at a concentration of 36 mg/g tumor. Furthermore, larger GNPs resulted in higher dose values in the tumor; careful examination of the dose enhancement factor data, however, revealed only a weak relation between nanoparticle size and dose enhancement. It seems that for high-energy photons, the dose enhancement is affected more by the concentration of nanoparticles than by their size.
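
The percentages quoted above follow from the dose enhancement factor, conventionally defined as the ratio of tumor dose with nanoparticles present to the dose without them; a minimal sketch (the study's own MC tallying setup is not reproduced here):

```python
def dose_enhancement_factor(dose_with_gnp, dose_without_gnp):
    """DEF: ratio of tumor dose with gold nanoparticles present to the
    dose in the same irradiation geometry without them."""
    return dose_with_gnp / dose_without_gnp

def enhancement_percent(dose_with_gnp, dose_without_gnp):
    """Dose enhancement expressed as a percentage, the form quoted in the
    abstract (e.g. 1-5% with the flattening filter in place)."""
    return (dose_enhancement_factor(dose_with_gnp, dose_without_gnp) - 1.0) * 100.0
```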

    Classification of EEG-P300 Signals Extracted from Brain Activities in BCI Systems Using ν-SVM and BLDA Algorithms

    In this paper, a linear predictive coding (LPC) model is used to improve classification accuracy, speed of convergence to maximum accuracy, and maximum bit rate in a brain-computer interface (BCI) system based on extracted EEG-P300 signals. First, the EEG signal is filtered to eliminate high-frequency noise. Then, the parameters of the filtered EEG signal are extracted using the LPC model. Finally, the samples are reconstructed from the LPC coefficients, and two classifiers, a) Bayesian linear discriminant analysis (BLDA) and b) the ν-support vector machine (ν-SVM), are applied for classification. The performance of the proposed algorithm is compared with Fisher linear discriminant analysis (FLDA). Results show that our algorithm is much more effective at improving classification accuracy and the speed of convergence to maximum accuracy. For example, with an 8-electrode configuration for subject S1, the total classification accuracy improved by 9.4% with LPC+BLDA and by 1.7% with LPC+ν-SVM. Moreover, for subject 7, both LPC+BLDA and LPC+ν-SVM converged to maximum accuracy after the 11th block, whereas the FLDA algorithm did not converge to maximum accuracy with the same configuration. The proposed method can therefore serve as a promising tool in designing BCI systems.
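
As a rough illustration of the pipeline, the sketch below extracts LPC coefficients by the autocorrelation (Yule-Walker) method and fits a regularized least-squares linear discriminant as a simple stand-in for BLDA; the paper's model order, filter design, and classifier hyperparameters are not given in the abstract, so all values here are assumptions:

```python
import numpy as np

def lpc_coefficients(signal, order):
    """LPC coefficients via the autocorrelation (Yule-Walker) method:
    build the Toeplitz autocorrelation system R a = r and solve for the
    forward-prediction coefficients a."""
    x = np.asarray(signal, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def ridge_lda_fit(X, y, lam=1.0):
    """Regularized least-squares linear discriminant (a simple stand-in for
    BLDA): weights minimizing ||Xb w - y||^2 + lam ||w||^2, y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def ridge_lda_predict(w, X):
    """Class labels in {-1, +1} from the sign of the linear discriminant."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.sign(Xb @ w)
```

In a P300 speller, each epoch's LPC coefficient vector would form one row of `X`, with `y = +1` for target flashes and `y = -1` for non-targets.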

    EFFECTIVENESS OF LIFE QUALITY GROUP TRAINING ON LIFE SATISFACTION AND HAPPINESS OF MOTHERS WITH PHENYLKETONURIA CHILD

    ABSTRACT This research studies the effectiveness of quality-of-life group training on the life satisfaction and happiness of mothers of a child with phenylketonuria. The research is experimental, with a pre-test/post-test control-group design. The statistical population includes all 100 mothers of children with phenylketonuria with medical records at Amin Hospital of Isfahan. Thirty mothers were selected by simple random sampling and divided into an experimental group and a control group. The Diener Life Satisfaction Questionnaire and the Oxford Happiness Questionnaire were used as instruments. The quality-of-life course comprised 7 weekly sessions of 90 minutes each. The gathered data were analyzed with an ANCOVA test and showed a meaningful difference in life satisfaction and happiness between the control and experimental groups; that is, quality-of-life training increases the life satisfaction and happiness of mothers with a phenylketonuria child. Given the psychological problems of these parents, quality-of-life training courses can be used as intervention programs to reduce these side effects.

    Design and construction of a laser-based respiratory gating system for implementation of the deep inspiration breath-hold technique in radiotherapy clinics

    Background: Deep inspiration breath-hold (DIBH) is a radiotherapy technique for the treatment of patients with left-sided breast cancer. In this method, the patient is exposed only while at the end of a deep inspiration cycle, holding his/her breath. In this situation, the volume of the lung tissue is enlarged and the heart is pushed away from the treated breast. Therefore, with DIBH the heart dose of these patients declines considerably compared with free-breathing treatment. A few commercial systems exist for implementing DIBH in invasive or noninvasive manners. Methods: We present a novel, in-house constructed, noninvasive DIBH device built around a commercial near-field laser distance meter. The system consists of a CD22-100AM122 laser sensor combined with a data acquisition system for monitoring the breathing curve. Qt Creator (a cross-platform C++, QML, and JavaScript integrated development environment that is part of the SDK of the Qt graphical user interface application framework) and Keil MDK-ARM (a development environment in which users write C and C++ code and assemble it for ARM-based microcontrollers) were used for writing the computer and microcontroller programs, respectively. Results: The system can be mounted in the treatment or computed tomography (CT) room at reasonable cost; it is also easy to use and requires little training for personnel and patients. The system can assess the location of the chest wall or abdomen in real time with high precision and frequency. The performance of the CD22-100AM122 shows promise for respiratory monitoring owing to its fast sampling rate and high precision, and it delivers reasonable spatial and temporal accuracy. The patient observes his/her breathing waveform on a 7” 1024 × 600 liquid crystal display and receives instructions during treatment and CT sessions through an algorithm called the “interaction scenario” in this study. The system is also noninvasive and well tolerated by patients. Conclusions: The constructed system operates in true real time and is fast enough to deliver clear, continuous monitoring. In addition, the system provides an interaction scenario between the patient and the CT or linac operator, and it can send triggers for turning CT or linac facilities on and off. In this regard, the system has the advantage of combining many useful characteristics.
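
At its core, the gating decision reduces to checking whether the laser-measured chest-wall distance stays inside a breath-hold window; a minimal sketch (the window width and the device's actual trigger protocol are assumptions, not taken from the paper):

```python
def gate_states(distances, target, tolerance):
    """Beam-on flags for a stream of laser distance samples: True while the
    chest-wall distance stays inside [target - tolerance, target + tolerance],
    i.e. while the patient holds the deep inspiration level."""
    return [abs(d - target) <= tolerance for d in distances]
```

A real gating system would add hysteresis and a minimum-hold time before asserting the beam-on trigger, to avoid chattering at the window edges.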

    3D Reconstruction Using Cubic Bezier Spline Curves and Active Contours (Case Study)

    Introduction: 3D reconstruction of an object from its 2D cross-sections (slices) has many applications in different fields of science, such as medical physics and biomedical engineering. To perform 3D reconstruction, the desired boundaries are first detected in each slice; then, using a correspondence between points of successive slices, the surface of the desired object is reconstructed. Materials and Methods: In this study, Gradient Vector Flow (GVF) was used to trace the boundaries in each slice. Cubic Bezier spline curves were then used to approximate each of the obtained contours and to connect the corresponding points of contours in successive slices. The reconstructed surface was a bi-cubic Bezier spline surface, which was smooth with G2 continuity. Results: The presented method was tested on SPECT data of a Jaszczak phantom and a human left ventricle. The results confirmed that the presented method was accurate, promising, applicable, and effective. Conclusion: Using the GVF algorithm to trace boundaries in each slice, and cubic Bezier spline curves to approximate the obtained rough contours, yielded a fast reconstruction procedure with a smooth final surface of G2 continuity. Previously, mathematical curves such as spline, cubic spline, and B-spline curves were used to approximate the computed contours in a time-consuming procedure. This study presented a 3D reconstruction method based on a combination of the GVF algorithm and cubic Bezier spline curves, offering a good trade-off between speed and accuracy that is especially useful for training students.
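
For reference, a point on a cubic Bezier segment is evaluated from its four control points via the Bernstein form B(t) = (1-t)³P0 + 3(1-t)²tP1 + 3(1-t)t²P2 + t³P3; a minimal sketch (fitting the control points to the GVF contours is not shown):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1], given four
    control points as coordinate tuples (works in 2D or 3D)."""
    s = 1.0 - t
    return tuple(
        s ** 3 * a + 3.0 * s ** 2 * t * b + 3.0 * s * t ** 2 * c + t ** 3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Evaluating such segments along contours in one direction and across corresponding points of successive slices in the other direction yields the bi-cubic surface patches described above.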

    Evaluation of Attenuation Correction Process in Cardiac SPECT Images

    ABSTRACT Introduction: Attenuation correction is a useful process for improving myocardial perfusion SPECT and depends on the activity and on the distribution of attenuation coefficients in the body (the attenuation map). Attenuation artifacts are a common problem in myocardial perfusion SPECT. The aim of this study was to compare the effect of attenuation correction using different attenuation maps and different activities in a specially designed heart phantom. Methods: SPECT imaging for different activities and different body contours was performed with a phantom, using tissue-equivalent boluses to create different thicknesses. The activity ranged from 0.3 to 2 mCi, and the images were acquired over 180 degrees in 32 steps. The images were reconstructed with the OSEM method on a PC using MATLAB. Attenuation maps were derived from CT images of the phantom. Two indices, a quality index and a quantity index derived from the universal image quality index, were used to investigate the effect of attenuation correction in each SPECT image. Results: Our measurements showed that the quantity index of the corrected image ranged from 3.5 to 5.2 between the minimum and maximum tissue thicknesses and was independent of activity. Comparing attenuation-corrected and uncorrected images, the quality index of the corrected image improved with increasing body thickness and decreasing voxel activity. Conclusion: Attenuation correction was more effective for images with low activity or phantoms with greater thickness. In our study, the location of a pixel relative to the associated attenuating tissues was another important factor in attenuation correction. The more accurate the registration between the attenuation map and the SPECT data, the better the result of the attenuation correction.
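
The indices above build on the universal image quality index of Wang and Bovik, Q = 4·σxy·x̄·ȳ / ((σx² + σy²)(x̄² + ȳ²)); the study's derived quality and quantity indices are not specified in the abstract, but the base index can be sketched as follows:

```python
import numpy as np

def universal_quality_index(x, y):
    """Wang-Bovik universal image quality index between a reference image x
    and a test image y; equals 1 only when the images are identical.
    Note: undefined (division by zero) for constant images."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

In practice the index is usually computed over a sliding window and averaged; the global form above is the simplest version.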