
    The what and where of adding channel noise to the Hodgkin-Huxley equations

    One of the most celebrated successes in computational biology is the Hodgkin-Huxley framework for modeling electrically active cells. This framework, expressed through a set of differential equations, synthesizes the impact of ionic currents on a cell's voltage -- and the highly nonlinear impact of that voltage back on the currents themselves -- into the rapid push and pull of the action potential. Later studies confirmed that these cellular dynamics are orchestrated by individual ion channels, whose conformational changes regulate the conductance of each ionic current. Thus, kinetic equations familiar from physical chemistry are the natural setting for describing conductances; for small-to-moderate numbers of channels, these will predict fluctuations in conductances and stochasticity in the resulting action potentials. At first glance, the kinetic equations provide a far more complex (and higher-dimensional) description than the original Hodgkin-Huxley equations. This has prompted more than a decade of efforts to capture channel fluctuations with noise terms added to the Hodgkin-Huxley equations. Many of these approaches, while intuitively appealing, produce quantitative errors when compared to kinetic equations; others, as only very recently demonstrated, are both accurate and relatively simple. We review what works, what doesn't, and why, seeking to build a bridge to well-established results for the deterministic Hodgkin-Huxley equations. As such, we hope that this review will speed emerging studies of how channel noise modulates electrophysiological dynamics and function. We supply user-friendly Matlab simulation code of these stochastic versions of the Hodgkin-Huxley equations on the ModelDB website (accession number 138950) and http://www.amath.washington.edu/~etsb/tutorials.html. Comment: 14 pages, 3 figures, review article
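    The idea of adding noise terms to the Hodgkin-Huxley equations can be sketched with a minimal Euler-Maruyama simulation. The gating variables receive a Langevin-style channel-noise term scaled by the channel count, in the spirit of the subunit (Fox-Lu-type) approximations the review discusses; the channel counts `N_Na`, `N_K` and the stimulus `I_app` below are illustrative assumptions, not values from the paper, and the paper's own Matlab code (ModelDB 138950) is the authoritative implementation.

    ```python
    import numpy as np

    def alpha_beta(V):
        """Classic Hodgkin-Huxley voltage-dependent rate constants (ms^-1)."""
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        return an, bn, am, bm, ah, bh

    def simulate(T=100.0, dt=0.01, N_Na=6000, N_K=1800, I_app=10.0, seed=0):
        """Euler-Maruyama HH simulation with subunit-level channel noise.

        N_Na, N_K: assumed channel counts setting the noise magnitude;
        larger counts recover the deterministic HH equations.
        """
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        V, m, h, n = -65.0, 0.05, 0.6, 0.32
        Vs = np.empty(n_steps)

        def langevin(x, a, b, N):
            # Drift is the deterministic gating kinetics; the diffusion
            # term ~ sqrt((a(1-x) + b x)/N) shrinks as channel count grows.
            drift = a * (1 - x) - b * x
            diff = np.sqrt(max(a * (1 - x) + b * x, 0.0) / N)
            x_new = x + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
            return float(np.clip(x_new, 0.0, 1.0))

        for i in range(n_steps):
            an, bn, am, bm, ah, bh = alpha_beta(V)
            m = langevin(m, am, bm, N_Na)
            h = langevin(h, ah, bh, N_Na)
            n = langevin(n, an, bn, N_K)
            # Standard HH currents (uA/cm^2) with C_m = 1 uF/cm^2
            I_Na = 120.0 * m**3 * h * (V - 50.0)
            I_K = 36.0 * n**4 * (V + 77.0)
            I_L = 0.3 * (V + 54.4)
            V += dt * (I_app - I_Na - I_K - I_L)
            Vs[i] = V
        return Vs
    ```

    With a suprathreshold stimulus the voltage trace spikes, and the timing of those spikes jitters from run to run in a way that depends on the assumed channel counts.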

    Barriers to formal healthcare utilisation among poor older people under the livelihood empowerment against poverty programme in the Atwima Nwabiagya District of Ghana

    Abstract: Background: Even though there is a growing literature on barriers to formal healthcare use among older people, little is known from the perspective of vulnerable older people in Ghana. Involving poor older people under the Livelihood Empowerment Against Poverty (LEAP) programme, this study explores barriers to formal healthcare use in the Atwima Nwabiagya District of Ghana. Methods: Interviews and focus group discussions were conducted with 30 poor older people, 15 caregivers and 15 formal healthcare providers in the Atwima Nwabiagya District of Ghana. Data were analysed using the thematic analytical framework, and presented based on an a posteriori inductive reduction approach. Results: Four main barriers to formal healthcare use were identified: physical accessibility barriers (poor transport system and poor architecture of facilities), economic barriers (low income coupled with high charges, and the non-comprehensive nature of the National Health Insurance Scheme [NHIS]), social barriers (communication/language difficulties and poor family support) and an unfriendly healthcare environment (poor attitude of healthcare providers). Conclusions: Removing these barriers would require concerted effort and substantial financial investment by stakeholders. We argue that improvement in rural transport services, implementation of free healthcare for poor older people, strengthening of family support systems, recruitment of language translators at the health facilities and establishment of attitudinal change programmes would lessen barriers to formal healthcare use among poor older people. This study has implications for health equity and the health policy framework in Ghana.

    The Love of Money and Pay Level Satisfaction: Measurement and Functional Equivalence in 29 Geopolitical Entities around the World

    Demonstrating the equivalence of constructs is a key requirement for cross-cultural empirical research. The major purpose of this paper is to demonstrate how to assess measurement and functional equivalence, or invariance, using the 9-item, 3-factor Love of Money Scale (LOMS, a second-order factor model) and the 4-item, 1-factor Pay Level Satisfaction Scale (PLSS, a first-order factor model) across 29 samples on six continents (N = 5973). In step 1, we tested the configural, metric and scalar invariance of the LOMS; 17 samples achieved measurement invariance. In step 2, we applied the same procedures to the PLSS; nine samples achieved measurement invariance. Five samples (Brazil, China, South Africa, Spain and the USA) passed the measurement invariance criteria for both measures. In step 3, we found that for these two measures, common method variance was non-significant. In step 4, we tested the functional equivalence between the Love of Money Scale and Pay Level Satisfaction Scale, and achieved functional equivalence for these two scales in all five samples. The results of this study suggest the critical importance of evaluating and establishing measurement equivalence in cross-cultural studies. Suggestions for remedying measurement non-equivalence are offered.

    An efficient intelligent analysis system for confocal corneal endothelium images

    A confocal microscope provides a sequence of images of the corneal layers and structures at different depths, from which medical clinicians can extract clinical information on the state of health of the patient's cornea. A hybrid model based on snake and particle swarm optimisation (S-PSO) is proposed in this paper to analyse confocal endothelium images. The proposed system is able to pre-process images (including quality enhancement and noise reduction), detect cells, measure cell densities and identify abnormalities in the analysed data sets. Three normal corneal data sets acquired using a confocal microscope, and three abnormal confocal endothelium images associated with diseases, have been investigated with the proposed system. Promising results are presented, and the performance of this system is compared with manual analysis and two morphology-based approaches. The average differences between the manual cell densities and the automatic cell densities calculated using S-PSO and the two morphology-based approaches are 5%, 7% and 13%, respectively. The developed system will be deployable as a clinical tool to underpin the expertise of ophthalmologists in analysing confocal corneal images
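    The reported comparison boils down to a mean percentage difference between automated and manual cell-density estimates. A minimal sketch of that metric, using made-up density values (the paper does not list per-image numbers):

    ```python
    import numpy as np

    # Hypothetical endothelial cell densities in cells/mm^2; the manual
    # counts and the S-PSO outputs below are illustrative stand-ins for
    # the per-image values the paper aggregates into its 5% figure.
    manual = np.array([2700.0, 2500.0, 2900.0])
    auto_spso = np.array([2810.0, 2630.0, 3050.0])

    # Mean absolute percentage difference relative to the manual counts
    pct_diff = np.abs(auto_spso - manual) / manual * 100
    mean_pct_diff = pct_diff.mean()
    ```

    The same calculation, applied per method, yields the 5% / 7% / 13% ranking the abstract reports for S-PSO versus the two morphology-based approaches.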

    Domain Adaptation and Feature Fusion for the Detection of Abnormalities in X-Ray Forearm Images

    The main challenge in adopting deep learning models is limited training data, which can lead to poor generalization and a high risk of overfitting, particularly when detecting forearm abnormalities in X-ray images. Transfer learning (TL) from ImageNet is commonly used to address these issues. However, this technique is ineffective for grayscale medical imaging because of a mismatch in the learned features. To mitigate this issue, we propose a domain adaptation deep TL approach that involves training six pre-trained ImageNet models on a large number of X-ray images from various body parts, then fine-tuning the models on a target dataset of forearm X-ray images. Furthermore, a feature fusion technique combines the extracted features from the deep models to train machine learning classifiers. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to verify our results; this method shows which parts of an image the model uses to make its classification decisions. The statistical results and Grad-CAM show that the proposed TL approach alleviates the domain mismatch problem and is more accurate in its decision-making than models trained using the standard ImageNet TL technique, achieving an accuracy of 90.7%, an F1-score of 90.6% and a Cohen's kappa of 81.3%. These results indicate that the proposed approach effectively improved the performance of the employed models, both individually and with the fusion technique, and helped to reduce the domain mismatch between the source of TL and the target task.
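    The feature-fusion step the abstract describes, concatenating features extracted by several fine-tuned backbones and training a classical machine learning classifier on the result, can be sketched as follows. Random class-dependent arrays stand in for the per-model deep features, and the logistic-regression head, feature dimensions and class labels are illustrative assumptions rather than the paper's actual configuration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 400
    y = rng.integers(0, 2, n)  # 0 = normal, 1 = abnormal forearm X-ray

    # Synthetic stand-ins for features from two fine-tuned backbones:
    # each "model" produces a class-dependent Gaussian feature vector.
    feat_a = rng.normal(loc=y[:, None] * 0.8, scale=1.0, size=(n, 64))
    feat_b = rng.normal(loc=y[:, None] * 0.5, scale=1.0, size=(n, 32))

    # Feature fusion by concatenation, then a classical ML classifier.
    fused = np.hstack([feat_a, feat_b])
    X_tr, X_te, y_tr, y_te = train_test_split(fused, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    ```

    Concatenation is the simplest fusion choice; it lets the downstream classifier weight each backbone's features independently, at the cost of a higher-dimensional input.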