40 research outputs found

    Effect of ear canal pressure and age on wideband absorbance in young infants

    Objective: The study investigated the effect of ear canal pressure and age on wideband absorbance (WBA) in healthy young infants. Design: Using a cross-sectional design, WBA at 0.25 to 8 kHz was obtained from infants as the ear canal pressure was swept from +200 to −300 daPa. Study sample: The participants included 29 newborns, 9 infants each at 1 and 4 months, and 11 infants at 6 months of age who passed a distortion product otoacoustic emission test. Results: In general, negative ear canal pressures reduced WBA across the frequency range, while positive ear canal pressures reduced WBA from 0.25 to 2 kHz and above 4 kHz, with an increase in absorbance between 2 and 3 kHz compared to WBA at ambient pressure. The variation in WBA below 0.5 kHz as the pressure was varied was greatest in newborns and was progressively reduced in older infants up to the age of 6 months, suggesting stiffening of the ear canal with age. Conclusions: Significant changes in WBA were observed as a function of pressure and age. In particular, developmental effects on WBA were evident during the first six months of life

    Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data

    Real-world datasets exhibit imbalances of varying types and degrees. Several techniques based on re-weighting and margin adjustment of the loss are often used to enhance the performance of neural networks, particularly on minority classes. In this work, we analyze the class-imbalanced learning problem by examining the loss landscape of neural networks trained with re-weighting and margin-based techniques. Specifically, we examine the spectral density of the Hessian of the class-wise loss, through which we observe that the network weights converge to a saddle point in the loss landscapes of minority classes. Following this observation, we also find that optimization methods designed to escape from saddle points can be effectively used to improve generalization on minority classes. We further theoretically and empirically demonstrate that Sharpness-Aware Minimization (SAM), a recent technique that encourages convergence to flat minima, can be effectively used to escape saddle points for minority classes. Using SAM results in a 6.2% increase in accuracy on the minority classes over the state-of-the-art Vector Scaling Loss, leading to an overall average increase of 4% across imbalanced datasets. The code is available at https://github.com/val-iisc/Saddle-LongTail (NeurIPS 2022)
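    The SAM step referenced above is a two-pass update: take the gradient at the current weights, climb to an approximate worst-case point within an L2 ball of radius rho, recompute the gradient there, and apply it at the original weights. A minimal PyTorch-style sketch under those assumptions (illustrative names only; not the authors' released code at the repository above):

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) update (illustrative sketch)."""
    # First pass: gradient of the loss at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # Perturb each parameter towards the (approximate) worst case within radius rho.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads])) + 1e-12
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)                      # w -> w + e
            eps.append(e)
    model.zero_grad()

    # Second pass: gradient at the perturbed weights.
    loss_fn(model(x), y).backward()

    # Undo the perturbation and step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)                  # w + e -> w
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```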

    Cost-effectiveness of non-invasive methods for assessment and monitoring of liver fibrosis and cirrhosis in patients with chronic liver disease: systematic review and economic evaluation

    BACKGROUND: Liver biopsy is the reference standard for diagnosing the extent of fibrosis in chronic liver disease; however, it is invasive, with the potential for serious complications. Alternatives to biopsy include non-invasive liver tests (NILTs); however, the cost-effectiveness of these needs to be established. OBJECTIVE: To assess the diagnostic accuracy and cost-effectiveness of NILTs in patients with chronic liver disease. DATA SOURCES: We searched various databases from 1998 to April 2012, recent conference proceedings and reference lists. METHODS: We included studies that assessed the diagnostic accuracy of NILTs using liver biopsy as the reference standard. Diagnostic studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Meta-analysis was conducted using the bivariate random-effects model with correlation between sensitivity and specificity (whenever possible). Decision models were used to evaluate the cost-effectiveness of the NILTs. Expected costs were estimated using an NHS perspective and health outcomes were measured as quality-adjusted life-years (QALYs). Markov models were developed to estimate long-term costs and QALYs following testing, and antiviral treatment where indicated, for chronic hepatitis B (HBV) and chronic hepatitis C (HCV). NILTs were compared with each other, sequential testing strategies, biopsy and strategies including no testing. For alcoholic liver disease (ALD), we assessed the cost-effectiveness of NILTs in the context of potentially increasing abstinence from alcohol. Owing to a lack of data and treatments specifically for fibrosis in patients with non-alcoholic fatty liver disease (NAFLD), the analysis was limited to an incremental cost per correct diagnosis. An analysis of NILTs to identify patients with cirrhosis for increased monitoring was also conducted. RESULTS: Given a cost-effectiveness threshold of £20,000 per QALY, treating everyone with HCV without prior testing was cost-effective with an incremental cost-effectiveness ratio (ICER) of £9204. This was robust in most sensitivity analyses but sensitive to the extent of treatment benefit for patients with mild fibrosis. For HBV [hepatitis B e antigen (HBeAg)-negative] this strategy had an ICER of £28,137, which was cost-effective only if the upper bound of the standard UK cost-effectiveness threshold range (£30,000) is acceptable. For HBeAg-positive disease, two NILTs applied sequentially (hyaluronic acid and magnetic resonance elastography) were cost-effective at a £20,000 threshold (ICER: £19,612); however, the results were highly uncertain, with several test strategies having similar expected outcomes and costs. For patients with ALD, liver biopsy was the cost-effective strategy, with an ICER of £822. LIMITATIONS: A substantial number of tests had only one study from which diagnostic accuracy was derived; therefore, there is a high risk of bias. Most NILTs did not have validated cut-offs for diagnosis of specific fibrosis stages. The findings of the ALD model were dependent on assumptions about abstinence rates, and the modelling approach for NAFLD was hindered by the lack of evidence on clinically effective treatments. CONCLUSIONS: Treating everyone without NILTs is cost-effective for patients with HCV, but for HBeAg-negative disease only if the higher cost-effectiveness threshold is appropriate. For HBeAg-positive disease, two NILTs applied sequentially were cost-effective, but the results were highly uncertain. Further evidence for treatment effectiveness is required for ALD and NAFLD. STUDY REGISTRATION: This study is registered as PROSPERO CRD42011001561. FUNDING: The National Institute for Health Research Health Technology Assessment programme
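    The decision rule running through the abstract above is the incremental cost-effectiveness ratio, ICER = (incremental cost) / (incremental QALYs), judged against a willingness-to-pay threshold (£20,000-£30,000 per QALY here). A small worked sketch with hypothetical numbers (not the study's model outputs):

```python
def icer(cost_new, qalys_new, cost_ref, qalys_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_ref) / (qalys_new - qalys_ref)

# Hypothetical comparison of "treat all" vs. "test then treat" (illustrative values only).
ratio = icer(cost_new=12_000, qalys_new=9.5, cost_ref=8_000, qalys_ref=9.0)
threshold = 20_000  # GBP per QALY
print(f"ICER = £{ratio:,.0f}/QALY; cost-effective at £{threshold:,}/QALY: {ratio <= threshold}")
```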

    A genome-wide association study confirms VKORC1, CYP2C9, and CYP4F2 as principal genetic determinants of warfarin dose.

    We report the first genome-wide association study (GWAS) whose sample size (1,053 Swedish subjects) is sufficiently powered to detect genome-wide significance (p < 1.5 × 10⁻⁷) for polymorphisms that modestly alter therapeutic warfarin dose. The anticoagulant drug warfarin is widely prescribed for reducing the risk of stroke, thrombosis, pulmonary embolism, and coronary malfunction. However, Caucasians vary widely (20-fold) in the dose needed for therapeutic anticoagulation, and hence prescribed doses may be too low (risking serious illness) or too high (risking severe bleeding). Prior work established that approximately 30% of the dose variance is explained by single nucleotide polymorphisms (SNPs) in the warfarin drug target VKORC1 and another approximately 12% by two non-synonymous SNPs (*2, *3) in the cytochrome P450 warfarin-metabolizing gene CYP2C9. We initially tested each of 325,997 GWAS SNPs for association with warfarin dose by univariate regression and found the strongest statistical signals (p < 10⁻⁷⁸) at SNPs clustering near VKORC1 and the second lowest p-values (p < 10⁻³¹) emanating from CYP2C9. No other SNPs approached genome-wide significance. To enhance detection of weaker effects, we conducted multiple regression adjusting for known influences on warfarin dose (VKORC1, CYP2C9, age, gender) and identified a single SNP (rs2108622) with genome-wide significance (p = 8.3 × 10⁻¹⁰) that alters protein coding of the CYP4F2 gene. We confirmed this result in 588 additional Swedish patients (p < 0.0029) and, during our investigation, a second group provided independent confirmation from a scan of warfarin-metabolizing genes. We also thoroughly investigated copy number variations, haplotypes, and imputed SNPs, but found no additional highly significant warfarin associations. We present a power analysis of our GWAS that is generalizable to other studies, and conclude we had 80% power to detect genome-wide significance for common causative variants or markers explaining at least 1.5% of dose variance. These GWAS results provide further impetus for conducting large-scale trials assessing patient benefit from genotype-based forecasting of warfarin dose
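    The univariate screen described above amounts to regressing dose on each SNP's genotype (coded as 0/1/2 minor-allele counts) and comparing the per-SNP p-value with the genome-wide threshold; the follow-up step repeats this as a multiple regression with known covariates (VKORC1, CYP2C9, age, gender). A minimal sketch of the univariate screen on simulated placeholder data (not the study's genotypes or doses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data: 1,000 subjects and 5,000 SNPs coded as minor-allele counts (0/1/2),
# with a log-dose phenotype. These are simulated, not the study's data.
n_subjects, n_snps = 1_000, 5_000
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)
log_dose = rng.normal(loc=1.0, scale=0.3, size=n_subjects)

# Univariate screen: regress dose on each SNP and keep the p-value for the slope.
pvals = np.array([stats.linregress(genotypes[:, j], log_dose).pvalue
                  for j in range(n_snps)])

# Genome-wide significance threshold quoted in the abstract (1.5e-7 for ~326k SNPs).
hits = np.flatnonzero(pvals < 1.5e-7)
print(f"SNPs passing the genome-wide threshold: {hits.size}")
```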

    Perceptual Consequences of Conductive Hearing Loss: Speech Perception in Indigenous Students Learning English as a School Language

    The high incidence of ear disease and hearing loss in Australian Indigenous children is well documented. This study examined the effect of hearing loss and native-language phonology on learning English in Australian Indigenous children. Twenty-one standard Australian English consonants were considered in a consonant-vowel (CV) context. Each consonant was paired with itself and with every other consonant to yield 'same' and 'different' consonant pairs. The participants were classified into three groups: (1) English-speaking, non-Indigenous children without a history of hearing loss or otitis media (three males, four females, mean age 13.7 years); (2) Indigenous children speaking Tiwi as their native language, without a history of hearing loss or otitis media and learning English as a second language (two males, three females, mean age 12.1 years); and (3) Indigenous children speaking Tiwi as their native language, with a history of hearing loss and otitis media since childhood (six females, mean age 13.1 years). The reaction time from the onset of the second word of the pair to the pressing of a 'same' or 'different' button was measured. The results demonstrated that discrimination of consonants was differentially affected by differences in language. Hearing loss further complicated the difficulties that a child was already having with English, and tended to affect discrimination of English consonants more than those in the native language. The study suggests that amplification alone does not suffice and recommends that phonological awareness programs, with or without amplification, be part of a reading program from preschool for Indigenous children learning English as a 'school' language
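    The stimulus design above (21 consonants in CV context, combined into 'same' and 'different' pairs) can be enumerated directly. A short sketch using placeholder consonant labels, since the abstract does not list the exact inventory:

```python
from itertools import combinations

# Placeholder labels for the 21 consonants (the paper's exact inventory is not
# given in the abstract).
consonants = [f"C{i}" for i in range(1, 22)]

same_pairs = [(c, c) for c in consonants]            # 21 'same' pairs
different_pairs = list(combinations(consonants, 2))  # C(21, 2) = 210 unordered 'different' pairs

print(len(same_pairs), len(different_pairs))         # 21 210
```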

    High frequency (1000 Hz) tympanometry in six-month-old infants

    Objectives: High frequency tympanometry (HFT) using a 1000 Hz probe tone is recommended for infants from birth to six months of age. However, there is limited normative HFT data outside the newborn period. The objective of this study was to describe HFT data in healthy six-month-old infants. Methods: HFT and distortion product otoacoustic emission (DPOAE) tests were performed on 168 six-month-old full-term healthy infants. Ears that passed DPOAEs and had a single-peaked tympanogram were included for analysis. The tympanometric measures included in the normative HFT data were tympanometric peak pressure (TPP), peak compensated static admittance (Ytm) and tympanometric width (TW). Results: A total of 118 ears from 118 infants who passed DPOAE and had single-peaked tympanograms were included in the analysis. Normative data were presented for TPP, Ytm and TW. A comparison of the present study with studies on neonates and younger infants revealed significantly higher mean Ytm and lower mean TPP for six-month-old infants. Conclusion: Significant differences in HFT findings between neonates and six-month-old infants suggest a developmental trend and confirm the need for separate age-appropriate norms for the tympanometric measures. Normative HFT data described in the present study may provide useful information for optimizing the diagnosis of conductive conditions in six-month-old infants

    Tonal Masking Level Differences in Aboriginal Children: Implications for Binaural Interaction, Auditory Processing Disorders and Education

    The masking level difference (MLD) is a psychoacoustic measure of binaural interaction and central auditory processing related to extracting signals from noise backgrounds. It represents the improvement in threshold sensitivity under antiphasic listening conditions relative to homophasic conditions. A low-frequency pure tone (500 Hz) was presented in phase (So) binaurally to the subject in the presence of an in-phase noise masker (No). The behavioural threshold obtained in this condition was used as a reference. The behavioural threshold was again determined with the pure tone stimulus presented antiphasically (Sπ), and the difference in thresholds was calculated to determine the MLD. The MLD was measured for a 500 Hz pure tone in 36 Aboriginal children (16 males and 20 females) from an Aboriginal community school (Nguiu, Tiwi Islands) where conductive hearing loss, due to otitis media, is endemic. The control group consisted of 62 normal-hearing children (40 males and 22 females) from a private school in Darwin. Aboriginal children showed a mean MLD of 7.76 dB, whereas the control group exhibited a mean MLD of 11.21 dB; the MLD was consistently lower in Aboriginal children than in non-Aboriginal normal-hearing children. Auditory processing disorders (APDs) have been shown to be related to early auditory deprivation, a common feature of the chronic conductive hearing loss observed frequently in Aboriginal children. Thus, the MLD provides a metric for assessing binaural hearing abilities that may be relevant to the assessment of APD and hearing aid fitting. The MLD is a predictive measure that is less linguistically and culturally biased, and may be more easily administered, than many speech and language test procedures used in diagnosing APD
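    As defined above, the MLD is simply the behavioural threshold in the homophasic condition (SoNo) minus the threshold in the antiphasic condition (SπNo), expressed in dB of masking release. A one-function illustration with hypothetical thresholds (not the study's data):

```python
def masking_level_difference(threshold_sono_db, threshold_spino_db):
    """MLD = homophasic (SoNo) threshold minus antiphasic (SpiNo) threshold, in dB."""
    return threshold_sono_db - threshold_spino_db

# Hypothetical 500 Hz thresholds for a single listener (illustrative values only).
mld = masking_level_difference(threshold_sono_db=62.0, threshold_spino_db=51.0)
print(f"MLD = {mld:.1f} dB")  # 11.0 dB of release from masking
```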

    Binaural Speech Discrimination in Noise With Bone Conduction: Applications for Hearing Loss in High-Risk Populations

    The use of bone-conducted signals for children with chronic otitis media may be considered when earphones or hearing aid receivers are contraindicated because of discharging or painful ears. Coupling classroom FM hearing aids to a bone conduction (BC) transducer may therefore be a beneficial application, especially if binaural function is also improved. This study investigated speech discrimination in diotic and dichotic noise. Confusion matrices were obtained for consonant-vowel (CV) exemplars presented to normal-hearing subjects through BC in both correlated and uncorrelated noise. Thirty-six university-aged listeners served as subjects. The CV exemplars were presented randomly, 20 times each, for a total of 420 stimulus presentations per subject. The stimuli were presented at a signal level of 55 dB HL through a B-70A BC transducer worn at the forehead position. Each subject was requested to write down the consonants as they heard them. Three conditions were utilised. In condition 1, CV exemplars were presented through air conduction (earphones) to assess the quality of the testing apparatus, including the CV exemplars, and to provide a reference for comparison to BC. In condition 2, these exemplars were presented through the BC transducer. Condition 3 involved two separate listening tasks in which CV exemplars were presented through the BC transducer and band-limited white noise was presented binaurally, correlated and uncorrelated, through earphones. The results indicated that speech discrimination with BC was excellent and equal to air-conduction consonant identification. The confusion matrices showed higher speech discrimination scores in the uncorrelated noise condition, revealing a binaural advantage for BC hearing. Distinctive feature identification was also greater for the uncorrelated noise condition
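    The confusion-matrix analysis above reduces to tallying presented versus reported consonants and reading percent correct off the diagonal. A minimal sketch, assuming stimulus and response labels are available as parallel lists (toy data, not the study's responses):

```python
import numpy as np

def confusion_matrix(presented, reported, labels):
    """Rows = presented consonant, columns = reported consonant."""
    index = {c: i for i, c in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)), dtype=int)
    for p, r in zip(presented, reported):
        m[index[p], index[r]] += 1
    return m

# Toy data for three consonants (illustrative only).
labels = ["p", "t", "k"]
presented = ["p", "p", "t", "t", "k", "k"]
reported  = ["p", "t", "t", "t", "k", "p"]
m = confusion_matrix(presented, reported, labels)
print(m)
print(f"{100 * np.trace(m) / m.sum():.1f}% correct")  # diagonal / total presentations
```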

    Effect of negative middle ear pressure and compensated pressure on wideband absorbance and otoacoustic emissions in children

    Objective: This study investigated pressurized transient evoked otoacoustic emission (TEOAE) responses and wideband absorbance (WBA) in healthy ears and ears with negative middle ear pressure (NMEP). Method: In this cross-sectional study, TEOAE amplitude, signal-to-noise ratio, and WBA were measured at ambient and tympanometric peak pressure (TPP) in 36 ears from 25 subjects with healthy ears (age range: 3.1-13.0 years) and 88 ears from 76 patients with NMEP (age range: 2.0-13.1 years), divided into 3 groups based on NMEP (Group 1 with TPP between -101 and -200 daPa, Group 2 with TPP between -201 and -300 daPa, and Group 3 with TPP between -301 and -400 daPa). Results: Mean TEOAE amplitude, signal-to-noise ratio, and WBA were increased at TPP relative to those measured at ambient pressure between 0.8 and 1.5 kHz. A further decrease in TPP beyond -300 daPa did not result in further increases in the mean TEOAE or WBA at TPP. The correlation between TEOAE and WBA depended on frequency, pressure condition, and subject group. There was no difference in pass rates between the 2 pressure conditions for the control group, while the 3 NMEP groups demonstrated an improvement in pass rates at TPP. With pressurization, the false alarm rate for TEOAE due to NMEP was reduced by 17.8% for NMEP Group 1, 29.2% for NMEP Group 2, and 15.8% for NMEP Group 3. Conclusion: Results demonstrated the feasibility and clinical benefits of measuring TEOAE and WBA under pressurized conditions. Pressurized TEOAE and WBA should be used for assessment of ears with NMEP in hearing screening programs to reduce false alarm rates
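    The false-alarm reduction reported above is the referral (fail) rate among NMEP-only ears at ambient pressure minus the rate at TPP. A short illustration with hypothetical counts (not the study's data):

```python
def false_alarm_rate(n_fails, n_ears):
    """Proportion of NMEP-only ears incorrectly failing the screen."""
    return n_fails / n_ears

# Hypothetical counts for one NMEP group (illustrative only).
ambient = false_alarm_rate(n_fails=12, n_ears=40)  # 0.30
at_tpp = false_alarm_rate(n_fails=5, n_ears=40)    # 0.125
print(f"False alarm rate reduced by {100 * (ambient - at_tpp):.1f} percentage points")
```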