
    Denoising Of Satellite Images

    We use images in our day-to-day life to keep a record of information or simply to convey a message. Many parameters determine the quality of an image, and any captured image is a degraded version of the original scene: a perfectly accurate, ideal image is hypothetical and can never be obtained. The aim of image processing is therefore to recover the best possible image with the fewest errors, and correcting this degradation is essential before the image can be used for any further task. Excessive lighting effects, noise, geometric faults, unwanted colour variations, and blur are some of the important defects that must be addressed to obtain a good, useful image. This paper addresses image degradation caused by noise. Noise is any undesired information that adversely affects the quality and content of an image. The primary factors responsible for noise are the medium through which the photograph is taken (climatic and atmospheric factors such as pressure and temperature), the accuracy of the capturing instrument (for instance, the camera), and the quantization of the data used to store the image. Such noise can be removed by an image-processing technique called image restoration, which reconstructs the original image from a noisy one; that is, it applies an operation that is effectively the inverse of the imperfections introduced by the image-formation system. A degraded image can thus be improved by various filtering processes that reverse the noising. These filters are simple, easy to apply in software, and widely used in preprocessing modules; which filter performs best depends on the type of noise present. In this paper, the restoration performance of the arithmetic mean filter (AMF), geometric mean filter (GMF), and median filter is analyzed on satellite images corrupted by impulse noise, speckle noise, and Gaussian noise; satellite images are chosen because they are routinely corrupted by such noise. From the restored images and their PSNR values for various satellite images under different noises, we draw the following conclusions:
    • The median filter gives better performance than the arithmetic mean and geometric mean filters for satellite images affected by impulse noise.
    • The arithmetic mean filter gives better performance than the median and geometric mean filters for Gaussian noise on all satellite images.
    • The arithmetic mean filter gives better performance than the median and geometric mean filters for speckle noise on all satellite images.
    The median filter is especially effective when white and black spots (salt-and-pepper noise) appear in the image: each pixel is replaced by the median value of an m×n window around it. Once such spots appear, it is difficult to identify exactly which pixels are corrupted, and replacing them with the AMF, GMF, or harmonic mean filter (HMF) is insufficient because the replacement values deviate from the originals; the median filter is observed to outperform the AMF and GMF on such distorted images. Restoration performance can be increased further, removing noise completely while preserving image edges, by using linear and nonlinear filters together.
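    As a rough illustration of the filters compared above (a minimal NumPy sketch, not the authors' implementation), the following applies 3×3 median, arithmetic-mean, and geometric-mean filters to a synthetic smooth image corrupted by impulse (salt-and-pepper) noise and scores each result with PSNR; the test image, 10% noise density, and window size are arbitrary choices for demonstration:

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def filter_image(img, size, reducer):
    """Apply a size x size sliding-window reducer with edge padding."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = reducer(padded[i:i + size, j:j + size])
    return out

def median_filter(img, size=3):
    return filter_image(img, size, np.median)

def arithmetic_mean(img, size=3):
    return filter_image(img, size, np.mean)

def geometric_mean(img, size=3):
    # Geometric mean = exp(mean(log)); shift by +1 to avoid log(0).
    return filter_image(img + 1.0, size, lambda w: np.exp(np.log(w).mean())) - 1.0

# Smooth synthetic "image": a diagonal ramp in [0, 252].
clean = np.add.outer(np.arange(64.0), np.arange(64.0)) * 2.0

# Impulse (salt-and-pepper) noise: 10% of pixels forced to 0 or 255.
rng = np.random.default_rng(0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.10
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

for name, f in [("median", median_filter),
                ("arithmetic mean", arithmetic_mean),
                ("geometric mean", geometric_mean)]:
    print(f"{name}: PSNR = {psnr(clean, f(noisy)):.2f} dB")
```

    Consistent with the paper's conclusion, the median filter should score the highest PSNR here, since impulse outliers barely move the window median but strongly shift the window means.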

    Non-invasive Diagnostic Measures of Sensorineural Hearing Loss in Chinchillas

    According to the World Health Organization, disabling hearing loss affects nearly 466 million people worldwide. Sensorineural hearing loss (SNHL), which is characterized as damage to the inner ear (e.g., cochlear hair cells) and/or to the neural pathways connecting the inner ear and brain, accounts for 90% of all disabling hearing loss. More concerning is that significant perceptual and physiological aspects of SNHL remain “hidden” from standard clinical diagnostics. Hidden hearing loss (HHL) manifests as the inability to understand speech in loud, noisy environments (e.g., listening in a noisy restaurant) despite a normal audiogram (i.e., normal detection of soft sounds). Recently, HHL has been suggested to result from cochlear synaptopathy, a significant loss of inner-hair-cell/afferent-nerve synaptic terminals after an acoustic over-exposure causing “only” a temporary threshold shift (TTS), e.g., after a rock concert. In this study, three physiological non-invasive diagnostic measures of HHL will be evaluated in chinchillas: otoacoustic emissions, auditory brainstem responses, and middle-ear-muscle reflex strength. As a first step, the effect of anesthesia will be evaluated: four animals will be tested twice while awake and then twice while under anesthesia (xylazine and ketamine). The repeatability, accuracy, and precision of each measure will be examined. Future work will include collecting these measures before and after TTS-inducing noise exposure. The long-term goal of this study is to establish and characterize reliable and efficient HHL measures in the lab using our noise-induced synaptopathy chinchilla model, and then to translate the animal results into a plausible clinical HHL diagnostic for humans.

    Neural Representations of Natural Speech in a Chinchilla Model of Noise-Induced Hearing Loss

    Hearing loss hinders the communication ability of many individuals despite state-of-the-art interventions. Animal models of different hearing-loss etiologies can help improve the clinical outcomes of these interventions; however, several gaps exist. First, the translational value of animal models is currently limited because anatomically and physiologically specific data obtained from animals are analyzed differently from the noninvasive evoked responses that can be recorded from humans. Second, we lack a comprehensive understanding of the neural representation of everyday sounds (e.g., naturally spoken speech) in real-life settings (e.g., in background noise). This is true even at the level of the auditory nerve, which is the first bottleneck of auditory information flow to the brain and the first neural site to exhibit crucial effects of hearing loss. To address these gaps, we developed a unifying framework that allows direct comparison of invasive spike-train data and noninvasive far-field data in response to stationary and nonstationary sounds. We applied this framework to recordings from single auditory-nerve fibers and frequency-following responses from the scalp of anesthetized chinchillas with either normal hearing or noise-induced mild-to-moderate hearing loss, in response to a speech sentence in noise. Key results for speech coding following hearing loss include: (1) coding deficits for voiced speech manifest as tonotopic distortions without a significant change in driven rate or spike-time precision; (2) linear amplification aimed at countering the audiometric threshold shift is insufficient to restore neural activity for low-intensity consonants; (3) susceptibility to background noise increases as a direct result of distorted tonotopic mapping following acoustic trauma; and (4) the temporal-place representation of pitch is also degraded. Finally, we developed a noninvasive metric to potentially diagnose distorted tonotopy in humans.
    These findings help explain the neural origins of common perceptual difficulties that listeners with hearing impairment experience, offer several insights for making hearing aids more individualized, and highlight the importance of better clinical diagnostics and noise-reduction algorithms.

    Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model

    For robust vocalization perception, the auditory system must generalize over variability in vocalization production as well as variability arising from the listening environment (e.g., noise and reverberation). We previously demonstrated, using guinea pig and marmoset vocalizations, that a hierarchical model generalized over production variability by detecting sparse intermediate-complexity features that are maximally informative about vocalization category from a dense spectrotemporal input representation. Here, we explore three biologically feasible model extensions to generalize over environmental variability: (1) training in degraded conditions, (2) adaptation to sound statistics in the spectrotemporal stage, and (3) sensitivity adjustment at the feature-detection stage. All mechanisms improved vocalization categorization performance, but improvement trends varied across degradation type and vocalization type. One or more adaptive mechanisms were required for model performance to approach the behavioral performance of guinea pigs on a vocalization categorization task. These results highlight the contributions of adaptive mechanisms at multiple auditory processing stages to achieving robust auditory categorization.
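    As a toy illustration of mechanism (3) above (an invented sketch, not the published model), a feature detector whose threshold is re-centered on the running statistics of its input triggers far less spuriously when noise shifts the overall level of the feature evidence:

```python
import numpy as np

rng = np.random.default_rng(1)

def detect(feature_scores, threshold):
    """Binary detection of an intermediate-complexity feature."""
    return feature_scores > threshold

# Hypothetical "feature evidence": additive background noise raises the
# overall level, so a fixed threshold tuned for clean input over-triggers.
clean_scores = rng.normal(0.0, 1.0, 10_000)
noisy_scores = clean_scores + 1.5  # level shift caused by noise

fixed_thr = 2.0  # tuned for the clean condition
# Adaptive sensitivity: track the input statistics (here, mean + 2 SD).
adapt_thr = noisy_scores.mean() + 2.0 * noisy_scores.std()

false_rate_fixed = detect(noisy_scores, fixed_thr).mean()
false_rate_adapt = detect(noisy_scores, adapt_thr).mean()
print(f"fixed threshold false-trigger rate:    {false_rate_fixed:.3f}")
print(f"adaptive threshold false-trigger rate: {false_rate_adapt:.3f}")
```

    The same normalization idea extends to the spectrotemporal stage (mechanism (2)), where gain rather than threshold adapts to the stimulus statistics.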

    Spectrally specific temporal analyses of spike-train responses to complex sounds: A unifying framework.

    Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals. This ability provides opportunities to develop invaluable insight into proper interpretations of evoked responses, which benefits both basic-science studies of neural mechanisms and translational applications, e.g., diagnostic development. However, these comparisons have been limited by a disconnect between the types of spectrotemporal analyses used with single-unit spike trains and evoked responses, which results because these response types are fundamentally different (point-process versus continuous-valued signals) even though the responses themselves are related. Here, we describe a unifying framework to study temporal coding of complex sounds that allows spike-train and evoked-response data to be analyzed and compared using the same advanced signal-processing techniques. The framework uses a set of peristimulus-time histograms computed from single-unit spike trains in response to polarity-alternating stimuli to allow advanced spectral analyses of both slow (envelope) and rapid (temporal fine structure) response components. 
    Demonstrated benefits include: (1) novel spectrally specific temporal-coding measures that are less confounded by distortions due to hair-cell transduction, synaptic rectification, and neural stochasticity than previous metrics (e.g., the correlogram peak height); (2) spectrally specific analyses of spike-train modulation coding (magnitude and phase), which can be directly compared to modern perceptually based models of speech intelligibility (e.g., those that depend on modulation filter banks); and (3) superior spectral resolution in analyzing the neural representation of nonstationary sounds, such as speech and music. This unifying framework significantly expands the potential of preclinical animal models to advance our understanding of the physiological correlates of perceptual deficits in real-world listening following sensorineural hearing loss.
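    To make the polarity-alternating PSTH idea concrete, here is a hypothetical toy example (not the paper's code): responses to opposite-polarity stimuli are combined so that the sum PSTH isolates the polarity-invariant envelope component and the difference PSTH isolates the polarity-flipping temporal-fine-structure component, whose spectra can then be examined with ordinary FFT tools. The rates and frequencies are invented for illustration:

```python
import numpy as np

fs = 10_000                      # PSTH sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)    # 0.5 s of response

# Toy PSTHs to opposite-polarity stimuli: a rate that follows a 20-Hz
# envelope (polarity-invariant) plus a 500-Hz fine-structure component
# that flips sign with stimulus polarity.
env = 50 * (1 + np.sin(2 * np.pi * 20 * t))
tfs = 30 * np.sin(2 * np.pi * 500 * t)
psth_pos = env + tfs             # response to positive-polarity stimulus
psth_neg = env - tfs             # response to negative-polarity stimulus

# Sum PSTH emphasizes envelope coding; difference PSTH emphasizes TFS.
sum_psth = (psth_pos + psth_neg) / 2
dif_psth = (psth_pos - psth_neg) / 2

freqs = np.fft.rfftfreq(t.size, 1 / fs)
S = np.abs(np.fft.rfft(sum_psth - sum_psth.mean()))  # drop DC rate
D = np.abs(np.fft.rfft(dif_psth))

print("sum-PSTH spectral peak:", freqs[S.argmax()], "Hz")  # envelope
print("dif-PSTH spectral peak:", freqs[D.argmax()], "Hz")  # fine structure
```

    With real spike trains, the PSTHs would be histograms of spike times across repetitions of each stimulus polarity, but the sum/difference decomposition and spectral analysis proceed identically.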

    Estimation of the air-tissue boundaries of the vocal tract in the mid-sagittal plane from electromagnetic articulograph data

    An electromagnetic articulograph (EMA) provides movement data from sensors attached to a few flesh points on different speech articulators, including the lips, jaw, and tongue, while a subject speaks. In this work, we quantify the amount of information these flesh points provide about the vocal tract (VT) shape in the mid-sagittal plane. The VT shape is described by the air-tissue boundaries (ATBs), which are obtained manually from real-time magnetic resonance imaging (rtMRI) recordings of a set of utterances spoken by a subject from whom EMA recordings of the same utterances are also available. We propose a two-stage approach for reconstructing the VT shape from the EMA data: the first stage co-registers the EMA data with the VT shape from the rtMRI frames, and the second stage estimates the ATBs from the co-registered EMA points. Co-registration is done by a spatio-temporal alignment of the VT shapes from the rtMRI frames with the EMA sensor data, while a radial basis function (RBF) network is used to estimate the ATBs. Experiments with EMA and rtMRI recordings of five sentences spoken by one male and one female speaker show that the VT shape in the mid-sagittal plane can be recovered from the EMA flesh points with average reconstruction errors of 2.55 mm and 2.75 mm, respectively.
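    As a minimal sketch of the RBF-estimation stage, with an invented 1-D "boundary" curve standing in for the real air-tissue contours (the paper's data and feature representation are not available here), a Gaussian RBF network is fit to a few sparse sensor-like samples and then evaluated on a dense grid:

```python
import numpy as np

def rbf_fit(x_train, y_train, centers, sigma):
    """Least-squares weights for a Gaussian RBF network."""
    Phi = np.exp(-((x_train[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
    return w

def rbf_predict(x, centers, sigma, w):
    Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    return Phi @ w

# Hypothetical mid-sagittal boundary y = f(x), sampled at a handful of
# flesh-point-like locations (standing in for EMA sensor positions).
def boundary(x):
    return 0.5 * np.sin(2 * np.pi * x) + 0.1 * x

x_sensors = np.linspace(0, 1, 8)   # sparse "sensor" positions
y_sensors = boundary(x_sensors)

centers, sigma = x_sensors, 0.2    # one center per sensor (assumed width)
w = rbf_fit(x_sensors, y_sensors, centers, sigma)

x_dense = np.linspace(0, 1, 200)   # dense boundary estimate
y_est = rbf_predict(x_dense, centers, sigma, w)
mean_err = np.mean(np.abs(y_est - boundary(x_dense)))  # arbitrary units
print(f"mean reconstruction error: {mean_err:.4f}")
```

    In the actual system, the inputs would be the co-registered 2-D EMA coordinates and the targets the manually traced ATB contours, but the fit-then-interpolate structure is the same.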

    Antioxidant and antimicrobial activities and GC/MS-based phytochemical analysis of two traditional Lichen species Trypethellium virens and Phaeographis dendritica

    Background: Lichens are complex organisms formed by a symbiotic relationship between fungi and algae. They are used for human and animal nutrition and have been used in folk medicine in many countries over a considerable period of time. In the present study, various solvent extracts of Trypethellium virens and Phaeographis dendritica were tested for their antioxidant and antimicrobial activity. Results: Phytochemical analysis by GC/MS revealed phenolics (1.273%), terpenes (0.963%), hydrocarbons (2.081%), benzofurans (2.081%), quinones (1.273%), alkanes (0.963%), and aliphatic aldehydes (0.963%) as the predominant compounds in Trypethellium virens SPTV02, whereas secondary alcohols (1.184%), alkaloids (1.184%), and fatty acids (4.466%) were the major constituents in Phaeographis dendritica. The antioxidant assays of the methanolic extracts of T. virens and P. dendritica revealed the presence of total phenolics and terpenoids. The methanolic extracts of both lichens exhibited encouraging DPPH antiradical activity, with IC50 values of 62.4 ± 0.76 µg/ml for T. virens and 68.48 ± 0.45 µg/ml for P. dendritica. Similarly, the ferric reducing power assay showed high reducing activity. Further, the methanolic lichen extracts showed promising antimicrobial activity against pathogens, with MICs from 62.5 to 500 µg/ml. Conclusion: The study concludes that both lichens could be used as new natural sources of antioxidants and antimicrobial agents, which can be exploited for pharmaceutical applications.