
    AMPA Receptor Phosphorylation and Synaptic Colocalization on Motor Neurons Drive Maladaptive Plasticity below Complete Spinal Cord Injury.

    Clinical spinal cord injury (SCI) is accompanied by comorbid peripheral injury in 47% of patients. Human and animal modeling data have shown that painful peripheral injuries undermine long-term recovery of locomotion through unknown mechanisms. Peripheral nociceptive stimuli induce maladaptive synaptic plasticity in dorsal horn sensory systems through AMPA receptor (AMPAR) phosphorylation and trafficking to synapses. Here we test whether ventral horn motor neurons in rats demonstrate similar experience-dependent maladaptive plasticity below a complete SCI in vivo. Quantitative biochemistry demonstrated that intermittent nociceptive stimulation (INS) rapidly and selectively increases AMPAR subunit GluA1 serine 831 phosphorylation and localization to synapses in the injured spinal cord, while reducing synaptic GluA2. These changes predict motor dysfunction in the absence of cell death signaling, suggesting an opportunity for therapeutic reversal. Automated confocal time-course analysis of lumbar ventral horn motor neurons confirmed a time-dependent increase in synaptic GluA1 with a concurrent decrease in synaptic GluA2. Optical fractionation of neuronal plasma membranes revealed GluA2 removal from extrasynaptic sites on motor neurons early after INS, followed by removal from synapses 2 h later. As GluA2-lacking AMPARs are canonical calcium-permeable AMPARs (CP-AMPARs), their stimulus- and time-dependent insertion provides a therapeutic target for limiting calcium-dependent dynamic maladaptive plasticity after SCI. Confirming this, a selective CP-AMPAR antagonist protected against INS-induced maladaptive spinal plasticity, restoring adaptive motor responses on a sensorimotor spinal training task. These findings highlight the critical involvement of AMPARs in experience-dependent spinal cord plasticity after injury and provide a pharmacologically targetable synaptic mechanism by which early postinjury experience shapes motor plasticity.

    Adapting Computer Vision Models To Limitations On Input Dimensionality And Model Complexity

    When considering instances of distributed systems where visual sensors communicate with remote predictive models, data traffic is limited to the capacity of communication channels, and hardware limits the processing of collected data prior to transmission. We study novel methods of adapting visual inference to limitations on complexity and data availability at test time, wherever the aforementioned limitations exist. Our contributions detailed in this thesis consider both task-specific and task-generic approaches to reducing the data requirement for inference, and evaluate our proposed methods on a wide range of computer vision tasks. This thesis makes four distinct contributions: (i) We investigate multi-class action classification via two-stream convolutional neural networks that directly ingest information extracted from compressed video bitstreams. We show that selective access to macroblock motion vector information provides a good low-dimensional approximation of the underlying optical flow in visual sequences. (ii) We devise a bitstream cropping method by which AVC/H.264 and H.265 bitstreams are reduced to the minimum amount of necessary elements for optical flow extraction, while maintaining compliance with codec standards. We additionally study the effect of codec rate-quality control on the sparsity and noise incurred on optical flow derived from resulting bitstreams, and do so for multiple coding standards. (iii) We demonstrate degrees of variability in the amount of data required for action classification, and leverage this to reduce the dimensionality of input volumes by inferring the required temporal extent for accurate classification prior to processing via learnable machines. (iv) We extend the Mixtures-of-Experts (MoE) paradigm to adapt the data cost of inference for any set of constituent experts. 
We postulate that the minimum acceptable data cost of inference varies for different input space partitions, and consider mixtures where each expert is designed to meet a different set of constraints on input dimensionality. To take advantage of the flexibility of such mixtures in processing different input representations and modalities, we train biased gating functions such that experts requiring less information to make their inferences are favoured over others. We finally note that our proposed data utility optimization solutions include a learnable component which considers specified priorities on the amount of information to be used prior to inference, and can be realized for any combination of tasks, modalities, and constraints on available data.
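
The biased gating idea above can be sketched minimally: penalize each expert's gating logit in proportion to its input data cost, so that when raw scores are comparable, the cheaper expert wins. Function names, the additive-penalty form, and the cost normalization are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def biased_gating(scores, data_costs, bias_strength=0.5):
    """Softmax gating with a penalty proportional to each expert's
    input data cost, so cheaper experts are favoured when scores tie.
    All names and the additive-penalty form are illustrative assumptions.
    """
    scores = np.asarray(scores, dtype=float)
    costs = np.asarray(data_costs, dtype=float)
    # Normalize costs to [0, 1], then subtract a bias from each logit.
    spread = max(costs.max() - costs.min(), 1e-12)
    norm = (costs - costs.min()) / spread
    logits = scores - bias_strength * norm
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Two experts with equal raw scores: the one needing less data is favoured.
weights = biased_gating(scores=[1.0, 1.0], data_costs=[10.0, 100.0])
assert weights[0] > weights[1]
```

In a trained mixture, `scores` would come from a learned gating network; the `bias_strength` knob plays the role of the specified priority on how much information may be consumed before inference.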

    Object Recognition

    Vision-based object recognition tasks are very familiar in our everyday activities, such as driving our car in the correct lane. We do these tasks effortlessly in real time. In recent decades, with the advancement of computer technology, researchers and application developers have been trying to mimic the human capability of visual recognition. Such a capability will allow machines to free humans from boring or dangerous jobs.

    Texture classification using transform analysis

    The work presented in this thesis deals with the application of spectral methods for texture classification. The aim of the present work is to introduce a hybrid methodology for texture classification based on a spatial-domain global pre-classifier together with a spectral classifier that utilizes multiresolution transform analysis. The reason for developing a spatial pre-classifier is that many discriminating features of textures are present in the spatial domain. Of these, global features such as intensity histograms and entropies can still add significant information to the texture classification process. The pre-classifier uses texture intensity histograms to derive histogram moments that serve as global features. A spectral classifier that uses the Hartley transform follows the pre-classifier. This transform was chosen because the Fast Hartley Transform has advantages over other transforms: it produces real-valued arrays and requires less memory and lower computational complexity. To test the performance of the whole classifier, 900 texture images were generated using mathematical texture-generating functions. The generated images belonged to three different classes, and each class was sub-classified into three sub-classes. Half of the generated samples were used to build the classifier, while the other half were used to test it. The pre-classifier was designed to identify texture classes using Euclidean distance matching on 4 statistical moments of the intensity histograms. The pre-classifier matching accuracy was found to be 99.89%. The spectral classifier was designed on the basis of the Hartley transform to determine the image sub-class. Initially, a full-resolution Hartley transform was used to obtain two orthogonal power spectral vectors. Peaks in these two vectors were detected after applying a 10% threshold, and the highest 4 peaks for each image were selected and saved in position lookup tables.
The matching accuracy obtained using the two classification phases (pre-classifier and spectral classifier) is 99.56%. The accuracy achieved for the single-resolution classifier is high, but it was achieved at the expense of storage space for the lookup tables. In order to investigate the effect of lowering the resolution on the amount of information needed for matching the textures, we applied a multiresolution technique to the Hartley transform in a restricted way, computing the Hartley spectra at decreasing resolutions. In particular, a one-step resolution decrease achieves 99% matching efficiency while saving 40% of the memory space. This is a minor sacrifice of less than 1% in matching efficiency for a considerable decrease in the complexity of the present methodology.
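
The pre-classifier stage described above (four statistical moments of the intensity histogram, matched by Euclidean distance) can be sketched as follows. The function names, the specific choice of mean/variance/skewness/kurtosis as the four moments, and the single-prototype-per-class matching are assumptions for illustration, not the thesis's exact design.

```python
import numpy as np

def histogram_moments(image, bins=256, value_range=(0, 256)):
    """First four statistical moments (mean, variance, skewness,
    kurtosis) of an image's intensity histogram, serving as global
    texture features. The moment choice is an illustrative assumption.
    """
    hist, edges = np.histogram(image, bins=bins, range=value_range)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()                     # histogram as probabilities
    mean = np.sum(centers * p)
    var = np.sum((centers - mean) ** 2 * p)
    std = np.sqrt(var) if var > 0 else 1.0
    skew = np.sum(((centers - mean) / std) ** 3 * p)
    kurt = np.sum(((centers - mean) / std) ** 4 * p)
    return np.array([mean, var, skew, kurt])

def classify(image, prototypes):
    """Assign the class whose prototype moment vector is nearest
    in Euclidean distance."""
    feats = histogram_moments(image)
    dists = {name: np.linalg.norm(feats - proto)
             for name, proto in prototypes.items()}
    return min(dists, key=dists.get)

# Toy usage: two synthetic "textures" distinguished by intensity range.
rng = np.random.default_rng(0)
dark = rng.integers(0, 80, size=(32, 32))
bright = rng.integers(150, 255, size=(32, 32))
protos = {"dark": histogram_moments(dark),
          "bright": histogram_moments(bright)}
sample = rng.integers(150, 255, size=(32, 32))
assert classify(sample, protos) == "bright"
```

In the thesis, images passing this coarse class assignment would then proceed to the Hartley-spectrum peak matching for sub-class identification.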

    Large-scale single-photon imaging

    Benefiting from its single-photon sensitivity, the single-photon avalanche diode (SPAD) array has been widely applied in fields such as fluorescence lifetime imaging and quantum computing. However, large-scale high-fidelity single-photon imaging remains a significant challenge, due to the complex hardware manufacturing craft and heavy noise disturbance of SPAD arrays. In this work, we introduce deep learning into SPAD imaging, enabling super-resolution single-photon imaging over an order of magnitude, with significant enhancement of bit depth and imaging quality. We first studied the complex photon flow model of SPAD electronics to accurately characterize multiple physical noise sources, and collected a real SPAD image dataset (64 × 32 pixels, 90 scenes, 10 different bit depths, 3 different illumination fluxes, 2790 images in total) to calibrate the noise model parameters. With this real-world physical noise model, we for the first time synthesized a large-scale realistic single-photon image dataset (image pairs at 5 different resolutions up to megapixel scale, 17250 scenes, 10 different bit depths, 3 different illumination fluxes, 2.6 million images in total) for subsequent network training. To tackle the severe super-resolution challenge of SPAD inputs with low bit depth, low resolution, and heavy noise, we further built a deep transformer network with a content-adaptive self-attention mechanism and gated fusion modules, which can mine global contextual features to remove multi-source noise and extract full-frequency details. We applied the technique to a series of experiments including macroscopic and microscopic imaging, microfluidic inspection, and Fourier ptychography. The experiments validate the technique's state-of-the-art super-resolution SPAD imaging performance, with more than 5 dB superiority in PSNR over existing methods.
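
To make the noise-model idea concrete, here is a toy forward model of SPAD photon counting with Poisson shot noise and dark counts. The per-cycle Bernoulli firing simplification, the parameter names, and the default dark-count value are illustrative assumptions; the paper's calibrated model characterizes more physical noise sources than this sketch.

```python
import numpy as np

def simulate_spad_frame(flux, n_cycles, dark_count_rate=0.01, rng=None):
    """Toy SPAD forward model: in each of n_cycles gated exposures a
    pixel fires if at least one photon (signal + dark) arrives, so the
    count over n_cycles is binomial. `flux` is the expected number of
    signal photons per pixel per cycle. All parameters are assumptions
    for illustration, not the paper's calibrated noise model.
    """
    rng = rng if rng is not None else np.random.default_rng()
    flux = np.asarray(flux, dtype=float)
    # P(fire in one cycle) = 1 - P(zero Poisson arrivals).
    p_fire = 1.0 - np.exp(-(flux + dark_count_rate))
    return rng.binomial(n_cycles, p_fire)

# An 8-bit-depth acquisition (255 cycles) of a flat 0.5-photon/cycle scene.
counts = simulate_spad_frame(np.full((8, 8), 0.5), n_cycles=255,
                             rng=np.random.default_rng(1))
```

Pairs of such low-bit-depth noisy frames and their clean high-resolution counterparts are the kind of training data the synthesized dataset would provide.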

    Liquid Crystal Optics For Communications, Signal Processing And 3-d Microscopic Imaging

    This dissertation proposes, studies and experimentally demonstrates novel liquid crystal (LC) optics to solve challenging problems in RF and photonic signal processing, free-space and fiber-optic communications, and microscopic imaging. These include free-space optical scanners for military and optical wireless applications, variable fiber-optic attenuators for optical communications, photonic control techniques for phased array antennas and radar, and 3-D microscopic imaging. At the heart of the applications demonstrated in this thesis are LC devices that are non-pixelated and can be controlled either electrically or optically. Instead of the pixel-by-pixel control customary in LC devices, the phase profile across the aperture of these novel LC devices is varied through the use of high-impedance layers. The high-impedance layer produces a voltage gradient across the aperture of the device, which results in a phase gradient across the LC layer; this phase gradient is in turn accumulated by an optical beam traversing the device. The geometry of the electrical contacts used to apply the external voltage defines the nature of the phase gradient imposed on the optical beam. To steer a laser beam in one angular dimension, straight-line electrical contacts are used to form a one-dimensional phase gradient, while an annular electrical contact produces a circularly symmetric phase profile across the optical beam, making it suitable for focusing. The geometry of the electrical contacts alone is not sufficient to form the linear and quadratic phase profiles required to either deflect or focus an optical beam. Clever use is made of the phase response of a typical nematic liquid crystal (NLC): the linear response region is used for angular beam deflection, while the high-voltage quadratic response region is used for focusing the beam.
Employing an NLC deflector, a device that uses the linear angular deflection, laser beam steering is demonstrated in two orthogonal dimensions, whereas an NLC lens is used to address the third dimension to complete a three-dimensional (3-D) scanner. Such an NLC deflector was then used in a variable optical attenuator (VOA), whereby a laser beam coupled between two identical single-mode fibers (SMF) was misaligned away from the output fiber, causing the intensity of the output coupled light to decrease as a function of the angular deflection. Since the angular deflection is electrically controlled, the VOA operation is fairly simple and repeatable. An extension of this VOA for wavelength-tunable operation is also shown in this dissertation. An LC spatial light modulator (SLM) that uses a photo-sensitive high-impedance electrode, whose impedance can be varied by controlling the light intensity incident on it, is used in a control system for a phased array antenna. Phase is controlled on the Write side of the SLM by controlling the intensity of the Write laser beam, which is then accessed by the Read beam from the opposite side of this reflective SLM. Thus the phase of the Read beam is varied by controlling the intensity of the Write beam. A variable fiber-optic delay line is demonstrated in the thesis which uses wavelength-sensitive and wavelength-insensitive optics to obtain both analog and digital delays. It uses a chirped fiber Bragg grating (FBG) and a 1xN optical switch to achieve multiple time delays. The switch can be implemented using the 3-D optical scanner mentioned earlier. A technique is presented for ultra-low-loss laser communication that uses a combination of strong and weak thin-lens optics.
As opposed to conventional laser communication systems, the Gaussian laser beam is prevented from diverging at the receiving station by using a weak thin lens that places the transmitted beam waist mid-way along a symmetrical transmitter-receiver link, thus saving prime optical power. LC device technology forms an excellent basis for realizing such a large-aperture weak lens. Using a 1-D array of LC deflectors, a broadband optical add-drop filter (OADF) is proposed for dense wavelength division multiplexing (DWDM) applications. By binary control of the drive signal to the individual LC deflectors in the array, any optical channel can be selectively dropped and added. For demonstration purposes, microelectromechanical systems (MEMS) digital micromirrors have been used to implement the OADF. Several key systems issues such as insertion loss, polarization-dependent loss, wavelength resolution and response time are analyzed in detail for comparison with the LC deflector approach. A no-moving-parts axial scanning confocal microscope (ASCM) system is designed and demonstrated using a combination of a large-diameter LC lens and a classical microscope objective lens. By electrically controlling the 5 mm diameter LC lens, the 633 nm wavelength focal spot is moved continuously over a 48 µm range with a measured 3-dB axial resolution of 3.1 µm using a 0.65 numerical aperture (NA) micro-objective lens. The ASCM is successfully used to image an Indium Phosphide twin square optical waveguide sample with a 10.2 µm waveguide pitch and 2.3 µm height and width. Using fine analog electrical control of the LC lens, a super-fine sub-wavelength axial resolution of 270 nm is demonstrated. The proposed ASCM can be useful in various precision three-dimensional imaging and profiling applications.

    Light-sheet microscopy: a tutorial

    This paper is intended to give a comprehensive review of light-sheet (LS) microscopy from an optics perspective. As such, emphasis is placed on the advantages that LS microscope configurations present, given the degree of freedom gained by uncoupling the excitation and detection arms. The new imaging properties are first highlighted in terms of optical parameters and how these have enabled several biomedical applications. Then, the basics are presented for understanding how a LS microscope works. This is followed by a tutorial on LS microscope designs, each working at a different resolution and for different applications. Then, based on a numerical Fourier analysis and given the multiple possibilities for generating the LS in the microscope (using Gaussian, Bessel, and Airy beams in the linear and nonlinear regimes), a systematic comparison of their optical performance is presented. Finally, based on advances in optics and photonics, the novel optical implementations possible in a LS microscope are highlighted.
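
The central trade-off for a Gaussian-beam light sheet can be worked through with standard beam optics: a thinner waist w0 gives better axial sectioning but a shorter usable field of view, since the Rayleigh range scales as w0 squared. The helper names below and the convention of taking the confocal parameter 2*z_R as the usable sheet extent are assumptions for illustration.

```python
import math

def rayleigh_range(waist, wavelength, n=1.0):
    """Rayleigh range z_R = pi * n * w0^2 / lambda for a Gaussian beam
    with waist radius w0 in a medium of refractive index n."""
    return math.pi * n * waist ** 2 / wavelength

def sheet_field_of_view(waist, wavelength, n=1.0):
    """Usable light-sheet extent, taken here as the confocal parameter
    2 * z_R, over which the sheet thickness stays within sqrt(2) * w0.
    This convention is an illustrative assumption."""
    return 2.0 * rayleigh_range(waist, wavelength, n)

# A 2 um waist at 488 nm excitation: a thin sheet, but the usable
# field of view is only on the order of tens of micrometres.
fov = sheet_field_of_view(2e-6, 488e-9)
```

This quadratic scaling is exactly why the tutorial compares Gaussian sheets against Bessel and Airy beams, whose propagation-invariant cores relax the thickness-versus-extent trade-off.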

    Effect of multiuser interference on subscriber location in CDMA networks

    The last few years have witnessed an ever-growing interest in mobile location systems for cellular networks. The motivation is the series of regulations passed by the Federal Communications Commission, requiring that wireless service providers support a mobile telephone callback feature and a cell site location mechanism. A further application of location technology is in the rapidly emerging field of intelligent transportation systems, which are intended to enhance highway safety, enable location-based billing, etc. Many of the existing location technologies use GPS and its derivatives, which require specialized subscriber equipment. This is not feasible for popular use, as the cost of such equipment is very high. Hence, for a CDMA network, various methods have been studied that use the cellular network as the sole means to locate the mobile station (MS), where the estimates are derived from the signal transmitted by the MS to a set of base stations (BSs). This approach has the advantage of requiring no modifications to the subscriber equipment. While subscriber location has been previously studied for CDMA networks, the effect of multiple access interference has been ignored. In this thesis we investigate the problem of subscriber location in the presence of multiple access interference. Using MATLAB as a simulation tool, we have developed an extensive simulation framework which measures the error in location estimation for different network and user configurations. In our studies we include the effects of log-normal shadowing and Rayleigh fading. We present results that illustrate the effects of varying shadowing losses, the number of BSs involved in position location, early-late discriminator offset, and cell sizes, in conjunction with the varying number of users per cell, on the accuracy of radiolocation estimation.
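
The final estimation step in such a network-based scheme can be illustrated by a least-squares position solve from noisy range estimates to several BSs. This Gauss-Newton sketch is an idealized stand-in: the thesis simulates the full CDMA delay-estimation chain (including multiple access interference, shadowing, and fading) rather than this clean solver, and all names here are assumptions.

```python
import numpy as np

def locate_gauss_newton(bs, r, x0=None, iters=20):
    """Least-squares MS position from ranges to base stations.
    bs: (N, 2) base-station coordinates; r: (N,) measured ranges.
    Starts from the BS centroid and refines by Gauss-Newton steps.
    """
    bs = np.asarray(bs, dtype=float)
    r = np.asarray(r, dtype=float)
    x = bs.mean(axis=0) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(bs - x, axis=1)      # predicted ranges
        d = np.maximum(d, 1e-9)                 # guard against division by 0
        J = (x - bs) / d[:, None]               # Jacobian d(range)/d(position)
        res = d - r                             # range residuals
        dx, *_ = np.linalg.lstsq(J, -res, rcond=None)
        x = x + dx
    return x

# Three BSs, true MS at (300, 400) m; exact ranges recover the position.
bs = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
true = np.array([300.0, 400.0])
r = [np.linalg.norm(true - np.asarray(b)) for b in bs]
est = locate_gauss_newton(bs, r)
```

With multiple access interference present, the measured ranges `r` become biased and noisy, which is precisely the degradation in `est` that the thesis quantifies across network and user configurations.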

    Compressed-domain transcoding of H.264/AVC and SVC video streams
