
    Magnetic superlens-enhanced inductive coupling for wireless power transfer

    We investigate numerically the use of a negative-permeability "perfect lens" for enhancing wireless power transfer between two current-carrying coils. The negative-permeability slab serves to focus the flux generated in the source coil onto the receiver coil, thereby increasing the mutual inductive coupling between the coils. The numerical model is compared with an analytical theory that treats the coils as point dipoles separated by an infinite planar layer of magnetic material [Urzhumov et al., Phys. Rev. B, 19, 8312 (2011)]. In the limit of vanishingly small coil radius and large slab width, the numerical simulations are in excellent agreement with the analytical model. Both the idealized analytical and the realistic numerical models predict similar trends with respect to metamaterial loss and anisotropy. Applying the numerical models, we further analyze the impact of finite coil size and finite slab width. We find that, even for these less idealized geometries, the presence of the magnetic slab greatly enhances the coupling between the two coils, including cases where significant loss is present in the slab. We therefore conclude that integrating a metamaterial slab into a wireless power transfer system holds promise for increasing overall system performance.
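The point-dipole limit invoked above can be made concrete. The sketch below computes only the free-space mutual inductance of two coaxial small loops in the dipole approximation (valid for separations much larger than the loop radii); the enhancement factor contributed by the metamaterial slab comes from the cited analytical theory and is not reproduced here, and the loop dimensions are illustrative numbers, not values from the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def mutual_inductance_dipole(a1, a2, d):
    """Free-space mutual inductance of two coaxial small loops of radii a1, a2
    at separation d (point-dipole approximation, d >> a):
    M = mu0 * pi * a1^2 * a2^2 / (2 d^3)."""
    return MU0 * np.pi * a1**2 * a2**2 / (2 * d**3)

# Two 5 cm loops, 30 cm apart: the coupling falls off as 1/d^3,
# which is why a flux-focusing slab between the coils is attractive.
M = mutual_inductance_dipole(0.05, 0.05, 0.30)
```

Doubling the separation reduces the coupling eightfold, which illustrates the steep distance penalty the superlens is meant to offset.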

    Wavelet-Based Enhancement Technique for Visibility Improvement of Digital Images

    Image enhancement techniques for improving the visibility of color digital images in the wavelet transform domain are investigated in this dissertation research. A novel, fast and robust wavelet-based dynamic range compression and local contrast enhancement (WDRC) algorithm has been developed to improve the visibility of digital images captured under non-uniform lighting conditions. A wavelet transform is used mainly for dimensionality reduction, such that the dynamic range compression with local contrast enhancement is applied only to the approximation coefficients, which are obtained by low-pass filtering and down-sampling the original intensity image. The normalized approximation coefficients are transformed using a hyperbolic sine curve, and contrast enhancement is realized by tuning the magnitude of each coefficient with respect to surrounding coefficients. The transformed coefficients are then de-normalized to their original range. The detail coefficients are also modified to prevent edge deformation. The inverse wavelet transform is then carried out, resulting in an intensity image with lower dynamic range and enhanced contrast. A color restoration process based on the relationship between the spectral bands and the luminance of the original image is applied to convert the enhanced intensity image back to a color image. Although the colors of the images enhanced by the proposed algorithm are consistent with those of the original image, the algorithm fails to produce color-constant results for some pathological scenes with very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback, so a different approach is required to tackle the color constancy problem. The illuminant is therefore modeled as imposing a linear shift on the image histogram, and the histogram is adjusted to discount the illuminant.
The WDRC algorithm is then applied with a slight modification: instead of a linear color restoration, a non-linear color restoration process employing the spectral context relationships of the original image is applied. The proposed technique solves the color constancy issue, and the overall enhancement algorithm provides attractive results, improving visibility even for scenes with near-zero-visibility conditions. This research also presents a new wavelet-based image interpolation technique that can be used to improve the visibility of tiny features in an image. In wavelet-domain interpolation techniques, the input image is usually treated as the low-pass filtered subbands of an unknown wavelet-transformed high-resolution (HR) image, and the unknown HR image is then produced by estimating the wavelet coefficients of the high-pass filtered subbands. The same approach is used here to obtain an initial estimate of the HR image by zero-filling the high-pass filtered subbands. Detail coefficients are estimated by feeding this initial estimate to an undecimated wavelet transform (UWT). Taking the inverse transform after replacing the approximation coefficients of the UWT with the initially estimated HR image yields the final interpolated image. Experimental results demonstrated the superiority of the proposed algorithms over state-of-the-art enhancement and interpolation techniques.
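The decompose, compress-the-approximation-band, reconstruct pipeline described above can be sketched in one dimension with a single-level Haar transform. This is an illustrative stand-in, not the dissertation's algorithm: the real WDRC uses a 2-D transform, a hyperbolic-sine mapping tuned by surrounding coefficients, and detail-coefficient modification, whereas here a plain `arcsinh` compresses the normalized approximation band, and all function names are hypothetical.

```python
import numpy as np

def haar_decompose(x):
    """One-level 1-D Haar transform: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_reconstruct(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress_dynamic_range(x, strength=0.5):
    """Sketch of WDRC: compress only the approximation band, keep details.

    A plain arcsinh on normalized coefficients stands in for the
    dissertation's locally tuned hyperbolic-sine mapping.
    """
    a, d = haar_decompose(x)
    scale = max(np.abs(a).max(), 1e-12)
    a_c = scale * np.arcsinh(strength * a / scale) / np.arcsinh(strength)
    return haar_reconstruct(a_c, d)

# A toy signal mixing dim and bright regions; details survive, the
# large-scale (approximation) dynamic range is compressed.
signal = np.array([0.1, 0.2, 4.0, 4.1, 0.1, 0.0, 8.0, 8.2])
out = compress_dynamic_range(signal)
```

Because the mapping is monotone and normalized, the peak is preserved while dimmer structures are lifted relative to it, which is the visibility effect the abstract describes.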

    A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications

    Auditory models are commonly used as feature extractors for automatic speech-recognition systems or as front-ends for robotics, machine-hearing and hearing-aid applications. Although auditory models can capture the biophysical and nonlinear properties of human hearing in great detail, these biophysical models are computationally expensive and cannot be used in real-time applications. We present a hybrid approach where convolutional neural networks are combined with computational neuroscience to yield a real-time end-to-end model for human cochlear mechanics, including level-dependent filter tuning (CoNNear). The CoNNear model was trained on acoustic speech material, and its performance and applicability were evaluated using (unseen) sound stimuli commonly employed in cochlear mechanics research. The CoNNear model accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, an essential quality for robust speech intelligibility at negative speech-to-background-noise ratios. The CoNNear architecture is based on parallel and differentiable computations and has the power to achieve real-time human performance. These unique CoNNear features will enable the next generation of human-like machine-hearing applications.
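The "parallel and differentiable computations" claim reduces to an encoder-decoder of strided convolutions, which can be sketched as a toy forward pass. This is not the CoNNear architecture itself: the real model is a multi-layer CNN trained on speech that outputs basilar-membrane displacement at many cochlear locations, whereas here the kernels are random and untrained, there is a single channel, and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, kernel):
    """Strided convolution: filter, then downsample by 2 (CNN encoder layer)."""
    return np.tanh(np.convolve(x, kernel, mode="same")[::2])

def decode(h, kernel):
    """Transposed convolution: zero-stuff by 2, then filter (CNN decoder layer)."""
    z = np.zeros(h.size * 2)
    z[::2] = h
    return np.tanh(np.convolve(z, kernel, mode="same"))

# Random kernels stand in for the weights CoNNear learns from speech material.
k_enc, k_dec = rng.normal(size=8), rng.normal(size=8)

audio = rng.normal(size=512)                   # dummy sound-pressure waveform
basilar = decode(encode(audio, k_enc), k_dec)  # simulated cochlear response
```

Every operation here is a convolution or a pointwise nonlinearity, so the whole pipeline parallelizes across time samples and is differentiable end to end, which is what makes real-time, trainable cochlear simulation feasible.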

    Mapping the Recent Star Formation History of the Disk of M51

    Using data acquired as part of a unique Hubble Heritage imaging program of broadband colors of the interacting spiral system M51/NGC 5195, we have conducted a photometric study of the stellar associations across the entire disk of the galaxy in order to assess trends in size, luminosity, and local environment associated with recent star formation activity in the system. Starting with a sample of over 900 potential associations, we have produced color-magnitude and color-color diagrams for the 120 associations that were deemed to be single-aged. It has been found that main sequence turnoffs are not evident for the vast majority of the stellar associations in our set, potentially due to the overlap of isochronal tracks at the high-mass end of the main sequence and the limited depth of our images at the distance of M51. In order to obtain ages for more of our sample, we produced model spectral energy distributions (SEDs) to fit to the data from the GALEXEV simple stellar population (SSP) models of Bruzual and Charlot (2003). These SEDs can be used to determine the age, size, mass, metallicity, and dust content of each association via a simple chi-squared minimization to each association's B, V, and I-band fluxes. The derived association properties are mapped as a function of location, and recent trends in the star formation history of the galaxy are explored in light of these results. This work is the first phase in a program that will compare these stellar systems with their environments using ultraviolet data from GALEX and infrared data from Spitzer, and ultimately we plan to apply the same stellar population mapping methodology to other nearby face-on spiral galaxies. (13 pages, 3 figures, 1 table. Accepted to The Astronomical Journal.)
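The SED-fitting step, a chi-squared minimization of each association's B, V, and I fluxes against an SSP model grid, can be sketched as below. The model grid, ages, and observed fluxes here are made-up toy numbers, not GALEXEV values, and the single free normalization per model is a stand-in for the mass that the real fit, together with age, metallicity, and dust, would recover.

```python
import numpy as np

# Hypothetical SSP grid: each row is a model SED's (B, V, I) fluxes at one age.
# A real analysis would use the Bruzual & Charlot (2003) GALEXEV models.
ages_myr = np.array([5.0, 10.0, 50.0, 100.0])
model_seds = np.array([
    [1.00, 0.60, 0.30],
    [0.80, 0.65, 0.40],
    [0.50, 0.55, 0.50],
    [0.30, 0.45, 0.55],
])

def fit_age(observed, errors):
    """Best-fit age by chi-squared minimization over the model grid.

    Each model is first scaled to the data with the analytic best-fit
    normalization (a proxy for the association's mass).
    """
    chi2 = np.empty(len(model_seds))
    w = 1.0 / errors**2
    for i, m in enumerate(model_seds):
        norm = np.sum(w * observed * m) / np.sum(w * m * m)
        chi2[i] = np.sum(w * (observed - norm * m) ** 2)
    return ages_myr[np.argmin(chi2)], chi2.min()

obs = np.array([0.52, 0.56, 0.49])   # hypothetical B, V, I fluxes
age, chi2 = fit_age(obs, errors=np.array([0.05, 0.05, 0.05]))
```

With only three photometric bands, the fit is deliberately simple; the grid search plus one analytic normalization is exactly the "simple chi-squared minimization" the abstract refers to.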

    Wide-Range Compression Forces to Investigate Single-Cell In-Flow Motions, Mechanobiological Responses and Intracellular Delivery

    The aim of this PhD work is to create a new microfluidic approach that finely tunes the in-flow forces applied to cells in order to explore controlled single-cell deformation. We propose a microfluidic device based on compression forces arising from a viscoelastic fluid, which first aligns cells and then deforms them. By simply changing the rheological properties and the imposed fluid-flow conditions, our approach provides an easy-to-use and versatile tool for collecting a comprehensive map of single-cell properties, probing both biophysical and biomechanical characteristics. Over a wide range of applied compression, we observe how different degrees of deformation lead to cell-specific, deformation-dependent in-flow dynamics, which correlate classical deformation parameters (e.g. cell aspect ratio) with dynamic quantities (e.g. the revolution time of rotation during in-flow motion). Precise, label-free in-flow cell phenotyping is thus achieved, allowing different cell classes to be distinguished. The observation of different degrees of deformation under variable compression led us to interrogate the inner cell structures possibly involved in the mechanical response. We demonstrate that reorganization of the actin cortex and microtubules, as well as of the nuclear envelope and chromatin content, occurs. Here too, cell-specific responses are collected, allowing us to distinguish healthy from pathological cells based on their structural mechanical reaction. Furthermore, by exploiting high levels of compression, we show preliminary results on the possibility of inducing intracellular nanoparticle delivery that bypasses physiological endocytosis: cells prove able to incorporate nanoparticles into the cytoplasm without vesicle formation during entry.
These outcomes open up interesting new scenarios for using the microfluidic device as a platform for cell phenotyping and intracellular delivery, suitably engineered for both diagnostic and therapeutic purposes.

    Video enhancement: content classification and model selection

    The purpose of video enhancement is to improve the subjective picture quality. The field of video enhancement includes a broad range of research topics, such as removing noise in the video, highlighting specified features and improving the appearance or visibility of the video content. The common difficulty in this field is how to make images or videos more beautiful, or subjectively better. Traditional approaches involve many iterations between subjective assessment experiments and redesigns of algorithm improvements, which are very time-consuming. Researchers have attempted to design a video quality metric to replace the subjective assessment, but so far without success. As a way to avoid heuristics in enhancement algorithm design, least-mean-square methods have received considerable attention. They can optimize filter coefficients automatically by minimizing the difference between processed videos and desired versions through training. However, these methods are only optimal on average, not locally. To solve this problem, one can apply the least-mean-square optimization to individual categories that are classified by local image content. The most interesting example is Kondo’s concept of local content adaptivity for image interpolation, which we found could be generalized into an ideal framework for content-adaptive video processing. We identify two parts in the concept: content classification and adaptive processing. By exploring new classifiers for the content classification and new models for the adaptive processing, we have generalized the framework to more enhancement applications. For the content classification part, new classifiers have been proposed to classify different image degradations such as coding artifacts and focal blur. For coding artifacts, a novel classifier has been proposed based on the combination of local structure and contrast, which does not require coding-block-grid detection.
For focal blur, we have proposed a novel edge-based local blur estimation method, which does not require edge orientation detection and yields more robust blur estimates. With these classifiers, the proposed framework has been extended to coding-artifact-robust enhancement and blur-dependent enhancement. With content adaptivity to more image features, the number of content classes can increase significantly. We show that it is possible to reduce the number of classes without sacrificing much performance. For the model selection part, we have introduced several nonlinear filters into the proposed framework. We have also proposed a new type of nonlinear filter, the trained bilateral filter, which combines the advantages of the original bilateral filter and the least-mean-square optimization. With these nonlinear filters, the proposed framework shows better performance than with linear filters. Furthermore, we have shown a proof-of-concept for a trained approach to contrast enhancement via supervised learning. The transfer curves are optimized based on the classification of global or local image content. This showed that it is possible to obtain the desired effect by learning from other, computationally expensive enhancement algorithms or from expert-tuned examples. Looking back, the thesis presents a single versatile framework for video enhancement applications. It widens the application scope by including new content classifiers and new processing models, and offers scalability through solutions that reduce the number of classes, which can greatly accelerate algorithm design.
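The core idea above, least-mean-square filters optimized per content class rather than globally, can be sketched as follows. The two-class classifier (a toy brightness split), the synthetic patch generator, and all names are illustrative, not the thesis's or Kondo's actual classifiers; the point is only that each class gets its own filter, solved by least squares on training pairs of degraded patches and clean target pixels.

```python
import numpy as np

rng = np.random.default_rng(1)

def classify(patch):
    """Toy content classifier: 0 = dark patch, 1 = bright patch."""
    return int(patch.mean() > 0.5)

def train_class_filters(noisy_patches, clean_centers, n_classes=2):
    """Per-class LMS filters: for each content class, solve a least-squares
    problem mapping noisy 3x3 patches to their clean center pixels."""
    filters = []
    for c in range(n_classes):
        idx = [i for i, p in enumerate(noisy_patches) if classify(p) == c]
        A = np.array([noisy_patches[i].ravel() for i in idx])
        b = np.array([clean_centers[i] for i in idx])
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        filters.append(coeffs)
    return filters

# Hypothetical training set: flat 3x3 patches corrupted by Gaussian noise,
# paired with their clean center values.
clean = rng.random(2000)
patches = [np.full((3, 3), v) + rng.normal(0, 0.1, (3, 3)) for v in clean]
filters = train_class_filters(patches, clean)

# Apply: classify the patch, then inner-product with that class's filter.
p = patches[0]
restored = filters[classify(p)] @ p.ravel()
```

Replacing the brightness split with structure- or degradation-aware classifiers (coding artifacts, focal blur) and the linear filters with trained nonlinear ones gives the generalized framework the abstract describes.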

    A Vision-Based Automatic Safe landing-Site Detection System

    An automatic safe landing-site detection system is proposed for aircraft emergency landing, based on visual information acquired by aircraft-mounted cameras. Emergency landing is an unplanned event in response to emergency situations. If, as is unfortunately usually the case, there is no airstrip or airfield that can be reached by the unpowered aircraft, a crash landing or ditching has to be carried out. Identifying a safe landing-site is critical to the survival of passengers and crew. Conventionally, the pilot chooses the landing-site visually by looking at the terrain through the cockpit. The success of this vital decision depends greatly on external environmental factors that can impair human vision, and on the pilot's flight experience, which can vary significantly among pilots. Therefore, we propose a robust, reliable and efficient detection system that is expected to alleviate the negative impact of these factors. In this study, we focus on the detection mechanism of the proposed system and assume that image enhancement for increased visibility and image stitching for a larger field of view have already been performed on the terrain images acquired by the aircraft-mounted cameras. Specifically, we first propose a hierarchical elastic horizon detection algorithm to identify the ground in the image. The terrain image is then divided into non-overlapping blocks, which are clustered according to a roughness measure. Adjacent smooth blocks are merged to form potential landing-sites, whose dimensions are measured with principal component analysis and geometric transformations. If the dimensions of a candidate region exceed the minimum requirement for safe landing, the potential landing-site is considered a safe candidate and highlighted on the human-machine interface. In the end, the pilot makes the final decision by confirming one of the candidates, also considering other factors such as wind speed and wind direction.
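The block-roughness and merging stages can be sketched as below, assuming a simple per-block standard deviation as the roughness measure and 4-neighbour merging of smooth blocks. The block size, threshold, and synthetic terrain are illustrative, and the subsequent dimension check via PCA and geometric transformations is omitted.

```python
import numpy as np

def roughness_map(img, block=8):
    """Per-block roughness: standard deviation of pixels in each block."""
    h, w = img.shape
    bh, bw = h // block, w // block
    blocks = img[:bh * block, :bw * block].reshape(bh, block, bw, block)
    return blocks.std(axis=(1, 3))

def smooth_regions(rough, thresh=0.05):
    """Merge adjacent smooth blocks into labeled candidate landing-sites
    via a 4-neighbour flood fill; returns labels and the region count."""
    smooth = rough < thresh
    labels = np.zeros(rough.shape, dtype=int)
    count = 0
    for i in range(rough.shape[0]):
        for j in range(rough.shape[1]):
            if smooth[i, j] and labels[i, j] == 0:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rough.shape[0] and 0 <= x < rough.shape[1]
                            and smooth[y, x] and labels[y, x] == 0):
                        labels[y, x] = count
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, count

# Synthetic terrain: a perfectly flat strip (left half) beside rough ground.
rng = np.random.default_rng(2)
terrain = rng.normal(0.5, 0.2, (64, 64))
terrain[:, :32] = 0.5
labels, n_sites = smooth_regions(roughness_map(terrain))
```

On this toy terrain the flat strip merges into a single candidate region, which the real system would then measure against the minimum safe-landing dimensions.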

    Visual Content Characterization Based on Encoding Rate-Distortion Analysis

    Visual content characterization is a fundamentally important but underexploited step in dataset construction, which is essential to solving many image processing and computer vision problems. In the era of machine learning this has become ever more important, because with today's explosion of image and video content, scrutinizing all potential content is impossible and source content selection has become increasingly difficult. In particular, in the area of image/video coding and quality assessment, it is highly desirable to characterize and select source content, and subsequently construct image/video datasets, that demonstrate strong representativeness and diversity of the visual world, such that the visual coding and quality assessment methods developed from and validated on such datasets exhibit strong generalizability. Encoding rate-distortion (RD) analysis is essential for many multimedia applications; examples that explicitly use RD analysis include image encoder RD optimization, video quality assessment (VQA), and Quality-of-Experience (QoE) optimization of streaming videos. However, encoding RD analysis has not been well investigated in the context of visual content characterization. This thesis focuses on applying encoding RD analysis as a visual source content characterization method with image/video coding and quality assessment applications in mind. We first conduct a subjective video quality evaluation experiment for state-of-the-art video encoder performance analysis and comparison, where our observations reveal severe problems that motivate the need for better source content characterization and selection methods. The effectiveness of RD analysis in visual source content characterization is then demonstrated through a proposed quality control mechanism for video coding, based on eigen-analysis in the space of General Quality Parameter (GQP) functions.
Finally, by combining encoding RD analysis with submodular set-function optimization, we propose a novel method for automating the selection of representative source content, which helps boost the RD performance of visual encoders trained on the selected visual content.
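The representative-selection step, pairing a per-clip RD signature with submodular maximization, can be sketched with a greedy facility-location objective. The RD "signatures" and similarity kernel below are toy stand-ins for the thesis's actual encoding RD analysis; the greedy loop itself is the standard (1 - 1/e)-approximation for monotone submodular functions.

```python
import numpy as np

def greedy_facility_location(sim, k):
    """Greedily maximize the submodular facility-location objective
    F(S) = sum_i max_{j in S} sim[i, j]: pick k representative items."""
    n = sim.shape[0]
    selected, best = [], np.zeros(n)
    for _ in range(k):
        gains = [np.maximum(best, sim[:, j]).sum() - best.sum()
                 for j in range(n)]
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, sim[:, j])
    return selected

# Hypothetical RD signatures: each row is a clip's quality-vs-rate curve
# sampled at four operating points; similarity is a distance-based affinity.
rd = np.array([[1.0, 0.8, 0.5, 0.2],
               [1.0, 0.8, 0.5, 0.2],   # duplicate of clip 0
               [0.9, 0.4, 0.2, 0.1],
               [0.3, 0.25, 0.2, 0.1]])
d = np.linalg.norm(rd[:, None] - rd[None, :], axis=2)
sim = 1.0 / (1.0 + d)
reps = greedy_facility_location(sim, k=2)
```

Because the objective rewards covering every clip with a similar representative, duplicates add zero marginal gain: the greedy pass keeps at most one of the two identical clips and spends the second slot on genuinely different content, which is exactly the diversity property wanted for training-set construction.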