82 research outputs found

    Decision making in an uncertain world


    Method and system for providing beam polarization

    A radiation polarizer, controller, and a method of radiation polarization and beam control, are disclosed. The radiation polarizer includes a substrate, at least one anti-reflection coating layer communicatively coupled to the substrate, at least two nanostructures communicatively coupled to the at least one anti-reflection coating layer, and at least two groove layers, wherein each one of the at least two groove layers is interstitial to a respective one of the at least two nanostructures. The method may include the steps of communicatively coupling at least one anti-reflection coating layer to a substrate, communicatively coupling at least two nanostructures to at least one of the at least one anti-reflection coating layer, providing interstitially to a respective one of the at least two nanostructures at least two groove layers, coupling the at least two groove layers and the at least two nanostructures to provide a pass wavelength in the range of about 250 nm to less than about a microwave wavelength, and allowing for examining of radiation having a wavelength in a range of about 250 nm to less than about a microwave wavelength, and having an electric field orthogonal to the at least two groove layers, by allowing for a passing of the radiation through said coupling of the at least two groove layers and the at least two nanostructures. (Published version)

    Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision

    The rapid evolution of Multi-modality Large Language Models (MLLMs) has catalyzed a shift in computer vision from specialized models to general-purpose foundation models. Nevertheless, the abilities of MLLMs in low-level visual perception and understanding remain inadequately assessed. To address this gap, we present Q-Bench, a holistic benchmark crafted to systematically evaluate potential abilities of MLLMs on three realms: low-level visual perception, low-level visual description, and overall visual quality assessment. a) To evaluate the low-level perception ability, we construct the LLVisionQA dataset, consisting of 2,990 diverse-sourced images, each equipped with a human-asked question focusing on its low-level attributes. We then measure the correctness of MLLMs in answering these questions. b) To examine the description ability of MLLMs on low-level information, we propose the LLDescribe dataset, consisting of long expert-labelled golden low-level text descriptions for 499 images, and a GPT-involved comparison pipeline between outputs of MLLMs and the golden descriptions. c) Besides these two tasks, we further measure their visual quality assessment ability to align with human opinion scores. Specifically, we design a softmax-based strategy that enables MLLMs to predict quantifiable quality scores, and evaluate them on various existing image quality assessment (IQA) datasets. Our evaluation across the three abilities confirms that MLLMs possess preliminary low-level visual skills. However, these skills are still unstable and relatively imprecise, indicating the need for specific enhancements on MLLMs towards these abilities. We hope that our benchmark can encourage the research community to delve deeper to discover and enhance these untapped potentials of MLLMs. Project Page: https://vqassessment.github.io/Q-Bench. (Comment: 25 pages, 14 figures, 9 tables, preprint version)
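    The softmax-based scoring idea can be illustrated with a minimal sketch: read off the model's logits for two opposing answer tokens and softmax between them to obtain a continuous score. The function name and the choice of "good"/"poor" as anchor tokens below are illustrative assumptions; the benchmark's actual prompt and token set may differ.

    ```python
    import math

    def softmax_quality_score(logit_good: float, logit_poor: float) -> float:
        """Map a model's logits for two opposing answer tokens
        (e.g. "good" vs. "poor" at the answer position) into a
        scalar quality score in [0, 1] via a two-way softmax."""
        e_good = math.exp(logit_good)
        e_poor = math.exp(logit_poor)
        return e_good / (e_good + e_poor)

    # Equal logits give a neutral 0.5; a dominant "good" logit pushes
    # the score toward 1, so scores can be correlated with human
    # mean opinion scores on IQA datasets.
    print(softmax_quality_score(0.0, 0.0))
    print(softmax_quality_score(3.0, -1.0))
    ```

    Because the score is a ratio of exponentials rather than a discrete choice, it is quantifiable and differentiable in the logits, which is what allows correlation with continuous human opinion scores.
    
    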

    Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models

    Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks, responding to a broad range of natural human instructions within a single model. While existing foundation models have shown exciting potential on low-level visual tasks, their related abilities are still preliminary and need to be improved. In order to enhance these models, we conduct a large-scale subjective experiment collecting a vast number of real human feedbacks on low-level vision. Each feedback follows a pathway that starts with a detailed description of the low-level visual appearance (*e.g.*, the clarity, color, and brightness of an image) and ends with an overall conclusion, with an average length of 45 words. The constructed **Q-Pathway** dataset includes 58K detailed human feedbacks on 18,973 images with diverse low-level appearance. Moreover, to enable foundation models to robustly respond to diverse types of questions, we design a GPT-participated conversion to process these feedbacks into 200K instruction-response pairs in diverse formats. Experimental results indicate that **Q-Instruct** consistently elevates low-level perception and understanding abilities across several foundation models. We anticipate that our datasets can pave the way for a future in which general intelligence can perceive and understand low-level visual appearance and evaluate visual quality like a human. Our dataset, model zoo, and demo are published at: https://q-future.github.io/Q-Instruct. (Comment: 16 pages, 11 figures, pages 12-16 as appendix)

    Hyperspectral Mineral Target Detection Based on Density Peak

    Hyperspectral remote sensing, with its narrow-band imaging, provides the potential for fine identification of ground objects and has unique advantages in mineral detection. However, hyperspectral images are nonlinear and pure pixels are scarce, so detection with a standard library spectrum leads to an increase in false alarms and missed detections. The density peak algorithm performs well in high-dimensional spaces and on data clusters with irregular shapes. This paper used density peak clustering to determine the cluster centers of the various categories in the images, used those centers as the target spectra, and used the clustering results as reference ground data. Two methods, HUD and OSP, were used to detect targets in the image, and the correlation coefficients between the spectrum of each cluster center and the corresponding mineral spectrum in the spectral library were obtained. Finally, the results were compared with the mapping results of Clark et al. The experiments showed that using the cluster-center spectrum as the target can detect the distribution of the corresponding minerals well, and it yields a higher correlation coefficient with the minerals in the mapping results.
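    The center-selection step can be sketched in the spirit of the classic density peak algorithm (Rodriguez and Laio, 2014), which this abstract appears to build on: cluster centers are points that combine a high local density with a large distance to any denser point. The function name, cutoff parameter, and toy data below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def density_peak_centers(X: np.ndarray, d_c: float, n_centers: int) -> np.ndarray:
        """Return indices of n_centers density-peak cluster centers.

        For each point i, compute local density rho_i (neighbors within
        cutoff d_c) and delta_i (distance to the nearest denser point).
        Centers maximize the product rho_i * delta_i."""
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
        rho = (d < d_c).sum(axis=1) - 1                             # local density, excluding self
        delta = np.empty(len(X))
        for i in range(len(X)):
            denser = np.where(rho > rho[i])[0]
            # The globally densest point gets the maximum distance by convention.
            delta[i] = d[i, denser].min() if len(denser) else d[i].max()
        gamma = rho * delta                                         # center score
        return np.argsort(gamma)[::-1][:n_centers]

    # Toy usage: two well-separated blobs standing in for two mineral classes;
    # the mean spectrum at each returned index would serve as a target spectrum.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(5.0, 0.3, (30, 2))])
    centers = density_peak_centers(X, d_c=0.5, n_centers=2)
    print(centers)
    ```

    In the detection pipeline described above, each selected center's spectrum would then replace the library spectrum as the target fed to a detector such as OSP.
    
    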