133 research outputs found

    Influence of miR-155 on allergic conjunctivitis in mice via regulation of NF-κB signal pathway

    Purpose: To investigate the effect of miR-155 on allergic conjunctivitis (AC) in mice, and to elucidate its mechanism of action. Methods: Sixty (60) Balb/c mice were randomly divided into three groups of 20 mice each. Ovalbumin (OVA) was used to induce an experimental model of AC in the mice. Mice in the AC + miR-155 siRNA group were given miR-155 siRNA once daily for 2 weeks before induction of AC. The expression of miR-155 in conjunctival tissue of the control and AC groups was assayed with reverse transcription-polymerase chain reaction (RT-PCR). In addition, anti-OVA IgE antibody, eotaxin, IL-13 and IFN-γ levels were determined using enzyme-linked immunosorbent assay (ELISA). The regulatory effect of miR-155 on the NF-κB signaling pathway in conjunctival tissue of mice with AC was determined using immunoblotting. Results: miR-155 expression was higher in the serum of the AC group than in that of the control group (p < 0.05). Inhibition of miR-155 mitigated AC-induced pathological injury, reduced infiltration of eosinophils, lowered serum levels of anti-OVA IgE antibody, eotaxin and IL-13, and increased the IFN-γ level (p < 0.05). Phosphorylation of p65 in conjunctival tissue of AC mice was blocked after inhibition of miR-155. Conclusion: Inhibition of miR-155 ameliorates AC in mice, most likely via a mechanism related to the inhibition of p65 phosphorylation. This provides a theoretical basis for new drug research and development.

    GaussianBody: Clothed Human Reconstruction via 3D Gaussian Splatting

    In this work, we propose a novel clothed human reconstruction method called GaussianBody, based on 3D Gaussian Splatting. Compared with costly neural-radiance-based models, 3D Gaussian Splatting has recently demonstrated great performance in terms of training time and rendering quality. However, applying the static 3D Gaussian Splatting model to the dynamic human reconstruction problem is non-trivial due to complicated non-rigid deformations and rich cloth details. To address these challenges, our method uses explicit pose-guided deformation to associate dynamic Gaussians between the canonical space and the observation space, and introduces a physically-based prior with regularized transformations to help mitigate ambiguity between the two spaces. During training, we further propose a pose refinement strategy that updates the pose regression to compensate for inaccurate initial estimation, and a split-with-scale mechanism to enhance the density of the regressed point clouds. Experiments validate that our method achieves state-of-the-art photorealistic novel-view rendering results with high-quality details for dynamic clothed human bodies, along with explicit geometry reconstruction.
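    The pose-guided deformation described above maps canonical Gaussian centers into the observation space via blended rigid bone transforms. A minimal sketch of that idea (my own illustration using linear blend skinning; the variable names and numbers are hypothetical, not the authors' code):

```python
import numpy as np

def rot_z(theta):
    # Rigid rotation about the z-axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

centers = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])  # canonical-space Gaussian centers
weights = np.array([[0.8, 0.2], [0.3, 0.7]])            # per-center skinning weights (rows sum to 1)
bones = [(rot_z(0.1), np.array([0.0, 0.0, 0.0])),       # per-bone (rotation, translation)
         (rot_z(-0.2), np.array([0.05, 0.0, 0.0]))]

def deform(x, w):
    # Blend each bone's rigid transform by its skinning weight.
    return sum(wi * (R @ x + t) for wi, (R, t) in zip(w, bones))

observed = np.array([deform(x, w) for x, w in zip(centers, weights)])
```

Regularizing these transforms (e.g. penalizing deviation from rigidity) is what the physically-based prior refers to.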

    Hierarchical Fashion Design with Multi-stage Diffusion Models

    Cross-modal fashion synthesis and editing offer intelligent support to fashion designers by enabling the automatic generation and local modification of design drafts. While current diffusion models demonstrate commendable stability and controllability in image synthesis, they still face significant challenges in generating fashion designs from abstract design elements and in fine-grained editing. Abstract sensory expressions, e.g. office, business, and party, form the high-level design concepts, while measurable aspects such as sleeve length, collar type, and pant length are considered the low-level attributes of clothing. Controlling and editing fashion images using lengthy text descriptions is difficult. In this paper, we propose HieraFashDiff, a novel fashion design method using a shared multi-stage diffusion model that encompasses high-level design concepts and low-level clothing attributes in a hierarchical structure. Specifically, we categorize the input text into different levels and feed them at different time steps to the diffusion model, according to the criteria of professional clothing designers. HieraFashDiff allows designers to add low-level attributes after high-level prompts for incremental interactive editing. In addition, we design a differentiable loss function in the sampling process with a mask to preserve non-edited areas. Comprehensive experiments performed on our newly constructed hierarchical fashion dataset demonstrate that our proposed method outperforms other state-of-the-art competitors.
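    The core scheduling idea, feeding high-level concepts at early (noisy) diffusion timesteps and low-level attributes at later (detail) timesteps, can be sketched as follows. This is an assumed illustration; the split point, step count, and prompt strings are hypothetical, not taken from the paper:

```python
HIGH_LEVEL = "office, business, party"                       # abstract design concepts
LOW_LEVEL = "short sleeves, round collar, ankle-length pants"  # measurable attributes

def prompt_for_timestep(t, total_steps=1000, split=0.5):
    # In DDPM indexing, large t is early/noisy: global concepts shape layout;
    # small t is late: fine-grained attributes refine details.
    return HIGH_LEVEL if t > total_steps * split else LOW_LEVEL

assert prompt_for_timestep(900) == HIGH_LEVEL
assert prompt_for_timestep(100) == LOW_LEVEL
```

Because each level conditions only its own stage of sampling, a designer can append low-level attributes without re-describing the high-level concept.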

    Learning to Zoom and Unzoom

    Many perception systems in mobile computing, autonomous navigation, and AR/VR face strict compute constraints that are particularly challenging for high-resolution input images. Previous works propose nonuniform downsamplers that "learn to zoom" on salient image regions, reducing compute while retaining task-relevant image information. However, for tasks with spatial labels (such as 2D/3D object detection and semantic segmentation), such distortions may harm performance. In this work (LZU), we "learn to zoom" in on the input image, compute spatial features, and then "unzoom" to revert any deformations. To enable efficient and differentiable unzooming, we approximate the zooming warp with a piecewise bilinear mapping that is invertible. LZU can be applied to any task with 2D spatial input and any model with 2D spatial features, and we demonstrate this versatility by evaluating on a variety of tasks and datasets: object detection on Argoverse-HD, semantic segmentation on Cityscapes, and monocular 3D object detection on nuScenes. Interestingly, we observe boosts in performance even when high-resolution sensor data is unavailable, implying that LZU can be used to "learn to upsample" as well. Comment: CVPR 2023. Code and additional visuals available at https://tchittesh.github.io/lzu
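    The invertibility of a monotone piecewise-linear warp is what makes exact "unzooming" possible. A minimal 1D sketch (my own illustration; LZU's 2D piecewise bilinear warp reduces to such axis-wise maps, and the knot values below are arbitrary):

```python
import numpy as np

knots_in = np.array([0.0, 0.3, 0.5, 0.7, 1.0])     # input coordinates
knots_out = np.array([0.0, 0.15, 0.5, 0.85, 1.0])  # warped coordinates (magnifies the center)

def zoom(x):
    # Forward warp: piecewise-linear, monotone, so it allocates more output
    # resolution to the salient central region.
    return np.interp(x, knots_in, knots_out)

def unzoom(y):
    # Exact inverse: swap the knot roles (valid because the map is monotone).
    return np.interp(y, knots_out, knots_in)

x = np.linspace(0, 1, 11)
assert np.allclose(unzoom(zoom(x)), x)  # deformation fully reverted
```

Because both directions are simple interpolations, the warp and its inverse are cheap and differentiable almost everywhere.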

    SonicVisionLM: Playing Sound with Vision Language Models

    There has been growing interest in the task of generating sound for silent videos, primarily because of its practicality in streamlining video post-production. However, existing methods for video-sound generation attempt to create sound directly from visual representations, which can be challenging due to the difficulty of aligning visual representations with audio representations. In this paper, we present SonicVisionLM, a novel framework aimed at generating a wide range of sound effects by leveraging vision-language models (VLMs). Instead of generating audio directly from video, we use the capabilities of powerful VLMs. When provided with a silent video, our approach first identifies events within the video using a VLM to suggest possible sounds that match the video content. This shift transforms the challenging task of aligning image and audio into the better-studied sub-problems of aligning image-to-text and text-to-audio through popular diffusion models. To improve the quality of audio recommendations with LLMs, we have collected an extensive dataset that maps text descriptions to specific sound effects and developed a time-controlled audio adapter. Our approach surpasses current state-of-the-art methods for converting video to audio, enhancing synchronization with the visuals and improving alignment between audio and video components. Project page: https://yusiissy.github.io/SonicVisionLM.github.io/ Comment: CVPR 202
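    The decomposition described above replaces one hard alignment problem with two easier stages. A schematic sketch of that pipeline (the functions below are placeholders standing in for a real VLM and a text-to-audio diffusion model; names and return values are hypothetical):

```python
def vlm_describe_events(frames):
    # Placeholder for a vision-language model captioning sound-relevant events.
    return ["a door slams shut", "footsteps on a wooden floor"]

def text_to_audio(event, duration_s):
    # Placeholder for a time-controlled text-to-audio diffusion model;
    # the duration argument stands in for the time-controlled adapter.
    return {"prompt": event, "duration_s": duration_s}

events = vlm_describe_events(frames=[])
clips = [text_to_audio(e, duration_s=1.5) for e in events]
```

Each stage (image-to-text, text-to-audio) can then be trained or swapped independently, which is the practical benefit of the reformulation.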

    Semi-Supervised Semantic Segmentation of Remote Sensing Images Based on Dual Cross-Entropy Consistency

    Semantic segmentation is a growing topic in high-resolution remote sensing image processing. The information in remote sensing images is complex, and the effectiveness of most remote sensing image semantic segmentation methods depends on the number of labels; however, labeling images requires significant time and labor costs. To solve these problems, we propose a semi-supervised semantic segmentation method based on dual cross-entropy consistency and a teacher–student structure. First, we add a channel attention mechanism to the encoding network of the teacher model to reduce the predictive entropy of the pseudo-labels. Secondly, the two student networks share a common encoding network to ensure consistent input information entropy, and a sharpening function is used to reduce the information entropy of the unsupervised predictions of both student networks. Finally, we train the models alternately via two entropy-consistent tasks: (1) semi-supervision of student predictions via pseudo-labels generated by the teacher model, and (2) cross-supervision between the student models. Experimental results on publicly available datasets indicate that the proposed model can fully exploit the hidden information in unlabeled images and reduce the information entropy of predictions, as well as reduce the number of required labeled images with guaranteed accuracy. This allows the new method to outperform related semi-supervised semantic segmentation algorithms at half the proportion of labeled images.
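    The two consistency terms and the sharpening function can be illustrated concretely. The sketch below is my own minimal illustration (per-pixel probabilities reduced to single vectors; the temperature value and losses are assumptions, not the paper's code):

```python
import numpy as np

def sharpen(p, T=0.5):
    # Temperature sharpening: p**(1/T) renormalized lowers the entropy of a
    # probability vector when T < 1.
    q = p ** (1.0 / T)
    return q / q.sum(axis=-1, keepdims=True)

def cross_entropy(target, pred, eps=1e-8):
    return -np.sum(target * np.log(pred + eps), axis=-1).mean()

teacher_prob = np.array([[0.7, 0.2, 0.1]])  # teacher softmax output (one pixel, 3 classes)
student_a = np.array([[0.6, 0.3, 0.1]])
student_b = np.array([[0.5, 0.4, 0.1]])

pseudo = np.eye(3)[teacher_prob.argmax(-1)]                # hard pseudo-label from teacher
loss_semi = cross_entropy(pseudo, student_a)               # (1) teacher -> student supervision
loss_cross = cross_entropy(sharpen(student_a), student_b)  # (2) student cross-supervision
```

Training then alternates between minimizing the two losses, which is what the abstract calls the two entropy-consistent tasks.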

    The elasticity of tobacco demand in Australia

    This paper examines the elasticity of demand for tobacco products in Australia from 2000 to 2011. The hypothesis is that the demand for cigarettes is inelastic; the alternative hypothesis is that the demand for cigarettes is elastic. Under inelastic demand, increasing tobacco tax increases government tax revenue, while the opposite holds under elastic demand. This paper obtains data mainly from the Australian Bureau of Statistics and Cancer Council Victoria. We find an increase in both the excise rate and government revenue from tobacco products, implying that the demand for tobacco products in Australia is inelastic. We find further support for this finding by examining factors such as the age and income structure of the population.
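    The inference from rising revenue to inelastic demand follows from the definition of price elasticity. A worked example with hypothetical numbers (not figures from the paper):

```python
p0, p1 = 10.0, 12.0   # pack price before/after a tax increase
q0, q1 = 100.0, 92.0  # quantity demanded before/after

# Price elasticity of demand: percentage change in quantity over
# percentage change in price.
elasticity = ((q1 - q0) / q0) / ((p1 - p0) / p0)

revenue0, revenue1 = p0 * q0, p1 * q1

print(round(elasticity, 2))  # -0.4, so |e| < 1: demand is inelastic
print(revenue1 > revenue0)   # True: revenue rises despite the lower quantity
```

When |e| < 1, the percentage fall in quantity is smaller than the percentage rise in price, so total revenue (price × quantity) must increase, which is exactly the pattern the paper observes in the Australian data.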