30 research outputs found

    Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion

    Full text link
    Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels. Despite its advancements, the field grapples with challenges, notably the reliance on manual designs for network structures and loss functions, and the constraints of utilizing simulated reference images as ground truths. Consequently, current methodologies often suffer from color distortions and exposure artifacts, further complicating the quest for authentic image representation. To address these challenges, this paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for the automatic design of both network structures and loss functions. More specifically, we harness a unique dual-search mechanism rooted in a novel weighted structure refinement architecture search. Besides, a hybrid supervised contrast constraint seamlessly guides and integrates with the searching process, facilitating a more adaptive and comprehensive search for optimal loss functions. We realize state-of-the-art performance in comparison to various competitive schemes, yielding 10.61% and 4.38% improvements in Visual Information Fidelity (VIF) for general and no-reference scenarios, respectively, while providing results with high contrast, rich details and colors.
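    The bi-level search pattern described above can be illustrated with a toy example: an inner loop fits fusion weights under a candidate weighted loss, while an outer loop searches over the loss-term coefficients. This is a minimal NumPy sketch of the general bi-level idea only, not the paper's architecture or loss search; the synthetic exposure stack, loss family, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "exposure stack": three noisy renderings of one ground-truth image.
truth = rng.random((8, 8))
exposures = np.stack([truth * s + rng.normal(0, 0.05, truth.shape)
                      for s in (0.5, 1.0, 1.5)])

def fuse(w, stack):
    # Pixel-wise weighted sum of the exposure stack.
    return np.tensordot(w, stack, axes=1)

def inner_train(loss_coeffs, stack, target, steps=300, lr=0.1):
    # Inner level: gradient descent on fusion weights under a candidate
    # loss = a * MSE(fused, target) + b * ||w||^2 (a toy loss family).
    a, b = loss_coeffs
    w = np.full(len(stack), 1.0 / len(stack))
    for _ in range(steps):
        err = fuse(w, stack) - target
        grad_mse = np.array([2 * np.mean(err * stack[i])
                             for i in range(len(stack))])
        w -= lr * (a * grad_mse + 2 * b * w)
    return w

def outer_search(stack, target, trials=20):
    # Outer level: random search over the loss coefficients (a, b),
    # scored by plain MSE of the inner solution -- a stand-in for the
    # paper's hybrid-supervised search objective.
    best_coeffs, best_score = None, np.inf
    for _ in range(trials):
        coeffs = (rng.uniform(0.5, 2.0), rng.uniform(0.0, 0.1))
        w = inner_train(coeffs, stack, target)
        score = np.mean((fuse(w, stack) - target) ** 2)
        if score < best_score:
            best_coeffs, best_score = coeffs, score
    return best_coeffs, best_score

best_coeffs, best_score = outer_search(exposures, truth)
print(f"best loss coefficients: {best_coeffs}, fusion MSE: {best_score:.5f}")
```

    The point of the sketch is the nesting: the outer loop never touches the fusion weights directly, only the loss that shapes them.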

    CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion

    Full text link
    Infrared and visible image fusion aims to provide an informative image by combining complementary information from different sensors. Existing learning-based fusion approaches attempt to construct various loss functions to preserve complementary features from both modalities, while neglecting to discover the inter-relationship between the two modalities, leading to redundant or even invalid information in the fusion results. To alleviate these issues, we propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion in an end-to-end manner. Concretely, to simultaneously retain typical features from both modalities and remove unwanted information emerging in the fused result, we develop a coupled contrastive constraint in our loss function. In a fused image, the foreground target/background detail part is pulled close to the infrared/visible source and pushed far away from the visible/infrared source in the representation space. We further exploit image characteristics to provide data-sensitive weights, which allows our loss function to build a more reliable relationship with source images. Furthermore, to learn rich hierarchical feature representations and comprehensively transfer features in the fusion process, a multi-level attention module is established. In addition, we also apply the proposed CoCoNet to medical image fusion of different types, e.g., magnetic resonance image and positron emission tomography image, magnetic resonance image and single photon emission computed tomography image. Extensive experiments demonstrate that our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation, especially in preserving prominent targets and recovering vital textural details. Comment: 25 pages, 16 figures
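    The coupled contrastive constraint can be sketched as an InfoNCE-style loss on feature vectors: the fused foreground is pulled toward infrared features and pushed from visible ones, and the background part is constrained the other way round. The cosine-similarity formulation, temperature value, and random feature vectors below are assumptions for illustration, not the paper's code.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def coupled_contrastive(fused_fg, fused_bg, ir_feat, vis_feat, tau=0.5):
    # Foreground of the fused image should lie close to the infrared
    # features (positive) and far from the visible ones (negative);
    # the background detail part is constrained the other way round.
    def term(anchor, pos, neg):
        p = np.exp(cosine(anchor, pos) / tau)
        n = np.exp(cosine(anchor, neg) / tau)
        return -np.log(p / (p + n))  # InfoNCE-style ratio
    return term(fused_fg, ir_feat, vis_feat) + term(fused_bg, vis_feat, ir_feat)

rng = np.random.default_rng(1)
ir, vis = rng.normal(size=16), rng.normal(size=16)
# A "good" fusion keeps infrared targets in the foreground and visible
# detail in the background; a "swapped" fusion does the opposite.
good = coupled_contrastive(ir + 0.1 * rng.normal(size=16),
                           vis + 0.1 * rng.normal(size=16), ir, vis)
swapped = coupled_contrastive(vis, ir, ir, vis)
print(f"good fusion loss: {good:.3f}, swapped fusion loss: {swapped:.3f}")
```

    A fusion that respects the coupling scores a lower loss than one that swaps the roles of the two sources, which is exactly the gradient signal the constraint provides.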

    A Multi-Scale Decomposition MLP-Mixer for Time Series Analysis

    Full text link
    Time series data, often characterized by unique composition and complex multi-scale temporal variations, requires special consideration of decomposition and multi-scale modeling in its analysis. Existing deep learning methods in this area are best suited to univariate time series only, and have not sufficiently accounted for sub-series-level modeling and decomposition completeness. To address this, we propose MSD-Mixer, a Multi-Scale Decomposition MLP-Mixer, which learns to explicitly decompose the input time series into different components, and represents the components in different layers. To handle multi-scale temporal patterns and inter-channel dependencies, we propose a novel temporal patching approach to model the time series as multi-scale sub-series, i.e., patches, and employ MLPs to mix intra- and inter-patch variations and channel-wise correlations. In addition, we propose a loss function to constrain both the magnitude and autocorrelation of the decomposition residual for decomposition completeness. Through extensive experiments on various real-world datasets for five common time series analysis tasks (long- and short-term forecasting, imputation, anomaly detection, and classification), we demonstrate that MSD-Mixer consistently achieves significantly better performance in comparison with other state-of-the-art task-general and task-specific approaches.
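    Two of the abstract's ingredients are easy to show concretely: temporal patching reshapes a series into non-overlapping sub-series at a chosen scale, and the residual loss penalizes both the magnitude and the lag-1 autocorrelation of what the decomposition leaves behind. This is a hedged toy sketch of those two ideas only, not MSD-Mixer itself; the synthetic series and function names are assumptions.

```python
import numpy as np

def patch(series, size):
    # Non-overlapping temporal patches: an (n_patches, size) view of the
    # series at one scale; different sizes give different scales.
    n = len(series) // size
    return series[: n * size].reshape(n, size)

def residual_loss(residual, eps=1e-8):
    # Penalize both the magnitude and the lag-1 autocorrelation of the
    # decomposition residual, so the extracted components are "complete".
    mag = np.mean(residual ** 2)
    r = residual - residual.mean()
    ac1 = abs(np.sum(r[:-1] * r[1:]) / (np.sum(r ** 2) + eps))
    return mag + ac1

t = np.arange(128)
noise = np.random.default_rng(2).normal(0, 0.1, t.size)
series = 0.02 * t + np.sin(2 * np.pi * t / 16) + noise  # trend + season + noise
patches = patch(series, 16)                             # 8 patches of length 16
# A residual still containing trend/seasonality scores far worse than one
# that is only noise, which is what drives the decomposition to completeness.
print(f"raw: {residual_loss(series):.3f}, noise-only: {residual_loss(noise):.3f}")
```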

    Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond

    Full text link
    Recently, multi-modality scene perception tasks, e.g., image fusion and scene understanding, have attracted widespread attention for intelligent vision systems. However, early efforts always consider boosting a single task unilaterally while neglecting others, seldom investigating their underlying connections for joint promotion. To overcome these limitations, we establish a hierarchical dual-task-driven deep model to bridge these tasks. Concretely, we first construct an image fusion module to fuse complementary characteristics and cascade dual task-related modules, including a discriminator for visual effects and a semantic network for feature measurement. We provide a bi-level perspective to formulate image fusion and follow-up downstream tasks. To incorporate distinct task-related responses for image fusion, we consider image fusion as the primary goal and the dual modules as learnable constraints. Furthermore, we develop an efficient first-order approximation to compute corresponding gradients and present dynamic weighted aggregation to balance the gradients for fusion learning. Extensive experiments demonstrate the superiority of our method, which not only produces visually pleasant fused results but also achieves significant improvements in detection and segmentation over state-of-the-art approaches. Comment: 9 pages, 6 figures, published to IJCA
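    One simple instance of dynamic weighted gradient aggregation is to scale each task gradient by the inverse of its norm before combining, so that neither the discriminator nor the semantic network dominates the fusion update. The following sketch shows that generic pattern under stated assumptions; it is not the paper's exact weighting rule, and the two example gradients are hypothetical.

```python
import numpy as np

def aggregate(grads, eps=1e-8):
    # Weight each task gradient by the inverse of its norm, normalize the
    # weights to sum to one, then combine -- a simple dynamic balancing
    # scheme that prevents one task from dominating the update.
    norms = np.array([np.linalg.norm(g) + eps for g in grads])
    weights = (1.0 / norms) / np.sum(1.0 / norms)
    return sum(w * g for w, g in zip(weights, grads))

g_visual = np.array([10.0, 0.0])   # large gradient from the discriminator
g_semantic = np.array([0.0, 1.0])  # small gradient from the semantic network
balanced = aggregate([g_visual, g_semantic])
print(balanced)  # both tasks now contribute equally per component
```

    After rescaling, the 10x-larger visual gradient and the small semantic gradient contribute equal magnitudes to the fusion step.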

    An Efficient and Accurate Convolution-Based Similarity Measure for Uncertain Trajectories

    No full text
    With the rapid development of localization techniques and the prevalence of mobile devices, massive amounts of trajectory data have been generated, playing essential roles in user analytics, smart transportation, and public safety. Measuring trajectory similarity is one of the fundamental tasks in trajectory analytics. Although considerable research has been conducted on trajectory similarity, the majority of existing approaches measure the similarity between two trajectories by calculating the distance between aligned locations, leading to challenges with uncertain trajectories (e.g., low and heterogeneous data sampling rates, as well as location noise). To address these challenges, we propose Contra, a convolution-based similarity measure designed specifically for uncertain trajectories. The main focus of Contra is to identify the similarity of trajectory shapes while disregarding the time/order relevance of each record within the trajectory. To this end, it leverages a series of convolution and pooling operations to extract high-level geo-information from trajectories, and subsequently compares their similarities based on these extracted features. Moreover, we introduce efficient trajectory index strategies to enhance the computational efficiency of our proposed measure. We conduct comprehensive experiments on two trajectory datasets to evaluate the performance of our proposed approach. The experiments on both datasets show the effectiveness and efficiency of our approach. Specifically, the mean rank of Contra is 3 times better than that of the state-of-the-art approaches, and the precision of Contra surpasses baseline approaches by 20–40%.
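    The convolution-and-pooling idea can be sketched in a few lines: smooth each coordinate of a trajectory with a small kernel, max-pool the result into a coarse shape descriptor, and compare descriptors directly. This is a minimal illustration of the general approach, assuming a fixed hand-picked kernel and pooling size; Contra's actual learned features and index strategies are not reproduced here.

```python
import numpy as np

def traj_signature(points, kernel=np.array([0.25, 0.5, 0.25]), pool=4):
    # Smooth each coordinate with a small convolution kernel, then
    # max-pool to a coarse, noise-tolerant shape descriptor.
    feats = []
    for axis in range(points.shape[1]):
        smoothed = np.convolve(points[:, axis], kernel, mode="valid")
        n = len(smoothed) // pool * pool
        feats.append(smoothed[:n].reshape(-1, pool).max(axis=1))
    return np.concatenate(feats)

def similarity(a, b):
    # Negative L2 distance between descriptors (higher = more similar).
    m = min(len(a), len(b))
    return -float(np.linalg.norm(a[:m] - b[:m]))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 3.0, 40)
base = np.stack([t, np.sin(t)], axis=1)         # reference trajectory
noisy = base + rng.normal(0, 0.02, base.shape)  # same shape, location noise
other = np.stack([t, np.cos(t)], axis=1)        # genuinely different shape
s_noisy = similarity(traj_signature(base), traj_signature(noisy))
s_other = similarity(traj_signature(base), traj_signature(other))
print(f"noisy copy: {s_noisy:.3f}, different trajectory: {s_other:.3f}")
```

    Because smoothing and pooling absorb per-point jitter, a noisy copy of the same path stays close in descriptor space while a differently shaped path does not.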

    Doxorubicin combined with low intensity ultrasound suppresses the growth of oral squamous cell carcinoma in culture and in xenografts

    No full text
    Abstract Background Oral squamous cell carcinoma (OSCC) invades surrounding tissues by upregulating matrix metalloproteinases (MMPs) -2 and -9, which causes over-expression of the Hedgehog signaling proteins Shh and Gli-1 and degradation of the extracellular matrix, thereby creating a “highway” for tumor invasion. We explored the potential of low intensity ultrasound (LIUS) and doxorubicin (DOX) to inhibit the formation of this “highway”. Methods MTT assays were used to examine OSCC cell viability after exposure to LIUS and DOX. Cell morphological changes and ultrastructure were examined by scanning electron microscopy and transmission electron microscopy. Endogenous autophagy-associated proteins were analyzed by immunofluorescent staining and western blotting. Cell migration and invasion abilities were evaluated by Transwell assays. Collagen fiber changes were evaluated by Masson’s trichrome staining. Invasion-associated proteins were analyzed by immunohistochemistry and western blotting. Results LIUS of 1 W/cm2 increased the in vitro DOX uptake into OSCC cells by nearly 3-fold in three different cell lines and induced transient autophagic vacuoles on the cell surface. The combination of LIUS and 0.2 μg/ml DOX inhibited tumor cell viability and invasion, promoted tumor stromal collagen deposition, and prolonged the survival of mice. This combination also down-regulated MMP-2, MMP-9, Shh and Gli-1 in tumor xenografts. Collagen fiber expression was negatively correlated with the expression of these proteins in human OSCC samples. Conclusions Our findings suggest that effective low dosages of DOX in combination with LIUS can inhibit cell proliferation, migration and invasion, possibly by suppressing MMP-2/9 production mediated by the Hedgehog signaling pathway.

    Amorphous phosphatized hybrid interfacial layer for dendrite-free sodium deposition

    No full text
    Sodium (Na) is a promising anode material for sodium ion batteries due to its high theoretical capacity and favorable redox voltage, but the dendrite growth issue limits its practical application. Herein, an artificial hybrid interface layer based on an amorphous phosphatized hybrid (a-Na3P/NaBr) is developed to facilitate homogeneous, dendrite-free lateral growth behavior during recurring sodium plating/stripping processes. The proposed Na metal anode delivers excellent cycling performance for over 200 cycles with an average Coulombic efficiency of 99.5% at a capacity of 3 mAh cm−2. Besides, the symmetric cell also persists for over 2000 h at the same capacity. Notably, at a depth of discharge as high as 50%, the modified Na metal anode can still be stably cycled for nearly 350 h, showing much superior performance to the bare Na counterpart. Benefiting from these advantages, the full cell based on Na anodes with this amorphous phosphatized hybrid interphase coating delivers 98% capacity retention even after 1200 cycles. We believe that these findings will provide a promising avenue for next-generation Na metal-based energy storage technologies.