
    Breaking Free from Fusion Rule: A Fully Semantic-driven Infrared and Visible Image Fusion

    Infrared and visible image fusion plays a vital role in computer vision. Previous approaches devote considerable effort to designing various fusion rules in their loss functions, but these experimentally designed rules make the methods increasingly complex. Moreover, most of them focus only on boosting visual effects and therefore perform poorly on follow-up high-level vision tasks. To address these challenges, in this letter we develop a semantic-level fusion network that fully exploits semantic guidance and dispenses with experimentally designed fusion rules. In addition, to achieve a better semantic understanding of the feature fusion process, a transformer-based fusion block is presented in a multi-scale manner. Moreover, we devise a regularization loss function, together with a training strategy, to make full use of semantic guidance from the high-level vision tasks. Compared with state-of-the-art methods, our method does not depend on a hand-crafted fusion loss function, yet it achieves superior performance in visual quality as well as on the follow-up high-level vision tasks.
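    To make the multi-scale, transformer-based fusion block concrete, here is a minimal PyTorch sketch of one such block operating on a single feature scale. The channel width, head count, and the simple concatenate-project-attend layout are illustrative assumptions, not the architecture described in the paper.

    ```python
    # Hedged sketch of a transformer-based fusion block for infrared/visible features.
    # NOT the authors' architecture; sizes and layout are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TransformerFusionBlock(nn.Module):
        def __init__(self, channels: int = 64, num_heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

        def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
            # Concatenate modality features, reduce channels, then let self-attention
            # mix information across spatial positions (tokens).
            fused = self.proj(torch.cat([feat_ir, feat_vis], dim=1))   # (B, C, H, W)
            b, c, h, w = fused.shape
            tokens = fused.flatten(2).transpose(1, 2)                  # (B, H*W, C)
            attended, _ = self.attn(tokens, tokens, tokens)
            return attended.transpose(1, 2).reshape(b, c, h, w)

    # Usage on dummy features at one scale; the same block would be applied per scale.
    ir = torch.randn(1, 64, 32, 32)
    vis = torch.randn(1, 64, 32, 32)
    out = TransformerFusionBlock()(ir, vis)   # -> (1, 64, 32, 32)
    ```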

    Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion

    Multi-exposure image fusion (MEF) has emerged as a prominent solution to the limitations of digital imaging in representing varied exposure levels. Despite its advancements, the field grapples with challenges, notably the reliance on manually designed network structures and loss functions, and the constraints of using simulated reference images as ground truths. Consequently, current methods often suffer from color distortions and exposure artifacts, further complicating the quest for authentic image representation. To address these challenges, this paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for the automatic design of both network structures and loss functions. More specifically, we harness a dual search mechanism rooted in a novel weighted structure refinement architecture search. In addition, a hybrid supervised contrast constraint seamlessly guides and integrates with the searching process, facilitating a more adaptive and comprehensive search for optimal loss functions. We achieve state-of-the-art performance compared with various competitive schemes, yielding 10.61% and 4.38% improvements in Visual Information Fidelity (VIF) for general and no-reference scenarios, respectively, while providing results with high contrast, rich details, and colors.
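    As an illustration of the bi-level (dual) search idea described above, the sketch below alternates an inner update of network weights on training data with an outer update of searchable loss-function weights on validation data. The toy network, the two candidate loss terms, and the optimizers are assumptions for illustration only, not the HSDS-MEF implementation.

    ```python
    # Hedged sketch of a bi-level search loop: inner step trains network weights,
    # outer step updates search parameters (here, loss-term weights) on held-out data.
    import torch

    model = torch.nn.Conv2d(3, 3, 3, padding=1)            # stand-in fusion network
    loss_weights = torch.zeros(2, requires_grad=True)       # searchable loss weighting
    w_opt = torch.optim.SGD(model.parameters(), lr=1e-2)    # inner (weight) optimizer
    a_opt = torch.optim.Adam([loss_weights], lr=1e-3)       # outer (search) optimizer

    def candidate_losses(pred, target):
        # Two toy candidate loss terms the search can weight against each other.
        return torch.stack([(pred - target).abs().mean(), (pred - target).pow(2).mean()])

    for step in range(10):
        x, y = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
        # Inner step: train weights with the currently weighted loss.
        w_opt.zero_grad()
        train_loss = torch.softmax(loss_weights, 0) @ candidate_losses(model(x), y)
        train_loss.backward()
        w_opt.step()
        # Outer step: update the loss weighting on held-out (validation) data.
        xv, yv = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
        a_opt.zero_grad()
        val_loss = torch.softmax(loss_weights, 0) @ candidate_losses(model(xv), yv)
        val_loss.backward()
        a_opt.step()
    ```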

    Multiple Methods to Partition Evapotranspiration in a Maize Field

    Partitioning evapotranspiration (ET) into soil evaporation E and plant transpiration T is important, but it remains a theoretical and technical challenge. The isotopic technique is considered an effective method, but it is difficult to quantify the isotopic compositions of transpiration (δT) and evaporation (δE) directly and continuously, and few previous studies have determined δT successfully under a non-steady state (NSS). Here, multiple methods were used to partition ET in a maize field, and a new flow-through chamber system was refined to provide direct and continuous measurement of δT and δE. An eddy covariance and lysimeter (EC-L)-based method and two isotope-based methods [isotope combined with the Craig–Gordon model (Iso-CG) and isotope using chamber measurement (Iso-M)] were applied to partition ET. Results showed that the transpiration fraction FT from Iso-CG was consistent with EC-L at both diurnal and growing-season time scales, but FT calculated by Iso-M was lower than that from Iso-CG and EC-L. The chamber system presented here to determine δT under NSS and isotope steady state (ISS) was robust, although there could be some deviation in measuring δE. FT varied from 52% to 91%, with a mean of 78% over the entire growing season, and it was well described as a function of leaf area index (LAI), with the nonlinear relationship FT = 0.71·LAI^0.14. The results demonstrate the feasibility of the isotope-based chamber system for partitioning ET. This technique and its further development may enable accurate and continuous field ET partitioning and improve understanding of water cycling through the soil–plant–atmosphere continuum.
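    A minimal Python sketch of the reported FT–LAI relationship, using the coefficient and exponent quoted in the abstract; the example LAI values are illustrative, not measurements from the study.

    ```python
    # Sketch of the reported transpiration-fraction relationship FT = 0.71 * LAI^0.14.
    # Coefficient and exponent are from the abstract; the LAI values are illustrative.

    def transpiration_fraction(lai: float) -> float:
        """Estimate the fraction of ET attributable to transpiration from LAI."""
        return 0.71 * lai ** 0.14

    for lai in (0.5, 1.0, 2.0, 4.0):
        print(f"LAI = {lai}: FT ~ {transpiration_fraction(lai):.2f}")
    # Yields FT of roughly 0.64 to 0.86, consistent with the 52%-91% range reported.
    ```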

    CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion

    Infrared and visible image fusion aims to provide an informative image by combining complementary information from different sensors. Existing learning-based fusion approaches attempt to construct various loss functions to preserve complementary features from both modalities, while neglecting the inter-relationship between the two modalities, leading to redundant or even invalid information in the fusion results. To alleviate these issues, we propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion in an end-to-end manner. Concretely, to simultaneously retain typical features from both modalities and remove unwanted information emerging in the fused result, we develop a coupled contrastive constraint in our loss function. In a fused image, the foreground target/background detail part is pulled close to the infrared/visible source and pushed far away from the visible/infrared source in the representation space. We further exploit image characteristics to provide data-sensitive weights, which allows our loss function to build a more reliable relationship with the source images. Furthermore, to learn rich hierarchical feature representations and comprehensively transfer features in the fusion process, a multi-level attention module is established. In addition, we also apply the proposed CoCoNet to medical image fusion of different types, e.g., magnetic resonance and positron emission tomography images, and magnetic resonance and single photon emission computed tomography images. Extensive experiments demonstrate that our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation, especially in preserving prominent targets and recovering vital textural details.
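    The coupled contrastive constraint can be sketched as follows: fused foreground features are pulled toward the infrared source and pushed away from the visible source, and vice versa for background detail. The distance measure, the ratio form, and the toy feature maps below are assumptions for illustration, not CoCoNet's actual loss.

    ```python
    # Hedged sketch of a coupled contrastive constraint in feature space.
    import torch
    import torch.nn.functional as F

    def contrastive_term(anchor, positive, negative, eps: float = 1e-8):
        # Ratio of anchor-positive to anchor-negative distance; minimizing it pulls
        # the anchor toward the positive and away from the negative.
        d_pos = F.l1_loss(anchor, positive)
        d_neg = F.l1_loss(anchor, negative)
        return d_pos / (d_neg + eps)

    def coupled_contrastive_loss(fused_fg, fused_bg, ir_feat, vis_feat):
        fg_term = contrastive_term(fused_fg, positive=ir_feat, negative=vis_feat)
        bg_term = contrastive_term(fused_bg, positive=vis_feat, negative=ir_feat)
        return fg_term + bg_term

    # Toy feature maps standing in for deep features of the fused and source images.
    f_fg, f_bg = torch.rand(1, 64, 16, 16), torch.rand(1, 64, 16, 16)
    ir, vis = torch.rand(1, 64, 16, 16), torch.rand(1, 64, 16, 16)
    print(coupled_contrastive_loss(f_fg, f_bg, ir, vis))
    ```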

    The distribution and characteristics of suspended particulate matter in the Chukchi Sea

    Samples taken from the Chukchi Sea (CS) during the 4th Chinese National Arctic Research Expedition in 2010 were analyzed to determine the content and composition of suspended particulate matter (SPM), to improve our understanding of its distribution, sources, and controlling factors. The results show that SPM in the water column is highest in the middle and near the bottom in the southern and central–northern CS, followed by the areas off the Alaskan coast and in Barrow Canyon; SPM content is lowest in the central CS. Scanning electron microscope (SEM) analysis shows that the SPM in the southern and central–northern CS is composed mainly of diatoms, although the dominant species in the two areas differ. The SPM off the Alaskan coast and in Barrow Canyon is composed mainly of terrigenous material with few bio-skeletal clasts. The distribution of temperature and salinity and the correlation of diatom species in the SPM indicate that the diatom-dominated SPM in the southern CS comes from the Pacific Ocean via the Bering Strait in summer, while the diatom-dominated SPM in the central–northern CS is also of Pacific origin but reaches the CS in winter. The SPM in the middle and near the bottom of the water column off the Alaskan coast and in Barrow Canyon comes from Alaskan coastal water and terrigenous material transported by Alaskan rivers.

    Rupture Heterogeneity and Directivity Effects in Back-Projection Analysis

    The back-projection method is a tremendously powerful technique for investigating the time-dependent earthquake source, but its physical interpretation is elusive. We investigate how earthquake rupture heterogeneity and directivity affect back-projection results (imaged location and beam power) using synthetic earthquake models. Rather than attempting to model the dynamics of any specific real earthquake, we use idealized kinematic rupture models, with constant or varying rupture velocity, peak slip rate, and fault-local strike orientation along unilaterally or bilaterally rupturing faults, and perform back-projection with the resulting synthetic seismograms. Our experiments show that back-projection can track only heterogeneous rupture processes; homogeneous rupture is not resolved in our synthetic experiments. The amplitude of the beam power does not necessarily correlate with the amplitude of any specific rupture parameter (e.g., slip rate or rupture velocity) at the back-projected location. Rather, it depends on the spatial heterogeneity around the back-projected rupture front and is affected by the rupture directivity. A shorter characteristic wavelength of the source heterogeneity, or rupture directivity toward the array, results in strong beam power at higher frequencies. We derive an equation based on Doppler theory to relate the wavelength of heterogeneity to the synthetic seismogram frequency. This theoretical relation explains the frequency- and array-dependent back-projection results not only in our synthetic experiments but also in our analysis of the 2019 M7.6 bilaterally rupturing New Ireland earthquake. Our study provides a novel perspective for physically interpreting back-projection results and retrieving information about earthquake rupture characteristics.
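    The abstract does not state the derived equation, but a classical Doppler/directivity relation of the following form is a plausible stand-in for how heterogeneity wavelength, rupture velocity, and array azimuth map to observed frequency; the relation and the numerical values below are illustrative assumptions, not the paper's result.

    ```python
    # Hedged sketch of a textbook Doppler/directivity relation:
    #     f_obs ~ v_r / (lambda_het * (1 - (v_r / c) * cos(theta)))
    # where v_r is rupture velocity, lambda_het the spatial wavelength of source
    # heterogeneity, c the phase velocity toward the array, and theta the angle
    # between rupture direction and the ray to the array. Values are illustrative.
    import math

    def observed_frequency(v_rupture, wavelength_het, phase_velocity, theta_rad):
        """Apparent frequency at an array at angle theta from the rupture direction."""
        doppler = 1.0 - (v_rupture / phase_velocity) * math.cos(theta_rad)
        return v_rupture / (wavelength_het * doppler)

    v_r, c, lam = 2.5, 4.0, 20.0   # km/s, km/s, km (illustrative)
    print(observed_frequency(v_r, lam, c, 0.0))        # array in the rupture direction
    print(observed_frequency(v_r, lam, c, math.pi))    # array opposite the rupture
    # Directivity toward the array shifts energy to higher observed frequencies.
    ```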

    Composition and distribution of fish species collected during the fourth Chinese National Arctic Research Expedition in 2010

    There is growing awareness of, and concern about, the decreasing sea ice coverage around the Arctic and Antarctic caused by climate change. Emphasis in this study was on the rapid changes in Arctic sea ice coverage and their impacts on marine ecology during the fourth Chinese National Arctic Research Expedition in 2010. Our purpose was to establish a baseline of Arctic fish composition against which the effects of climate change on the fish community and biogeography can be assessed. Fish specimens were collected using a multinet midwater trawl, a French-type beam trawl, an otter trawl, and a triangular bottom trawl. In total, 36 tows were carried out along the shelf of the Bering Sea, the Bering Strait, and the Chukchi Sea in the Arctic Ocean, and 41 fish species belonging to 14 families in 7 orders were collected during the expedition. Among them, the Scorpaeniformes, with 17 species, accounted for almost one third of the total number (34.8%), followed by 14 species of Perciformes (27.0%), 5 species of Pleuronectiformes (22.3%), and 2 species of Gadiformes (15.4%). The six most abundant species were Hippoglossoides robustus, Boreogadus saida, Myoxocephalus scorpius, Lumpenus fabricii, Artediellus scaber, and Gymnocanthus tricuspis. The abundant species varied with the fishing method used, although the numbers of families and species recorded did not; species richness and abundance decreased with depth and latitude; and species extending beyond their known geographic ranges were observed during the expedition. Station information, a species list, and color photographs of all fishes are provided.

    Anomalously steep dips of earthquakes in the 2011 Tohoku-Oki source region and possible explanations

    The 2011 M_w 9.1 Tohoku-Oki earthquake had unusually large slip (over 50 m) concentrated in a relatively small region, with local stress drop inferred to be 5–10 times larger than that found for typical megathrust earthquakes. Here we conduct a detailed analysis of foreshocks and aftershocks (M_w 5.5–7.5) sampling this megathrust zone for possible clues regarding such differences in seismic excitation. We find that events occurring in the region that experienced large slip during the M_w 9.1 event had steeper dip angles (by 5–10°) than the surrounding plate interface. This discrepancy cannot be explained by a single smooth plate interface. We provide three possible explanations. In Model I, the oceanic plate undergoes two sharp breaks in slope, which were not imaged well in previous seismic surveys. These break-points may have acted as strong seismic barriers in previous ruptures, but may have failed during the Tohoku-Oki earthquake and contributed to its complex rupture pattern. In Model II, the discrepancy in dip angles is caused by a rough plate interface, which in turn may be the underlying cause of the overall strong coupling and concentrated energy release. In Model III, the earthquakes with steeper dip angles did not occur on the plate interface but on nearby steeper subfaults. Since the differences in dip angle are only 5–10°, this last explanation would imply that the main fault has about the same strength as the nearby subfaults, rather than being much weaker. A relatively uniform fault zone containing both the main fault and the subfaults is consistent with Model III. Higher-resolution source locations and improved models of the velocity structure of the megathrust fault zone are needed to resolve these issues.

    A nuclear track microporous membrane (NTMM)
