
    Soft tissue recurrent ameloblastomas also show some malignant features: a clinicopathological study of a 15-year database

    Background: To investigate the clinicopathological features of six cases of soft tissue recurrent ameloblastoma and to explore the role of increased aggressive biological behavior in the recurrence and treatment of this type of ameloblastoma. Material and Methods: We retrospectively reviewed recurrent ameloblastomas over a 15-year period; six cases were diagnosed as soft tissue recurrent ameloblastoma. The clinical, radiographic, cytological and immunohistochemical records of these six cases were analyzed. Results: All six soft tissue recurrent ameloblastomas occurred after radical bone resection and were located in the soft tissues adjacent to the osteotomy regions. In Case 4, the patient developed pulmonary metastasis, extensive skull-base infiltration and cytological malignancy after multiple recurrences, and malignant transformation was diagnosed. In the other five cases, although no cytological signs were sufficient to classify the ameloblastoma as malignant, some malignant features were observed. In Case 1, the tumor showed moderate atypical hyperplasia and 40% Ki-67 positivity, which are strongly suggestive of potential malignancy. In Case 5, the patient developed a second soft tissue recurrence in the parapharyngeal region and later died of tumor-related complications. The remaining three patients all showed cytological atypia of varying degrees and high expression of PCNA or Ki-67, confirming active cell proliferation. Conclusions: Increased aggressiveness is an important factor in soft tissue recurrence. Intraoperative rapid pathological examination and more radical treatment are suggested for these cases.

    Identification Of An In Vitro Medium For Leptospira Spp. As A Surrogate For Host Environment, Using Rna-Seq Transcriptome Analysis

    Pathogenic Leptospira species cause millions of leptospirosis cases around the world, and the disease is an urgent public health issue that needs to be properly addressed. The infection leads to clinical manifestations ranging from self-limiting febrile illness to severe, life-threatening symptoms. Currently, there is a lack of sensitive assays for early diagnosis of leptospirosis, and there is no FDA-approved vaccine for human use in the United States. Despite the worldwide occurrence of this zoonotic disease, low- and middle-income countries are disproportionately affected by it. A better understanding of the pathogenesis of Leptospira is a crucial step toward the development of better diagnostic assays and effective vaccines. Currently, leptospiral research is highly dependent on animal models, which increases the cost and time of research, raises ethical issues, and cannot overcome the lack of reproducibility across species, especially in humans. In this study, we evaluated and compared the gene expression of Leptospira at the transcriptome level. We compared different growth media with the hamster model to identify a medium that can be used as an in vitro surrogate for the host environment in key steps of leptospiral research. The results show that, among the different media tested, EMEM and DMEM are the better choices to mimic the host environment. A rough illustration of this medium-versus-host comparison is sketched below.
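
    The sketch below ranks candidate media by how closely their per-gene expression profiles correlate with expression in the host condition, which is the spirit of the comparison described above. The synthetic expression vectors, the inclusion of EMJH as a third medium, and the use of a Spearman correlation are illustrative assumptions, not the study's actual RNA-seq pipeline.

```python
# Hypothetical ranking of in vitro media by similarity to host (hamster) expression.
import numpy as np
from scipy.stats import spearmanr

def rank_media(host_expr, media_expr):
    """host_expr: (genes,) log-expression in the host; media_expr: dict name -> (genes,)."""
    scores = {name: spearmanr(host_expr, expr).correlation for name, expr in media_expr.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

rng = np.random.default_rng(1)
host = rng.normal(size=500)                           # synthetic host profile
media = {
    "EMEM": host + rng.normal(0, 0.5, 500),           # hypothetical: close to host
    "DMEM": host + rng.normal(0, 0.6, 500),           # hypothetical: close to host
    "EMJH": rng.normal(size=500),                     # hypothetical: unrelated to host
}
print(rank_media(host, media))                        # media sorted from most to least host-like
```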

    DDAP: Dual-Domain Anti-Personalization against Text-to-Image Diffusion Models

    Diffusion-based personalized visual content generation technologies have achieved significant breakthroughs, allowing specific objects to be created from just a few reference photos. However, when misused to fabricate fake news or unsettling content targeting individuals, these technologies can cause considerable societal harm. To address this problem, current methods generate adversarial samples by adversarially maximizing the training loss, thereby disrupting the output of any personalized generation model trained on these samples. However, existing methods fail to achieve both effective defense and stealthiness, as they overlook the intrinsic properties of diffusion models. In this paper, we introduce a novel Dual-Domain Anti-Personalization framework (DDAP). Specifically, we develop Spatial Perturbation Learning (SPL), which exploits the fixed and perturbation-sensitive nature of the image encoder in personalized generation. We then design a Frequency Perturbation Learning (FPL) method that utilizes the characteristics of diffusion models in the frequency domain. SPL disrupts the overall texture of the generated images, while FPL focuses on image details. By alternating between these two methods, we construct the DDAP framework, effectively harnessing the strengths of both domains. To further enhance the visual quality of the adversarial samples, we design a localization module that accurately captures attentive areas while ensuring the effectiveness of the attack and avoiding unnecessary disturbances in the background. Extensive experiments on facial benchmarks show that the proposed DDAP enhances the disruption of personalized generation models while maintaining high quality in the adversarial samples, making it more effective for protecting privacy in practical applications. Comment: Accepted by IJCB 202
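
    A minimal sketch of the spatial-domain idea described above: because the image encoder used in personalization is fixed and perturbation-sensitive, a protective perturbation can be found by gradient ascent that pushes the encoded features of the protected image away from those of the clean image. The PGD-style update, the MSE feature loss, and the budgets below are illustrative assumptions, not the authors' SPL implementation.

```python
# Hypothetical spatial perturbation against a fixed image encoder (PGD-style feature attack).
import torch
import torch.nn.functional as F

def spatial_perturbation(image, encoder, eps=8 / 255, alpha=2 / 255, steps=40):
    """Return image + delta with ||delta||_inf <= eps that maximizes the feature deviation."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        clean_feat = encoder(image)

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv_feat = encoder(image + delta)
        loss = -F.mse_loss(adv_feat, clean_feat)      # minimize -MSE == maximize MSE
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()        # step away from clean features
            delta.clamp_(-eps, eps)                   # stay within the budget
            delta.add_(image).clamp_(0, 1).sub_(image)  # keep pixel values valid
        delta.grad.zero_()
    return (image + delta).detach()
```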

    Multi-modal Document Presentation Attack Detection With Forensics Trace Disentanglement

    Document Presentation Attack Detection (DPAD) is an important measure for protecting the authenticity of a document image. However, recent DPAD methods demand additional resources, such as manual effort in collecting additional data or knowledge of the acquisition device parameters. This work proposes a DPAD method based on multi-modal disentangled traces (MMDT) without the above drawbacks. We first disentangle the recaptured traces with a self-supervised disentanglement and synthesis network to enhance generalization to document images with different contents and layouts. Then, unlike existing DPAD approaches that rely only on data in the RGB domain, we propose to explicitly employ the disentangled recaptured traces as new modalities in the transformer backbone through adaptive multi-modal adapters that fuse RGB/trace features efficiently. Visualization of the disentangled traces confirms the effectiveness of the proposed method across different document contents. Extensive experiments on three benchmark datasets demonstrate the superiority of our MMDT method in representing forensic traces of recapturing distortion. Comment: Accepted to ICME 202
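
    To make the adapter-based fusion concrete, the sketch below shows one way disentangled trace tokens could be injected into an RGB token stream through a lightweight bottleneck adapter inside a transformer backbone. The bottleneck design and residual addition are assumptions for illustration, not the exact MMDT adapter.

```python
# Hypothetical bottleneck adapter fusing trace tokens into RGB tokens.
import torch
import torch.nn as nn

class TraceAdapter(nn.Module):
    """Project trace tokens through a bottleneck and add them to the RGB tokens."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, rgb_tokens, trace_tokens):
        # rgb_tokens, trace_tokens: (batch, num_tokens, dim)
        fused = self.up(self.act(self.down(trace_tokens)))
        return rgb_tokens + fused                     # residual fusion; backbone can stay frozen

adapter = TraceAdapter(dim=768)
rgb = torch.randn(2, 196, 768)
trace = torch.randn(2, 196, 768)
out = adapter(rgb, trace)                             # (2, 196, 768), fed to the next block
```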

    Multi-representations Space Separation based Graph-level Anomaly-aware Detection

    Graph structures have recently been widely used to model data from different areas, and detecting anomalous information in such graph data has become a popular research problem. This work focuses on the particular issue of detecting abnormal graphs within a graph set. Previous works have observed that abnormal graphs mainly show node-level and graph-level anomalies, but these methods treat the two anomaly forms equally when evaluating abnormal graphs, which is contrary to the fact that different types of abnormal graph data exhibit node-level and graph-level anomalies to different degrees. Furthermore, abnormal graphs that have only subtle differences from normal graphs easily escape detection by existing methods. Thus, we propose a multi-representations space separation based graph-level anomaly-aware detection framework. To account for the different importance of node-level and graph-level anomalies, we design an anomaly-aware module that learns the specific weight between them in the abnormal graph evaluation process. In addition, we learn strictly separated normal and abnormal graph representation spaces by playing four types of weighted graph representations against each other: anchor normal graphs, anchor abnormal graphs, training normal graphs, and training abnormal graphs. Based on the distance error between the representation of a test graph and both the normal and abnormal graph representation spaces, we can accurately determine whether the test graph is anomalous. Our approach has been extensively evaluated against baseline methods on ten public graph datasets, and the results demonstrate its effectiveness. Comment: 11 pages, 12 figure
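
    The final decision rule described above compares a test graph's representation against the learned normal and abnormal representation spaces. Below is a minimal sketch of such a distance-based decision, with each space summarized by the centroid of its anchor representations; the centroid summary is an illustrative simplification, not the paper's separation objective.

```python
# Hypothetical distance-based anomaly decision between two representation spaces.
import numpy as np

def is_anomalous(test_repr, normal_reprs, abnormal_reprs):
    """test_repr: (d,); normal_reprs/abnormal_reprs: (n, d) graph embeddings."""
    normal_center = normal_reprs.mean(axis=0)
    abnormal_center = abnormal_reprs.mean(axis=0)
    dist_normal = np.linalg.norm(test_repr - normal_center)
    dist_abnormal = np.linalg.norm(test_repr - abnormal_center)
    return dist_abnormal < dist_normal                # closer to the abnormal space => anomalous

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(32, 16))          # synthetic normal-graph embeddings
abnormal = rng.normal(3.0, 1.0, size=(32, 16))        # synthetic abnormal-graph embeddings
print(is_anomalous(rng.normal(3.0, 1.0, size=16), normal, abnormal))  # likely True
```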

    Biomedical Image Splicing Detection using Uncertainty-Guided Refinement

    Recently, a surge in biomedical academic publications suspected of image manipulation has led to numerous retractions, turning biomedical image forensics into a research hotspot. While general image manipulation detection has drawn growing attention, the specific detection of splicing traces in biomedical images remains underexplored. The disruptive factors within biomedical images, such as artifacts, abnormal patterns, and noise, exhibit misleading features that resemble splicing traces, greatly increasing the difficulty of this task. Moreover, the scarcity of high-quality spliced biomedical images also limits potential advancements in this field. In this work, we propose an Uncertainty-guided Refinement Network (URN) to mitigate the effects of these disruptive factors. URN explicitly suppresses the propagation of unreliable information caused by disruptive factors among regions, thereby obtaining robust features. Moreover, URN concentrates on refining uncertainly predicted regions during the decoding phase. In addition, we construct a dataset for Biomedical image Splicing (BioSp) detection, which consists of 1,290 spliced images. Compared with existing datasets, BioSp comprises the largest number of spliced images and the most diverse sources. Comprehensive experiments on three benchmark datasets demonstrate the superiority of the proposed method. We also verify the generalizability of URN under cross-dataset domain shifts and its robustness against post-processing approaches. Our BioSp dataset will be released upon acceptance.
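
    A minimal sketch of the uncertainty-guided idea is given below: derive a per-pixel uncertainty map from the predicted splicing probability and suppress feature responses in highly uncertain regions before refinement. The entropy-based map and hard threshold are illustrative assumptions, not URN's actual modules.

```python
# Hypothetical uncertainty map and suppression of unreliable regions.
import torch

def uncertainty_map(prob):
    """prob: (B, 1, H, W) splicing probabilities in (0, 1); returns normalized binary entropy."""
    eps = 1e-6
    p = prob.clamp(eps, 1 - eps)
    entropy = -(p * p.log() + (1 - p) * (1 - p).log())
    return entropy / torch.log(torch.tensor(2.0))     # in [0, 1], 1 = most uncertain

def suppress_unreliable(features, prob, threshold=0.8):
    """Zero out feature responses where the prediction is highly uncertain."""
    unc = uncertainty_map(prob)                        # (B, 1, H, W)
    reliable = (unc < threshold).float()
    return features * reliable                         # broadcast over feature channels

feats = torch.randn(1, 64, 128, 128)
prob = torch.rand(1, 1, 128, 128)
refined_input = suppress_unreliable(feats, prob)       # passed on to the refinement stage
```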

    SARS-CoV-2: The Monster Causes COVID-19

    Coronaviruses are viruses whose particles look like crowns. SARS-CoV-2 is the seventh member of the human coronavirus family and the cause of COVID-19, which is regarded as a once-in-a-century pandemic worldwide. It has the characteristics of a pandemic and has brought many serious negative impacts to human beings; it may take time for humans to defeat the pandemic. In addition to humans, SARS-CoV-2 also infects animals such as cats. This review introduces the origins, structures, pathogenic mechanisms, characteristics of transmission, detection and diagnosis, and evolution and variation of SARS-CoV-2. We summarize the clinical characteristics and the strategies for treatment and prevention of COVID-19, and analyze the problems and challenges we face.

    Safeguarding Medical Image Segmentation Datasets against Unauthorized Training via Contour- and Texture-Aware Perturbations

    The widespread availability of publicly accessible medical images has significantly propelled advancements in various research and clinical fields. Nonetheless, concerns about unauthorized training of AI systems for commercial purposes and the duty to protect patient privacy have led numerous institutions to hesitate to share their images. This is particularly true for medical image segmentation (MIS) datasets, where collection and fine-grained annotation are time-intensive and laborious. Recently, Unlearnable Examples (UEs) methods have shown the potential to protect images by adding invisible shortcuts that prevent unauthorized deep neural networks from generalizing. However, existing UEs are designed for natural image classification and fail to protect MIS datasets imperceptibly, as their protective perturbations are less learnable than important prior knowledge in MIS, e.g., contour and texture features. To this end, we propose an Unlearnable Medical image generation method, termed UMed. UMed integrates the prior knowledge of MIS by injecting contour- and texture-aware perturbations to protect images. Given that our target is to poison only the features critical to MIS, UMed requires only minimal perturbations within the ROI and its contour to achieve greater imperceptibility (average PSNR of 50.03) and protective performance (clean average DSC degrades from 82.18% to 6.80%).
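
    The two quantities quoted above (PSNR for imperceptibility, DSC degradation for protection) hinge on keeping the perturbation small and confined to the ROI and its contour. The sketch below illustrates that constraint and the PSNR computation; the mask, budget, and random perturbation are hypothetical stand-ins rather than UMed's perturbation generator.

```python
# Hypothetical ROI-restricted perturbation and PSNR-based imperceptibility check.
import numpy as np

def apply_roi_perturbation(image, roi_mask, perturbation, eps=4 / 255):
    """image in [0, 1], roi_mask in {0, 1}; keep the clipped perturbation inside the ROI only."""
    delta = np.clip(perturbation, -eps, eps) * roi_mask
    return np.clip(image + delta, 0.0, 1.0)

def psnr(clean, protected, max_val=1.0):
    mse = np.mean((clean - protected) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

img = np.random.rand(256, 256).astype(np.float32)
mask = np.zeros_like(img)
mask[96:160, 96:160] = 1.0                             # hypothetical ROI
protected = apply_roi_perturbation(img, mask, np.random.uniform(-1, 1, img.shape))
print(round(psnr(img, protected), 2))                  # high PSNR: the change is small and local
```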