237 research outputs found

    Anastomotic loop between common hepatic artery and gastroduodenal artery in coexistence with an aberrant right hepatic artery

    Anatomical variations of the hepatic arteries are not uncommon. Anomalous hepatic arterial supply is of paramount importance in hepatobiliary, pancreatic and liver transplantation surgery, as well as in laparoscopic surgery. We describe an unusual case in a 66-year-old Greek male cadaver, in which a rare anastomosis (in the form of an enlarged arterial loop, 4.84 mm in diameter) between the common hepatic artery (6.42 mm) and the gastroduodenal artery (GDA) (4.82 mm) coexisted with an aberrant right hepatic artery (ARHA) (6.38 mm) originating from the superior mesenteric artery. The proper hepatic artery was absent. The ARHA followed a route posterior to the portal vein and the common hepatic duct, entering the liver and supplying the right hepatic segment. A hypoplastic right gastric artery emanated from the GDA. Our case report highlights the combined variations of the hepatic arteries and their possible anastomoses, emphasizing that a thorough knowledge of both the classic and the variant hepatic arterial anatomy is mandatory for surgeons and radiologists performing hepatic surgery and arteriography, in order to avoid potential iatrogenic injuries in the hepatobiliary and pancreatic area and the medico-legal implications that may follow.

    AC-SUM-GAN: Connecting Actor-Critic and Generative Adversarial Networks for Unsupervised Video Summarization

    This paper presents a new method for unsupervised video summarization. The proposed architecture embeds an Actor-Critic model into a Generative Adversarial Network and formulates the selection of important video fragments (those that will form the summary) as a sequence generation task. The Actor and the Critic take part in a game that incrementally leads to the selection of the video key-fragments, and their choices at each step of the game result in a set of rewards from the Discriminator. The designed training workflow allows the Actor and Critic to discover a space of actions and automatically learn a policy for key-fragment selection. Moreover, the introduced criterion for choosing the best model after training enables the automatic selection of proper values for training parameters that are not learned from the data (such as the regularization factor σ). Experimental evaluation on two benchmark datasets (SumMe and TVSum) demonstrates that the proposed AC-SUM-GAN model performs consistently well, achieving state-of-the-art results in comparison with unsupervised methods, results that are also competitive with respect to supervised methods.
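    As a rough illustration of this scheme, the sketch below couples a tiny actor-critic pair with a discriminator whose score is used as the reward for the sampled fragment selection; the module sizes, the reward definition and all names are placeholders chosen for this example, not the AC-SUM-GAN implementation.

        # Hedged sketch (PyTorch): the actor proposes fragment selections, the
        # discriminator's score acts as the reward, the critic provides a baseline.
        import torch
        import torch.nn as nn

        FEAT, HID, N_FRAG = 1024, 256, 20
        actor = nn.Sequential(nn.Linear(FEAT, HID), nn.ReLU(), nn.Linear(HID, 1))
        critic = nn.Sequential(nn.Linear(FEAT, HID), nn.ReLU(), nn.Linear(HID, 1))
        discriminator = nn.Sequential(nn.Linear(FEAT, HID), nn.ReLU(), nn.Linear(HID, 1), nn.Sigmoid())
        opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-4)

        fragments = torch.randn(N_FRAG, FEAT)                   # per-fragment features (placeholder)
        probs = torch.sigmoid(actor(fragments)).squeeze(-1)     # selection probability per fragment
        dist = torch.distributions.Bernoulli(probs)
        actions = dist.sample()                                  # 1 = keep this fragment in the summary

        summary = fragments * actions.unsqueeze(-1)
        reward = discriminator(summary.mean(dim=0, keepdim=True)).detach()  # "looks original" score
        value = critic(fragments.mean(dim=0, keepdim=True))                 # critic's value estimate
        advantage = (reward - value).detach()

        opt.zero_grad()
        actor_loss = -(dist.log_prob(actions).sum() * advantage.squeeze())  # policy-gradient step
        critic_loss = nn.functional.mse_loss(value, reward)
        (actor_loss + critic_loss).backward()
        opt.step()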

    Video Summarization Using Deep Neural Networks: A Survey

    Video summarization technologies aim to create a concise and complete synopsis by selecting the most informative parts of the video content. Several approaches have been developed over the last couple of decades, and the current state of the art is represented by methods that rely on modern deep neural network architectures. This work focuses on the recent advances in the area and provides a comprehensive survey of the existing deep-learning-based methods for generic video summarization. After presenting the motivation behind the development of technologies for video summarization, we formulate the video summarization task and discuss the main characteristics of a typical deep-learning-based analysis pipeline. Then, we suggest a taxonomy of the existing algorithms and provide a systematic review of the relevant literature that shows the evolution of deep-learning-based video summarization technologies and leads to suggestions for future developments. We then report on protocols for the objective evaluation of video summarization algorithms and compare the performance of several deep-learning-based approaches. Based on the outcomes of these comparisons, as well as some documented considerations about the suitability of the evaluation protocols, we indicate potential future research directions. Comment: Journal paper; under review.
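    For reference, the evaluation protocol commonly used on the SumMe and TVSum benchmarks compares a machine-generated summary against user summaries via an F-score over the temporally overlapping frames; the helper below is a generic sketch of that measure, not code from the survey.

        # Frame-level F-score between a machine summary and one user summary.
        def fscore(machine_frames: set, user_frames: set) -> float:
            overlap = len(machine_frames & user_frames)
            if overlap == 0:
                return 0.0
            precision = overlap / len(machine_frames)
            recall = overlap / len(user_frames)
            return 2 * precision * recall / (precision + recall)

        # Example: the model selects frames 0-49, the annotator selected 25-74 -> F = 0.5
        print(fscore(set(range(0, 50)), set(range(25, 75))))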

    VideoAnalysis4ALL: An On-line Tool for the Automatic Fragmentation and Concept-based Annotation, and the Interactive Exploration of Videos.

    This paper presents the VideoAnalysis4ALL tool, which supports the automatic fragmentation and concept-based annotation of videos, and the exploration of the annotated video fragments through an interactive user interface. The developed web application decomposes the video into two different granularities, namely shots and scenes, and annotates each fragment by evaluating the presence of a large number (several hundred) of high-level visual concepts in the keyframes extracted from these fragments. Through this analysis the tool enables the identification and labeling of semantically coherent video fragments, while its user interfaces allow the discovery of these fragments with the help of human-interpretable concepts. The integrated state-of-the-art video analysis technologies perform very well and, by exploiting the processing capabilities of multi-thread / multi-core architectures, reduce the time required for analysis to approximately one third of the video's duration, thus making the analysis three times faster than real-time processing.
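    To make the kind of output concrete, a hypothetical annotation record for such fragments could look like the sketch below; all field names and values are illustrative and are not part of the tool's actual API.

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Fragment:
            level: str                                                 # "shot" or "scene"
            start_sec: float
            end_sec: float
            keyframes: List[str] = field(default_factory=list)        # extracted keyframe images
            concepts: Dict[str, float] = field(default_factory=dict)  # concept label -> detection score

        fragments = [
            Fragment("shot", 0.0, 4.2, ["kf_0001.jpg"], {"outdoor": 0.91, "crowd": 0.34}),
            Fragment("scene", 0.0, 37.5, ["kf_0001.jpg", "kf_0009.jpg"], {"outdoor": 0.88}),
        ]
        # Interactive exploration then reduces to filtering fragments by concept score:
        outdoor_shots = [f for f in fragments if f.level == "shot" and f.concepts.get("outdoor", 0) > 0.8]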

    A Stepwise, Label-based Approach for Improving the Adversarial Training in Unsupervised Video Summarization

    In this paper we present our work on improving the efficiency of adversarial training for unsupervised video summarization. Our starting point is the SUM-GAN model, which creates a representative summary based on the intuition that such a summary should make it possible to reconstruct a video that is indistinguishable from the original one. We build on a publicly available implementation of a variation of this model that includes a linear compression layer to reduce the number of learned parameters and applies an incremental approach to training the different components of the architecture. After assessing the impact of these changes on the model's performance, we propose a stepwise, label-based learning process to improve the training efficiency of the adversarial part of the model. Before evaluating our model, we perform a thorough study of the used evaluation protocols and examine the achievable performance on two benchmark datasets, namely SumMe and TVSum. Experimental evaluations and comparisons with the state of the art highlight the competitiveness of the proposed method. An ablation study indicates the benefit of each applied change to the model's performance and points out the advantageous role of the introduced stepwise, label-based training strategy in the learning efficiency of the adversarial part of the architecture.
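    The snippet below is a generic sketch of label-based adversarial training steps in PyTorch; the paper's actual stepwise schedule and architectures differ, and this only illustrates the general idea of training the discriminator against explicit original/reconstructed labels before updating the summarization component.

        import torch
        import torch.nn as nn

        FEAT = 512
        summarizer = nn.Sequential(nn.Linear(FEAT, FEAT), nn.Tanh())   # stands in for the generator part
        discriminator = nn.Sequential(nn.Linear(FEAT, 1))              # outputs a logit
        bce = nn.BCEWithLogitsLoss()
        opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
        opt_g = torch.optim.Adam(summarizer.parameters(), lr=1e-4)

        original = torch.randn(8, FEAT)            # original-video features (placeholder)
        reconstructed = summarizer(original)       # features reconstructed from the summary

        # Step 1: the discriminator learns from explicit labels (1 = original, 0 = reconstructed).
        opt_d.zero_grad()
        d_loss = bce(discriminator(original), torch.ones(8, 1)) \
                 + bce(discriminator(reconstructed.detach()), torch.zeros(8, 1))
        d_loss.backward()
        opt_d.step()

        # Step 2: the summarizer tries to make its reconstructions be labeled as original.
        opt_g.zero_grad()
        g_loss = bce(discriminator(reconstructed), torch.ones(8, 1))
        g_loss.backward()
        opt_g.step()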

    Production of Medical Radioisotopes with High Specific Activity in Photonuclear Reactions with γ Beams of High Intensity and Large Brilliance

    We study the production of radioisotopes for nuclear medicine in (γ,xn+yp) photonuclear reactions or (γ,γ') photoexcitation reactions with high-flux [(10^13-10^15) γ/s], small-diameter [~(100 μm)^2] and small-bandwidth (ΔE/E ≈ 10^-3-10^-4) γ beams produced by Compton back-scattering of laser light from relativistic, brilliant electron beams. We compare them to (ion,xn+yp) reactions with ions (p, d, α) from particle accelerators such as cyclotrons, and to (n,γ) or (n,f) reactions from nuclear reactors. For photonuclear reactions with a narrow γ beam, the energy deposition in the target can be managed by using a stack of thin target foils or wires, hence avoiding direct stopping of the Compton and pair electrons (positrons). (γ,γ') isomer production via specially selected γ cascades allows high specific activity to be built up in multiple excitations, where no back-pumping of the isomer to the ground state occurs. We discuss in detail many specific radioisotopes for diagnostic and therapy applications. Photonuclear reactions with γ beams allow certain radioisotopes, e.g. 47Sc, 44Ti, 67Cu, 103Pd, 117mSn, 169Er, 195mPt or 225Ac, to be produced with higher specific activity and/or more economically than with classical methods. This will open the way for completely new clinical applications of radioisotopes. For example, 195mPt could be used to verify a patient's response to chemotherapy with platinum compounds before a complete treatment is performed. Also, innovative isotopes like 47Sc, 67Cu and 225Ac could be produced for the first time in sufficient quantities for large-scale application in targeted radionuclide therapy. Comment: submitted to Appl. Phys.
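    For orientation, the activity built up in a thin target during irradiation follows the standard thin-target activation estimate below (a textbook relation, not a result taken from the paper), where N_T is the number of target nuclei in the beam spot, σ the reaction cross section, Φ the photon flux density, λ the decay constant of the product, and t_irr the irradiation time:

        A(t_{\mathrm{irr}}) \;=\; N_T\,\sigma\,\Phi\,\bigl(1 - e^{-\lambda t_{\mathrm{irr}}}\bigr)

    The specific activity is this activity divided by the mass of material from which the product cannot be separated, which is why reactions whose product is a different element than the target (and can therefore be chemically separated) are particularly attractive for high specific activity.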

    VERGE: A Multimodal Interactive Search Engine for Video Browsing and Retrieval.

    This paper presents the VERGE interactive search engine, which is capable of browsing and searching within video content. The system integrates content-based analysis and retrieval modules such as video shot segmentation, concept detection, clustering, as well as visual similarity and object-based search.

    Multimodal Video Annotation for Retrieval and Discovery of Newsworthy Video in a News Verification Scenario

    © 2019, Springer Nature Switzerland AG. This paper describes the combination of advanced technologies for social-media-based story detection, story-based video retrieval and concept-based video (fragment) labeling under a novel approach to multimodal video annotation. This approach involves textual metadata, structural information and visual concepts, together with a multimodal analytics dashboard that enables journalists to discover videos of news events, posted to social networks, in order to verify the details of the events shown. The paper outlines the characteristics of each individual method and describes how these techniques are blended to facilitate the content-based retrieval, discovery and summarization of (parts of) news videos. A set of case-driven experiments conducted with the help of journalists indicates that the proposed multimodal video annotation mechanism, combined with a professional analytics dashboard that presents the collected and generated metadata about the news stories and their visual summaries, can support journalists in their content discovery and verification work.

    Detecting Tampered Videos with Multimedia Forensics and Deep Learning

    © 2019, Springer Nature Switzerland AG. User-Generated Content (UGC) has become an integral part of the news reporting cycle. As a result, the need to verify videos collected from social media and Web sources is becoming increasingly important for news organisations. While video verification is attracting a lot of attention, there has been limited effort so far to apply video forensics to real-world data. In this work we present an approach for automatic video manipulation detection inspired by manual verification approaches. In a typical manual verification setting, video filter outputs are visually interpreted by human experts. We use two such forensics filters designed for manual verification, one based on Discrete Cosine Transform (DCT) coefficients and a second based on video requantization errors, and combine them with deep Convolutional Neural Networks (CNNs) designed for image classification. We compare the performance of the proposed approach to other works from the state of the art and find that, while competing approaches perform better when trained with videos from the same dataset, one of the proposed filters demonstrates superior performance in cross-dataset settings. We discuss the implications of our work and the limitations of the current experimental setup, and propose directions for future research in this area.
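    As an illustration of the general pipeline (not the paper's implementation), the sketch below turns a frame into a block-DCT coefficient-energy map and hands it to a small CNN classifier; the block size, filter definition and network are placeholders for the idea of combining a forensics filter with an image-classification CNN.

        import numpy as np
        import torch
        import torch.nn as nn
        from scipy.fft import dctn

        def dct_filter_map(frame_gray: np.ndarray, block: int = 8) -> np.ndarray:
            """Energy of the non-DC DCT coefficients in each block of the frame."""
            h, w = frame_gray.shape
            h, w = h - h % block, w - w % block
            out = np.zeros((h // block, w // block), dtype=np.float32)
            for i in range(0, h, block):
                for j in range(0, w, block):
                    c = dctn(frame_gray[i:i + block, j:j + block], norm="ortho")
                    c[0, 0] = 0.0                         # drop the DC term
                    out[i // block, j // block] = float(np.sum(c ** 2))
            return out

        cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, 2),               # two classes: pristine / tampered
        )

        frame = np.random.rand(240, 320).astype(np.float32)          # stand-in for a decoded grayscale frame
        fmap = torch.from_numpy(dct_filter_map(frame))[None, None]   # shape (1, 1, 30, 40)
        logits = cnn(fmap)                                           # per-frame tampering scores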