
    Multiparton Cwebs at five loops

    Scattering amplitudes involving multiple partons are plagued by infrared singularities. The soft singularities of the amplitude are captured by the soft function, which is defined as the vacuum expectation value of Wilson line correlators. The renormalization properties of the soft function allow us to write it as an exponential of the finite soft anomalous dimension. An efficient way to study the soft function is through a set of Feynman diagrams known as Cwebs (webs). We obtain the mixing matrices and exponentiated colour factors for all the Cwebs at five loops that connect six massless Wilson lines. Our results are the first key ingredient for the calculation of the soft anomalous dimension at five loops.
    Comment: 46 pages, 29 figures, 27 tables and 1 ancillary file.
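    As background for the terms used above (a sketch of the standard definitions from the web-exponentiation literature, not text from the paper): the soft function is a vacuum expectation value of semi-infinite Wilson lines, and its logarithm is a sum over Cwebs whose colour factors are mixed by a web mixing matrix, yielding the exponentiated colour factors.
    \[
      \mathcal{S}_n\big(\{\beta_i\}\big) = \langle 0 |\, \Phi_{\beta_1} \otimes \cdots \otimes \Phi_{\beta_n} \,| 0 \rangle,
      \qquad
      \Phi_{\beta} = \mathcal{P}\exp\!\left[\, i g_s \int_0^{\infty} dt\; \beta \cdot A(t\beta) \,\right],
    \]
    \[
      \ln \mathcal{S}_n = \sum_{W} \sum_{D,\, D' \in W} \mathcal{F}(D)\, R_{D D'}\, C(D'),
      \qquad
      \widetilde{C}(D) = \sum_{D'} R_{D D'}\, C(D'),
    \]
    where \(\mathcal{F}(D)\) and \(C(D)\) are the kinematic and colour factors of diagram \(D\) in a Cweb \(W\), \(R\) is the corresponding web mixing matrix, and \(\widetilde{C}(D)\) are the exponentiated colour factors, which the paper computes at five loops for six Wilson lines.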

    YOLO-based Segmented Dataset for Drone vs. Bird Detection for Deep and Machine Learning Algorithms

    The use of unmanned aerial vehicles (UAVs) has been rapidly increasing in both professional and recreational settings, leading to concerns about the safety and security of people and facilities. One area of research that has emerged in response to this concern is the development of detection systems for UAVs. However, many existing systems have limitations, such as detection failures or false detection of other aerial objects, including birds. To address this issue, the development of a standard dataset that provides images of both drones and birds is essential for training accurate and effective detection models. In this context, we present a dataset consisting of images of drones and birds operating in various environments. This dataset will serve as a valuable resource for researchers and developers working on UAV detection and classification systems. The dataset was created using Roboflow software, which enabled us to efficiently edit and manipulate the images using AI-assisted bounding boxes, polygons, and instance segmentation. The software supports a wide range of input and output formats, making it easy to import and export the dataset in different machine learning frameworks. To ensure the highest possible accuracy, we manually segmented each image from edge to edge, providing the YOLO model with detailed and accurate information for training. The dataset includes both training and testing sets, allowing for the evaluation of model performance and accuracy. Our dataset offers several advantages over existing datasets, including the inclusion of both drones and birds, which are commonly misclassified by detection systems. Additionally, the images in our dataset were collected in diverse environments, providing a wide range of scenarios for model training and testing. The presented dataset provides a valuable resource for researchers and developers working on UAV detection and classification systems. The inclusion of both drones and birds, as well as the diverse range of environments and scenarios, makes this dataset a unique and essential tool for training accurate and effective models. We hope that this dataset will contribute to the advancement of UAV detection and classification systems, improving safety and security in both professional and recreational settings.
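    For illustration only, a minimal training sketch using the Ultralytics YOLO segmentation models, assuming the dataset has been exported from Roboflow in YOLO format; the data.yaml path, model size, and hyperparameters below are placeholders, not values from the dataset description.

        # pip install ultralytics  (third-party package, not part of the dataset release)
        from ultralytics import YOLO

        # Start from a pretrained YOLO segmentation checkpoint (model size is a placeholder).
        model = YOLO("yolov8n-seg.pt")

        # Hypothetical path: a Roboflow YOLO export provides a data.yaml listing the
        # train/val/test image directories and the class names ("drone", "bird").
        model.train(data="drone-vs-bird/data.yaml", epochs=100, imgsz=640)

        # Evaluate detection/segmentation metrics on the held-out split.
        metrics = model.val()
        print(metrics)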

    Dissecting Self-Supervised Learning Methods for Surgical Computer Vision

    The field of surgical computer vision has undergone considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully-supervised approaches for training such models require vast amounts of annotated data, imposing a prohibitively high cost, especially in the clinical domain. Self-Supervised Learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing useful representations to be learned from unlabeled data alone. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and largely unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL - up to 7.4% on phase recognition and 20% on tool presence detection - as well as over state-of-the-art semi-supervised phase recognition approaches, by up to 14%. Further results obtained on a highly diverse selection of surgical datasets exhibit strong generalization properties. The code is available at https://github.com/CAMMA-public/SelfSupSurg.
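    As an illustration of the transfer step described above (a plain PyTorch sketch, not the authors' SelfSupSurg code; see the linked repository for the actual implementation), the following loads an SSL-pretrained ResNet-50 backbone and fine-tunes a linear head for phase recognition. The checkpoint path is hypothetical, and the 7-class head assumes the usual Cholec80 phase labels.

        import torch
        import torch.nn as nn
        from torchvision.models import resnet50

        # Backbone without supervised ImageNet weights; SSL pretraining supplies the features.
        backbone = resnet50(weights=None)

        # Hypothetical checkpoint produced by MoCo v2 / SimCLR / DINO / SwAV pretraining.
        state = torch.load("ssl_pretrained_resnet50.pth", map_location="cpu")
        backbone.load_state_dict(state, strict=False)  # strict=False: SSL projection heads are not kept

        # Replace the classification head: Cholec80 phase recognition is a 7-way problem (assumption).
        backbone.fc = nn.Linear(backbone.fc.in_features, 7)

        # Linear evaluation / low-label fine-tuning: freeze everything except the new head.
        for name, param in backbone.named_parameters():
            param.requires_grad = name.startswith("fc.")

        optimizer = torch.optim.SGD(
            (p for p in backbone.parameters() if p.requires_grad), lr=0.1, momentum=0.9
        )
        criterion = nn.CrossEntropyLoss()
        # A training loop over Cholec80 frames (dataloader not shown) would follow the usual
        # forward / loss / backward / step pattern.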