12 research outputs found

    Microsurgical anatomy of the inferior intercavernous sinus

    No full text
    Purpose: Intercavernous sinuses (ICSs) are physiological communications between the cavernous sinuses. The ICSs run between the endosteal and meningeal layers of the dura mater of the sella turcica. Whereas the anterior and posterior ICSs have been frequently described, the inferior ICS (iICS) has been less well studied in the literature; however, poor awareness of ICS anatomy can lead to serious problems during transsphenoidal, transsellar surgery. The objective of the present anatomical study was to describe the iICS in detail.
    Methods: The study was carried out over a 6-month period in a university hospital's anatomy laboratory, using brains extracted from human cadavers. The brains were injected with colored neoprene latex and dissected to study the iICS (presence or absence, shape, diameter, length, distance between the inferior and anterior ICSs, distance between the inferior and posterior ICSs, relationships, and boundaries).
    Results: Seventeen cadaveric specimens were studied, and an iICS was found in all cases (100%). The shape was variously plexiform (47.1%), filiform (35.3%), or punctiform (17.6%). The mean ± standard deviation diameter and length of the iICS were 3.75 ± 2.90 mm and 11.92 ± 2.96 mm, respectively. The mean iICS-anterior ICS and iICS-posterior ICS distances were 5.36 ± 1.99 mm and 7.03 ± 2.28 mm, respectively.
    Conclusion: The iICS has been poorly described in the literature. However, damage to the iICS during transsphenoidal, transsellar surgery could lead to serious vascular complications. A precise radiological assessment therefore appears to be essential for a safe surgical approach.

    Unilateral duplicated abducens nerve coursing through both the sphenopetroclival venous gulf and cavernous sinus: a case report

    In this anatomy report, we describe the first case of abducens nerve duplication limited to the sphenopetroclival venous gulf and the cavernous sinus. The point of division of the two duplicated roots was localized at the gulfar face of the dural porus, just distal to the single cisternal trunk of the abducens nerve, as it pierced the petroclival dura mater. In the gulfar segment, both roots traveled through a variant of Dorello's canal called the "petrosphenoidal canal" and remained separate through the posterior half of the cavernous sinus. The two roots finally fused in the anterior half of the cavernous sinus to innervate the lateral rectus muscle as a single trunk. Although many variants of the abducens nerve have been reported over recent decades, this anatomic variation had never been previously described; it enriches the continuum of abducens nerve variations reported in the literature. Awareness of this variation is crucial for neurosurgeons, especially during clival or petrosal surgical approaches used for resection of skull base chordomas.

    A Transformer-based NLP Pipeline for Enhanced Extraction of Botanical Information Using CamemBERT on French Literature

    This research investigates the untapped wealth of centuries-old French botanical literature, focusing particularly on floras, which are comprehensive guides detailing the plant species of specific regions. Despite their significance, these works remain largely unexplored in the context of AI integration. Our objective is to bridge this gap by constructing a specialized botanical French dataset sourced from the flora of New Caledonia. We propose a transformer-based Named Entity Recognition (NER) pipeline, leveraging distant supervision and CamemBERT, for the automated extraction and structuring of botanical information. The results demonstrate strong performance: for species-name extraction, the NER model achieves a precision of 0.94, recall of 0.98, and F1-score of 0.96, while for fine-grained extraction of botanical morphological terms, the CamemBERT-based NER model attains a precision of 0.93, recall of 0.96, and F1-score of 0.94. This work contributes to the exploration of valuable botanical literature by underscoring the capability of AI models to automate information extraction from complex and diverse texts.
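    The distant-supervision idea mentioned above can be illustrated by projecting a gazetteer of known species names onto tokenized text to produce BIO tags for NER training. This is a minimal sketch under stated assumptions: the gazetteer entries, tag names, and greedy longest-match strategy below are illustrative choices, not the authors' actual pipeline.

```python
# Illustrative distant supervision: project a small gazetteer of species
# names onto tokenized text to produce BIO tags usable as (weak) NER labels.
# The gazetteer entries and the SPECIES label are hypothetical examples.

def distant_supervision_tags(tokens, gazetteer):
    """Assign B-SPECIES/I-SPECIES/O tags by greedy longest-match lookup."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # Try the longest possible span first, shrinking toward one token.
        for j in range(len(tokens), i, -1):
            span = " ".join(tokens[i:j]).lower()
            if span in gazetteer:
                tags[i] = "B-SPECIES"
                for k in range(i + 1, j):
                    tags[k] = "I-SPECIES"
                i = j
                matched = True
                break
        if not matched:
            i += 1
    return tags

# Hypothetical gazetteer and sentence (whitespace-tokenized for simplicity).
gazetteer = {"araucaria columnaris", "agathis ovata"}
tokens = "Araucaria columnaris is endemic to New Caledonia .".split()
print(distant_supervision_tags(tokens, gazetteer))
# -> ['B-SPECIES', 'I-SPECIES', 'O', 'O', 'O', 'O', 'O', 'O']
```

    Tags produced this way are noisy (dictionary gaps, homonyms), which is why they typically serve as weak supervision for fine-tuning a transformer such as CamemBERT rather than as gold labels.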

    GBIF: opening up access to primary biodiversity data

    Since 2001, the international community has supported a consortium, the Global Biodiversity Information Facility (GBIF), to encourage free and open access to primary data on biodiversity (specimens in natural history collections and field observations of living organisms). As of March 2013, nearly 400 million data records were accessible via the GBIF portal (data.gbif.org), making it the major portal in this domain. GBIF interacts at the national level with other biodiversity information systems and at the international level with large environmental programs such as GEO BON.
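    As a rough illustration of the open access described above, GBIF's current public web service exposes an occurrence-search endpoint at api.gbif.org/v1. The sketch below only builds a query URL with real GBIF parameter names (scientificName, country, limit); the species name and country code are illustrative, and performing the actual HTTP call depends on the live service.

```python
# Minimal sketch of composing a GBIF occurrence-search query.
# Endpoint and parameter names follow the public GBIF API (api.gbif.org/v1);
# the species and country values here are illustrative examples.
from urllib.parse import urlencode

GBIF_OCCURRENCE_SEARCH = "https://api.gbif.org/v1/occurrence/search"

def build_occurrence_query(scientific_name, country=None, limit=20):
    """Build a GBIF occurrence-search URL for a given species name."""
    params = {"scientificName": scientific_name, "limit": limit}
    if country:
        params["country"] = country  # ISO 3166-1 alpha-2 country code
    return GBIF_OCCURRENCE_SEARCH + "?" + urlencode(params)

url = build_occurrence_query("Agathis ovata", country="NC", limit=5)
print(url)
```

    Fetching the URL (e.g., with urllib.request) returns JSON whose records can then be filtered or aggregated locally.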

    Extracting Masks from Herbarium Specimen Images Based on Object Detection and Image Segmentation Techniques

    Herbarium specimen scans constitute a valuable source of raw data. Herbarium collections are gaining interest in the scientific community, as their exploration can lead to understanding serious threats to biodiversity. Data derived from scanned specimen images can be analyzed to answer important questions such as how plants respond to climate change, how different species respond to biotic and abiotic influences, or what role a species plays within an ecosystem. However, exploiting such large collections is challenging and requires automatic processing. A promising solution lies in the use of computer-based processing techniques, such as Deep Learning (DL). But herbarium specimens can be difficult to process and analyze, as they contain several kinds of visual noise, including information labels, scale bars, color palettes, envelopes containing seeds or other organs, collection-specific barcodes, stamps, and other notes placed on the mounting sheet. Moreover, the paper on which the specimens are mounted can degrade over time for multiple reasons; often the paper's color darkens and, in some cases, approaches the color of the plants.
    Neural network models are well suited to the analysis of herbarium specimens while abstracting away such visual noise. However, in some cases the model can focus on these elements, which can eventually lead to poor generalization when analyzing new data in which these visual elements are not present (White et al. 2020). It is therefore important to remove the noise from specimen scans before using them in model training and testing, to improve the model's performance. Studies have used basic cropping techniques (Younis et al. 2018), but these do not guarantee that the visual noise is removed from the cropped image. For instance, labels are frequently placed at random positions in the scans, resulting in cropped images that still contain noise. White et al. (2020) used the Otsu binarization method followed by manual post-processing and a blurring step to adjust the pixels that should have been assigned to black during segmentation. Hussein et al. (2020) used an image-labeler application, followed by a median filtering method to reduce the noise. However, both White et al. (2020) and Hussein et al. (2020) consider only two organs: stems and leaves. Triki et al. (2022) used a polygon-based deep learning object detection algorithm. But in addition to being laborious and difficult, this approach does not give good results when it comes to fully identifying specimens.
    In this work, we aim to create clean, high-resolution mask extractions with the same resolution as the original images. These masks can be used by other models for a variety of purposes, for instance to distinguish the different plant organs. Here, we proceed by combining object detection and image segmentation techniques, using a dataset of scanned herbarium specimens. We propose an algorithm that identifies and retains the pixels belonging to the plant specimen and removes the other pixels that are part of non-plant elements considered as noise. A removed pixel is set to zero (black). Fig. 1 illustrates the complete masking pipeline in two main stages: object detection and image segmentation.
    In the first stage, we manually annotated the images using bounding boxes in a dataset of 950 images. We identified (Fig. 2) the visual elements considered to be noise (e.g., scale bar, barcode, stamp, text box, color palette, envelope). We then trained the model to automatically detect these noise elements, dividing the dataset into an 80% training set, a 10% validation set, and a 10% test set. We ultimately achieved a precision score of 98.2%, a 3% improvement over the baseline. The results of this stage were then used as input for image segmentation, which aimed to generate the final mask.
    We blackened the pixels covered by the detected noise elements, then used HSV (Hue, Saturation, Value) color segmentation to select only the pixels with values in a range that corresponds mostly to a plant color. Finally, we applied the morphological opening operation, which removes noise and separates objects, and the closing operation, which fills gaps, as described in Bhutada et al. (2022), to remove the remaining noise. The output is a generated mask that retains only the pixels belonging to the plant. Unlike other proposed approaches, which focus essentially on leaves and stems, our approach covers all the plant organs (Fig. 3).
    Our approach removes the background noise from herbarium scans and extracts clean plant images. It is an important step before using these images in different deep learning models. However, the quality of the extractions varies depending on the quality of the scans, the condition of the specimens, and the paper used. For example, extractions made from samples where the color of the plant differs from the color of the background were more accurate than extractions made from samples where the two colors are close. To overcome this limitation, we aim to use some of the obtained extractions to create a training dataset, followed by the development and training of a generative deep learning model to generate masks that delimit plants.
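    The HSV color-segmentation and morphology steps described above can be sketched as follows. This is a minimal, self-contained illustration, not the published pipeline: the hue and saturation thresholds for "plant-like" pixels and the 3x3 structuring element are assumptions, since the abstract does not give exact values.

```python
# Sketch of HSV color segmentation plus morphological opening/closing,
# using only NumPy and the standard library. Thresholds are illustrative.
import colorsys
import numpy as np

def hsv_plant_mask(rgb, hue_range=(0.15, 0.45), min_sat=0.2):
    """True where a pixel's hue is green-ish (hue on a 0-1 scale)."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y, x] / 255.0
            hue, sat, _val = colorsys.rgb_to_hsv(r, g, b)
            mask[y, x] = hue_range[0] <= hue <= hue_range[1] and sat >= min_sat
    return mask

def erode(mask):
    """3x3 binary erosion: keep a pixel only if its whole neighborhood is set."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation: set a pixel if any neighbor is set."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def morph_open(mask):   # removes small specks, separates objects
    return dilate(erode(mask))

def morph_close(mask):  # fills small gaps inside the plant region
    return erode(dilate(mask))

# Hypothetical toy scan: a 6x6 green "plant" region plus one noise speck.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[2:8, 2:8] = (0, 200, 0)   # plant-like green block
img[0, 9] = (0, 200, 0)       # isolated green speck (noise)
clean = morph_close(morph_open(hsv_plant_mask(img)))
result = img * clean[..., None]   # non-plant pixels set to zero (black)
```

    Opening removes the isolated speck while closing restores small gaps inside the retained region; multiplying the image by the final mask sets every non-plant pixel to black, as in the pipeline above. A production implementation would typically use vectorized HSV conversion and OpenCV or scikit-image morphology instead of these explicit loops.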