12 research outputs found

    A vision transformer-based framework for knowledge transfer from multi-modal to mono-modal lymphoma subtyping models

    Full text link
    Determining lymphoma subtypes is a crucial step toward better-targeted patient treatment that can potentially increase survival chances. In this context, the existing gold-standard diagnosis method, based on gene expression technology, is highly expensive and time-consuming, which limits its accessibility. Although alternative diagnosis methods based on IHC (immunohistochemistry) technologies exist (recommended by the WHO), they suffer from similar limitations and are less accurate. WSI (Whole Slide Image) analysis by deep learning models has shown promising new directions for cancer diagnosis that would be cheaper and faster than existing alternative methods. In this work, we propose a vision transformer-based framework for distinguishing DLBCL (Diffuse Large B-Cell Lymphoma) cancer subtypes from high-resolution WSIs. To this end, we propose a multi-modal architecture to train a classifier model from various WSI modalities. We then exploit this model through a knowledge distillation mechanism to efficiently drive the learning of a mono-modal classifier. Our experimental study, conducted on a dataset of 157 patients, shows the promising performance of our mono-modal classification model, which outperforms six recent state-of-the-art methods dedicated to cancer classification. Moreover, the power-law curve estimated on our experimental data shows that our classification model requires a reasonable number of additional training patients to potentially reach diagnosis accuracy identical to that of IHC technologies.
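    The multi-modal-to-mono-modal transfer described above relies on knowledge distillation. The sketch below shows a generic distillation loss in PyTorch, assuming a pre-trained multi-modal teacher and a mono-modal student; the temperature, weighting, and commented-out model calls are illustrative assumptions, not the authors' exact setup.

    ```python
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=4.0, alpha=0.7):
        """Generic KD loss: soft teacher targets + hard labels (illustrative values)."""
        # Softened teacher and student distributions.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        # KL term scaled by T^2, as in standard distillation.
        kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce

    # Hypothetical training step: the teacher sees multi-modal WSI features,
    # the student sees a single modality only.
    # with torch.no_grad():
    #     teacher_logits = teacher(multi_modal_batch)
    # student_logits = student(mono_modal_batch)
    # loss = distillation_loss(student_logits, teacher_logits, labels)
    ```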

    SHREC 2020 track: Multi-domain protein shape retrieval challenge

    Get PDF
    Proteins are natural modular objects usually composed of several domains, each domain bearing a specific function that is mediated through its surface, which is accessible to vicinal molecules. This draws attention to an understudied characteristic of protein structures: the surface, which is mostly unexploited by protein structure comparison methods. In the present work, we evaluated the performance of six shape comparison methods, three of which are based on machine learning, to distinguish between 588 multi-domain proteins and to recreate the evolutionary relationships at the protein and species levels of the SCOPe database. The six groups that participated in the challenge submitted a total of 15 sets of results. We observed that the performance of all methods decreases significantly at the species level, suggesting that shape-only protein comparison is challenging for closely related proteins. Even if the dataset is limited in size (only 588 proteins are considered, whereas more than 160,000 protein structures have been experimentally solved), we think that this work provides useful insights into the performance of current shape comparison methods and highlights possible limitations to large-scale applications due to the computational cost.

    Surface-based protein domains retrieval methods from a SHREC 2021 challenge

    Get PDF
    Journal publication following the communication hal-03467479 (SHREC 2021: surface-based protein domains retrieval). Proteins are essential to nearly all cellular mechanisms and are the effectors of the cell's activities. As such, they often interact through their surface with other proteins or with other cellular ligands such as ions or organic molecules. Evolution generates a great variety of proteins with unique abilities, but also proteins with related functions and hence similar 3D surface properties (shape, physico-chemical properties, …). Protein surfaces are therefore of primary importance for their activity. In the present work, we assess the ability of different methods to detect such similarities based on the geometry of the protein surfaces (described as 3D meshes), using either their shape only, or their shape and the electrostatic potential (a biologically relevant property of protein surfaces). Five different groups participated in this contest using the shape-only dataset, and one group extended its pre-existing method to handle the electrostatic potential. Our comparative study reveals both the ability of the methods to detect related proteins and their difficulty in distinguishing between highly related proteins. Our study also allows analyzing the putative influence of electrostatic information in addition to that of protein shape alone. Finally, the discussion compares the results of the extended method with those obtained in the previous contests. The source code of each presented method has been made available online.
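    As an illustration of the kind of shape-only comparison evaluated in these challenges, the sketch below computes a simple rotation-invariant descriptor (a histogram of pairwise vertex distances, i.e. a D2 shape distribution) from a surface mesh and compares two proteins with a histogram distance; the descriptor choice and the distance are illustrative assumptions, not any participant's method.

    ```python
    import numpy as np

    def d2_descriptor(vertices, n_pairs=20000, n_bins=64, rng=None):
        """D2 shape distribution: histogram of distances between random surface points."""
        rng = np.random.default_rng(rng)
        i = rng.integers(0, len(vertices), n_pairs)
        j = rng.integers(0, len(vertices), n_pairs)
        d = np.linalg.norm(vertices[i] - vertices[j], axis=1)
        hist, _ = np.histogram(d / d.max(), bins=n_bins, range=(0.0, 1.0), density=True)
        return hist / hist.sum()

    def chi2_distance(h1, h2, eps=1e-10):
        """Chi-squared distance between two normalized histograms."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    # Hypothetical usage: verts_a and verts_b are (N, 3) arrays of mesh vertex
    # coordinates for two protein surfaces; a lower distance means more similar shapes.
    # dist = chi2_distance(d2_descriptor(verts_a), d2_descriptor(verts_b))
    ```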

    New approaches for scheduling parallel applications under environment deployment constraints on clusters

    No full text
    This thesis considers clusters in Grid'5000, the French experimental grid platform. Grid'5000 allows researchers to submit programs (jobs) to resource managers and to associate an execution environment with each request. A cluster is a set of compute nodes connected through a dedicated network. Deploying an environment on the nodes is not without consequences: one problem is machine failure, since the excessive rebooting involved in the deployment phases can damage the machines. We therefore model scheduling with deployment on a cluster as a bicriteria scheduling problem. The first criterion to minimize is the total number of deployments over all machines; the second is the makespan. We define an algorithm called Groups List Scheduling (GLS), based on a budget approach with a relaxation of the optimality constraints. Using this approach, we define an (alpha, beta)-budget-relaxed-approximate solution for the bicriteria optimization problem. For the bicriteria scheduling problem with deployment, the GLS algorithm gives a (4, 2)-budget-relaxed-approximate solution. We then address the same problem using the Pareto curve approach and define a polynomial algorithm that builds a (4+epsilon, 2)-approximate Pareto curve from the solutions produced by GLS. An experimental analysis evaluates the performance of the GLS algorithm and validates the approximation ratios.
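    To make the bicriteria trade-off concrete, the sketch below implements a plain greedy list-scheduling heuristic in which jobs are grouped by required environment before being placed on the least-loaded machine, so that each machine counts one deployment per distinct environment it runs. This is only a generic illustration of the grouping idea, not the GLS algorithm or its approximation guarantees.

    ```python
    from collections import defaultdict
    import heapq

    def group_list_schedule(jobs, n_machines):
        """Greedy illustration; jobs = [(job_id, env, duration), ...].
        Groups jobs by environment, then assigns each job of a group to the
        currently least-loaded machine (classic list scheduling)."""
        loads = [(0.0, m) for m in range(n_machines)]   # min-heap of (load, machine)
        heapq.heapify(loads)
        schedule = defaultdict(list)                    # machine -> [(job_id, start, end)]
        envs_on_machine = defaultdict(set)              # machine -> deployed environments

        by_env = defaultdict(list)
        for job_id, env, duration in jobs:
            by_env[env].append((job_id, duration))

        for env, group in by_env.items():
            for job_id, duration in group:
                load, m = heapq.heappop(loads)
                schedule[m].append((job_id, load, load + duration))
                envs_on_machine[m].add(env)
                heapq.heappush(loads, (load + duration, m))

        makespan = max(load for load, _ in loads)
        n_deployments = sum(len(e) for e in envs_on_machine.values())
        return schedule, makespan, n_deployments

    # Example: three environments on four machines.
    # jobs = [(1, "envA", 3), (2, "envA", 2), (3, "envB", 4), (4, "envC", 1)]
    # _, makespan, deployments = group_list_schedule(jobs, n_machines=4)
    ```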

    Bi-criteria Scheduling Algorithm with Deployment in Cluster

    No full text

    Towards a Model of Car Parking Assistance System Using Camera Networks: Slot Analysis and Communication Management

    No full text
    Nowadays, finding an available parking slot in urban environments has become more and more tedious. In this paper, we present a model of a car parking assistance system based on camera networks. Such a system involves two research milestones: the visual detection of parking slots and the communication of parking information. This paper therefore presents a computer vision workflow for detecting available parking slots. A strategy is then proposed to optimally communicate the locations of the detected available slots to drivers. Finally, experimental results and evaluations show the feasibility and the potential of the proposed model.
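    As a toy illustration of the communication-management side, the sketch below assigns each querying driver the nearest parking slot currently reported free by the cameras; the data structures and the nearest-slot policy are assumptions made for illustration, not the paper's actual strategy.

    ```python
    import math

    def assign_nearest_free_slot(driver_pos, slots):
        """slots: dict slot_id -> {"pos": (x, y), "free": bool}, as reported by cameras.
        Returns the id of the closest free slot (or None) and marks it reserved."""
        best_id, best_dist = None, math.inf
        for slot_id, info in slots.items():
            if not info["free"]:
                continue
            dist = math.dist(driver_pos, info["pos"])
            if dist < best_dist:
                best_id, best_dist = slot_id, dist
        if best_id is not None:
            slots[best_id]["free"] = False  # reserve so two drivers are not sent to the same slot
        return best_id

    # Example: two slots detected free by the camera network.
    # slots = {"A1": {"pos": (0, 0), "free": True}, "B3": {"pos": (5, 2), "free": True}}
    # assign_nearest_free_slot((4, 1), slots)  # -> "B3"
    ```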

    A Coarse-to-Fine Segmentation Methodology Based on Deep Networks for Automated Analysis of Cryptosporidium Parasite from Fluorescence Microscopic Images

    No full text
    In this paper, we present a deep learning-based framework for automated analysis and diagnosis of Cryptosporidium parvum from fluorescence microscopic images. First, a coarse segmentation is applied to the original images to roughly delimit the contours of individual parasites or of grouped parasites appearing as a single object. A classifier is then applied to identify grouped parasites, which are subsequently separated from each other by a fine segmentation. Our coarse-to-fine segmentation methodology achieves high accuracy on our generated dataset (over 3,000 parasites) and improves upon the performance of direct segmentation approaches.
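    A minimal skeleton of such a coarse-to-fine pipeline is sketched below; the three model interfaces (coarse_segment, is_group, fine_segment) are hypothetical placeholders standing in for the paper's trained networks.

    ```python
    def coarse_to_fine_segmentation(image, coarse_segment, is_group, fine_segment):
        """Illustrative pipeline: coarse masks -> group classification -> fine split.
        coarse_segment(image) -> list of binary masks (one per detected object)
        is_group(image, mask) -> True if the mask covers several touching parasites
        fine_segment(image, mask) -> list of masks, one per individual parasite
        """
        individual_masks = []
        for mask in coarse_segment(image):
            if is_group(image, mask):
                # Grouped parasites: refine the coarse mask into individual instances.
                individual_masks.extend(fine_segment(image, mask))
            else:
                individual_masks.append(mask)
        return individual_masks
    ```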

    Image-based Ciphering of Video Streams and Object Recognition for Urban and Vehicular Surveillance Services

    No full text
    Nowadays, urban and vehicular surveillance systems are collecting large amounts of image data for feeding recognition systems, e.g. towards proposing localization or navigation services. In many cases, these image data cannot be directly processed in situ by the acquisition systems because of their low computational capabilities. The acquired images are transferred to remote computing servers through various computer networks, then analyzed in detail for object recognition. The objective of this paper is twofold: i) presenting image-based ciphering methods that can be efficiently applied to secure the image transfer against the consequences of image interception (e.g., man-in-the-middle attacks); ii) presenting generic image-based analysis techniques that can be exploited for object recognition. Experimental results show end-to-end image-based solutions for fostering the development of surveillance systems and services in urban and vehicular environments.
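    As a generic illustration of securing image transfer (not the specific ciphering scheme studied in the paper), the sketch below encrypts a JPEG byte stream with an authenticated cipher before sending it to a remote analysis server, assuming the `cryptography` package is available.

    ```python
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    import os

    def encrypt_frame(jpeg_bytes, key):
        """Encrypt one JPEG-encoded frame with AES-GCM (confidentiality + integrity)."""
        nonce = os.urandom(12)                     # unique nonce per frame
        ciphertext = AESGCM(key).encrypt(nonce, jpeg_bytes, associated_data=None)
        return nonce + ciphertext                  # prepend the nonce for the receiver

    def decrypt_frame(payload, key):
        """Recover the JPEG bytes on the remote analysis server."""
        nonce, ciphertext = payload[:12], payload[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, associated_data=None)

    # key = AESGCM.generate_key(bit_length=256)    # shared out of band
    # payload = encrypt_frame(open("frame.jpg", "rb").read(), key)
    # jpeg = decrypt_frame(payload, key)           # ready for object recognition
    ```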