
    NLRC5, a promising new entry in tumor immunology.

    Get PDF
    The recent use of T cell-based cancer immunotherapies, such as adoptive T-cell transfer and checkpoint blockade, is yielding increasing clinical benefit to patients with different cancer types. However, decreased MHC class I expression is a common mechanism that transformed cells exploit to evade CD8(+) T cell-mediated antitumor responses, negatively impacting the outcome of immunotherapies. Hence, there is an urgent need for novel approaches to overcome this limitation. NLRC5 has recently been described as a key transcriptional regulator controlling the expression of MHC class I molecules. In this commentary, we summarize and put into perspective a study by Rodriguez and colleagues, recently published in Oncoimmunology, addressing the role of NLRC5 in melanoma. The authors demonstrate that NLRC5 overexpression in B16 melanoma restores MHC class I expression, raising tumor immunogenicity and counteracting immune evasion. Possible ways of manipulating NLRC5 activity in tumors are discussed. Highlighting the therapeutic potential of modulating NLRC5 levels, this publication also encourages the evaluation of NLRC5, and by extension the MHC class I pathway, as a clinical biomarker for selecting personalized immunotherapeutic strategies

    Image Registration Method in Radar Interferometry

    Get PDF
    This article presents a methodology for determining the registration of an Interferometric Synthetic Aperture Radar (InSAR) image pair with half-pixel precision. Using the two superposed Single Look Complex (SLC) radar images [1-4], we developed an iterative process to superpose the two images according to their correlation coefficient over a high-coherence area. This work exploits an ERS Tandem pair of SLC radar images of the Algiers area acquired on 03 January and 04 January 1994. The former is taken as the master image and the latter as the slave image
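    The correlation-driven superposition step can be sketched as a search for the integer shift that maximizes the correlation coefficient, refined toward sub-pixel precision by a parabolic fit of the correlation peak. A minimal NumPy sketch (illustrative only: the function names and the ±4-pixel search window are assumptions, and real SLC data are complex-valued, so in practice amplitude images would be correlated):

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def coarse_offset(master, slave, search=4):
    """Integer (row, col) shift maximizing the correlation between
    master and the shifted slave within a +/- `search` pixel window."""
    best, best_rc = -2.0, (0, 0)
    n, m = master.shape
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r0, r1 = max(0, dr), min(n, n + dr)
            c0, c1 = max(0, dc), min(m, m + dc)
            rho = ncc(master[r0:r1, c0:c1],
                      slave[r0 - dr:r1 - dr, c0 - dc:c1 - dc])
            if rho > best:
                best, best_rc = rho, (dr, dc)
    return best_rc, best

def subpixel(c_minus, c_peak, c_plus):
    """Parabolic interpolation of the correlation peak along one axis:
    fractional offset in [-0.5, 0.5] added to the integer shift."""
    denom = c_minus - 2.0 * c_peak + c_plus
    return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom
```

    Applying `subpixel` along rows and columns of the correlation surface around the coarse peak yields the half-pixel refinement described in the article.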

    Infratemporal infantile myofibromatosis

    Get PDF
    Infantile myofibromatosis is a rare mesenchymal proliferative disorder of childhood (1/400,000). This benign tumoral process can involve the soft tissues, muscles, bone and, rarely, the viscera. It may present in a solitary or multicentric form. The cervicofacial region is involved in 30% of cases; in the literature, the infratemporal localization is very rare. We present the case of a four-year-old boy who presented with a recent limitation of mouth opening. Imaging (CT and MRI) suggested a malignant tumor. The diagnosis was histological. The course, followed by clinical and MRI controls, was spectacular, marked by a near-total regression of the clinical signs and imaging abnormalities from the third month onward. Keywords: infantile myofibromatosis, infratemporal

    SIMULATION-BASED DECISION MODEL TO CONTROL DYNAMIC MANUFACTURING REQUIREMENTS: APPLICATION OF GREY FORECASTING - DQFD

    Get PDF
    Manufacturing systems have to adapt to the changing requirements of their internal and external customers. In fact, new requirements may appear unexpectedly and may change multiple times. Change is a straightforward reality of production, and the engineer has to deal with a dynamic work environment. In this perspective, this paper proposes a decision model to fit actual and future process needs. The proposed model is based on dynamic quality function deployment (DQFD), the grey forecasting model GM(1,1) and the technique for order preference by similarity to ideal solution (TOPSIS). The cascading QFD-based model is used to show the applicability of the proposed methodology. The simulation results illustrate the effect of changes in manufacturing needs on strategic, operational and technical improvements
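    The GM(1,1) component can be illustrated in a few lines: the original series is accumulated, the whitening equation dx1/dt + a·x1 = b is fitted by least squares on the mean-generated sequence, and the solution is extrapolated and differenced back. A minimal sketch (function name, horizon and the toy series are illustrative; real DQFD requirement data would replace them):

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Grey forecasting model GM(1,1): fit a, b on the accumulated
    series and return `horizon` out-of-sample forecasts.
    (Assumes a != 0; a near-zero a would need special handling.)"""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])           # mean generating sequence
    B = np.column_stack((-z1, np.ones_like(z1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[n:]                        # forecasts beyond the sample
```

    For a roughly exponential demand series the one-step forecast tracks the growth rate closely, which is why GM(1,1) is a common choice for short, data-poor requirement histories.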

    Metadata Editor

    Get PDF
    Research Data Management (RDM) revolves around structured data following specific schema definitions, e.g. metadata, Application Programming Interfaces (API), etc. The direct use of these data is impractical for scientists, as some formats are not human-readable. Moreover, their integration into existing user interfaces, e.g. websites, portals, etc., is very time-consuming, represents a recurrent task, and requires deep knowledge of various technologies. Thus, a generic user interface that allows metadata management and access to research data services provides a simple entry point for scientists. In addition, validating metadata before sending a REST request to a service is an essential quality feature. For these purposes, the Metadata Editor has been developed at the “Data Exploitation Methods” (DEM) department, which is part of the computing center “Steinbuch Centre for Computing” (SCC) of Karlsruhe Institute of Technology (KIT). The editor is realized as a JavaScript library allowing the generation of web forms as well as the management of JSON metadata in an intuitive and generic way. This metadata is produced by other services and presented to the library as JSON resources. Moreover, the editor enables the validation of JSON metadata based on developer-provided JSON schemas. To fulfill these functionalities, the JSON Form library is used. In addition, the editor allows the user to perform CRUD (Create, Read, Update, Delete) operations, as well as further operations in case additional requirements need to be fulfilled. The Collection Registry, which implements the Collection API recommended by the RDA Research Data Collections Working Group and has been developed at the DEM department, was adopted as a use case to test the different functionalities of the editor.
    The editor has been successfully integrated into the Collection Registry, ensuring that the different REST requests can be executed through the provided Graphical User Interface (GUI). Moreover, JSON inputs can be given through the generated web forms and validated against a defined JSON schema. Thus, the service becomes more intuitive and easier to use for scientists, and the validation helps the user provide correct inputs. This work has been supported by the research program ‘Engineering Digital Futures’ of the Helmholtz Association of German Research Centers and the Helmholtz Metadata Collaboration Platform
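    The idea of validating a document against a schema before it ever reaches the service can be sketched with a small stdlib-only checker. This is a deliberately simplified stand-in for a full JSON Schema validator, covering only `required` and `type`; the field names in the demo schema are hypothetical:

```python
# Simplified stand-in for JSON Schema validation (required + type only).
_TYPES = {"string": str, "integer": int, "number": (int, float),
          "boolean": bool, "object": dict, "array": list}

def validation_errors(doc, schema):
    """Return a list of error strings; an empty list means the document
    may be submitted. (Note: Python bools pass the integer check here;
    a full validator distinguishes them.)"""
    errors = []
    for field in schema.get("required", []):
        if field not in doc:
            errors.append(f"missing required property: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in doc and not isinstance(doc[field], _TYPES[spec["type"]]):
            errors.append(f"wrong type for {field}: expected {spec['type']}")
    return errors

# Hypothetical demo schema for a metadata record.
demo_schema = {"required": ["title"],
               "properties": {"title": {"type": "string"},
                              "year": {"type": "integer"}}}
```

    Only when `validation_errors` returns an empty list would the client issue the actual REST request, which is the gatekeeping role the editor plays in front of services such as the Collection Registry.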

    Contribution to the modeling of cooperation approaches in e-maintenance.

    No full text
    This work addresses problems related to modeling the cooperation between different agents (experts) located in different places, to help diagnose and repair failures in an e-maintenance context. We propose ways of organizing the cooperation between the experts and of making the information needed for their diagnosis sufficiently available, taking into account several factors including the availability of the experts, the heterogeneous nature of the communication means, and the evolution over time of the state of the installations to be maintained. For this, we adopted a methodology based on Trenteseau's models, also using Petri nets and simulation. Simulation results for different situations are presented and discussed with respect to the adopted hypotheses and performance indicators
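    The Petri-net side of such a methodology reduces to markings (token counts per place) and transition firing. A minimal sketch, where the places and the transition are hypothetical illustrations rather than the paper's actual model:

```python
def enabled(marking, pre):
    """A transition may fire when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume `pre` tokens, produce `post` tokens, return the new marking."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical example: an available expert picks up a diagnosis request.
m0 = {"expert_free": 1, "request": 1}
assign_pre = {"expert_free": 1, "request": 1}
assign_post = {"diagnosing": 1}
```

    Simulating cooperation scenarios then amounts to repeatedly selecting enabled transitions and firing them while logging the performance indicators of interest.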

    MetaStore - Managing Metadata for Digital Objects

    Get PDF
    MetaStore is a metadata repository for managing metadata documents. It supports communities in storing metadata documents following a predefined schema. It is therefore an important building block for more precise automated evaluation and/or retrieval of digital objects. With the help of the metadata documents, digital objects can also be evaluated and compared according to content-related aspects. XML and JSON are very common data formats for such machine-interpretable documents. However, they are only meaningful if they adhere to a certain structure and are correctly filled in. MetaStore supports XML Schema and JSON Schema as definitions of the document structure. It allows registering your own and/or existing schemas in these two formats to ensure that documents have the appropriate structure. When ingesting metadata documents, the structure is checked and invalid documents are rejected. All valid documents are assigned a persistent identifier and can be automatically indexed for search. Public documents can be harvested via a standardized protocol (OAI-PMH). The supplied web interface also provides a low-threshold entry point for managing documents and allows documents to be created and edited without additional tools. This work has been supported by the research program ‘Engineering Digital Futures’ of the Helmholtz Association of German Research Centers and the Helmholtz Metadata Collaboration Platform
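    Harvesting public documents via OAI-PMH boils down to standard HTTP requests and XML parsing. A minimal stdlib sketch of a harvester's parsing side (the base URL and identifiers are hypothetical; a real harvester would also follow `resumptionToken`s for large result sets):

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

def list_records_url(base_url, metadata_prefix="oai_dc"):
    """Build a ListRecords request URL per the OAI-PMH 2.0 specification."""
    return f"{base_url}?verb=ListRecords&metadataPrefix={metadata_prefix}"

def record_identifiers(response_xml):
    """Extract the record identifiers from a ListRecords response body."""
    root = ET.fromstring(response_xml)
    return [h.findtext(OAI + "identifier") for h in root.iter(OAI + "header")]
```

    Fetching `list_records_url(...)` and feeding the body to `record_identifiers` is all a client needs to enumerate the harvestable documents.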

    RDA Collection Registry Adoption

    Get PDF
    During their career, probably every scientist comes to a point where an aggregation of potentially diverse resources is required, be it measurements of the same sample using different instruments, linking a publication to a certain dataset, or providing data, source code and analysis results in a citable manner. The RDA Working Group on Research Data Collections (RDC) has identified the need for leveraging such aggregations in a unified, cross-community way: each research data management platform offers aggregations in some form, but usually without paying much attention to the interoperability and reusability of such resource collections. To close this gap, the RDC WG formulated recommendations on how to build cross-community research data collection registries, taking into account other RDA recommendations such as PID Information Types and Type Registries. The final recommendation was published in 2017 together with some first adoptions. At Karlsruhe Institute of Technology, the department “Data Exploitation Methods” (DEM), which is part of the local computing center “Steinbuch Centre for Computing”, took up the recommendations and implemented a fully featured Collection Registry that is meanwhile available in version 1.1. The poster describes the adoption and full implementation of the Collection Registry, which is based on the output of the RDA Working Group on Research Data Collections. It introduces the current implementation of the recommendation and elaborates on adaptations, e.g. ETag support and pagination, made compared to the final output of the WG. Furthermore, the poster presents possible usage scenarios and future plans involving the use of the Collection Registry. This work has been supported by the research program ‘Engineering Digital Futures’ of the Helmholtz Association of German Research Centers and the Helmholtz Metadata Collaboration Platform
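    The ETag and pagination adaptations follow common REST conventions: a conditional GET carrying `If-None-Match` lets a client reuse a cached collection when the server answers 304 Not Modified, and result pages are chained via a Link header or page parameters. A small sketch of the client-side plumbing (the Link header format shown is the RFC 5988 style; the registry's actual pagination parameters may differ):

```python
import re

def conditional_headers(cached_etag):
    """Headers for a conditional GET: the server replies 304 Not Modified
    if the resource still matches the cached ETag, saving a transfer."""
    return {"If-None-Match": cached_etag} if cached_etag else {}

def next_page_url(link_header):
    """Return the rel="next" target of an HTTP Link header, or None."""
    if not link_header:
        return None
    m = re.search(r'<([^>]+)>\s*;\s*rel="next"', link_header)
    return m.group(1) if m else None
```

    A client would loop: issue a GET with `conditional_headers`, process the page on 200, skip it on 304, and continue until `next_page_url` returns None.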

    Automatic Grading of Diabetic Retinopathy on a Public Database

    Get PDF
    With the growing diabetes epidemic, retina specialists have to examine a tremendous number of fundus images for the detection and grading of diabetic retinopathy (DR). In this study, we propose a first automatic grading system for DR. First, red lesion detection is performed to generate a lesion probability map. The map is then represented by 35 features combining location, size and probability information, which are finally used for classification. A leave-one-out cross-validation using a random forest is conducted on a public database of 1200 images to classify the images into 4 grades. The proposed system achieved a classification accuracy of 74.1% and a weighted kappa value of 0.731, indicating significant agreement with the reference. These preliminary results show that automatic DR grading is feasible, with a performance comparable to that of human experts
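    The weighted kappa agreement measure used in the evaluation can be computed directly from the confusion matrix of true versus predicted grades. A sketch of the quadratic-weighted variant commonly used for ordinal grading tasks like DR (the study may use different weights):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=4):
    """Cohen's kappa with quadratic weights: disagreements are penalized
    by the squared distance between grades, normalized by chance agreement."""
    O = np.zeros((n_classes, n_classes))          # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    i, j = np.mgrid[0:n_classes, 0:n_classes]
    W = (i - j) ** 2 / (n_classes - 1) ** 2       # quadratic penalty weights
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # chance matrix
    return 1.0 - (W * O).sum() / (W * E).sum()
```

    Perfect agreement yields 1.0, chance-level agreement 0.0, so a value of 0.731 indicates substantial agreement with the reference grading.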