32 research outputs found

    Supplement: "Localization and broadband follow-up of the gravitational-wave transient GW150914" (2016, ApJL, 826, L13)

    This Supplement provides supporting material for Abbott et al. (2016a). We briefly summarize past electromagnetic (EM) follow-up efforts as well as the organization and policy of the current EM follow-up program. We compare the four probability sky maps produced for the gravitational-wave transient GW150914, and provide additional details of the EM follow-up observations that were performed in the different bands.

    Localization and broadband follow-up of the gravitational-wave transient GW150914

    A gravitational-wave (GW) transient was identified in data recorded by the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) detectors on 2015 September 14. The event, initially designated G184098 and later given the name GW150914, is described in detail elsewhere. By prior arrangement, preliminary estimates of the time, significance, and sky location of the event were shared with 63 teams of observers covering radio, optical, near-infrared, X-ray, and gamma-ray wavelengths with ground- and space-based facilities. In this Letter we describe the low-latency analysis of the GW data and present the sky localization of the first observed compact binary merger. We summarize the follow-up observations reported by 25 teams via private Gamma-ray Coordinates Network circulars, giving an overview of the participating facilities, the GW sky localization coverage, the timeline, and depth of the observations. As this event turned out to be a binary black hole merger, there is little expectation of a detectable electromagnetic (EM) signature. Nevertheless, this first broadband campaign to search for a counterpart of an Advanced LIGO source represents a milestone and highlights the broad capabilities of the transient astronomy community and the observing strategies that have been developed to pursue neutron star binary merger events. Detailed investigations of the EM data and results of the EM follow-up campaign are being disseminated in papers by the individual teams.
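
    The sky localizations referred to above are shared as HEALPix all-sky probability maps. As a minimal, hedged sketch of how an observer might use one, the snippet below computes the 90% credible area and the probability captured by a single telescope pointing; it assumes the healpy library and a hypothetical file name and pointing, and it is an illustration rather than the collaboration's low-latency localization pipelines.

```python
# Sketch: credible area and footprint coverage from a HEALPix probability map.
# The file name, pointing coordinates, and field radius are hypothetical.
import numpy as np
import healpy as hp

def credible_area(prob, level=0.9):
    """Sky area (deg^2) of the smallest region containing `level` of the probability."""
    nside = hp.npix2nside(len(prob))
    pix_area = hp.nside2pixarea(nside, degrees=True)
    order = np.argsort(prob)[::-1]           # most probable pixels first
    cum = np.cumsum(prob[order])
    n_pix = np.searchsorted(cum, level) + 1  # pixels needed to reach the level
    return n_pix * pix_area

def footprint_coverage(prob, ra_deg, dec_deg, radius_deg):
    """Probability enclosed by a circular footprint centred on (ra, dec)."""
    nside = hp.npix2nside(len(prob))
    vec = hp.ang2vec(ra_deg, dec_deg, lonlat=True)
    pix = hp.query_disc(nside, vec, np.radians(radius_deg))
    return prob[pix].sum()

if __name__ == "__main__":
    # "gw_skymap.fits.gz" is a placeholder for any HEALPix probability map.
    prob = hp.read_map("gw_skymap.fits.gz")
    print("90%% credible area: %.0f deg^2" % credible_area(prob))
    print("Probability in a 3 deg field at (RA=120, Dec=-70): %.3f"
          % footprint_coverage(prob, 120.0, -70.0, 3.0))
```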

    Localization and broadband follow-up of the gravitational-wave transient GW150914

    A gravitational-wave transient was identified in data recorded by the Advanced LIGO detectors on 2015 September 14. The event candidate, initially designated G184098 and later given the name GW150914, is described in detail elsewhere. By prior arrangement, preliminary estimates of the time, significance, and sky location of the event were shared with 63 teams of observers covering radio, optical, near-infrared, X-ray, and gamma-ray wavelengths with ground- and space-based facilities. In this Letter we describe the low-latency analysis of the gravitational wave data and present the sky localization of the first observed compact binary merger. We summarize the follow-up observations reported by 25 teams via private Gamma-ray Coordinates Network Circulars, giving an overview of the participating facilities, the gravitational wave sky localization coverage, the timeline and depth of the observations. As this event turned out to be a binary black hole merger, there is little expectation of a detectable electromagnetic signature. Nevertheless, this first broadband campaign to search for a counterpart of an Advanced LIGO source represents a milestone and highlights the broad capabilities of the transient astronomy community and the observing strategies that have been developed to pursue neutron star binary merger events. Detailed investigations of the electromagnetic data and results of the electromagnetic follow-up campaign will be disseminated in the papers of the individual teams.

    Indexing the patient record file: a 15-year experience in a French hospital

    Mining and fusion of medical data

    The growth in the number and volume of information sources, the structural diversity of this information, and the increasing need for rapid access to relevant knowledge for decision-making mean that a data mining system must be able, on the one hand, to handle various types of data (symbolic, numerical, alphanumeric, etc.) and, on the other hand, to propose new knowledge by fusing a multitude of heterogeneous data. In the medical domain, the work presented here applies a data mining model to a case base in order to build a knowledge base on which a decision-support system relies. This system makes it possible to retrieve the type of a lesion from a symbolic description of that lesion.
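
    As a minimal sketch of the kind of similarity-based decision support described above, the snippet below retrieves a lesion type from a symbolic description by comparing it against a small case base. The attributes, weights, and cases are hypothetical illustrations, not the actual knowledge base mined in this work.

```python
# Sketch: retrieve a lesion type from a symbolic description via a case base.
from collections import Counter

# Hypothetical case base mined from past examinations.
CASE_BASE = [
    ({"location": "stomach", "shape": "round", "color": "red", "relief": "raised"}, "polyp"),
    ({"location": "esophagus", "shape": "linear", "color": "white", "relief": "flat"}, "scar"),
    ({"location": "stomach", "shape": "irregular", "color": "red", "relief": "excavated"}, "ulcer"),
]

# Hypothetical weights balancing the importance of each symbolic descriptor.
WEIGHTS = {"location": 2.0, "shape": 1.0, "color": 1.0, "relief": 1.5}

def similarity(query, case):
    """Weighted proportion of symbolic attributes on which query and case agree."""
    total = sum(WEIGHTS.values())
    matched = sum(w for attr, w in WEIGHTS.items() if query.get(attr) == case.get(attr))
    return matched / total

def suggest_lesion_type(query, k=3):
    """Rank the case base and take a similarity-weighted vote among the k closest cases."""
    ranked = sorted(CASE_BASE, key=lambda c: similarity(query, c[0]), reverse=True)[:k]
    votes = Counter()
    for case, label in ranked:
        votes[label] += similarity(query, case)
    return votes.most_common(1)[0][0]

print(suggest_lesion_type({"location": "stomach", "shape": "round",
                           "color": "red", "relief": "raised"}))
```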

    Reversible watermarking for knowledge digest embedding and reliability control in medical images

    To improve medical image sharing in applications such as e-learning or remote diagnosis aid, we propose to make the image more usable by watermarking it with a digest of its associated knowledge. The aim of such a knowledge digest (KD) is for it to be used for retrieving similar images with either the same findings or differential diagnoses. It summarizes the symbolic descriptions of the image, the symbolic descriptions of the findings semiology, and the similarity rules that contribute to balancing the importance of previous descriptors when comparing images. Instead of modifying the image file format by adding some extra header information, watermarking is used to embed the KD in the pixel gray-level values of the corresponding images. When shared through open networks, watermarking also helps to convey reliability proofs (integrity and authenticity) of an image and its KD. The value of these new image functionalities is illustrated by the updating of the distributed users' databases within the framework of an e-learning application demonstrator of endoscopic semiology.
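
    The snippet below is a toy illustration of the general idea: a knowledge digest and an integrity proof are hidden in the pixel gray levels, and the hash is checked on extraction. It uses plain LSB substitution for brevity, which, unlike the scheme described above, is not reversible; the digest content and the image are hypothetical.

```python
# Toy sketch: hide a knowledge digest (KD) plus a SHA-256 integrity hash in
# image LSBs. Plain LSB substitution, NOT the reversible scheme of the paper.
import hashlib
import numpy as np

def embed(image, message: bytes):
    """Write `message`, followed by its SHA-256 digest, into the pixel LSBs."""
    payload = message + hashlib.sha256(message).digest()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten()
    if bits.size > flat.size:
        raise ValueError("image too small for this payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape), len(message)

def extract(image, msg_len: int):
    """Read the message back and verify its integrity hash."""
    n_bits = (msg_len + 32) * 8
    bits = image.flatten()[:n_bits] & 1
    payload = np.packbits(bits).tobytes()
    message, digest = payload[:msg_len], payload[msg_len:]
    if hashlib.sha256(message).digest() != digest:
        raise ValueError("integrity check failed: image or KD was altered")
    return message

# Usage with a hypothetical 8-bit grayscale image and a tiny symbolic digest.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
kd = b"lesion:ulcer;location:stomach;shape:irregular"
marked, n = embed(img, kd)
assert extract(marked, n) == kd
```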

    iRMA: a Web Interface for ICD10 code and context retrieval

    Introduction: Most European countries require indexing of the hospital discharge abstract. Physicians and clinical coders, who are legally responsible for this task, need the simplest way to quickly retrieve the right code, especially in a decentralized coding environment. The context of a disease includes definitions, multiple labels, inclusion/exclusion notes, and its position in the ICD10 tree structure. Despite this richness, the information contained in the ICD10 hard copy is difficult to access in current practice. This project brings out associations between time and disease and between disease and aetiology in order to enrich the disease context. The aim is to improve code retrieval and navigation between all these notions using web technologies.

    Methods: The iRMA data structure is based on a tree representation. Every node contains a code field associated with a text field, and, for each node, an ordered set of linked information forms its siblings. Applied to ICD10, a node contains a code, a label, and a set of linked information covering all analytical volume notes, alphabetical volume labels, and local coding booklets. The same information can be linked to several nodes, and each information unit has its own class in order to categorize its meaning and presentation. The meaning of a node is formed by accumulating information from the root to the current position along the branch; for example, the information about a node mentioning the liver differs according to whether it sits on the malignant neoplasm, benign neoplasm, or intestinal infectious disease branch. The model is implemented as a relational database with Microsoft Access. As the classification tends to change quite rapidly, an interface allows its maintenance, either by manual updating or by automated data import. Files from the French official organization in charge of distributing classifications, the Agence Technique de l'Information Hospitalière, are used as the source. Context building consists in looking for links between ICD10 codes. The studied relationships are, on the one hand, disease and aetiology and, on the other hand, disease, sequelae, and personal/familial previous history. Most of the links for dagger and star codes are supplied by WHO-FIC; the remainder is obtained by parsing text data with the PHP programming language. We then produce a list of associations represented as couples, each typed according to the studied relationship. The search tool is implemented in HTML/PHP. Two search levels have been defined: the standard mode only looks in the ICD10 analytical volume data, whereas the extended search also looks in the ICD alphabetical index and local data. Assessment of code retrieval was carried out on the most recent real requests from physicians and clinical coders. An ICD code had previously been attributed to each request by an expert coder. Each request was successively used as query terms in the standard and extended search modes. The response was considered a success if the expert's code was retrieved. The total number of returned codes was recorded, and the mean recall and precision rates were then computed.

    Results: One hundred requests were analysed. The mean number of propositions per query was 1.1 ± 2.7 in standard mode and 2.9 ± 4.7 in extended mode (mean ± standard error). The query returned no code proposition for 52 and 12 requests, respectively. The recall rate was 38% ± 49% in standard mode and 76% ± 43% in extended mode, and the precision rate was 60% ± 43% in standard mode and 51% ± 38% in extended mode. The interface is currently in use at Brest University Hospital and is also available on the internet at http://i3se009d.univ-brest.fr/IRMA/CIM10/CIM10.php. The user can browse the ICD10 tree structure and can also search by single or multiple terms and/or by code; the two search modes are offered separately according to user needs. Hits are displayed according to the ICD10 hierarchy, with icons and a colour list highlighting the code weight under the French hospital payment system. Clicking on a code displays its whole context, and hyperlinks allow switching from one context to another.

    Conclusions: This data model can be extended to other healthcare classifications. The web application shows encouraging results: first evaluations demonstrated the effectiveness of the method, with a recall rate twice as high in extended mode despite a lower precision. Additional terms from the local booklets improve appropriation by the local staff. Further developments are already planned, such as including other information sources, implementing a query follow-up tool, and improving the code context with multilingual labels.
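
    As a minimal sketch of the tree model described in the Methods above, the snippet below shows nodes carrying a code, a label, and typed linked information units, with a node's context accumulated along the branch from the root. The class and field names are hypothetical; the actual iRMA system is a relational database (Microsoft Access) queried through an HTML/PHP interface.

```python
# Sketch: ICD10-like tree where a node's context accumulates from the root.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InfoUnit:
    kind: str    # e.g. "inclusion-note", "exclusion-note", "alpha-index", "local-booklet"
    text: str

@dataclass
class Node:
    code: str
    label: str
    info: List[InfoUnit] = field(default_factory=list)
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

    def add_child(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

    def context(self) -> List[InfoUnit]:
        """Accumulate linked information from the root down to this node."""
        chain = self.parent.context() if self.parent else []
        return chain + self.info

# Hypothetical fragment: the same "liver" notion means different things on
# different branches, which is exactly what branch-wise context captures.
root = Node("ICD10", "ICD-10")
c22 = root.add_child(Node("C22", "Malignant neoplasm of liver",
                          [InfoUnit("inclusion-note", "includes intrahepatic bile ducts")]))
c220 = c22.add_child(Node("C22.0", "Liver cell carcinoma"))
for unit in c220.context():
    print(unit.kind, "->", unit.text)
```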

    Computer-Assisted Diagnosis System in Digestive Endoscopy

    The purpose of this paper is to present an intelligent atlas of indexed endoscopic lesions that could be used in computer-assisted diagnosis as reference data. The development of such a system requires a mix of medical and engineering skills for analyzing and reproducing the cognitive processes that underlie the medical decision-making process. The analysis of both endoscopists' experience and the endoscopic terminologies developed by professional associations shows that diagnostic reasoning in digestive endoscopy uses a scene-object approach. The objects correspond to the endoscopic findings and the medical context of the examination, and the scene to the endoscopic diagnosis. According to expert assessment, the classes of endoscopic findings and diagnoses, their primitive characteristics (or indices), and their relationships have been listed. Each class describes an endoscopic finding or diagnosis in an intensive way. The retrieval method is based on a similarity metric that estimates the membership value of the case under investigation with respect to the prototype of the class. A simulation test with randomized objects demonstrates a good classification of endoscopic findings: the correct class is the unique response for 68% of the tested objects and the first of multiple responses for 28%. Four descriptors are shown to be of major importance in the classification algorithm: anatomic location, shape, color, and relief. At the present time, the application database contains approximately 150 endoscopic images and is accessible via the Internet. Experiments are in progress with endoscopists for the validation of the system and for the understanding of the similarity between images. The next step will integrate the system into a learning tool for junior endoscopists.
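
    As a minimal sketch of the prototype-based retrieval described above, the snippet below estimates a membership value for a case against intensively described class prototypes using the four major descriptors (anatomic location, shape, color, relief). The prototypes, admissible values, and weights are hypothetical illustrations, not the atlas's reference data.

```python
# Sketch: rank endoscopic finding classes by membership of a case in each prototype.

# Class prototypes: for each descriptor, the set of admissible symbolic values.
PROTOTYPES = {
    "gastric ulcer": {"location": {"stomach"}, "shape": {"round", "oval"},
                      "color": {"white", "yellow"}, "relief": {"excavated"}},
    "polyp":         {"location": {"stomach", "colon"}, "shape": {"round"},
                      "color": {"red", "pink"}, "relief": {"raised"}},
}

# The four descriptors reported as most discriminant, with illustrative weights.
WEIGHTS = {"location": 0.35, "shape": 0.25, "color": 0.2, "relief": 0.2}

def membership(case: dict, prototype: dict) -> float:
    """Weighted degree to which the case's descriptor values fit the prototype."""
    return sum(w for d, w in WEIGHTS.items() if case.get(d) in prototype.get(d, set()))

def classify(case: dict):
    """Rank all classes by membership value, best first."""
    return sorted(((membership(case, p), name) for name, p in PROTOTYPES.items()),
                  reverse=True)

case = {"location": "stomach", "shape": "round", "color": "white", "relief": "excavated"}
for score, name in classify(case):
    print(f"{name}: {score:.2f}")
```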