70 research outputs found

    Oxygen Deficiency Hazard (ODH) Monitoring System in the LHC Tunnel

    The Large Hadron Collider (LHC), presently under construction at CERN, will contain about 100 tons of helium, mostly located in equipment in the underground tunnel and caverns. Potential failure modes of the accelerator that may be followed by helium discharge into the tunnel have been identified and the corresponding helium flows calculated [1, 2, 3]. In case of a helium discharge in the tunnel causing oxygen deficiency, personnel working in the tunnel must be warned so that they can evacuate safely. This paper describes an oxygen deficiency monitoring system whose design is based on two parameters: the limited visibility due to the LHC tunnel curvature, and the acceptable delay between the failure and the activation of the system.

    An improved model for joint segmentation and registration based on linear curvature smoother

    Image segmentation and registration are two of the most challenging tasks in medical imaging. They are closely related, and both are often required simultaneously. In this article, we present an improved variational model for joint segmentation and registration based on active contours without edges and the linear curvature model. The proposed model allows large deformations to occur, thereby resolving the difficulties that other joint segmentation and registration models face: handling multiple objects in an image, strong dependence on the initialisation, or the need for a pre-registration step, which affects the segmentation results. Through different numerical results, we show that the proposed model gives correct registration results when the object to be segmented contains different interior features, or features with clear boundaries but without fine detail, cases with which the previous model cannot cope.

    Cryogenic and vacuum sectorisation of the LHC arcs

    Following the recommendation of the LHC TC of June 20th, 1995 to introduce a separate cryogenic distribution line (QRL), which opened the possibility of a finer cryogenic and vacuum sectorisation of the LHC machine than the original 8-arc scheme, a working group was set up to study the implications: technical feasibility, advantages and drawbacks, as well as the cost of such a sectorisation (DG/DI/LE/dl, 26 July 1995). This report presents the conclusions of the Working Group. In the LHC Conceptual Design Report, ref. CERN/AC/95-05 (LHC), 20 October 1995, the so-called "Yellow Book", a complete cryostat arc (~2.9 km) would have to be warmed up in order to replace a defective cryomagnet. Even by coupling the two large refrigerators feeding adjacent arcs at even points to speed up the warm-up and cool-down of one arc, the minimum down-time of the machine needed to replace a cryomagnet would be more than a full month (and even 52 days with only one cryoplant). Cryogenic and vacuum sectorisation of an arc into smaller sectors is technically feasible and would allow the down-times to be reduced considerably (by one to three weeks with four sectors of 750 m in length, with two or one cryoplants respectively). In addition, sectorisation of the arcs may permit more flexible quality control and commissioning of the main machine systems, including cold testing of small magnet strings. Sectorisation, described in detail in the following paragraphs, consists essentially of installing several additional cryogenic and vacuum valves as well as some insulation vacuum barriers. Additional cryogenic valves are needed in the return lines of the circuits feeding each half-cell in order to complete the isolation of the cryoline QRL from the machine, allowing intervention (i.e. venting to atmospheric pressure) on machine sectors without affecting the rest of an arc.
Secondly, and for the same purpose, special vacuum and cryogenic valves must be installed at the boundaries of machine sectors for the circuits not passing through the cryoline QRL. Finally, some additional vacuum barriers must be installed around the magnet cold masses to divide the insulation vacuum of the magnet cryostats into independent sub-sectors, permitting the cryogenically floating cold masses to be kept under insulating vacuum while a sector (or part of it) is warmed up and opened to atmosphere. A reasonable sectorisation scenario, namely four 650-750 m long sectors per arc, each consisting of 3 or 4 insulation vacuum sub-sectors with two to four half-cells, would represent an additional total cost of about 6.6 MCHF for the machine. It is estimated that this capital investment would be paid off by the time savings of fewer than three long unscheduled interventions, such as the replacement of a cryomagnet.

    A Simplified Cryogenic Distribution Scheme for the Large Hadron Collider

    The Large Hadron Collider (LHC), currently under construction at CERN, will make use of superconducting magnets operating in superfluid helium below 2 K. The reference cryogenic distribution scheme was based, in each 3.3 km sector served by a cryogenic plant, on a separate cryogenic distribution line feeding elementary cooling loops corresponding to the length of a half-cell (53 m). In order to decrease the number of active components, cryogenic modules, and jumper connections between the distribution line and the magnet strings, a simplified cryogenic scheme is now implemented, based on cooling loops corresponding to the length of a full cell (107 m) and compatible with the LHC requirements. Performance and redundancy limitations are discussed with respect to the previous scheme and balanced against the potential cost savings.

    Mechanical design and layout of the LHC standard half-cell

    The LHC Conceptual Design Report issued on 20th October 1995 [1] introduced significant changes to some fundamental features of the LHC standard half-cell, composed of one quadrupole, three dipoles, and a set of corrector magnets. A separate cryogenic distribution line has been adopted, containing most of the distribution lines previously installed inside the main cryostat. The dipole length has been increased from 10 to 15 m, and independent powering of the focusing and defocusing quadrupole magnets has been chosen. Individual quench protection diodes were introduced in the magnet interconnects, and many auxiliary bus bars were added to feed in series the various families of superconducting corrector magnets. The various highly intricate basic systems (cryostats and cryogenic feeders, the superconducting magnets with their electrical powering and protection, the vacuum beam screen and its cooling, and the support and alignment devices) have been redesigned, taking into account the very tight space available. These space constraints are imposed by the desire to obtain the maximum integral bending field strength, for maximum LHC energy, in the existing LEP tunnel. Finally, cryogenic and vacuum sectorisation has been introduced to reduce down-times and facilitate commissioning.

    A variational joint segmentation and registration framework for multimodal images

    Image segmentation and registration are closely related image processing techniques, often required as simultaneous tasks. In this work, we introduce an optimization-based approach to a joint registration and segmentation model for multimodal image deformation. The model combines an active contour variational term with a mutual information (MI) fitting term, thereby resolving the difficulties of simultaneously performed segmentation and registration models for multimodal images. This combination takes into account the image structure boundaries and the movement of the objects, leading to a robust dynamic scheme that links object boundary information as it changes over time. Comparison of our model with the state of the art shows that our method leads to more consistent registrations and more accurate results.
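    The mutual information fitting term mentioned in this abstract is typically estimated from a joint intensity histogram. The sketch below is a minimal NumPy illustration of that estimate; the function name, bin count, and test images are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two images from a joint
    intensity histogram (a common fitting term in multimodal registration)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability
    px = pxy.sum(axis=1, keepdims=True)    # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of img_b
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# An image shares more information with itself than with unrelated noise.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
print(mutual_information(a, a) > mutual_information(a, rng.random((64, 64))))
```

    Maximising such a term over a deformation aligns structures that co-occur in intensity, without assuming the two modalities share the same intensity scale.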

    How Much Research Shared On Facebook Happens Outside Of Public Pages And Groups? A Comparison of Public and Private Online Activity around PLOS ONE Papers

    Despite its undisputed position as the biggest social media platform, Facebook has never entered the main stage of altmetrics research. In this study, we argue that this lack of attention from altmetrics researchers is due, in part, to the challenges of collecting Facebook data on activity that takes place outside of public pages and groups. We present a new method of collecting aggregate counts of shares, reactions, and comments across the platform, including users' personal timelines, and use it to gather data for all articles published between 2015 and 2017 in the journal PLOS ONE. We compare the gathered data with altmetrics collected and aggregated by Altmetric. The results show that 58.7% of the sharing of papers on Facebook happens outside of public spaces and that, when all shares are counted, the volume of activity approximates the patterns of engagement previously observed only for Twitter. Both results suggest that the role and impact of Facebook as a medium for science and scholarly communication have been underestimated. Furthermore, they emphasize the importance of openness and transparency in the collection and aggregation of altmetrics.

    Appearance-driven conversion of polygon soup building models with level of detail control for 3D geospatial applications

    In many 3D applications, building models in polygon-soup representation are commonly used for visualization purposes, for example in movies and games. Their appearance is fine; geometrically, however, they may have limited connectivity information and internal intersections between their parts. They are therefore not well suited for direct use in 3D geospatial applications, which usually require geometric analysis. For an input building model in polygon-soup representation, we propose a novel appearance-driven approach to interactively convert it into a two-manifold model, which is better suited to 3D geospatial applications. In addition, the level of detail (LOD) can be controlled interactively during the conversion. Because a model in polygon-soup representation is ill-suited to geometric analysis, the main idea of the proposed method is to extract the visual appearance of the input building model and use it to facilitate the conversion and LOD generation. Silhouettes are extracted and used to identify the features of the building. Then, according to the locations of these features, horizontal cross-sections are generated, and adjacent pairs of cross-sections are connected to reconstruct the building. The LOD is controlled by processing the features on the silhouettes and horizontal cross-sections using a 2D approach. We also propose facilitating the conversion and LOD control by integrating a variety of rasterization methods. The results of our experiments demonstrate the effectiveness of our method.
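    The step of connecting adjacent horizontal cross-sections can be pictured as lofting matched polygon rings into triangles. The sketch below is a deliberately simplified illustration assuming equal vertex counts per ring; the function name and this matching scheme are assumptions, not the authors' exact algorithm:

```python
def loft_sections(lower, upper):
    """Connect two adjacent horizontal cross-sections (closed polygons with
    the same vertex count) into a watertight band of triangles."""
    assert len(lower) == len(upper)
    n = len(lower)
    tris = []
    for i in range(n):
        j = (i + 1) % n  # wrap around the closed polygon ring
        tris.append((lower[i], lower[j], upper[i]))
        tris.append((upper[i], lower[j], upper[j]))
    return tris

# Two unit squares at heights z=0 and z=1 give the side walls of a box.
square_z0 = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
square_z1 = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print(len(loft_sections(square_z0, square_z1)))  # 8 triangles
```

    Because each edge of the band is shared by exactly two triangles, stacking such bands yields the two-manifold surface the conversion is after.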

    Towards automatic optical inspection of soldering defects

    This paper proposes a method for the automatic image-based classification of solder joint defects in the context of Automatic Optical Inspection (AOI) of Printed Circuit Boards (PCBs). Machine-learning-based approaches are frequently used for image-based inspection. A main challenge, however, is manually creating labeled training databases large enough to allow high defect-detection accuracy. Creating such large training databases is time-consuming, expensive, and often unfeasible in industrial production settings. To address this problem, an active learning framework is proposed that starts with only a small labeled subset of the training data. The labeled dataset is then enlarged step by step by combining K-means clustering with active user input to provide representative samples for the training of an SVM classifier. Evaluations on two databases with insufficient and shifted solder joint samples have shown that the proposed method achieves high accuracy while requiring only minimal user input. The results also demonstrate that the proposed method outperforms random and representative sampling by ~3.2% and ~2.7%, respectively, and outperforms uncertainty sampling by ~0.5%.
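    The cluster-then-label loop described above can be sketched in a few lines. Everything here is illustrative: the synthetic two-class data stands in for solder-joint feature vectors, a nearest-class-mean classifier stands in for the paper's SVM, and the user's labels are simulated by the ground-truth array:

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Plain k-means, used only to pick representative samples for labeling."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        centers = np.array([X[assign == j].mean(axis=0) if np.any(assign == j)
                            else centers[j] for j in range(k)])
    return centers

rng = np.random.default_rng(1)
# Synthetic stand-in for solder-joint feature vectors: two separable classes.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Start from a tiny labeled subset; in each round, cluster the pool and ask
# the "user" (simulated by y) to label the sample nearest each centre.
labeled = list(rng.choice(len(X), 4, replace=False))
for r in range(3):
    for c in kmeans(X, k=4, seed=r):
        labeled.append(int(np.argmin(np.linalg.norm(X - c, axis=1))))

# A nearest-class-mean classifier stands in for the paper's SVM.
means = np.array([X[[i for i in labeled if y[i] == j]].mean(axis=0) for j in (0, 1)])
pred = np.linalg.norm(X[:, None] - means[None], axis=2).argmin(axis=1)
print(round((pred == y).mean(), 2))
```

    Picking the sample nearest each cluster centre is what makes the queried labels representative of the whole pool rather than of the initial random subset.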

    Statistical shape modeling from gaussian distributed incomplete data for image segmentation

    Statistical shape models are widely used in medical image segmentation. However, obtaining sufficient high-quality, manually generated ground truth data to build such models is often not possible due to the time constraints of clinical experts. In this work, a method for automatically constructing statistical shape models from incomplete data is proposed. The incomplete data is assumed to be the output of any segmentation algorithm, or may originate from other sources, e.g. non-expert manual delineations. The proposed workflow consists of (1) identifying areas of the segmentation output that have a high probability of being a boundary, (2) interpolating between the boundary areas, and (3) reconstructing the missing high-frequency data in the interpolated areas by iterative back-projection from other data sets of the same population. For evaluation, statistical shape models were constructed from 63 clinical CT data sets using ground truth data, artificial incomplete data, and incomplete data produced by an existing segmentation algorithm. The results show that a statistical shape model can be built from incomplete data with an added average error of 6 mm compared to a model built from ground truth data.
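    Steps (2) and (3) can be made concrete with a toy sketch: initialise a gap from the population mean, then repeatedly project onto the population's principal subspace, updating only the missing entries. The data, the 3-mode PCA subspace, and all names are illustrative assumptions in the spirit of the workflow above, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy population: 20 "shapes", each a vector of 12 landmark coordinates.
shapes = rng.normal(0, 1, (20, 12)) + np.linspace(0, 1, 12)

incomplete = shapes[0].copy()
missing = np.zeros(12, bool)
missing[4:8] = True              # pretend a segmentation left this region undefined
incomplete[missing] = np.nan

# Build a simple linear shape model from the rest of the population.
pop = shapes[1:]
mean = pop.mean(axis=0)
U = np.linalg.svd(pop - mean, full_matrices=False)[2][:3]  # top 3 modes

# Fill the gap from the mean, then iteratively project onto the model
# subspace, writing back only the missing entries (known ones stay fixed).
x = incomplete.copy()
x[missing] = mean[missing]
for _ in range(50):
    proj = mean + (x - mean) @ U.T @ U   # projection onto the shape subspace
    x[missing] = proj[missing]
print(np.isnan(x).any())  # False: the gap is now filled
```

    Constraining the gap to the model subspace while keeping the observed entries fixed is what lets plausible shape detail replace the missing data.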