19 research outputs found

    Selected Papers from 2020 IEEE International Conference on High Voltage Engineering (ICHVE 2020)

    The 2020 IEEE International Conference on High Voltage Engineering (ICHVE 2020) was held on 6–10 September 2020 in Beijing, China. The conference was organized by Tsinghua University, China, and endorsed by the IEEE Dielectrics and Electrical Insulation Society. It attracted a great deal of attention from researchers around the world in the field of high voltage engineering, and offered a forum to present the latest developments and emerging challenges in high voltage engineering, including ultra-high voltage, smart grids, and insulating materials.

    A Systematic Review on Object Localisation Methods in Images

    [EN] Currently, many applications require precise localisation of the objects that appear in an image for later processing. This is the case for visual inspection in industry, computer-aided clinical diagnostic systems, and obstacle detection in vehicles or robots, among others. However, several factors, such as image quality and the appearance of the objects to be detected, make this automatic localisation difficult. In this article, we carry out a systematic review of the main methods used to localise objects, ranging from methods based on sliding windows, such as the detector proposed by Viola and Jones, to current methods that use deep learning networks, such as Faster R-CNN or Mask R-CNN. For each proposal, we describe the relevant details, consider its advantages and disadvantages, and survey the main applications of these methods in various areas. This paper aims to provide a clear and condensed review of the state of the art of these techniques, their usefulness, and their implementations, in order to facilitate their understanding and use by any researcher who needs to locate objects in digital images. We conclude by summarising the main ideas presented and discussing future trends of these methods.
    This work was partially funded by several institutions. Deisy Chaves is supported by a COLCIENCIAS "Estudios de Doctorado en Colombia 2013" scholarship. Surajit Saikia is supported by a scholarship from the Junta de Castilla y León (reference EDU/529/2017). We also thank INCIBE (Instituto Nacional de Ciberseguridad) for its support through Addendum 22 to the agreement with the Universidad de León.
    Chaves, D.; Saikia, S.; Fernández-Robles, L.; Alegre, E.; Trujillo, M. (2018). Una Revisión Sistemática de Métodos para Localizar Automáticamente Objetos en Imágenes. Revista Iberoamericana de Automática e Informática Industrial 15(3), 231-242. https://doi.org/10.4995/riai.2018.10229
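    As a concrete illustration of the sliding-window paradigm that the review takes as its starting point, the sketch below scans an image with a fixed-size window and keeps the best-scoring box. The mean-intensity `score_fn` and the synthetic image are purely illustrative stand-ins for a real classifier such as a Viola-Jones cascade, not the method of any surveyed paper.

```python
import numpy as np

def sliding_window_detect(image, window, stride, score_fn):
    """Scan `image` with a fixed-size window and return the
    highest-scoring box as (row, col, height, width, score)."""
    h, w = image.shape
    wh, ww = window
    best = None
    for r in range(0, h - wh + 1, stride):
        for c in range(0, w - ww + 1, stride):
            s = score_fn(image[r:r + wh, c:c + ww])
            if best is None or s > best[4]:
                best = (r, c, wh, ww, s)
    return best

# Toy run: localise the brightest 4x4 region of a synthetic image.
img = np.zeros((16, 16))
img[6:10, 8:12] = 1.0                  # the "object" to be found
box = sliding_window_detect(img, (4, 4), 1, lambda patch: patch.mean())
print(box[:2])                         # → (6, 8), the object's corner
```

    Modern detectors surveyed in the article replace the exhaustive scan with region proposals or a single network pass, but the localisation output (a scored box) has the same shape.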

    Manufacturing Metrology

    Metrology is the science of measurement, which can be divided into three overlapping activities: (1) the definition of units of measurement, (2) the realization of units of measurement, and (3) the traceability of measurement units. Manufacturing metrology originally referred to the measurement of components and inputs for a manufacturing process to assure they are within specification requirements; it can also be extended to the performance measurement of manufacturing equipment. This Special Issue covers papers revealing novel measurement methodologies and instrumentation for manufacturing metrology, from conventional industry to the frontier of the advanced hi-tech industry. Twenty-five papers are included, and they can be categorized into four main groups: (1) length measurement, covering new designs from micro/nanogap measurement with laser triangulation sensors and laser interferometers to very-long-distance measurement with newly developed mode-locked femtosecond lasers; (2) surface profile and form measurement, covering technologies with new confocal sensors and imaging sensors, including in situ and on-machine measurements; (3) angle measurement, including a new 2D precision level design, a review of angle measurement with mode-locked femtosecond lasers, and multi-axis machine tool squareness measurement; and (4) other laboratory systems, including a water cooling temperature control system and a computer-aided inspection framework for CMM performance evaluation.
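    In their simplest idealized geometry, the laser triangulation sensors mentioned above reduce to a single relation: a laser spot imaged at offset x on the detector of a camera with focal length f, mounted at baseline b from the laser, lies at range z = f·b/x. The sketch below is only this textbook relation with illustrative numbers, not the instrumentation of any paper in the Issue.

```python
def triangulation_distance(baseline_mm, focal_mm, spot_offset_mm):
    """Idealized single-point laser triangulation: a spot imaged at
    offset x on the detector corresponds to a range z = f * b / x."""
    return focal_mm * baseline_mm / spot_offset_mm

# Illustrative numbers: 20 mm baseline, 16 mm lens, 0.5 mm spot offset.
print(triangulation_distance(20.0, 16.0, 0.5))  # → 640.0 (mm)
```

    Real sensors add a tilted (Scheimpflug) detector and calibration terms, but the inverse relation between spot offset and range is the same.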

    Laser Surface Treatment and Laser Powder Bed Fusion Additive Manufacturing Study Using Custom Designed 3D Printer and the Application of Machine Learning in Materials Science

    Selective Laser Melting (SLM) is a laser powder bed fusion (L-PBF) based additive manufacturing (AM) method that uses a laser beam to melt selected areas of a metal powder bed. A customized SLM 3D printer that can handle small quantities of metal powder was built in the lab for versatile research purposes. The hardware design, electrical diagrams, and software functions are introduced in Chapter 2. Several laser surface engineering and SLM experiments conducted with this customized machine demonstrate its functionality and some prospective fields in which it can be utilized. Chapter 3 evaluates the effects of laser-beam-irradiation-based surface modification of Ti-10Mo alloy samples, under either an Ar or a N2 environment, on corrosion resistance and cell integration properties. The customized 3D printer was used to conduct the laser surface treatment. The electrochemical behavior of the Ti-10Mo samples was evaluated in simulated body fluid maintained at 37 ± 0.5 °C, and a cell-material interaction test was conducted using MLO-Y4 cells. Laser surface modification in the Ar environment was found to enhance corrosion behavior but did not affect surface roughness, element distribution, or cell behavior compared to the non-laser-scanned samples. Processing the Ti-10Mo alloy in N2 formed a much rougher TiN surface that improved both corrosion resistance and cell-material integration compared with the other two conditions. The mechanical behavior of spark plasma sintering (SPS) treated SLM Inconel 939 samples is evaluated in Chapter 4. Flake-like precipitates (η and σ phases) were observed on the 800-SPS sample surface, which increased hardness and tensile strength compared with the as-fabricated samples; however, the strain-to-failure value decreased due to local stress concentration. γ'/γ'' phases formed on the 1200-SPS sample. Although these phases were not fully formed due to the short holding time, the 1200-SPS sample still showed the highest hardness and the best tensile strength and ductility. The application of machine learning to materials science is discussed in Chapter 5. First, a simple Deep Neural Network (DNN) model is created to predict the Anti-phase Boundary Energy (APBE) from limited training data; it achieves the best performance compared with a Random Forest Regressor and a K Neighbors Regressor. Second, defect classification, defect detection, and defect image segmentation are successfully performed using a simple CNN model, YOLOv4, and Detectron2, respectively. Furthermore, defect detection is successfully applied to video by using a sequence of CT scan images. This demonstrates that machine learning (ML) can enable more efficient and economical materials science research.
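    As a minimal sketch of one of the baseline models named above (the K Neighbors Regressor), the code below implements k-nearest-neighbour regression from scratch; the 1-D data and the `knn_regress` helper are purely illustrative, not the dissertation's APBE dataset or code.

```python
import numpy as np

def knn_regress(X_train, y_train, x, k=3):
    """Predict the target at `x` as the mean target of the k nearest
    training points (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# Illustrative 1-D data standing in for (feature, energy) pairs.
X = np.array([[0.1], [0.2], [0.3], [0.4], [0.5]])
y = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
print(knn_regress(X, y, np.array([0.32]), k=3))  # → 30.0
```

    Such instance-based baselines make no smoothness assumption beyond locality, which is why a well-trained DNN can outperform them when the underlying energy surface is smooth but the training data are sparse.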

    Development of quality assurance procedures and methods for the CBM Silicon Tracking System

    The Compressed Baryonic Matter (CBM) experiment at the future Facility for Antiproton and Ion Research (FAIR) aims to study the properties of nuclear matter at high net-baryon densities and moderate temperatures. It is expected that, utilizing ultra-relativistic heavy-ion collisions, a phase transition from hadronic matter to QCD matter will be probed. Among the key objectives are the determination of the nature and order of the transition (deconfinement and/or chiral) and the observation of a critical end-point. Measuring the physics phenomena occurring in these collisions requires appropriate detectors. The Silicon Tracking System (STS) is the key detector for reconstructing the tracks of charged particles created in heavy-ion collisions. To assure the necessary detector performance, about 900 silicon microstrip sensors must be checked and tested for their quality, which calls for highly efficient and highly automated procedures and methods. The first part of this dissertation reports on a novel automated inspection system developed for the optical quality control of silicon microstrip sensors. The proposed methods and procedures allow scanning along the individual sensors to recognize and classify sensor defects, such as surface scratches, implant defects, and metalization layer lithography defects. Various machine-vision-based image-processing algorithms are used to separate and classify these defects. The silicon sensors are also characterized geometrically to ensure the mechanical precision targeted for the detector assembly procedures. Since the STS detector will be operated in a high-radiation environment with a total non-ionizing radiation dose of up to 1x10^14 n_eq/cm^2 over 6 years of operation, the silicon sensors need to be kept in the temperature range of -5 to -10 °C at all times to minimize reverse annealing effects and to avoid thermal runaway.
    The second part of this work is devoted to the development and optimization of the design of cooling bodies, which remove the more than 40 kW of thermal energy produced in total by the front-end readout electronics. In particular, thermodynamic models were developed to estimate the cooling regimes, and thermal simulations of the cooling bodies were carried out. Based on these calculations, an innovative two-phase CO2 cooling system with up to 200 W of cooling power was built, which allowed the simulated cooling body designs to be verified experimentally.
    At the planned Facility for Antiproton and Ion Research (FAIR), the Compressed Baryonic Matter (CBM) experiment will investigate nuclear matter at high baryon densities and moderate temperatures. The phase transition between hadronic and QCD matter can be studied with the help of ultra-relativistic heavy-ion collisions. The most important goals are the determination of the nature of the transition (deconfinement and/or chiral phase transition) and the investigation of the critical end-point in the phase diagram. Suitable detector systems are necessary to study these phenomena. The Silicon Tracking System (STS) is the central detector with which the tracks of the charged particles produced in heavy-ion collisions are reconstructed. To ensure the full functionality of the STS, the more than 900 silicon strip sensors must be checked and tested before assembly, for which highly efficient and automated procedures and methods have to be developed. The first part of this dissertation reports on an automated optical inspection system. The system makes it possible to examine the individual silicon sensors for potentially present surface defects and to classify them. Examples are scratches on the surface, implantation defects, and lithography defects of the metalization layer. Several "machine vision" image-processing algorithms are used to recognize these defects. In addition, the geometric parameters of the sensors, which are important for the assembly of the STS, are optically inspected. The STS detector will be operated at extremely high collision rates. Within an operating period of 6 years, a radiation dose of up to 1x10^14 n_eq/cm^2 will be accumulated, which leads to a considerable increase in the dark current and ultimately constitutes the "end-of-life" criterion. The silicon sensors must therefore be cooled to -5 to -10 °C in order to minimize "reverse annealing" effects and to delay the "thermal runaway" phenomenon. On the other hand, the readout electronics produce more than 40 kW of thermal energy close to the sensors, which must be completely removed by cooling bodies. The second part of this dissertation is devoted to the optimization of cooling bodies. For this purpose, thermodynamic models were implemented and the corresponding thermal simulations were carried out. Within the scope of this work, a 200 W CO2 cooling system was built that makes it possible to verify the model calculations and simulations of cooling with two-phase CO2.
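    The cooling figures quoted above admit a quick order-of-magnitude check: a two-phase system that removes heat purely by evaporation needs a coolant mass flow of m_dot = Q / h_fg. The sketch below assumes a latent heat of roughly 300 kJ/kg for CO2 near the operating conditions; the exact value depends on the evaporation pressure and is an assumption here, not a figure from the dissertation.

```python
def co2_mass_flow(heat_load_w, latent_heat_j_per_kg=300e3):
    """Minimum coolant mass flow for a two-phase system removing the
    heat load purely by evaporation: m_dot = Q / h_fg (kg/s)."""
    return heat_load_w / latent_heat_j_per_kg

# 200 W demonstrator vs. the full >40 kW detector heat load.
print(co2_mass_flow(200.0))   # ≈ 6.7e-4 kg/s
print(co2_mass_flow(40e3))    # ≈ 0.13 kg/s
```

    A real system runs at lower vapour quality (partial evaporation), so the actual flow is correspondingly higher; the estimate only bounds the scale.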

    Robotic surface exploration with vision and tactile sensing for cracks detection and characterisation

    This thesis presents a novel algorithm for crack localisation and detection based on visual and tactile analysis via fibre optics. A finger-shaped fibre-optic sensor is employed to collect the data for the analysis and the experiments. Three pairs of optical fibres measure the deformation of the sensor's soft part via changes in the reflected light intensity. A fourth pair of optical fibre cables, positioned at the tip of the finger, senses the proximity to external objects. To detect possible crack locations, a camera scans the environment while running an object detection algorithm. Once a crack is detected, a fully connected graph is created from a skeletonised version of the crack. A minimum spanning tree is then employed to calculate the shortest path for exploring the crack, which is then used to develop the motion planner for the robotic manipulator. The motion planner divides the crack into multiple nodes, which are explored one by one: the manipulator performs the exploration and classifies the tactile data to confirm whether there is indeed a crack at that location or just a false positive from the vision algorithm. This is repeated until all the nodes of the crack are explored. If a crack is not detected by vision, it will not be further explored in the tactile step; because of this, false negatives carry the biggest weight, and recall is the most important metric in this study. I perform experiments to investigate the reduction in exploration time when using the visual and tactile modalities together. The experimental studies demonstrate that exploring a fractured surface with a combination of visual and tactile modalities is four times faster than using the tactile mode alone, and the accuracy of detection also improves when the two modalities are combined.
    Experiments are also performed to develop a robust machine learning model to analyse and classify the tactile data acquired during exploration via the fibre-optic sensor. Frequency-domain features are explored to investigate the spectrum of the signal. Results show that machine learning models and deep learning networks trained on these features are more robust when tested across different databases on which they were not trained. Thus, when computer vision techniques fail because of lighting conditions or extreme environments, fibre-optic sensors can be employed to analyse the presence of cracks on explored surfaces via machine learning and deep neural network algorithms. Still, when introducing tactile sensing in extreme environments, caution must be taken when making contact with possibly fragile surfaces, which may break because of the friction produced by the tactile sensor. Proximity sensing may be used in this case to calculate the distance between the sensor and the object and to reduce speed when approaching it. In conclusion, this thesis contributes to advances in crack detection by introducing a multi-modal algorithm that detects cracks in the environment via computer vision and then confirms the presence of a crack via tactile exploration and machine learning classification of the data acquired from a fibre-optic-based sensor. Few methods currently use tactile sensing for crack characterisation and detection, and this is the first study to show the reliability of tactile-based methodologies for crack detection via machine learning analysis. Furthermore, this is the first method to combine both tactile and vision modalities for crack analysis.
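    The planning step described above (a weighted graph over the skeletonised crack, then a minimum spanning tree to order the nodes for exploration) can be sketched with a textbook Prim's algorithm; the toy graph, node names, and costs below are illustrative, not the thesis's implementation.

```python
import heapq

def prim_mst(nodes, adj):
    """Prim's algorithm; `adj` maps node -> [(cost, neighbour), ...].
    Returns the minimum spanning tree as a list of (cost, u, v) edges."""
    visited = {nodes[0]}
    frontier = [(c, nodes[0], v) for c, v in adj[nodes[0]]]
    heapq.heapify(frontier)
    tree = []
    while frontier and len(visited) < len(nodes):
        c, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                      # edge would close a cycle
        visited.add(v)
        tree.append((c, u, v))
        for nc, nv in adj[v]:
            if nv not in visited:
                heapq.heappush(frontier, (nc, v, nv))
    return tree

# Toy crack skeleton: four nodes, edge costs are path lengths.
nodes = ["a", "b", "c", "d"]
adj = {
    "a": [(1.0, "b"), (4.0, "c")],
    "b": [(1.0, "a"), (2.0, "c"), (6.0, "d")],
    "c": [(4.0, "a"), (2.0, "b"), (3.0, "d")],
    "d": [(6.0, "b"), (3.0, "c")],
}
tree = prim_mst(nodes, adj)
print(sum(c for c, _, _ in tree))  # → 6.0, total exploration cost
```

    Walking the tree edges in order visits every node while avoiding redundant traversals, which is the property the motion planner exploits.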

    Numerical modelling of additive manufacturing process for stainless steel tension testing samples

    Nowadays, additive manufacturing (AM) technologies, including 3D printing, are growing rapidly and are expected to replace conventional subtractive manufacturing technologies to some extent. During a selective laser melting (SLM) process, one of the most popular AM technologies for metals, a large amount of heat is required to melt the metal powder, which leads to distortions and/or shrinkages of additively manufactured parts. It is therefore useful to predict these distortions and shrinkages before 3D printing in order to control them. This study develops a two-phase numerical modelling and simulation process of the AM process for 17-4PH stainless steel, and it considers the importance of post-processing and the need for calibration to achieve a high-quality print. Using the proposed AM modelling and simulation process, optimal process parameters, material properties, and topology can be obtained to ensure a part is 3D printed successfully.
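    The scale of the shrinkage such a simulation predicts can be bounded by a first-order linear-contraction estimate, dL = α·L·ΔT. The coefficient below (about 11e-6 /K, a typical handbook value for 17-4PH stainless steel) and the example numbers are assumptions for illustration, not outputs of the proposed model, which also resolves layer-by-layer residual stresses.

```python
def thermal_shrinkage_mm(length_mm, delta_t_k, alpha_per_k=11e-6):
    """First-order linear contraction on cooling: dL = alpha * L * dT."""
    return alpha_per_k * length_mm * delta_t_k

# A 100 mm part cooling through 1000 K contracts by roughly 1.1 mm.
print(thermal_shrinkage_mm(100.0, 1000.0))
```

    Millimetre-scale contraction on a 100 mm part is exactly the magnitude that makes pre-print compensation worthwhile.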

    2023 - The Twenty-seventh Annual Symposium of Student Scholars

    The full program book from the Twenty-seventh Annual Symposium of Student Scholars, held on April 18-21, 2023. Includes abstracts from the presentations and posters.

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. As data are required to build machine learning networks, sensors are one of the most important enabling technologies. In turn, machine learning networks can contribute to improvements in sensor performance and to the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer-vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, camera calibration for intelligent vehicles, error detection, color prior modelling, compressive sensing, wildfire risk assessment, shelf auditing, forest growing stem volume estimation, road management, image denoising, and touchscreens.