10 research outputs found

    Test case design for transactional flows using a dependency-based approach

    Transactions are a key concern in building reliable web-service-based applications. The advanced models used to manage this kind of transaction rely on the dependencies between the activities involved (subtransactions). Dependencies are constraints on the processing produced by the concurrent execution of interdependent activities. Existing work uses formal approaches to verify the consistency and correctness of dependencies in web service transactions, but there is no work on testing their implementation. This paper identifies and defines a set of possible dependencies using logical expressions. These expressions define the preconditions necessary for executing the subtransactions' primitive tasks. Using those conditions, we propose a family of control-flow-based test criteria for checking the dependencies between subtransactions. The test criteria provide guidance for test case generation aimed specifically at testing the implementation of web service subtransaction dependencies.
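
    To make the idea concrete, below is a minimal, hypothetical Python sketch (the dependency type, names and API are illustrative assumptions, not taken from the paper): a dependency is encoded as a logical precondition that guards a subtransaction's primitive task, and a control-flow test criterion then requires test cases that exercise both the satisfied and the violated branch of that guard.

    from dataclasses import dataclass

    @dataclass
    class Subtransaction:
        name: str
        state: str = "active"  # active | committed | aborted | compensated

    def commit_dependency(t_a):
        """T_B may commit only if T_A has already committed (a commit dependency)."""
        return lambda: t_a.state == "committed"

    def execute_commit(t, preconditions):
        """Control-flow guard: the primitive task fires only if every precondition holds."""
        if all(check() for check in preconditions):
            t.state = "committed"
            return True
        return False  # a control-flow test criterion requires covering both branches

    # Two test cases: one where the dependency is violated, one where it is satisfied.
    t_a, t_b = Subtransaction("A"), Subtransaction("B")
    assert execute_commit(t_b, [commit_dependency(t_a)]) is False  # A has not committed yet
    t_a.state = "committed"
    assert execute_commit(t_b, [commit_dependency(t_a)]) is True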

    Experience of the effect of non-face-to-face teaching on flipped teaching

    This study analyzes students' opinions and academic performance in two groups of the subject Physics for Computer Science of the Bachelor's Degree in Informatics Engineering at the Universitat Politècnica de València, over two successive academic years in which the flipped-teaching methodology was used: the first delivered face-to-face and the second online because of the situation caused by the COVID-19 pandemic. Students' perception of the flipped-teaching methodology, as well as of the online material used in it, improved in the second year compared with the first. This may be because the subjects that already used flipped teaching were able to adapt more quickly and effectively to the new situation created by the pandemic, which improved students' assessment of this educational methodology. This positive overall assessment translated into better academic results: when students rate the subject more highly, they are more motivated, work more and better, and their performance improves. In summary, in the subject analyzed, both performance and the perception of the flipped-teaching methodology improved during the pandemic, showing the adaptability of students and teachers in the face of adverse circumstances.

    The authors thank the Instituto de Ciencias de la Educación of the Universitat Politècnica de València for its support of the e-MACAFI innovation group and of projects PIME/2018/B25 and PIME/2018/B26.

    Vidaurre Garayo, AJ.; Gámiz González, MA.; Tort Ausina, I.; Molina Mateo, J.; Serrano Jareño, MA.; Meseguer Dueñas, JM.; Riera Guasp, J.... (2021). Experiencia del efecto de la docencia no presencial sobre la docencia inversa. In IN-RED 2021: VII Congreso de Innovación Educativa y Docencia en Red. Editorial Universitat Politècnica de València. 502-511. https://doi.org/10.4995/INRED2021.2021.13763

    Automatic System to Detect Both Distraction and Drowsiness in Drivers Using Robust Visual Features

    According to the most recent studies published by the World Health Organization (WHO) in 2013, an estimated 1.25 million people die each year as a result of traffic crashes. Many of these are caused by what is known as inattention, whose main contributing factors are distraction and drowsiness. Overall, inattention is estimated to cause between 25% and 75% of crashes and near-crashes. For this reason it has become a thoroughly studied field in the research community, where solutions to combat distraction and drowsiness in particular, and inattention in general, can be classified into three main categories, and where computer vision has clearly become an effective, non-obtrusive tool for detecting both distraction and drowsiness. The aim of this paper is to propose, build and validate an architecture, designed specifically to operate in vehicular environments, based on the analysis of visual features using computer vision and machine learning techniques to detect both distraction and drowsiness in drivers. Firstly, the modules and all their components were tested independently on several reference datasets. More specifically, the presence or absence of the driver is detected with an accuracy of 100%, 90.56% and 88.96% using, respectively, a marker positioned on the driver's headrest, the LBP operator and the CS-LBP operator. Regarding eye-closeness validation on the CEW dataset, accuracies of 93.39% and 91.84% are obtained with a new approach based on LBP (LBP_RO) and one based on CS-LBP (CS-LBP_RO). After several experiments to find the most suitable camera placement, the camera was positioned on the dashboard, increasing face-detection accuracy from 86.88% to 96.46%. The tests in real-world settings were carried out over several days, covering very different daylight conditions, with 16 drivers who performed various activities to reproduce signs of distraction and drowsiness. Depending on the type of activity and its duration, different results were obtained; considering all activities and all drivers together, an average detection rate of 93.11% is obtained.

    This work was partially supported by the Fundación para el fomento en Asturias de la investigación científica aplicada y la tecnología (FICYT) and by the company SINERCO SL, through the project "Creación de algoritmos de visión artificial" (reference IE09-511). This work is part of the doctoral thesis of Alberto Fernández Villán.

    Fernández Villán, A.; Usamentiaga Fernández, R.; Casado Tejedor, R. (2017). Sistema Automático Para la Detección de Distracción y Somnolencia en Conductores por Medio de Características Visuales Robustas. Revista Iberoamericana de Automática e Informática Industrial. 14(3):307-328. https://doi.org/10.1016/j.riai.2017.05.001
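
    As a rough illustration of the kind of pipeline this abstract describes, the Python sketch below extracts uniform LBP histograms from eye-region crops and trains an SVM classifier for open/closed eye classification. The function names, parameters and the use of scikit-image/scikit-learn are assumptions for illustration, not the authors' implementation of LBP_RO or CS-LBP_RO.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(gray_eye_patch, n_points=8, radius=1):
        """Normalized histogram of uniform LBP codes for a grayscale eye crop."""
        codes = local_binary_pattern(gray_eye_patch, n_points, radius, method="uniform")
        n_bins = n_points + 2  # uniform patterns plus one bin for non-uniform codes
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    def train_eye_state_classifier(eye_patches, labels):
        """Fit an SVM on LBP histograms; labels are 1 (open) / 0 (closed)."""
        features = np.array([lbp_histogram(patch) for patch in eye_patches])
        clf = SVC(kernel="rbf", gamma="scale")
        return clf.fit(features, labels)

    # Usage (hypothetical data): the patches would be grayscale eye crops taken from
    # a detected face region, e.g. images from the CEW dataset as uint8 arrays.
    # clf = train_eye_state_classifier(train_patches, train_labels)
    # predictions = clf.predict([lbp_histogram(p) for p in test_patches])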

    Specification and testing of recoverability requirements in WS-BusinessActivity transactions

    Transactions are a key concept for the reliability of web-service-based applications, with WS-Coordination and WS-BusinessActivity being the most recent and widely accepted standards for managing them. This article addresses the testing of web service transactions, a topic to which current research has paid little attention. It defines a model for specifying the functional requirements of a transaction according to the WS-BusinessActivity standard, and proposes the use of risk-based techniques to define the test cases that validate the process. A case study is presented to illustrate the method.
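
    As a hedged illustration of risk-based test case definition (the scoring model, scenario names and scores are assumptions, not the article's model), the Python sketch below ranks recoverability scenarios of a WS-BusinessActivity transaction by risk = likelihood x impact and exercises the riskiest scenarios first.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        likelihood: float  # estimated probability that the scenario fails (0..1)
        impact: float      # estimated cost if it does fail (0..1)

        @property
        def risk(self) -> float:
            return self.likelihood * self.impact

    cases = [
        TestCase("all participants complete and the coordinator closes", 0.2, 0.4),
        TestCase("one participant fails after partial work, compensation required", 0.5, 0.9),
        TestCase("client cancels before any participant has completed", 0.3, 0.3),
    ]

    # Exercise the recoverability scenarios in decreasing order of risk.
    for tc in sorted(cases, key=lambda c: c.risk, reverse=True):
        print(f"{tc.name}: risk = {tc.risk:.2f}")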

    Subcutaneous anti-COVID-19 hyperimmune immunoglobulin for prevention of disease in asymptomatic individuals with SARS-CoV-2 infection: a double-blind, placebo-controlled, randomised clinical trial

    Summary: Background: Anti-COVID-19 hyperimmune immunoglobulin (hIG) can provide standardized and controlled antibody content. Data from controlled clinical trials using hIG for the prevention or treatment of COVID-19 outpatients have not been reported. We assessed the safety and efficacy of subcutaneous anti-COVID-19 hyperimmune immunoglobulin 20% (C19-IG20%) compared to placebo in preventing development of symptomatic COVID-19 in asymptomatic individuals with SARS-CoV-2 infection. Methods: We did a multicentre, randomized, double-blind, placebo-controlled trial, in asymptomatic unvaccinated adults (≥18 years of age) with confirmed SARS-CoV-2 infection within 5 days between April 28 and December 27, 2021. Participants were randomly assigned (1:1:1) to receive a blinded subcutaneous infusion of 10 mL with 1 g or 2 g of C19-IG20%, or an equivalent volume of saline as placebo. The primary endpoint was the proportion of participants who remained asymptomatic through day 14 after infusion. Secondary endpoints included the proportion of individuals who required oxygen supplementation, any medically attended visit, hospitalisation, or ICU, and viral load reduction and viral clearance in nasopharyngeal swabs. Safety was assessed as the proportion of patients with adverse events. The trial was terminated early due to a lack of potential benefit in the target population in a planned interim analysis conducted in December 2021. ClinicalTrials.gov registry: NCT04847141. Findings: 461 individuals (mean age 39.6 years [SD 12.8]) were randomized and received the intervention within a mean of 3.1 (SD 1.27) days from a positive SARS-CoV-2 test. In the prespecified modified intention-to-treat analysis that included only participants who received a subcutaneous infusion, the primary outcome occurred in 59.9% (91/152) of participants receiving 1 g C19-IG20%, 64.7% (99/153) receiving 2 g, and 63.5% (99/156) receiving placebo (difference in proportions 1 g C19-IG20% vs. placebo, −3.6%; 95% CI -14.6% to 7.3%, p = 0.53; 2 g C19-IG20% vs placebo, 1.1%; −9.6% to 11.9%, p = 0.85). None of the secondary clinical efficacy endpoints or virological endpoints were significantly different between study groups. Adverse event rate was similar between groups, and no severe or life-threatening adverse events related to investigational product infusion were reported. Interpretation: Our findings suggested that administration of subcutaneous human hyperimmune immunoglobulin C19-IG20% to asymptomatic individuals with SARS-CoV-2 infection was safe but did not prevent development of symptomatic COVID-19. Funding: Grifols