34 research outputs found

    Driver fatigue detection through fusion of ADAS systems

    Get PDF
    Drowsiness has been identified as one of the leading causes of traffic accidents, being involved in around 20% of them, so there is growing interest in ADAS (Advanced Driver Assistance Systems) capable of detecting the driver's fatigue state in order to prevent accidents. This thesis proposes a technique, based on monocular image processing, consisting of the detection, tracking, and characterisation of eye opening, which works automatically with different users and under real driving conditions. From this information and other driving-related signals, the driver's drowsiness is inferred. Face detection uses the appearance-based Viola-Jones algorithm, and eye detection is improved with clustering techniques and a Kalman filter used as a predictor. Eye opening is measured by applying adaptive filters, projective integrals, and a Gaussian model whose standard deviation matches the opening, yielding a real-time system that is robust to illumination changes. Once the opening is known, the Percentage of Eye Closure (PERCLOS) is computed, one of the most important indicators for drowsiness detection. All results were obtained from a large collection of videos of the faces of different drivers, in simulation and under real conditions, both in a normal state and under sleep deprivation. The drowsiness detection results show that PERCLOS is decisive for estimating the driver's state and that fusing it with other driving indicators improves its individual hit rate. Overall, the results agree with other major works on drowsiness detection, except for the discussion on the importance of the PERCLOS variable, since this thesis concludes that it is the best drowsiness indicator
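
    The abstract does not give the exact thresholds used in the thesis, but PERCLOS is conventionally defined as the proportion of time within a window during which the eye is at least 80% closed. A minimal Python sketch of that conventional definition, assuming a normalised eye-opening signal sampled at a fixed frame rate; the threshold and window length below are illustrative assumptions, not values from the thesis:

    import numpy as np

    def perclos(eye_opening, closure_threshold=0.2, window_frames=1800):
        """PERCLOS from a normalised eye-opening signal in [0, 1].

        A frame counts as "closed" when the opening is below
        closure_threshold (0.2 corresponds to the common P80 definition).
        Returns the overall closed fraction if the signal is shorter than
        one window, otherwise one PERCLOS value per window position.
        window_frames=1800 corresponds to a 60 s window at 30 fps.
        """
        closed = (np.asarray(eye_opening) < closure_threshold).astype(float)
        if closed.size < window_frames:
            return closed.mean()
        # Moving average of the closed/open indicator = PERCLOS per window.
        kernel = np.ones(window_frames) / window_frames
        return np.convolve(closed, kernel, mode="valid")

    A drowsy driver would show a rising PERCLOS trace over time, which is the signal fused with the other driving indicators mentioned in the abstract.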

    Evaluation of a feed developed by Solid-State Fermentation (FES) on the productive response of grazing Holstein cows supplemented with different inclusion levels of FES-papa

    Get PDF
    The potato is one of the main monocultures of Boyacá, and about 30% of production is not suitable for marketing; for a long time it has been fed to animals directly. This work evaluates a feed produced with solid-state fermentation technology (FES-papa) and compares its productive response with a commercial concentrate widely used in livestock farming in the region. The study was conducted at Hacienda El Valle, Finca San José, in the Poravita village of the municipality of Oicatá. Three cows of similar age and lactation stage were supplemented with three diets: control diet or treatment 1, T1 = 6 g/kg body weight of commercial concentrate; treatment 2, T2 = 6 g/kg body weight of FES-papa feed; and treatment 3, T3 = 9 g/kg body weight of FES-papa feed. The production parameters evaluated were daily milk production (kg/day), milk composition, and daily weight gain. The FES-papa feed performed similarly to the commercial concentrate, except for daily weight gain, where T3 achieved gains of more than 1000 g/day. Serological parameters (glucose, total protein, cholesterol, ketones, and pH) were also evaluated, with no major differences found between treatments. Keywords: feeding, daily milk production, daily gain, milk composition, serological parameters, glucose, ketones, pH, cholesterol, total protein

    Realistic pedestrian behaviour in the CARLA simulator using VR and mocap

    Full text link
    Simulations are gaining increasing significance in the field of autonomous driving due to the demand for rapid prototyping and extensive testing. Employing physics-based simulation brings several benefits at an affordable cost, while mitigating potential risks to prototypes, drivers, and vulnerable road users. However, there exist two primary limitations. Firstly, the reality gap, which refers to the disparity between reality and simulation and prevents simulated autonomous driving systems from having the same performance in the real world. Secondly, the lack of empirical understanding regarding the behavior of real agents, such as backup drivers or passengers, as well as other road users such as vehicles, pedestrians, or cyclists. Agent simulation is commonly implemented through deterministic or randomized probabilistic pre-programmed models, or generated from real-world data, but it fails to accurately represent the behaviors adopted by real agents while interacting within a specific simulated scenario. This paper extends the description of our proposed framework to enable real-time interaction between real agents and simulated environments, by means of immersive virtual reality and human motion capture systems within the CARLA simulator for autonomous driving. We have designed a set of usability examples that allow the analysis of the interactions between real pedestrians and simulated autonomous vehicles, and we provide a first measure of the user's sensation of presence in the virtual environment. Comment: This is a pre-print of the following work: Communications in Computer and Information Science (CCIS, volume 1882), 2023, Computer-Human Interaction Research and Applications, reproduced with permission of Springer Nature. The final authenticated version is available online at: https://link.springer.com/chapter/10.1007/978-3-031-41962-1_5. arXiv admin note: substantial text overlap with arXiv:2206.0033
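
    The paper's framework (VR headset, motion-capture retargeting, presence questionnaires) cannot be reproduced from the abstract alone; the sketch below only illustrates the underlying mechanism of driving a pedestrian ("walker") actor in CARLA from an externally tracked pose through CARLA's Python API. The get_mocap_pose() helper is hypothetical and stands in for whatever interface delivers the tracked position and heading:

    import carla

    def get_mocap_pose():
        # Hypothetical placeholder: return (x, y, z, yaw_degrees) from the
        # motion-capture system. Replace with the real data source.
        return 0.0, 0.0, 1.0, 0.0

    client = carla.Client("localhost", 2000)
    client.set_timeout(5.0)
    world = client.get_world()

    # Spawn a walker at the initial tracked pose.
    walker_bp = world.get_blueprint_library().filter("walker.pedestrian.*")[0]
    x, y, z, yaw = get_mocap_pose()
    spawn = carla.Transform(carla.Location(x=x, y=y, z=z), carla.Rotation(yaw=yaw))
    walker = world.spawn_actor(walker_bp, spawn)

    try:
        while True:
            # Overwrite the walker's pose every tick with the tracked pose.
            x, y, z, yaw = get_mocap_pose()
            walker.set_transform(
                carla.Transform(carla.Location(x=x, y=y, z=z), carla.Rotation(yaw=yaw))
            )
            world.wait_for_tick()
    except KeyboardInterrupt:
        walker.destroy()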

    Digital twin in virtual reality for human-vehicle interactions in the context of autonomous driving

    Full text link
    This paper presents the results of tests of interactions between real humans and simulated vehicles in a virtual scenario. Human activity is inserted into the virtual world via a virtual reality interface for pedestrians. The autonomous vehicle is equipped with a virtual Human-Machine interface (HMI) and drives through the digital twin of a real crosswalk. The HMI was combined with gentle and aggressive braking maneuvers when the pedestrian intended to cross. The results of the interactions were obtained through questionnaires and measurable variables such as the distance to the vehicle when the pedestrian initiated the crossing action. The questionnaires show that pedestrians feel safer whenever HMI is activated and that varying the braking maneuver does not influence their perception of danger as much, while the measurable variables show that both HMI activation and the gentle braking maneuver cause the pedestrian to cross earlier.Comment: 26th IEEE International Conference on Intelligent Transportation Systems ITSC 202

    High-Level Interpretation of Urban Road Maps Fusing Deep Learning-Based Pixelwise Scene Segmentation and Digital Navigation Maps

    Get PDF
    This paper addresses the problem of high-level road modeling for urban environments. Current approaches are based on geometric models that fit the road shape well for narrow roads. However, urban environments are more complex and those models are not suitable for inner-city intersections or other urban situations. The approach presented in this paper generates a model based on the information provided by a digital navigation map and a vision-based sensing module. On the one hand, the digital map includes data about the road type (residential, highway, intersection, etc.), road shape, number of lanes, and other context information such as vegetation areas, parking slots, and railways. On the other hand, the sensing module provides a pixelwise segmentation of the road using a ResNet-101 CNN with random data augmentation, as well as other hand-crafted features such as curbs, road markings, and vegetation. The high-level interpretation module is designed to learn the best set of parameters of a function that maps all the available features to the actual parametric model of the urban road, using a weighted F-score as a cost function to be optimized. We show that the presented approach eases the maintenance of digital maps using crowd-sourcing, due to the small amount of data to send, and adds important context information to traditional road detection systems
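
    The abstract does not specify how the F-score is weighted; a common choice is the F-beta measure over the agreement between the predicted road model and a reference mask, with beta trading recall against precision. A minimal sketch of such a cost under that assumption; the beta value and the binary-mask representation are illustrative, not taken from the paper:

    import numpy as np

    def weighted_f_score(pred_mask, gt_mask, beta=2.0, eps=1e-9):
        """F-beta score between a predicted road mask and a reference mask.

        beta > 1 weights recall higher than precision; beta < 1 the opposite.
        Both inputs are boolean arrays of the same shape.
        """
        pred = np.asarray(pred_mask, dtype=bool)
        gt = np.asarray(gt_mask, dtype=bool)
        tp = np.logical_and(pred, gt).sum()
        precision = tp / (pred.sum() + eps)
        recall = tp / (gt.sum() + eps)
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall + eps)

    A model-fitting loop could then maximise this score (or minimise 1 - score) over the parameters of the parametric road model.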

    Urban intersection classification: a comparative analysis

    Get PDF
    Understanding the scene in front of a vehicle is crucial for self-driving vehicles and Advanced Driver Assistance Systems, and in urban scenarios intersection areas are among the most critical, concentrating between 20% and 25% of road fatalities. This research presents a thorough investigation of the detection and classification of urban intersections as seen from onboard front-facing cameras. Different methodologies aimed at classifying intersection geometries have been assessed to provide a comprehensive evaluation of state-of-the-art techniques based on Deep Neural Network (DNN) approaches, including single-frame approaches and temporal integration schemes. A detailed analysis of the most popular datasets previously used for the application, together with a comparison with ad hoc recorded sequences, revealed that performance depends strongly on the field of view of the camera rather than on other characteristics or temporal-integration techniques. Due to the scarcity of training data, a new dataset was created by performing data augmentation from real-world data through a Generative Adversarial Network (GAN), both to increase generalizability and to test the influence of data quality. Although the topic is still at a relatively early stage, mainly due to the lack of intersection datasets oriented to the problem, an extensive experimental activity has been performed to analyze the individual performance of each proposed system
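
    The paper's networks and datasets cannot be reconstructed from the abstract, but the "temporal integration" idea it contrasts with single-frame classification can be illustrated with a simple late-fusion sketch: per-frame class probabilities are averaged over a short window before the intersection class is decided. The window length and input format below are assumptions:

    import numpy as np

    def integrate_over_time(frame_scores, window=10):
        """frame_scores: (T, C) array of per-frame class probabilities.

        Returns the class index chosen from the mean score of the last
        `window` frames (a simple late-fusion temporal-integration scheme).
        """
        scores = np.asarray(frame_scores)
        fused = scores[-window:].mean(axis=0)
        return int(fused.argmax())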

    Fail-aware LIDAR-based odometry for autonomous vehicles

    Get PDF
    Autonomous driving systems are set to become a reality in transport systems and, so, maximum acceptance is being sought among users. Currently, the most advanced architectures require driver intervention when functional system failures or critical sensor operations take place, presenting problems related to driver state, distractions, fatigue, and other factors that prevent safe control. Therefore, this work presents a redundant, accurate, robust, and scalable LiDAR odometry system with fail-aware system features that can allow other systems to perform a safe stop manoeuvre without driver mediation. All odometry systems have drift error, making it difficult to use them for localisation tasks over extended periods. For this reason, the paper presents an accurate LiDAR odometry system with a fail-aware indicator. This indicator estimates a time window in which the system manages the localisation tasks appropriately. The odometry error is minimised by applying a dynamic 6-DoF model and fusing measures based on the Iterative Closest Point (ICP), environment feature extraction, and Singular Value Decomposition (SVD) methods. The obtained results are promising for two reasons: First, in the KITTI odometry data set, the ranking achieved by the proposed method is twelfth, considering only LiDAR-based methods, where its translation and rotation errors are 1.00% and 0.0041 deg/m, respectively. Second, the encouraging results of the fail-aware indicator demonstrate the safety of the proposed LiDAR odometry system. The results show that, in order to achieve an accurate odometry system, complex models and measurement fusion techniques must be used to improve its behaviour. Furthermore, if an odometry system is to be used for redundant localisation features, it must integrate a fail-aware indicator for use in a safe manner
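
    The abstract names ICP together with an SVD-based method; the SVD step that both rely on, estimating the rigid transform that best aligns two sets of matched points (the Kabsch/Procrustes solution), is standard. A minimal NumPy sketch of that single step, not of the paper's full fail-aware odometry pipeline:

    import numpy as np

    def best_rigid_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping src -> dst.

        src, dst: (N, 3) arrays of matched 3-D points (e.g. ICP pairs).
        Returns a rotation matrix R (3x3) and translation t (3,) such that
        R @ src[i] + t approximates dst[i].
        """
        src = np.asarray(src, dtype=float)
        dst = np.asarray(dst, dtype=float)
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        # Cross-covariance of the centred point sets; its SVD gives the rotation.
        H = src_c.T @ dst_c
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    Accumulating such frame-to-frame transforms gives the odometry estimate whose drift the paper's fail-aware indicator is designed to bound in time.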

    Extended Floating Car Data System - Experimental Study -

    Get PDF
    IEEE Intelligent Vehicles Symposium (IV), 06/06/2011-10/06/2011, Baden-Baden, Germany. This paper presents the results of a set of extensive experiments carried out in daytime and nighttime conditions in real traffic using an enhanced or extended Floating Car Data system (xFCD) that includes a stereo vision sensor for detecting the local traffic ahead. The detection component uses monocular approaches previously developed by our group in combination with new stereo vision algorithms that add robustness to the detection and increase the accuracy of the measurements of relative distance and speed. Besides the stereo pair of cameras, the vehicle is equipped with a low-cost GPS and an electronic device for CAN bus interfacing. The xFCD system has been tested on a 198-minute sequence recorded in real traffic scenarios with different weather and illumination conditions, which represents the main contribution of this paper. The results are promising and demonstrate that the system is ready to be used as a source of traffic state information
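
    The abstract credits the stereo pair with more accurate relative distance and speed than the earlier monocular approach; the underlying geometry is the rectified pinhole-stereo relation Z = f * B / d. A minimal sketch with illustrative (not calibrated) focal length and baseline values:

    def stereo_depth(disparity_px, focal_px=800.0, baseline_m=0.30):
        """Depth in metres of a point from its stereo disparity in pixels,
        using the rectified pinhole model Z = f * B / d.
        focal_px and baseline_m are illustrative placeholders, not the
        calibration of the paper's sensor."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    Relative speed can then be estimated from the change in depth of a tracked vehicle between consecutive frames divided by the frame time.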