23 research outputs found

    Els fars davanters a ull de càmara

    Get PDF
    Continuous innovation in automotive headlamp systems also entails an improvement in how they are assessed. Current assessment systems are based on dynamic comparison, i.e., experts or users themselves judge the quality of the headlamps during a series of driving tests. The drawbacks of this kind of assessment are its considerable cost and the fact that people's short-term visual retention does not guarantee definitive results. For this reason, the Department of Electrical Development, Lighting and Signalling at SEAT's Technical Centre in Martorell and the Computer Vision Centre of the UAB have devised a recording system whose frames can later be viewed and compared. However, the frames must first be temporally synchronised and spatially aligned so that the results correctly reflect real driving conditions.
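    The temporal synchronisation the abstract calls for can be sketched, for instance, by cross-correlating a simple per-frame statistic of the two recordings. The helper below is hypothetical (the system's actual method is not described in the abstract); it estimates the frame offset between two clips from their mean-brightness signals.

```python
import numpy as np

def estimate_frame_offset(brightness_a, brightness_b):
    """Estimate the temporal offset (in frames) between two recordings
    by cross-correlating their per-frame mean-brightness signals.
    Hypothetical helper: a sketch of one plausible synchronisation step,
    not the system's documented method."""
    a = np.asarray(brightness_a, dtype=float)
    b = np.asarray(brightness_b, dtype=float)
    # Remove the DC component so the correlation peak reflects structure.
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(a, b, mode="full")
    # Positive result: recording b is delayed relative to a by that many frames.
    return int((len(b) - 1) - np.argmax(corr))
```

    Once the offset is known, frame pairs can be matched before the spatial alignment step.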

    Unsupervised system to classify SO2 pollutant concentrations in Salamanca, Mexico

    Get PDF
    Salamanca is cataloged as one of the most polluted cities in Mexico. In order to observe the behavior of, and clarify the influence of wind parameters on, sulphur dioxide (SO2) concentrations, a Self-Organizing Map (SOM) neural network was implemented at three monitoring locations for the period from January 1 to December 31, 2006. The maximum and minimum daily SO2 concentrations measured during 2006 were correlated with the wind parameters of the same period. The main advantage of the SOM neural network is that it can integrate data from different sensors and provide readily interpretable results. In particular, it is a powerful mapping and classification tool that offers information in an accessible way and facilitates the task of establishing an order of priority among the distinguished concentration groups, depending on their need for further research or remediation actions in subsequent management steps. For each monitoring location, the SOM classifications were evaluated with respect to the pollution levels established by the health authorities. The classification system can help to establish a better air quality monitoring methodology, which is essential for assessing the effectiveness of imposed pollution controls and strategies, and can facilitate pollutant reduction
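    The SOM idea behind the classification can be illustrated with a toy one-dimensional map over scalar concentrations. This is a minimal sketch, assuming a simple decaying learning rate and Gaussian neighbourhood; the study's actual network integrates several sensors and wind parameters and is more elaborate.

```python
import numpy as np

def train_som(data, n_units=3, n_iters=200, lr0=0.5, seed=0):
    """Minimal 1-D self-organizing map over scalar inputs.
    Toy sketch of the SOM principle, not the study's configuration."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    # Initialise unit weights from random data samples.
    w = rng.choice(data, size=n_units)
    for t in range(n_iters):
        x = rng.choice(data)
        lr = lr0 * (1 - t / n_iters)          # decaying learning rate
        bmu = int(np.argmin(np.abs(w - x)))   # best-matching unit
        for i in range(n_units):
            # Gaussian neighbourhood: the BMU moves most, neighbours less.
            h = np.exp(-((i - bmu) ** 2) / 2.0)
            w[i] += lr * h * (x - w[i])
    return np.sort(w)  # sorted so unit 0 is the lowest concentration group

def classify(x, weights):
    """Assign a concentration to its nearest SOM unit (0 = lowest group)."""
    return int(np.argmin(np.abs(weights - x)))
```

    The ordered units then map directly onto priority groups, which is what makes comparison against health-authority pollution levels straightforward.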

    Estimation of Admission D-dimer Cut-off Value to Predict Venous Thrombotic Events in Hospitalized COVID-19 Patients: Analysis of the SEMI-COVID-19 Registry

    Get PDF
    Background: Venous thrombotic events (VTE) are frequent in COVID-19, and elevated plasma D-dimer (pDd) and dyspnea are common in both entities. Objective: To determine the admission pDd cut-off value associated with in-hospital VTE in patients with COVID-19. Methods: Multicenter, retrospective study analyzing the at-admission pDd cut-off value to predict VTE and anticoagulation intensity along hospitalization due to COVID-19. Results: Among 9386 patients, 2.2% had VTE: 1.6% pulmonary embolism (PE), 0.4% deep vein thrombosis (DVT), and 0.2% both. Those with VTE had a higher prevalence of tachypnea (42.9% vs. 31.1%; p = 0.0005) and basal O2 saturation <93% (45.4% vs. 33.1%; p = 0.0003), and higher at-admission pDd (median [IQR]: 1.4 [0.6–5.5] vs. 0.6 [0.4–1.2] µg/ml; p < 0.0001) and platelet count (median [IQR]: 208 [158–289] vs. 189 [148–245] platelets × 10⁹/L; p = 0.0013). A pDd cut-off of 1.1 µg/ml showed a specificity of 72%, sensitivity of 49%, positive predictive value (PPV) of 4%, and negative predictive value (NPV) of 99% for in-hospital VTE. A cut-off value of 4.7 µg/ml showed a specificity of 95%, sensitivity of 27%, PPV of 9%, and NPV of 98%. Overall mortality was proportional to the pDd value, with the lowest incidence for each pDd category depending on anticoagulation intensity: 26.3% for those with pDd >1.0 µg/ml treated with a prophylactic dose (p < 0.0001), 28.8% for those with pDd >2.0 µg/ml treated with an intermediate dose (p = 0.0001), and 31.3% for those with pDd >3.0 µg/ml and full anticoagulation (p = 0.0183). Conclusions: In hospitalized patients with COVID-19, a pDd value greater than 3.0 µg/ml can be used to screen for VTE and to consider full-dose anticoagulation. © 2021, Society of General Internal Medicine
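    The cut-off metrics quoted above follow the standard confusion-matrix definitions. The snippet below shows the computation on illustrative toy data (not the registry data): sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), PPV = TP/(TP+FP), NPV = TN/(TN+FN).

```python
def cutoff_metrics(values, events, cutoff):
    """Sensitivity, specificity, PPV and NPV of a biomarker cut-off.
    Values above `cutoff` are treated as test-positive; `events` flags
    whether the outcome (e.g. VTE) actually occurred."""
    tp = sum(1 for v, e in zip(values, events) if v > cutoff and e)
    fp = sum(1 for v, e in zip(values, events) if v > cutoff and not e)
    fn = sum(1 for v, e in zip(values, events) if v <= cutoff and e)
    tn = sum(1 for v, e in zip(values, events) if v <= cutoff and not e)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative toy cohort: D-dimer values (µg/ml) and observed VTE flags.
metrics = cutoff_metrics([0.5, 2.0, 5.0, 0.8, 3.0],
                         [False, True, True, False, False],
                         cutoff=1.1)
```

    With a low-prevalence outcome such as in-hospital VTE (2.2%), these formulas explain the pattern in the abstract: even a specific cut-off yields a low PPV while the NPV stays near 99%.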

    Combining Priors, Appearance, and Context for Road Detection

    Full text link
    Detecting the free road surface ahead of a moving vehicle is an important research topic in different areas of computer vision, such as autonomous driving or car collision warning. Current vision-based road detection methods are usually based solely on low-level features. Furthermore, they generally assume structured roads, road homogeneity, and uniform lighting conditions, constraining their applicability in real-world scenarios. In this paper, road priors and contextual information are introduced for road detection. First, we propose an algorithm to estimate road priors online using geographical information, providing relevant initial information about the road location. Then, contextual cues, including horizon lines, vanishing points, lane markings, 3-D scene layout, and road geometry, are used in addition to low-level cues derived from the appearance of roads. Finally, a generative model is used to combine these cues and priors, leading to a road detection method that is, to a large degree, robust to varying imaging conditions, road types, and scenarios.
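    The principle of combining a prior with several cues can be sketched as a naive-Bayes fusion of per-pixel likelihoods. This is a hedged simplification: the paper's generative model is richer, and the `prior` argument here merely stands in for the online road prior estimated from geographical information.

```python
import math

def combine_cues(likelihoods, prior=0.5):
    """Fuse independent cue likelihoods (P(cue|road), P(cue|not road))
    with a road prior into a posterior P(road | cues), assuming
    conditional independence of the cues. Illustrative sketch only."""
    log_odds = math.log(prior / (1 - prior))
    for p_road, p_not_road in likelihoods:
        log_odds += math.log(p_road) - math.log(p_not_road)
    # Logistic function converts log-odds back to a probability.
    return 1 / (1 + math.exp(-log_odds))
```

    A single confident cue (e.g. appearance likelihood 0.9 vs. 0.1) already shifts a neutral prior to a 0.9 posterior; a conflicting contextual cue pulls it back, which is how the combination gains robustness over any single feature.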

    Understanding road scenes using visual cues and GPS information

    Full text link
    Understanding road scenes is important in computer vision, with different applications to improve road safety (e.g., advanced driver assistance systems) and to develop autonomous driving systems (e.g., the Google driverless vehicle). Current vision-based approaches rely on the robust combination of different technologies, including color and texture recognition, object detection, and scene context understanding. However, the performance of these approaches drops off in complex acquisition conditions with reduced visibility (e.g., dusk, dawn, night) or adverse weather (e.g., rain, snow, fog). In these adverse situations, any prior information about the scene helps to constrain the process. Therefore, in this demo we show a novel approach to obtain online prior information about the road ahead of a moving vehicle to improve road scene understanding algorithms. The approach exploits the robustness of digital map databases and adapts algorithms based on visual information acquired in real time. Experimental results in challenging road scenarios show the applicability of the algorithm to improving vision-based road scene understanding algorithms. Furthermore, the algorithm can also be applied to correct imprecise road information in the database
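    The database lookup step can be pictured, in a deliberately simplified form, as querying mapped road points around the current GPS fix. The helper below is hypothetical (the demo's database schema and image-plane projection are not described); it only conveys how a GPS position yields prior road information.

```python
import math

def road_prior_from_gps(position, road_points, radius=15.0):
    """Toy lookup of prior road information around a GPS fix: report
    'likely road' when a mapped road point lies within `radius` metres
    of the query position (local planar coordinates assumed).
    Hypothetical helper, illustrative only."""
    x, y = position
    for rx, ry in road_points:
        if math.hypot(rx - x, ry - y) <= radius:
            return True
    return False
```

    In the full system, such a prior would then feed the vision algorithms; conversely, confident visual detections can flag map points whose stored position is imprecise.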

    Determination of beryllium by electrothermal atomic absorption spectrometry using tungsten surfaces and zirconium modifier

    Full text link
    Electrothermal atomization of beryllium from graphite and tungsten surfaces was compared with and without the use of various chemical modifiers. Tungsten proved to be the best substrate, giving the most sensitive integrated atomic absorption signals for beryllium. Tungsten platform atomization with zirconium as a chemical modifier was used for the determination of beryllium in several NIST SRM certified reference samples, with good agreement between the results found and the certified values. The precision of the measurements (at 10 µg L-1), the limit of detection (3σ), and the characteristic mass of beryllium were 2.50%, 0.009 µg L-1 and 0.42 pg, respectively
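    The 3σ limit of detection quoted above follows the standard criterion LOD = 3·s_blank / m, where s_blank is the standard deviation of replicate blank readings and m the calibration slope. A minimal sketch with illustrative numbers (not the paper's measurements):

```python
def detection_limit(blank_signals, slope):
    """Limit of detection by the 3-sigma criterion: LOD = 3 * s_blank / m.
    `blank_signals` are replicate blank absorbance readings and `slope`
    the calibration sensitivity (signal per concentration unit).
    Illustrative inputs only."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    # Sample standard deviation of the blank readings.
    s = (sum((x - mean) ** 2 for x in blank_signals) / (n - 1)) ** 0.5
    return 3 * s / slope
```

    A more sensitive substrate raises the slope m, which directly lowers the LOD; this is why the tungsten surface's higher sensitivity matters for the figures reported.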
