
    Vision-based Detection, Tracking and Classification of Vehicles using Stable Features with Automatic Camera Calibration

    A method is presented for segmenting and tracking vehicles on highways using a camera that is relatively low to the ground. At such low angles, 3D perspective effects cause significant appearance changes over time, as well as severe occlusions by vehicles in neighboring lanes. Traditional approaches to occlusion reasoning assume that the vehicles initially appear well-separated in the image, but in our sequences it is not uncommon for vehicles to enter the scene partially occluded and remain so throughout. By utilizing a 3D perspective mapping from the scene to the image, along with a plumb line projection, a subset of features is identified whose 3D coordinates can be accurately estimated. These features are then grouped to yield the number and locations of the vehicles, and standard feature tracking is used to maintain the locations of the vehicles over time. Additional features are then assigned to these groups and used to classify vehicles as cars or trucks. The technique uses a single grayscale camera beside the road, processes image frames incrementally, works in real time, and produces vehicle counts with over 90% accuracy on challenging sequences. Adverse weather conditions are handled by augmenting feature tracking with a boosted cascade vehicle detector (BCVD). To overcome the need for manual camera calibration, an algorithm is presented which uses the BCVD to calibrate the camera automatically without relying on any scene-specific image features such as road lane markings.
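    As an illustration of the ground-plane geometry behind the plumb-line projection described above, the sketch below back-projects an image feature onto the road plane Z = 0 given a 3x4 camera projection matrix P. It is a minimal example that assumes a calibrated camera (which the paper obtains automatically via the BCVD); it is not the authors' implementation, and the function name is hypothetical.

```python
import numpy as np

def backproject_to_road_plane(P, u, v):
    """Back-project image point (u, v) onto the road plane Z = 0.

    P is the 3x4 camera projection matrix (assumed known from calibration).
    For points with Z = 0 the projection reduces to the homography formed
    by columns 0, 1 and 3 of P, so the road-plane coordinates follow from
    inverting that homography.
    """
    H = P[:, [0, 1, 3]]                      # image <- road-plane homography
    xw = np.linalg.solve(H, np.array([u, v, 1.0]))
    return xw[:2] / xw[2]                    # (X, Y) on the road surface
```

    Features whose plumb-line estimates remain stable across frames can then be grouped by proximity in these road-plane coordinates to count and localize vehicles.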

    Lane and Road Marking Detection with a High Resolution Automotive Radar for Automated Driving

    The automotive industry is currently undergoing an unprecedented transformation, and driver assistance and automated driving play a decisive role in it. An automated driving system essentially comprises three steps: perception and modeling of the environment, path planning, and vehicle control. With good perception and modeling of the environment, a vehicle can successfully carry out functions such as adaptive cruise control, emergency brake assist, lane change assist, and so on. For driving functions that must recognize the lanes, camera sensors are currently used without exception. However, video cameras can be severely impaired by changing light conditions, insufficient illumination, or obstructed visibility, e.g. due to fog. To compensate for these disadvantages, this doctoral thesis develops a "radar-compatible" road marking detection with which the vehicle can recognize the lanes under all lighting conditions; radars already installed in the vehicle can be used for this purpose. Today's road markings can be captured very well with camera sensors, but because the existing markings have insufficient backscattering properties for radar waves, they are not detected by radar. To make radar-based detection possible, this work investigates the backscattering properties of different reflector types, both through simulations and practical measurements, and proposes a reflector type that is suitable for incorporation into today's road markings or even for direct installation in the roadway. A further focus of this doctoral thesis is the use of artificial intelligence (AI) to detect and classify lanes with radar as well. The recorded radar data are analyzed by means of semantic segmentation, detecting lane courses as well as free space. At the same time, the potential of AI-based environment understanding with imaging radar data is demonstrated.
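    As a rough illustration of the semantic segmentation step described above (not the architecture used in the thesis), the following sketch defines a tiny encoder-decoder that maps a multi-channel bird's-eye-view radar grid to per-cell class logits such as lane marking or free space; the layer sizes, input channels, and class count are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TinyRadarSegNet(nn.Module):
    """Minimal encoder-decoder for per-cell semantic segmentation of a
    bird's-eye-view radar grid (input channels could be, e.g., intensity
    and Doppler). Illustrative only."""

    def __init__(self, in_channels=2, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),   # per-cell class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: one 256x256 radar grid with 2 feature channels.
logits = TinyRadarSegNet()(torch.randn(1, 2, 256, 256))
class_map = logits.argmax(dim=1)             # per-cell predicted class
```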

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". Indeed, the project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on the 25th of October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, topographic and GNSS instruments, etc., or by non-conventional systems and instruments such as UAVs, mobile mapping, etc. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.

    AN ARTIFICIAL INTELLIGENCE APPROACH TO THE PROCESSING OF RADAR RETURN SIGNALS FOR TARGET DETECTION

    Most of the operating vessel traffic management systems experience problems, such as track loss and track swap, which may cause confusion to the traffic regulators and lead to potential hazards in harbour operation. The reason is mainly the limited adaptive capability of the algorithms used in the detection process. The decision on whether a target is present is usually based on the magnitude of the returning echoes. Such a method has a low efficiency in discriminating between target and clutter, especially when the signal-to-noise ratio is low. The performance of radar target detection depends on the features which can be used to discriminate between clutter and targets. To achieve a significant improvement in the detection of weak targets, more obvious discriminating features must be identified and extracted. This research investigates conventional Constant False Alarm Rate (CFAR) algorithms and introduces the approach of applying artificial intelligence methods to the target detection problem. Previous research has been undertaken to improve the detection capability of radar systems in heavy clutter environments, and many new CFAR algorithms based on amplitude information only have been developed. This research studies these algorithms and proposes that it is feasible to design and develop an advanced target detection system that is capable of discriminating targets from clutter by learning the different features extracted from radar returns. The approach adopted for this further work into target detection was the use of neural networks. Results presented show that such a network is able to learn particular features of specific radar return signals, e.g. rain clutter, sea clutter and targets, and to decide if a target is present in a finite window of data. The work includes a study of the characteristics of radar signals and identification of the features that can be used in the process of effective detection. The use of a general-purpose marine radar has allowed the collection of live signals from Plymouth harbour for analysis, training and validation. The approach of using data from the real environment has enabled the developed detection system to be exposed to real clutter conditions that cannot be obtained when using simulated data. The performance of the neural network detection system is evaluated with further recorded data, and the results obtained are compared with the conventional CFAR algorithms. It is shown that the neural system can learn the features of specific radar signals and provide superior performance in detecting targets in clutter. Areas for further research and development are presented; these include the use of a sophisticated recording system, high-speed processors and the potential for target classification.
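    For reference, the conventional cell-averaging CFAR baseline that this research compares against can be sketched as follows. This is a generic textbook formulation over a 1-D range profile, not the thesis implementation or the recorded Plymouth harbour data, and the parameter values are illustrative.

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D profile of received power.

    For each cell under test (CUT), the noise level is estimated from
    num_train training cells on each side, excluding num_guard guard cells
    around the CUT. The threshold factor alpha = N * (Pfa**(-1/N) - 1) is
    the standard CA-CFAR result for exponentially distributed noise power.
    """
    n = len(power)
    half = num_train + num_guard
    num_cells = 2 * num_train
    alpha = num_cells * (pfa ** (-1.0 / num_cells) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(half, n - half):
        lead = power[i - half:i - num_guard]            # training cells before CUT
        lag = power[i + num_guard + 1:i + half + 1]     # training cells after CUT
        noise_estimate = (lead.sum() + lag.sum()) / num_cells
        detections[i] = power[i] > alpha * noise_estimate
    return detections
```

    A neural detector, as proposed in the thesis, instead learns discriminating features from labeled windows of recorded returns rather than relying on amplitude statistics alone.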

    Investigations into the accuracy of radar measurements for static and moving objects

    The automotive manufacturers are rushing the car industry towards autonomously driven vehicles. Every day, advances are made towards this goal by using disruptive technologies or by reinventing others that have been with us for years. The latter is the case of radar. Radar systems are already used today in the most advanced cars to ensure the comfort and safety of drivers, passengers and their surrounding environment, and they will play a key role in the full autonomy of vehicles, becoming more and more common in our cars and on our roads. The fundamentals and behavior of this system are studied in this project through experimentation and the treatment of the data obtained from these experiments, presenting the strengths and weaknesses of the system together with the potential benefits it can bring to the automotive industry. The experimentation consisted of a series of measurements with static and dynamic targets in different situations. The working principle of the system is understood through the analysis of the previously processed and filtered data using software tools such as Matlab and Simulink, and conclusions are drawn about the best targets and the best ways to carry out this kind of experimentation.
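    As a toy illustration of the kind of post-processing described above (a stand-in for the Matlab/Simulink workflow, with made-up parameters rather than the project's actual data), the sketch below smooths a radar range track and estimates radial velocity by finite differences, so that static and moving targets can be told apart.

```python
import numpy as np

def smooth_and_estimate_velocity(ranges_m, dt_s, window=5):
    """Smooth a sequence of radar range measurements with a moving average
    and estimate radial velocity by finite differences. A static target
    should yield a velocity near zero; a moving target a roughly constant
    nonzero value."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(ranges_m, kernel, mode="valid")
    radial_velocity = np.diff(smoothed) / dt_s
    return smoothed, radial_velocity

# Example: a target receding at about 2 m/s, sampled every 50 ms with noise.
t = np.arange(0, 5, 0.05)
ranges = 10.0 + 2.0 * t + 0.05 * np.random.randn(t.size)
_, v = smooth_and_estimate_velocity(ranges, dt_s=0.05)
print(f"estimated radial velocity = {v.mean():.2f} m/s")
```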

    Modeling Backscattering Behavior of Vulnerable Road Users Based on High-Resolution Radar Measurements

    In the further development of autonomous driving (AD) technology, obtaining reliable three-dimensional information about the environment is an indispensable task for enabling safe driving. This challenge can be addressed by using automotive radars together with optical sensors, e.g. cameras or lidars, whether in simulation or in conventional on-road testing. The operational behavior of automotive radars can be evaluated accurately in an over-the-air (OTA) vehicle-in-the-loop (ViL) environment. For a comprehensive experimental verification of automotive radars, however, the environment, and in particular the vulnerable road users (VRUs), must be modeled as realistically as possible. Modern radar sensors are capable of providing high-resolution detection information of complex traffic targets in order to track them. These high-resolution detection data, which contain the signals reflected from the scattering points (SPs) of the VRUs, can be used to generate backscattering models. Furthermore, a more realistic backscattering model of the VRUs, in particular of humans as pedestrians or cyclists, can be achieved by modeling the motion of their extremities in traffic scenarios. The prerequisites for creating such a detailed model in different situations are the radar cross section (RCS) and the Doppler signatures produced by the human extremities in motion. These data can be obtained from radar data collected in high-resolution RCS measurements in the radial and angular domains, which is made possible by analyzing the range-Doppler characteristics of the human extremities in different movements. The developed realistic radar models can be used in wave propagation in the radar channel, in target detection and classification, and in data training algorithms for the validation and verification of automotive radar functions. This evaluation can then be used to assess the safety of advanced driver assistance systems (ADAS). Therefore, this work proposes a high-resolution RCS measurement method to determine the relevant SPs of different VRUs with high radial and angular resolution. A group of different VRUs is measured in static situations, and the signal processing steps necessary to extract the relevant SPs with the corresponding RCS values are described in detail. During the analysis of the measured data, an algorithm is developed to estimate the physical sizes of the measured test subjects from the extracted backscattering model and to classify them by their height and build. In addition, a human dummy of a size comparable to the measured subjects is measured. The extracted backscattering behavior of an exemplary VRU group is evaluated for its different types in order to demonstrate the agreement between virtual validations and reality and to ensure the accuracy of the models. In a further step, this high-resolution RCS measurement technique is combined with motion capture technology to record the reflectivity of the SPs of the human body regions in different movements and to accurately estimate the radar signatures of the human extremities. Dedicated signal processing steps are employed to extract the radar signatures from the measurement results of the moving human. These post-processed data enable the technique to introduce the time-varying SPs on the extremities of the human body with the corresponding RCS values and Doppler signatures. The extracted backscattering model of the VRUs contains a large number of SPs. Therefore, a clustering algorithm is developed to minimize the computational complexity in radar channel simulations by introducing a few virtual scattering centers (SCs). Each developed virtual SC has its own specific scattering property.
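    As an illustration of the final clustering step (not the algorithm developed in the thesis), the sketch below reduces a cloud of measured scattering points to a handful of virtual scattering centers using k-means, placing each center at the RCS-weighted centroid of its cluster; the function name and the choice of k-means are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_to_scattering_centers(points_xyz, rcs_linear, num_centers=8):
    """Cluster measured scattering points (SPs) into a few virtual
    scattering centers (SCs). Each SC sits at the RCS-weighted centroid
    of its cluster and carries the summed linear RCS of its members,
    which preserves the total RCS of the original point cloud."""
    labels = KMeans(n_clusters=num_centers, n_init=10).fit_predict(points_xyz)
    centers, center_rcs = [], []
    for k in range(num_centers):
        mask = labels == k
        weights = rcs_linear[mask]
        centers.append(np.average(points_xyz[mask], axis=0, weights=weights))
        center_rcs.append(weights.sum())
    return np.array(centers), np.array(center_rcs)
```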

    Rearward visibility issues related to agricultural machinery: Contributing factors, potential solutions

    As the size, complexity, and speed of tractors and other self-propelled agricultural machinery have increased, so have the visibility-related issues, placing significant importance on the visual skills, alertness, and reactive abilities of the operator. Rearward movement of large agricultural equipment has been identified in the literature as causing not only damage to both machines and stationary objects, but also injuries (even fatalities) to bystanders not visible to the operator. Fortunately, monitoring assistance, while not a new concept, has advanced significantly, offering operators today more options for increasing awareness of the area surrounding their machines. In this research, an attempt is made to (1) identify and describe the key contributors to agricultural machinery visibility issues (both operator- and machine-related), and (2) enumerate and evaluate the potential solutions and technologies that address these issues via modifications of ISO, SAE, and DOT standardized visibility testing methods. Enhanced operator safety and efficiency should result from a better understanding of the visibility problems (especially with regard to rearward movement) inherent in large tractors and self-propelled agricultural machinery. Nine machines of different types, varying widely in size, horsepower rating, and operator station configuration, were used in this study to provide a broad representation of what is found on many U.S. farms and ranches. The two main rearward monitoring technologies evaluated were the machines' factory-equipped mirrors and cameras that the researchers affixed to these machines. A 58.06 m2 (625 ft2) testing grid was centered on the rear-most location of the tested machinery, with height indicators centered in each of twenty-five grid cells. In general, the findings were consistent across all the machines tested: rather obstructed rearward visibility using mirrors alone versus considerably less obstructed rearward visibility with the addition of cameras. For example, with exterior extended-arm and interior mirrors only, an MFWD tractor with an 1,100-bushel grain cart in tow showed, from the operator's perspective, a 68% obstructed view of the grid's kneeling-worker-height markers and a 100% obstructed view along the midline of rearward travel; but when equipped with a rearview camera system, the obstructed area decreased to only 4%. The visibility models created identified (1) a moderate positive Pearson r correlation, indicating that many of the obstructed locations in the rearward area affected both mirrors and cameras similarly, and (2) a strong positive Pearson r correlation for kneeling-worker-height visibility, indicating that mirror and camera systems share areas of high visibility (along the midline of travel and outward with greater distance from the rear of the machine, without implements in tow). The key recommendation from this research is the establishment of engineering standards aimed at (1) enhancing operator ability to identify those locations around agricultural machinery that are obstructed from view, (2) reducing the risk of run-overs through improved monitoring capabilities of machine surroundings and components, and (3) alerting operators and co-workers to these hazardous locations.
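    As a small illustration of the correlation analysis mentioned above (with entirely hypothetical visibility scores, not the study's measurements), the sketch below computes a Pearson r between per-cell mirror and camera visibility over the 5 x 5 rear grid.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical fraction-visible scores for each of the 25 grid cells.
mirror_visibility = rng.random(25)
camera_visibility = 0.6 * mirror_visibility + 0.4 * rng.random(25)

r, p_value = pearsonr(mirror_visibility, camera_visibility)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```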

    Visual anemometry: physics-informed inference of wind for renewable energy, urban sustainability, and environmental science

    Accurate measurements of atmospheric flows at meter-scale resolution are essential for a broad range of sustainability applications, including optimal design of wind and solar farms, safe and efficient urban air mobility, monitoring of environmental phenomena such as wildfires and air pollution dispersal, and data assimilation into weather and climate models. Measurement of the relevant microscale wind flows is inherently challenged by the optical transparency of the wind. This review explores new ways in which physics can be leveraged to "see" environmental flows non-intrusively, that is, without the need to place measurement instruments directly in the flows of interest. Specifically, while the wind itself is transparent, its effect can be visually observed in the motion of objects embedded in the environment and subjected to wind; swaying trees and flapping flags are commonly encountered examples. We describe emerging efforts to accomplish visual anemometry, the task of quantitatively inferring local wind conditions based on the physics of observed flow-structure interactions. Approaches based on first-principles physics as well as data-driven, machine learning methods will be described, and remaining obstacles to fully generalizable visual anemometry will be discussed. Comment: In review.
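    As a toy first-principles example of the kind of physics-informed inference the review surveys (not a method from the review itself; all parameter values are assumed), the sketch below treats a tree or post as a linear-elastic cantilever and recovers wind speed from its visually observed mean deflection by balancing the drag load against the elastic restoring force.

```python
import numpy as np

def wind_speed_from_deflection(deflection_m, stiffness_n_per_m,
                               drag_area_m2, drag_coeff=1.0, rho=1.225):
    """Solve 0.5 * rho * Cd * A * U**2 = k * delta for the wind speed U,
    i.e. assume the observed tip deflection delta of a flexible structure
    balances the mean aerodynamic drag acting on it."""
    return np.sqrt(2.0 * stiffness_n_per_m * deflection_m /
                   (rho * drag_coeff * drag_area_m2))

# Example: a 5 cm deflection of a structure with k = 500 N/m, frontal area 2 m^2.
print(f"U = {wind_speed_from_deflection(0.05, 500.0, 2.0):.1f} m/s")
```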