
    The definition, production and validation of the direct vision standard (DVS) for HGVs. Final Report for TfL review

    This report presents research performed by Loughborough Design School (LDS) on behalf of Transport for London. The research has been conducted against a background of over-representation of heavy goods vehicles (HGVs) in road traffic accidents with vulnerable road users (VRUs), where ‘failed to look properly’ and ‘vehicle blind spot’ are often reported as the main causal factors in the accident data. Previous work by LDS on drivers’ vision from HGVs has identified the need to reduce reliance on indirect vision via mirrors through the specification of a direct vision standard (DVS) for HGVs. Recent work commissioned by TfL and performed by the Transport Research Laboratory (TRL) resulted in a draft DVS. This draft has been evaluated and reworked by the LDS team to produce a viable and robust method to quantify the direct vision performance of an HGV, together with a means to rate that performance against a star-rating standard. Throughout this process, significant stakeholder consultation has supported the development of the DVS. A total of 27 vehicles representing the majority of the current Euro 6 N3 HGV fleet have been modelled in CAD. Where data were available, these have been mounted at the highest, lowest and most commonly sold heights to produce a sample of 54 test vehicles. A methodology has been developed that utilises volumetric projection of the driver’s field of view through the windows of the cab. This projection is then intersected with an assessment volume. The result is a volumetric representation of the space around an HGV cab that the driver can see to the front and to the driver and passenger sides. The volume of this space can be calculated to provide a rating of direct vision performance. An iterative design process explored different specifications of the assessment zone around the cab, factoring in the collision data with VRUs and the use of weightings to prioritise what needs to be seen.
Two weighting schemes were evaluated: one prioritising the volumes vertically, recognising the importance of being able to see closer to the ground, and a second prioritising the volumes directionally, to address the greater prevalence of accidents to the front and passenger side compared with the driver’s side. The final specification of the volumetric assessment consists of a single, unweighted zone around the cab, informed by the current coverage of mirrors specified in UNECE Regulation 46. This choice fosters direct vision that aims to remove the reliance on mirrors, and thus focuses on providing direct vision of the areas they currently cover. The vehicle sample was then evaluated against this assessment, providing a volumetric score for each vehicle. These volumetric scores were then contextualised by correlating them with a VRU simulation: thirteen 5th-percentile Italian female VRUs were placed around the vehicle and moved laterally to the point at which their head and shoulders could be seen. This provided context for the volumetric results, such that a particular volume could be equated to an average distance at which the small adult could be seen. Furthermore, the VRU simulations provided a means to translate the volumetric performance into star ratings. Four star-rating specifications were produced, following an absolute approach (based on risk/safety) and a relative approach (based on the performance of the current fleet). For both, two iterations were proposed: 1. the VRU simulation distances were used to establish a threshold value; 2. the median volumetric result was used to establish a threshold value. The final option taken forward used the VRU simulation distances for a 5th-percentile Italian female to define the 1-star boundary.
Vehicles able to provide direct vision of the VRUs at an average of <2 m to the front, <4.5 m to the passenger side and <0.6 m to the driver’s side achieved a rating of 1 star or above; all others were rated zero stars. Star ratings from 1 to 5 were subdivided equally. The final result consists of three main outcomes: 1. a robust, repeatable and validated method for the volumetric analysis of direct vision performance using a CAD-based process; 2. a process to map a volumetric score for a given vehicle onto the 5-star rating scale to produce a DVS rating for any vehicle; 3. star ratings for the majority of the Euro 6 N3/N3G HGV fleet, showing that of the 41 configurations analysed, two vehicles are rated 5 star, no vehicles are rated 4 star, five achieve 3 star, three achieve 2 star, six achieve 1 star, and the remaining 25 were rated zero star.
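The star-rating construction described above (a 1-star threshold derived from the VRU simulation, with ratings from 1 to 5 subdivided equally) can be sketched as follows; the function name, the threshold and the top-of-scale score in the example are illustrative assumptions, not figures taken from the report.

```python
def star_rating(volume_score, one_star_threshold, five_star_score):
    """Map a volumetric direct-vision score to a 0-5 star rating.

    Vehicles below the 1-star threshold rate zero stars; the span from
    the threshold to the 5-star score is subdivided equally, as the
    report describes. All numeric values here are illustrative.
    """
    if volume_score < one_star_threshold:
        return 0
    # Equal subdivision of the span covering ratings 1 to 5.
    band = (five_star_score - one_star_threshold) / 4
    stars = 1 + int((volume_score - one_star_threshold) / band)
    return min(stars, 5)

# Illustrative scores only (not from the report):
print(star_rating(5.0, 10.0, 30.0))   # below threshold -> 0
print(star_rating(10.0, 10.0, 30.0))  # at threshold -> 1
print(star_rating(30.0, 10.0, 30.0))  # top of scale -> 5
```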

    Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning

    Along the signal-processing chain from radar detections to vehicle control, this work discusses a semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realised by combining the two. The radar segmentation of the (static) environment is achieved by a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph-SLAM SERALOC. Based on the semantic radar SLAM map, an exemplary autonomous parking functionality is implemented in a real test vehicle. Along a recorded reference path, the function parks solely on the basis of radar perception, with previously unattained positioning accuracy. In a first step, a dataset of 8.2 · 10^6 point-wise semantically labelled radar point clouds is generated over a distance of 2507.35 m. No comparable datasets of this annotation level and radar specification are publicly available. Supervised training of the semantic segmentation RadarNet achieves 28.97% mIoU on six classes. In addition, an automated radar-labelling framework, SeRaLF, is presented, which supports radar labelling multimodally by means of reference cameras and LiDAR. For coherent mapping, a radar-signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph-SLAM front end specifically adapted to radar, with radar-odometry edges between submaps and semantically separate NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thereby increased, and the first semantic radar graph-SLAM for arbitrary static environments is realised.
Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph-SLAM is evaluated by means of a purely radar-based autonomous parking functionality. Averaged over 42 autonomous parking manoeuvres (∅3.73 km/h) with an average manoeuvre length of ∅172.75 m, a median absolute pose error of 0.235 m and an end-pose error of 0.2443 m are achieved, outperforming comparable radar localisation results by ≈ 50%. The map accuracy for changed, re-mapped places over a mapping distance of ∅165 m yields ≈ 56% map consistency at a deviation of ∅0.163 m. For the autonomous parking itself, a given trajectory planner and control approach was used.
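The localisation metrics quoted above (median absolute pose error and end-pose error) can be computed as in this sketch; the trajectories shown are synthetic placeholders, not data from the thesis.

```python
import math

def pose_errors(estimated, ground_truth):
    """Per-pose Euclidean position error between an estimated
    trajectory and ground truth (lists of (x, y) tuples)."""
    return [math.hypot(ex - gx, ey - gy)
            for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]

def median(values):
    """Median of a list of numbers."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

# Synthetic example trajectories (placeholders only):
est = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2), (3.1, 0.0)]
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
errs = pose_errors(est, gt)
ate_median = median(errs)   # median absolute pose error
end_pose_error = errs[-1]   # error at the final pose
```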

    Implementation of Unmanned aerial vehicles (UAVs) for assessment of transportation infrastructure - Phase II

    Technological advances in unmanned aerial vehicle (UAV) technologies continue to make these tools easier to use, more economical, and applicable to transportation-related operations, maintenance, and asset management, while also increasing safety and decreasing cost. This Phase 2 project continued to test and evaluate five main UAV platforms with a combination of optical, thermal, and lidar sensors to determine how to implement them in MDOT workflows. Field demonstrations were completed at bridges, a construction site, road corridors, and along highways, with the data processed and analyzed using customized algorithms and tools. Additionally, a cost-benefit analysis was conducted comparing manual and UAV-based inspection methods. The project team also gave a series of technical demonstrations and conference presentations, enabling outreach to interested audiences, who gained an understanding of the potential implementation of this technology and of the advanced research that MDOT is moving toward implementation. The outreach efforts and research activities performed under this contract demonstrated that implementing UAV technologies in MDOT workflows can provide many benefits to MDOT and the motoring public, such as improved cost-effectiveness, operational management, and timely maintenance of Michigan’s transportation infrastructure.

    Real-time performance-focused localisation techniques for autonomous vehicles: a review


    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been notable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis we conceptualize cities as a collection of individual buildings. Hence, we focus on the processing of one structure at a time, rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and choosing the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families: - Time-of-flight (terrestrial and aerial LiDAR). - Photogrammetry (street-level, satellite, and aerial imagery). - Human-edited vector data (cadastre and other map sources). Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Aircraft- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery: a dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions.
Another advantage of this approach is that it captures high-quality color data, although the resulting geometric information is usually of lower quality. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions: - Effective and feature-preserving simplification of massive point clouds. - Normal estimation algorithms explicitly designed for LiDAR data. - Low-stretch panoramic representation for point clouds. - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction. - Color improvement through heuristic techniques and the registration of LiDAR and imagery data. - Efficient and faithful visualization of massive point clouds using image-based techniques.
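Normal estimation for point clouds, one of the contributions listed above, is conventionally done by fitting a plane to each point's nearest neighbours via PCA; the sketch below shows that standard baseline, not the thesis's LiDAR-specific algorithm.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate per-point normals by fitting a plane to each point's
    k nearest neighbours: the normal is the eigenvector associated
    with the smallest eigenvalue of the neighbourhood covariance.
    Brute-force neighbour search; fine for small clouds."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        # eigh returns eigenvalues in ascending order, so column 0
        # of the eigenvector matrix is the plane normal.
        _, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]
    return normals

# A flat patch in the z = 0 plane should yield normals along +/- z.
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
cloud = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(25)])
n = estimate_normals(cloud)
```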

    Vehicle localization with enhanced robustness for urban automated driving


    Real-time Aerial Vehicle Detection and Tracking using a Multi-modal Optical Sensor

    Vehicle tracking from an aerial platform poses a number of unique challenges, including the small number of pixels representing a vehicle, large camera motion, and parallax error. For these reasons, it is accepted to be a more challenging task than traditional object tracking and is generally tackled through a number of different sensor modalities. Recently, the Wide Area Motion Imagery (WAMI) sensor platform has received considerable attention, as it can provide higher-resolution single-band imagery in addition to its large area coverage. However, richer sensory information, or further research on the application of WAMI to tracking, is still required to track vehicles persistently. With advancements in sensor technology, hyperspectral data acquisition at video frame rates has become possible; such data can be crucial in identifying objects even in low-resolution scenes. For this reason, this thesis considers a multi-modal optical sensor concept to improve tracking in adverse scenes. The Rochester Institute of Technology Multi-object Spectrometer is capable of collecting limited hyperspectral data at desired locations in addition to full-frame single-band imagery. By acquiring hyperspectral data quickly, tracking can be achieved at reasonable frame rates, which turns out to be crucial. On the other hand, the relatively high cost of hyperspectral data acquisition and transmission must be taken into account to design a realistic tracking system. By incorporating extended data for the pixels of interest, we can address or avoid the unique challenges posed by aerial tracking. In this direction, we integrate limited hyperspectral data to improve measurement-to-track association. A hyperspectral-data-based target detection method is also presented to avoid the parallax effect and reduce the clutter density. Finally, the proposed system is evaluated on realistic, synthetic scenarios generated by the Digital Image and Remote Sensing software.
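A common way to exploit limited hyperspectral samples for measurement-to-track association, as described above, is to compare spectra using the spectral angle; the sketch below illustrates that general idea with hypothetical data and is not the thesis's exact method.

```python
import math

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; smaller means
    more similar. A standard similarity measure for hyperspectral data."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def associate(track_spectrum, measurements):
    """Pick the index of the measurement whose spectrum best matches
    the track's stored spectrum (hypothetical association step)."""
    return min(range(len(measurements)),
               key=lambda i: spectral_angle(track_spectrum, measurements[i]))

# Hypothetical 3-band spectra for illustration:
track = [0.8, 0.6, 0.2]
candidates = [[0.1, 0.9, 0.4], [0.82, 0.58, 0.21], [0.3, 0.3, 0.9]]
best = associate(track, candidates)   # index of the closest spectrum
```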