11,702 research outputs found

    PlaNet - Photo Geolocation with Convolutional Neural Networks

    Full text link
    Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
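The classification framing above (geolocation as picking one of thousands of geographic cells) can be illustrated with a toy partition. PlaNet itself uses adaptive multi-scale S2 cells; the fixed latitude/longitude grid and the `cell_id` helper below are simplified stand-ins for illustration only, not the paper's actual partitioning.

```python
def cell_id(lat, lon, rows=18, cols=36):
    """Map a coordinate to the index of a coarse geographic cell.

    A classifier would then output a probability over these cell ids.
    Simplified stand-in: a uniform rows x cols lat/lon grid, unlike
    PlaNet's adaptive multi-scale S2 cells.
    """
    r = min(int((lat + 90.0) / 180.0 * rows), rows - 1)
    c = min(int((lon + 180.0) / 360.0 * cols), cols - 1)
    return r * cols + c
```

With this framing, "geolocating" a photo reduces to predicting a cell id and reporting that cell's centre as the estimated location.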

    Cost analysis of advanced turbine blade manufacturing processes

    Get PDF
    A rigorous analysis was conducted to estimate relative manufacturing costs for high-technology gas turbine blades prepared by three candidate materials process systems. The manufacturing costs for the same turbine blade configuration in a directionally solidified eutectic alloy, an oxide dispersion strengthened superalloy, and a fiber reinforced superalloy were compared on a relative basis to the costs of the same blade currently in production utilizing the directional solidification process. An analytical process cost model was developed to perform the cost comparisons quantitatively. The impact of individual process yield factors on costs was also assessed, as well as the effects of process parameters, raw materials, labor rates and consumable items.
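The role of process yield factors in such a cost model can be sketched with a toy calculation: when each process step has a yield below one, the material entering a step must cover all downstream scrap, so upstream costs are inflated by every later yield. The helper below is an illustrative simplification under that assumption, not the report's actual cost model.

```python
def cost_per_good_part(step_costs, step_yields):
    """Cost to obtain one acceptable part from a sequence of process
    steps, each with a per-unit cost and a fractional yield.

    Walking the process backwards, `survive` is the probability that
    a part entering the current step ends up good, so each step's
    cost is divided by it to account for scrap.
    """
    cost = 0.0
    survive = 1.0
    for c, y in zip(reversed(step_costs), reversed(step_yields)):
        survive *= y
        cost += c / survive
    return cost
```

For example, two steps at 10 cost units each with a 50% yield at the first step require two parts started per good part, so the first step's cost doubles while the second step's does not.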

    Rain Removal in Traffic Surveillance: Does it Matter?

    Get PDF
    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that the rain-removed image frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated only on their ability to remove synthetic rain from a small subset of images; their behavior on real-world videos, when integrated with a typical computer vision pipeline, is currently unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset that consists of 22 traffic surveillance sequences under a broad variety of weather conditions that all include either rain or snowfall. We propose a new evaluation protocol that evaluates the rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but decreases feature tracking performance and shows mixed results with recent instance segmentation methods. However, the best video-based rain removal algorithm improves the feature tracking accuracy by 7.72%. Comment: Published in IEEE Transactions on Intelligent Transportation Systems.
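The segmentation part of such an evaluation protocol boils down to comparing a quality metric on rainy versus de-rained frames. A minimal intersection-over-union helper for binary segmentation masks, represented here as sets of pixel indices purely for brevity, might look like:

```python
def iou(pred, gt):
    """Intersection-over-union of two binary masks given as sets of
    pixel indices; 1.0 means perfect overlap, 0.0 none.

    Two empty masks are scored as a perfect match by convention.
    """
    union = len(pred | gt)
    return len(pred & gt) / union if union else 1.0
```

Scoring each frame before and after de-raining with a metric like this, and aggregating per sequence, is the kind of comparison the protocol above performs at scale.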

    Optimisation for Large-scale Maintenance, Scheduling and Vehicle Routing Problems

    Get PDF
    Real-world combinatorial problems arise in many industries, where solving them can minimise operational cost, maximise profit, or both. Along with continuous growth in computing power, many asset management decision-making processes that were originally solved by hand now tend to be based on big data analysis. Larger-scale problems can be solved and more detailed operation instructions can be delivered. In this thesis, we investigate models and algorithms to solve large-scale Geographically Distributed asset Maintenance Problems (GDMP). Our study of the problem was motivated by our business partner, Gaist solutions Ltd., to optimise the scheduling of maintenance actions for a drainage system in an urban area. The models and solution methods proposed in the thesis can be applied to many similar issues arising in other industries. The thesis contains three parts. We first built a risk-driven model considering vehicle routing problems and asset degradation information. A hyperheuristic method embedded with customised low-level heuristics is employed to solve our real-world drainage maintenance problem in Blackpool. Computational results show that our hyperheuristic approach can, within reasonable CPU time, produce much higher quality solutions than the scheduling strategy currently implemented by Blackpool council. We then attempt to develop more efficient solution approaches to tackle our GDMP. We study various hyperheuristics and propose efficient local search strategies in part II. We present computational results on standard periodic vehicle routing problem instances and our GDMP instances. Based on extensive experimental evidence, we summarise the principles of designing heuristic-based solution approaches to solve combinatorial problems. Last but not least, we investigate a related decision-making problem from highway maintenance, which is again of interest to Gaist solutions Ltd. We aim to make a strategic decision on choosing a cost-effective method of delivering road inspections at a national scale. We build the analysis on the Chinese Postman Problem and theoretically prove the feasibility of this modelling approach in real-world road inspection situations. We also propose a novel graph reduction process to allow effective computation over very large data sets.
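A core fact behind the Chinese Postman formulation mentioned above is Euler's condition: a connected road graph admits a closed route covering every edge exactly once if and only if every junction has even degree; otherwise some road segments must be re-traversed, and the extra traversals pair up the odd-degree junctions. A minimal parity check (illustrative only; this is not the thesis's graph reduction):

```python
from collections import defaultdict

def odd_degree_nodes(edges):
    """Return the junctions with odd degree in an undirected road
    graph given as (u, v) edge pairs.

    If the graph is connected and this list is empty, an inspection
    vehicle can cover every edge exactly once in a closed tour;
    otherwise extra deadhead travel must pair these nodes up.
    """
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(n for n, d in deg.items() if d % 2 == 1)
```

A triangle has no odd-degree nodes and needs no repeated edges, whereas a simple path must be re-traversed between its two endpoints.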

    The significance of the Cape trade route to economic activity in the Cape colony: a medium-term business cycle analysis

    Get PDF
    Trade is a critical component of economic growth in newly settled societies. This paper tests the impact of ship traffic on the Cape economy using a time series smoothing technique borrowed from the business cycle literature and employing an econometric procedure to test for long-run relationships. The results suggest a strong systematic co-movement between wheat production and ship traffic, with less evidence for wine production and stock herding activities. While ship traffic created demand for wheat exports, the size of the co-movement provides evidence that ship traffic also stimulated local demand through secondary and tertiary sector activities, supporting the hypothesis that ship traffic acted as a catalyst for growth in the Cape economy. Keywords: colonial trade, Cape of Good Hope, Dutch East India, band-pass filter, medium-term fluctuations, business cycle, South Africa, ships, harvest cycles, colonial economy
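Band-pass filtering in the business-cycle literature isolates fluctuations whose period lies between two cut-offs, discarding both short-run noise and the long-run trend. The paper's exact filter is not reproduced here; the difference-of-moving-averages below is only a crude illustration of the idea (smooth with a short window to suppress noise, subtract a long-window trend), not the band-pass filter the authors use.

```python
def band_pass(series, short_w, long_w):
    """Crude band-pass: short-window moving average minus a
    long-window moving average, aligned on their common tail.

    Illustrative stand-in for the filters used in the
    business-cycle literature, not the paper's method.
    """
    def ma(x, w):
        return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]
    short, long_ = ma(series, short_w), ma(series, long_w)
    k = len(short) - len(long_)  # extra leading points of the short MA
    return [s - l for s, l in zip(short[k:], long_)]
```

Applied to a constant series, the filter correctly returns zeros, since there are no fluctuations at any frequency to extract.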

    Applications of optical processing for improving ERTS data, volume 1

    Get PDF
    Application of optically diagnosed noise information toward the development of filtering subroutines for improving the quality of digital sensing data tapes.

    Road Condition Mapping by Integration of Laser Scanning, RGB Imaging and Spectrometry

    Get PDF
    Roads are important infrastructure and a primary means of transportation. The control and maintenance of roads are substantial tasks, as the pavement surface deforms and deteriorates under heavy load and the influence of weather. Acquiring detailed information about the pavement condition is a prerequisite for proper planning of road pavement maintenance and rehabilitation. Many companies detect and localize road pavement distresses manually, either by on-site inspection or by digitizing laser data and imagery captured by mobile mapping. The automation of road condition mapping using laser data and colour images remains a challenge. Beyond that, the mapping of material properties of the road pavement surface with spectrometers has not yet been investigated. This study aims at automatic mapping of road surface condition, including distress and material properties, by integrating laser scanning, RGB imaging and spectrometry. All recorded data are geo-referenced by means of GNSS/INS. Methods are developed for pavement distress detection that cope with a variety of different weather and asphalt conditions. A further objective is to analyse and map the material properties of the pavement surface using spectrometry data. No standard test data sets are available for benchmarking developments in road condition mapping; therefore, all data have been recorded with a mobile mapping van set up for the purpose of this research. The concept for detecting and localizing the four main pavement distresses, i.e. ruts, potholes, cracks and patches, is the following: ruts and potholes are detected using laser scanning data, cracks and patches using RGB images. For each of these pavement distresses, two or more methods are developed, implemented, compared to each other and evaluated to identify the most successful one. With respect to the material characteristics, spectrometer data of road sections are classified to indicate pavement quality. As a spectrometer records a nearly continuous reflectivity curve across the VIS, NIR and SWIR wavelengths, indications of aging can be derived. After detection and localization of the pavement distresses and pavement quality classes, the road condition map is generated by overlaying all distresses and quality classes. As a preparatory step for rut and pothole detection, the road surface is extracted from mobile laser scanning data based on a height-jump criterion. For the investigation of rut detection, all scanlines are processed. With an approach based on iterative 1D polynomial fitting, ruts are successfully detected. For streets with a width of 6 m to 10 m, a 6th-order polynomial is found to be most suitable. By 1D cross-correlation, the centre of the rut is localized. An alternative method using local curvature shows a high sensitivity to the shape and width of a rut and is less successful. For pothole detection, the approach based on polynomial fitting is generalized to two dimensions. As an alternative, a procedure using geodesic morphological reconstruction is investigated. Bivariate polynomial fitting encounters problems with overshoot at the boundary of the regions; the detection using geodesic morphology is very successful. For the detection of pavement cracks, three methods using rotation-invariant kernels are investigated: Line Filter, High-pass Filter and Modified Local Binary Pattern kernels are implemented. A conceptual aspect of the procedure is to achieve a high degree of completeness. The most successful variant is the Line Filter, which achieves the highest degree of completeness, 81.2%. Two texture measures, the gradient magnitude and the local standard deviation, are employed to detect pavement patches. As patches may differ in homogeneity and may not always have a dark border against the intact pavement surface, the method using the local standard deviation is more suitable for detecting patches.
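The rut-detection idea described above, fitting a smooth reference to a laser scanline and flagging points lying well below it, can be sketched in miniature. The thesis iteratively fits a 6th-order polynomial; the straight-line least-squares fit below is a deliberately simplified stand-in used only to show the residual-thresholding principle.

```python
def rut_candidates(scanline, threshold):
    """Indices of scanline heights lying more than `threshold` below
    a least-squares straight-line fit to the whole profile.

    Simplified stand-in: the thesis fits a 6th-order polynomial
    iteratively; a degree-1 fit suffices to illustrate the idea.
    """
    n = len(scanline)
    mx = sum(range(n)) / n
    my = sum(scanline) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (y - my) for x, y in enumerate(scanline))
    b = sxy / sxx            # slope of the fitted reference line
    a = my - b * mx          # intercept
    return [i for i, y in enumerate(scanline) if (a + b * i) - y > threshold]
```

On a flat profile with a single depression, only the depressed point falls far enough below the fitted line to be flagged as a rut candidate.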
Linear discriminant analysis is utilized for asphalt pavement quality analysis and classification. Road pavement sections of ca. 4 m length are classified into two classes, "Good" and "Bad", with an overall accuracy of 77.6%. The experimental investigations show that the developed methods for automatic distress detection are very successful. By 1D polynomial fitting on laser scanlines, ruts are detected; in addition to ruts, pavement depressions such as shoving can also be revealed. The extraction of potholes is less demanding. As potholes appear relatively rarely in the road networks of a city, the road segments affected by potholes are selected interactively. While crack detection by the Line Filter works very well, patch detection is more challenging, as patches sometimes look very similar to the intact surface. The spectral classification of pavement sections contributes to road condition mapping as it gives hints on the aging of the road pavement.
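The local-standard-deviation texture measure used above for patch detection rests on a simple computation: slide a window over the image and record the standard deviation of intensities inside it, since patch borders and texture changes raise it while homogeneous intact pavement keeps it low. A one-dimensional sketch of that measure (the thesis operates on 2-D image windows):

```python
import math

def local_std(values, win):
    """Standard deviation of intensities in each sliding window of
    length `win`; high values suggest texture changes such as patch
    borders. 1-D illustrative stand-in for the 2-D image measure.
    """
    out = []
    for i in range(len(values) - win + 1):
        w = values[i:i + win]
        m = sum(w) / win
        out.append(math.sqrt(sum((v - m) ** 2 for v in w) / win))
    return out
```

A perfectly uniform signal yields zeros everywhere, while any intensity jump inside a window produces a positive response there.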

    Fully automated urban traffic system

    Get PDF
    The replacement of the driver with an automatic system that could guide and route a vehicle while matching a human's ability to respond to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed, but such a system was technically feasible.

    Travel Time in Macroscopic Traffic Models for Origin-Destination Estimation

    Get PDF
    Transportation macroscopic modeling is a tool for analyzing and prioritizing future transportation improvements. Transportation modeling techniques continue to evolve with improvements in computer processing speeds and traffic data collection. These improvements allow transportation models to be calibrated to real-life traffic conditions. The transportation models rely on an origin-destination (OD) matrix, which describes the quantity and distribution of trips in a transportation network. The trips defined by the OD matrix are assigned to the network through the process of traffic assignment. Traffic assignment relies on the travel time (cost) of roadways to replicate the route choice of trips between OD pairs. Travel time is calculated both along the roadway and from delay at intersections. Actuated traffic signals, one form of signalized intersection, have not been explicitly modeled in macroscopic transportation models. One objective of this thesis is to implement actuated signals in the macroscopic modeling framework, in order to improve traffic assignment by more accurately representing delay at intersections. An actuated traffic signal module was implemented in QRS II, a macroscopic transportation model, using a framework from the 2010 Highway Capacity Manual. Results from actuated intersections analyzed with QRS II indicate that the green time for each phase was reasonably distributed and sensitive to lane group volume and input parameters. Private-vendor travel time data from companies such as Navteq and INRIX provide extensive travel time coverage on freeways and arterials. This extensive coverage has the potential to be useful in estimating OD matrices. The second objective of this thesis is to use travel time in the OD estimation framework. The presented OD estimation method uses travel time to determine directional split factors for bi-directional traffic counts. These directional split factors update target volumes during the OD estimation procedure. The OD estimation technique, using travel time from floating car runs, was tested on a mid-sized network in Milwaukee, WI. The analysis indicates the applicability of using travel time in OD estimation.
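The link between volume and travel time that traffic assignment depends on is classically expressed with the Bureau of Public Roads (BPR) volume-delay function: travel time grows from its free-flow value as the volume-to-capacity ratio rises. The thesis works with QRS II and HCM 2010 procedures rather than this formula; it is shown here only to illustrate how macroscopic models turn link volume into travel time.

```python
def bpr_travel_time(free_flow_time, volume, capacity,
                    alpha=0.15, beta=4.0):
    """Classic BPR volume-delay function:
    t = t_ff * (1 + alpha * (v/c)^beta).

    With the standard alpha=0.15, beta=4, travel time is 15% above
    free-flow when the link is exactly at capacity, and rises
    steeply beyond that.
    """
    return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)
```

During assignment, such a function is re-evaluated as volumes change, so congested links become costlier and trips shift to alternative routes.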