10 research outputs found

    Optimized Local Path Planner Implementation for GPU-Accelerated Embedded Systems

    Autonomous vehicles are latency-sensitive systems. The planning phase is a critical component of such systems, during which the in-vehicle compute platform is responsible for determining the future maneuvers that the vehicle will follow. In this paper, we present a GPU-accelerated optimized implementation of the Frenet Path Planner, a widely known path planning algorithm. Unlike the current state of the art, our implementation accelerates the entire algorithm, including the path generation and collision avoidance phases. We measure the execution time of our implementation and demonstrate dramatic speedups compared to the CPU baseline implementation. Additionally, we evaluate the impact of different precision types (double, float, half) on trajectory errors to investigate the tradeoff between completion latencies and computation precision.
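As a rough illustration of the path-generation step that Frenet-style planners accelerate, the sketch below builds candidate lateral trajectories as quintic polynomials over a set of target offsets and scores them by jerk. The function names, cost weights, and sampling parameters are illustrative assumptions, not the paper's CUDA implementation, which generates and collision-checks many such candidates in parallel on the GPU.

```python
# Minimal sketch of Frenet-style lateral trajectory generation (illustrative only;
# the paper's GPU version evaluates and collision-checks candidates in parallel).
import numpy as np

def quintic_coeffs(d0, dd0, ddd0, df, T):
    """Quintic d(t) matching start state (d0, dd0, ddd0) and end state (df, 0, 0) at time T."""
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,    12*T**2,  20*T**3]])
    b = np.array([df - (d0 + dd0*T + 0.5*ddd0*T**2),
                  -(dd0 + ddd0*T),
                  -ddd0])
    a3, a4, a5 = np.linalg.solve(A, b)
    return np.array([d0, dd0, 0.5*ddd0, a3, a4, a5])

def sample_lateral_candidates(d0, dd0, ddd0, offsets, T=3.0, dt=0.1):
    """Generate one lateral trajectory per target offset and keep the cheapest one."""
    t = np.arange(0.0, T + dt, dt)
    best, best_cost = None, np.inf
    for df in offsets:
        c = quintic_coeffs(d0, dd0, ddd0, df, T)
        d = np.polyval(c[::-1], t)                    # lateral offset over time
        jerk = np.polyval(np.polyder(c[::-1], 3), t)  # third derivative of d(t)
        cost = np.trapz(jerk**2, t) + 1.0 * df**2     # jerk + deviation penalty (arbitrary weights)
        if cost < best_cost:
            best, best_cost = d, cost
    return t, best, best_cost

if __name__ == "__main__":
    t, d, cost = sample_lateral_candidates(d0=0.5, dd0=0.0, ddd0=0.0,
                                           offsets=np.linspace(-2.0, 2.0, 9))
    print(f"best candidate cost: {cost:.3f}, final offset: {d[-1]:.2f} m")
```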

    Collective PV-RCNN: A Novel Fusion Technique using Collective Detections for Enhanced Local LiDAR-Based Perception

    Comprehensive perception of the environment is crucial for the safe operation of autonomous vehicles. However, the perception capabilities of autonomous vehicles are limited due to occlusions, limited sensor ranges, or environmental influences. Collective Perception (CP) aims to mitigate these problems by enabling the exchange of information between vehicles. A major challenge in CP is the fusion of the exchanged information. Because of the enormous bandwidth requirements of early fusion approaches and the interchangeability issues of intermediate fusion approaches, only the late fusion of shared detections is practical. Current late fusion approaches neglect valuable information for local detection, which is why we propose a novel fusion method that fuses the detections of cooperative vehicles within the local LiDAR-based detection pipeline. To this end, we present Collective PV-RCNN (CPV-RCNN), which extends the PV-RCNN++ framework to fuse collective detections. Code is available at https://github.com/ekut-es. Comment: accepted at IEEE ITSC 202
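To make the late-fusion idea concrete, the sketch below transforms detections shared by a cooperating vehicle into the ego frame and merges them with the local detections by simple center-distance matching. The box format, matching rule, and radius are illustrative assumptions, not the CPV-RCNN code linked above.

```python
# Illustrative late-fusion step for collective perception (not the CPV-RCNN code):
# shared detections are mapped into the ego frame and merged with local detections.
import numpy as np

def to_ego_frame(boxes, tx, ty, yaw):
    """boxes: (N, 3) array of [x, y, heading] in the sender frame -> ego frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    out = boxes.copy()
    out[:, :2] = boxes[:, :2] @ R.T + np.array([tx, ty])
    out[:, 2] = boxes[:, 2] + yaw
    return out

def fuse_detections(local, shared, match_radius=2.0):
    """Greedy center-distance matching; matched pairs are averaged,
    unmatched shared detections are appended as new objects."""
    fused = local.copy()
    used = np.zeros(len(local), dtype=bool)
    extra = []
    for det in shared:
        d = np.linalg.norm(local[:, :2] - det[:2], axis=1) if len(local) else np.array([])
        if len(d) and d.min() < match_radius and not used[d.argmin()]:
            i = d.argmin()
            fused[i, :2] = 0.5 * (fused[i, :2] + det[:2])  # naive average of positions
            used[i] = True
        else:
            extra.append(det)
    return np.vstack([fused] + extra) if extra else fused

if __name__ == "__main__":
    local = np.array([[10.0, 2.0, 0.0]])
    shared = to_ego_frame(np.array([[4.0, -1.0, 0.1], [30.0, 5.0, 0.0]]),
                          tx=5.0, ty=3.0, yaw=0.0)
    print(fuse_detections(local, shared))
```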

    Unobtrusive Health Monitoring in Private Spaces: The Smart Vehicle

    Unobtrusive in-vehicle health monitoring has the potential to use the driving time to perform regular medical check-ups. This work intends to provide a guide to currently proposed sensor systems for in-vehicle monitoring and to answer, in particular, the following questions: (1) Which sensors are suitable for in-vehicle data collection? (2) Where should the sensors be placed? (3) Which biosignals or vital signs can be monitored in the vehicle? (4) Which purposes can be supported with the health data? We systematically and retrospectively reviewed the literature and summarized the up-to-date research on leveraging sensor technology for unobtrusive in-vehicle health monitoring. PubMed, IEEE Xplore, and Scopus delivered 959 articles. We first screened titles and abstracts for relevance and then assessed the full articles; 46 papers were finally included and analyzed. A guide to the currently proposed sensor systems is provided. Through this guide, the sensor information that yields the biomedical data needed for the respective purposes can be identified, and the suggested mounting locations for the corresponding sensors are linked to it. Fifteen types of sensors were found. Driver-centered locations, such as the steering wheel, car seat, and windscreen, are frequently used for mounting unobtrusive sensors, through which typical biosignals like heart rate and respiration rate are measured. To date, most research focuses on sensor technology development, and most application-driven research aims at driving safety. Health-oriented research on the medical use of sensor-derived physiological parameters is still of interest.

    Detection and Tracking of Pedestrians Using Doppler LiDAR

    Pedestrian detection and tracking is necessary for autonomous vehicles and traffic management. This paper presents a novel solution to pedestrian detection and tracking for urban scenarios based on Doppler LiDAR, which records both the position and the velocity of the targets. The workflow consists of two stages. In the detection stage, the input point cloud is first segmented to form clusters, frame by frame. A subsequent multiple-pedestrian separation process is introduced to further segment pedestrians close to each other. While a simple speed classifier is capable of extracting most of the moving pedestrians, a supervised machine-learning-based classifier is adopted to detect pedestrians with insignificant radial velocity. In the tracking stage, the pedestrian's state is estimated by a Kalman filter, which uses the speed information to estimate the pedestrian's dynamics. Based on the similarity between the predicted and detected states of pedestrians, a greedy algorithm is adopted to associate the trajectories with the detection results. The presented detection and tracking methods are tested on two data sets collected in San Francisco, California by a mobile Doppler LiDAR system. The results of the pedestrian detection demonstrate that the proposed two-step classifier can improve the detection performance, particularly for detecting pedestrians far from the sensor. For both data sets, the use of Doppler speed information improves the F1-score and the recall by 15% to 20%. The subsequent tracking with the Kalman filter achieves 83.9–55.3% multiple object tracking accuracy (MOTA), where the contribution of the speed measurements is secondary and insignificant.
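The tracking stage described above can be illustrated with a minimal constant-velocity Kalman filter in which a Doppler-derived velocity estimate enters the measurement vector alongside the position. The state layout, noise values, and simulated walk below are illustrative assumptions, not the paper's configuration.

```python
# Minimal constant-velocity Kalman filter sketch for pedestrian tracking where the
# measurement includes a Doppler-derived velocity estimate (illustrative parameters).
import numpy as np

def make_cv_model(dt, q=0.5, r_pos=0.1, r_vel=0.3):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # state: [x, y, vx, vy]
    H = np.eye(4)                                 # observe position and velocity
    Q = q * np.eye(4)                             # crude process noise
    R = np.diag([r_pos, r_pos, r_vel, r_vel])     # measurement noise
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    F, H, Q, R = make_cv_model(dt=0.1)
    x, P = np.zeros(4), np.eye(4)
    for k in range(20):
        truth = np.array([0.12 * k, 0.0, 1.2, 0.0])        # pedestrian walking at 1.2 m/s
        z = truth + np.random.randn(4) * [0.1, 0.1, 0.3, 0.3]
        x, P = kf_step(x, P, z, F, H, Q, R)
    print("estimated speed:", np.hypot(x[2], x[3]))
```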

    A Study on Monocular-Camera-Based Real-Time Driving Environment Recognition for Unmanned Autonomous Vehicles

    Doctoral dissertation (Ph.D.), Seoul National University, Graduate School, Department of Electrical Engineering, February 2014. Advisor: 서승우 (Seung-Woo Seo). Homo faber refers to humans as beings who control their environment through tools. From the beginning, humans have created tools in pursuit of a more convenient life. The desire for rapid movement led people to ride on horseback, build the wagon, and finally build the automobile. The vehicle made it possible for humans to travel long distances quickly as well as conveniently. However, since human beings are imperfect, a great many people have died in car accidents, and people are dying at this very moment. Research on autonomous vehicles has been conducted as the best alternative for satisfying the human desire for safety, and the dream of the autonomous vehicle will come true in the near future. Implementing an autonomous vehicle requires many kinds of techniques, among which recognition of the environment around the vehicle is one of the most fundamental and important problems. Many kinds of sensors can be used to recognize surrounding objects; however, the monocular camera collects the largest amount of information among them, can be used for a wide variety of purposes, and can be adopted on various vehicle types owing to its good price competitiveness. I therefore expect that research using the monocular camera for autonomous vehicles is very practical and useful. In this dissertation, I cover four important recognition problems for autonomous driving using a monocular camera in the vehicular environment. First, to drive autonomously, the vehicle has to recognize lanes and keep to its lane. Detecting lane markings under varying illumination is very difficult in image processing, yet it must be solved for autonomous driving. The first research topic is therefore robust lane marking extraction under illumination variations for multilane detection. I propose a new lane marking extraction filter that can detect imperfect lane markings, as well as a new false-positive cancelling algorithm that eliminates noise markings. This approach extracts lane markings successfully even under bad illumination conditions. Second, if there is no lane marking on the road, how can the autonomous vehicle recognize the road it should drive on, and which lane of the road is it currently in? The latter question matters because the decision to change or keep a lane depends on the current lane position. The second research topic handles these two problems, and I propose an approach that fuses road detection and lane position estimation. Third, to drive more safely, keeping a safe distance is very important, and much driving-safety equipment requires distance information. Measuring accurate inter-vehicle distance using a monocular camera and a line laser is the third research topic. To measure the inter-vehicle distance, a line laser is projected onto the area in front of the vehicle, and the length of the laser line and the lane width are measured in the image. Based on the imaging geometry, the distance calculation problem can then be solved accurately. Many important problems remain to be solved, and I propose several approaches that use the monocular camera to handle them.
I expect that very active research will continue to be conducted and that, based on this research, the era of the autonomous vehicle will come in the near future.
Contents:
1 Introduction: 1.1 Background and Motivations; 1.2 Contributions and Outline of the Dissertation (1.2.1 Illumination-Tolerant Lane Marking Extraction for Multilane Detection; 1.2.2 Fusing Road Detection and Lane Position Estimation for the Robust Road Boundary Estimation; 1.2.3 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser)
2 Illumination-Tolerant Lane Marking Extraction for Multilane Detection: 2.1 Introduction; 2.2 Lane Marking Candidate Extraction Filter (2.2.1 Requirements of the Filter; 2.2.2 A Comparison of Filter Characteristics; 2.2.3 Cone Hat Filter); 2.3 Overview of the Proposed Algorithm (2.3.1 Filter Width Estimation; 2.3.2 Top Hat (Cone Hat) Filtering; 2.3.3 Reiterated Extraction; 2.3.4 False Positive Cancelling: Lane Marking Center Point Extraction, Fast Center Point Segmentation, Vanishing Point Detection, Segment Extraction, False Positive Filtering); 2.4 Experiments and Evaluation (2.4.1 Experimental Set-up; 2.4.2 Conventional Algorithms for Evaluation: Global Threshold, Positive Negative Gradient, Local Threshold, Symmetry Local Threshold, Double Extraction using Symmetry Local Threshold, Gaussian Filter; 2.4.3 Experimental Results; 2.4.4 Summary)
3 Fusing Road Detection and Lane Position Estimation for the Robust Road Boundary Estimation: 3.1 Introduction; 3.2 Chromaticity-based Flood-fill Method (3.2.1 Illuminant-Invariant Space; 3.2.2 Road Pixel Selection; 3.2.3 Flood-fill Algorithm); 3.3 Lane Position Estimation (3.3.1 Lane Marking Extraction; 3.3.2 Proposed Lane Position Detection Algorithm; 3.3.3 Bird's-eye View Transformation using the Proposed Dynamic Homography Matrix Generation; 3.3.4 Next Lane Position Estimation based on the Cross-ratio; 3.3.5 Forward-looking View Transformation); 3.4 Information Fusion Between Road Detection and Lane Position Estimation (3.4.1 The Case of Detection Failures; 3.4.2 The Benefit of Information Fusion); 3.5 Experiments and Evaluation; 3.6 Summary
4 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser: 4.1 Introduction; 4.2 Proposed Distance Measurement Algorithm; 4.3 Experiments and Evaluation (4.3.1 Experimental System Set-up; 4.3.2 Experimental Results); 4.4 Summary
5 Conclusion
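The inter-vehicle distance measurement in Chapter 4 rests on basic imaging geometry: a fronto-parallel target of known physical width W that spans w pixels at focal length f lies at roughly Z = f·W/w. The snippet below shows only this underlying pinhole relation with invented numbers; the dissertation's actual formulation, which relates the projected laser-line length to the lane width, is not reproduced here.

```python
# Minimal pinhole-geometry sketch behind monocular distance estimation (illustrative;
# the dissertation combines a projected line laser with the lane width, a more
# elaborate use of the same similar-triangles relation).
def distance_from_known_width(focal_px: float, real_width_m: float, image_width_px: float) -> float:
    """Z ~= f * W / w for a fronto-parallel target of known physical width W."""
    return focal_px * real_width_m / image_width_px

if __name__ == "__main__":
    # Example: a 3.5 m lane appearing 180 px wide with a 700 px focal length
    # implies the observed road section is roughly 13.6 m ahead.
    print(f"{distance_from_known_width(700.0, 3.5, 180.0):.1f} m")
```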

    Clustering, Prediction and Ordinal Classification for Time Series Using Machine Learning Techniques: Applications

    In recent years, a growing number of fields have improved their standard processes by using machine learning (ML) techniques, mainly because the vast amount of data generated by these processes is difficult for humans to process. The development of automatic methods to process these data and extract relevant information from them is therefore of great necessity, given that such approaches can increase the economic benefit of enterprises or reduce the workload of some current jobs. Concretely, in this Thesis, ML approaches are applied to problems concerning time series data. A time series is a special kind of data in which the data points are collected chronologically. Time series are present in a wide variety of fields, such as atmospheric events or engineering applications. Moreover, depending on the main objective to be satisfied, different tasks are applied to time series in the literature; this Thesis mainly focuses on some of them: clustering, classification, prediction and, in general, analysis. Generally, the amount of data to be processed is huge, which gives rise to the need for methods able to reduce the dimensionality of time series without losing information. In this sense, the application of time series segmentation procedures that divide the time series into different subsequences is a good option, given that each segment defines a specific behaviour. Once the different segments are obtained, the use of statistical features to characterise them is an excellent way to retain the information of the time series while considerably reducing their dimensionality. In the case of time series clustering, the objective is to find groups of similar time series with the idea of discovering interesting patterns in time series datasets. In this Thesis, we have developed a novel time series clustering technique. The aim of this proposal is twofold: to reduce the dimensionality as much as possible and to develop a time series clustering approach able to outperform current state-of-the-art techniques. For the first objective, the time series are segmented in order to identify their different behaviours, and these segments are then projected onto a vector of statistical features, reducing the dimensionality of the time series. Once this preprocessing step is done, the clustering of the time series is carried out with a significantly lower computational load. This novel approach has been tested on all the time series datasets available in the University of East Anglia and University of California Riverside (UEA/UCR) time series classification (TSC) repository. Regarding time series classification, two main paths can be differentiated. The first is nominal TSC, a well-known field involving a wide variety of proposals and transformations applied to time series. One of the most popular transformations is the shapelet transform (ST), which has been widely used in this field; the original method extracts shapelets from the original time series and uses them for classification purposes. Nevertheless, the full enumeration of all possible shapelets is very time consuming. Therefore, in this Thesis, we have developed a hybrid method that starts with the best shapelets extracted by the original approach under a time constraint and then tunes these shapelets using a convolutional neural network (CNN) model.
The second path, time series ordinal classification (TSOC), is a previously unexplored field that begins with this Thesis. Here, we have adapted the original ST to the ordinal classification (OC) paradigm by proposing several shapelet quality measures that take advantage of the ordinal information of the time series. This methodology leads to better results than state-of-the-art TSC techniques on ordinal time series datasets. All these proposals have been tested on all the time series datasets available in the UEA/UCR TSC repository. With respect to time series prediction, the aim is to estimate the next value or values of the time series from the previous ones. In this Thesis, several different approaches have been considered depending on the problem to be solved. Firstly, the prediction of low-visibility events produced by fog conditions is carried out by means of hybrid autoregressive models (ARs) combining fixed-size and dynamic windows, adapting to the dynamics of the time series. Secondly, the prediction of convective cloud formation (a highly imbalanced problem, given that the number of convective cloud events is much lower than that of non-convective situations) is performed in two completely different ways: 1) tackling the problem as a multi-objective classification task using multi-objective evolutionary artificial neural networks (MOEANNs), in which the two conflicting objectives are the accuracy of the minority class and the global accuracy, and 2) tackling the problem from the OC point of view, in which, in order to reduce the degree of imbalance, an oversampling approach is proposed along with the use of OC techniques. Thirdly, the prediction of solar radiation is carried out by means of evolutionary artificial neural networks (EANNs) with different combinations of basis functions in the hidden and output layers. Finally, the last challenging problem is the prediction of energy flux from waves and tides, for which a multitask EANN has been proposed to predict the energy flux at several prediction time horizons (from 6 h to 48 h). All these proposals and techniques have been corroborated and discussed in the light of physical and atmospheric models. The work developed in this Thesis is supported by 11 JCR-indexed papers in international journals (7 Q1, 3 Q2, 1 Q3), 11 papers in international conferences, and 4 papers in national conferences.
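The segment-then-featurise-then-cluster pipeline described above can be sketched as follows; fixed-length segmentation, four simple statistics per segment, and a plain k-means stand in for the thesis's specific segmentation, feature set, and clustering procedures.

```python
# Illustrative sketch of segmenting time series, describing segments by statistics,
# and clustering the resulting feature vectors (stand-ins for the thesis's methods).
import numpy as np

def segment_features(series, n_segments=8):
    """Split a 1-D series into equal-length segments and describe each by simple
    statistics, yielding a short feature vector per series."""
    segs = np.array_split(np.asarray(series, dtype=float), n_segments)
    feats = []
    for s in segs:
        slope = np.polyfit(np.arange(len(s)), s, 1)[0]
        feats.extend([s.mean(), s.std(), s.max() - s.min(), slope])
    return np.array(feats)

def kmeans(X, k=2, iters=50, seed=0):
    """Plain k-means on the feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 256)
    data = [np.sin(t) + 0.1 * np.random.randn(256) for _ in range(5)] + \
           [0.02 * t ** 2 + 0.1 * np.random.randn(256) for _ in range(5)]
    X = np.vstack([segment_features(s) for s in data])
    print(kmeans(X, k=2))
```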

    Interactive Space-Time Reconstruction in Computer Graphics

    High-quality dense spatial and/or temporal reconstructions and correspondence maps from camera images, be it optical flow, stereo, or scene flow, are an essential prerequisite for a multitude of computer vision and graphics tasks, e.g. scene editing or view interpolation in visual media production. Due to the ill-posed nature of the estimation problem in typical setups (i.e. a limited number of cameras and a limited frame rate), automated estimation approaches are prone to erroneous correspondences and subsequent quality degradation in many non-trivial cases such as occlusions, ambiguous movements, long displacements, or low texture. While improving the estimation algorithms is one obvious possible direction, this thesis complementarily concerns itself with creating intuitive, high-level user interactions that lead to improved correspondence maps and scene reconstructions. Where visually convincing results are essential, rendering artifacts resulting from estimation errors are usually repaired by hand with image editing tools, which is time consuming and therefore costly. My new user interactions, which integrate human scene recognition capabilities to guide a semi-automatic correspondence or scene reconstruction algorithm, save considerable effort and enable faster and more efficient production of visually convincing rendered images.
Space-time reconstruction in the form of dense spatial and/or temporal correspondences between camera images, be it optical flow, stereo, or scene flow, is an essential prerequisite for a multitude of computer graphics tasks, for example scene editing or image interpolation. Since both the number of cameras and the frame rate are limited, the reconstruction problem is under-determined, which is why automated estimates often contain erroneous correspondences in non-trivial cases such as occlusions, ambiguous or large motions, or uniform textures; any image synthesis based on these partially incorrect estimates must therefore accept a loss of quality. One option is to improve the estimation algorithms; complementarily, one can develop interaction techniques that are as efficient as possible and drastically improve the quality of the reconstruction, which is the goal of this dissertation. To obtain visually convincing results, image synthesis errors currently have to be corrected manually in a laborious post-processing step using image editing tools. My new user interactions, which integrate human scene understanding into semi-automatic algorithms, reduce the post-processing effort considerably and thus enable a faster and more efficient production of high-quality synthesized images.

    Goal reasoning for autonomous agents using automated planning

    International Mention in the doctoral degree. Automated planning deals with the task of finding a sequence of actions, namely a plan, which achieves a goal from a given initial state. Most planning research considers that goals are provided by an external user, and agents just have to find a plan to achieve them. However, there exist many real-world domains where agents should reason not only about their actions but also about their goals, generating new ones or changing them according to the perceived environment. In this thesis we aim at broadening the goal reasoning capabilities of planning-based agents, both when acting in isolation and when operating in the same environment as other agents. In single-agent settings, we first explore a special type of planning task where we aim at discovering states that fulfill certain cost-based requirements with respect to a given set of goals. By computing these states, agents are able to solve interesting tasks such as finding escape plans that move agents into safe places, hiding their true goal from a potential observer, or anticipating dynamically arriving goals. We also show how learning the environment's dynamics may help agents to solve some of these tasks. Experimental results show that these states can be quickly found in practice, making agents able to solve new planning tasks and helping them in solving some existing ones. In multi-agent settings, we study the automated generation of goals based on other agents' behavior. We focus on competitive scenarios, where we are interested in computing counterplans that prevent opponents from achieving their goals. We frame these tasks as counterplanning, providing theoretical properties of the counterplans that solve them. We also show how agents can benefit from computing some of the states we propose in the single-agent setting to anticipate their opponents' movements, thus increasing the odds of blocking them. Experimental results show how counterplans can be found in different environments ranging from competitive planning domains to real-time strategy games. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Thesis committee: President: Eva Onaindía de la Rivaherrera; Secretary: Ángel García Olaya; Member: Mark Robert
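As a toy illustration of "states that fulfill certain cost-based requirements with respect to a given set of goals", the sketch below computes, in a small grid world, the cells from which every goal is reachable within a given cost bound. Breadth-first search stands in for a full planner, and the grid, goals, and bound are invented for the example rather than taken from the thesis.

```python
# Illustrative sketch: keep the grid cells whose shortest-path cost to *every* goal
# is at most a bound (e.g. candidate escape or anticipation states). BFS replaces a planner.
from collections import deque

GRID = ["....#....",
        "..#.#.##.",
        "....#....",
        ".##......",
        "........."]

def bfs_costs(goal):
    """Unit-cost shortest path from every free cell to `goal`."""
    rows, cols = len(GRID), len(GRID[0])
    dist = {goal: 0}
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] != "#" and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def states_within_bound(goals, bound):
    """Cells whose cost to every goal is at most `bound`."""
    costs = [bfs_costs(g) for g in goals]
    cells = set(costs[0])
    for d in costs[1:]:
        cells &= set(d)
    return sorted(c for c in cells if all(d[c] <= bound for d in costs))

if __name__ == "__main__":
    print(states_within_bound(goals=[(0, 0), (4, 8)], bound=8))
```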

    On-Board Vehicle Tracking and Behavior Anticipation for Advanced Driver Assistance Systems

    This thesis develops a probabilistic vehicle tracking and behavior anticipation system for advanced driver assistance systems. Advanced Driver Assistance Systems (ADAS) need to know where other vehicles in the surroundings of the ego-vehicle are or will be in a few seconds, in order to maintain a safe driving state by activating evasive actions or issuing warnings to the driver in the event of a critical situation. The difficulty of this task lies in the uncertainties in driver behavior and sensor readings. Since sensors are never perfectly accurate and behavior cannot be predicted exactly, the internal estimate of the surrounding world state has to be modeled probabilistically. In contrast to most state-of-the-art approaches in the vehicle tracking domain, this thesis represents the uncertainty of an individual vehicle position estimate by a probabilistic grid representation, also known as a Bayesian Histogram Filter (BHF). This representation handles multi-modal distributions by storing the probability in individual grid cells and propagating it through the grid using model assumptions that simulate real-world vehicle kinematics and driver behavior. In this thesis, first, the probabilistic position and velocity representation by the grid cells is discussed, as well as the models that propagate the probabilities. Then, the ego-movement compensation and the use of the representation for position tracking are illustrated. Next, the thesis deals with specific errors that emerge due to the discrete grid representation. The BHF is then further developed into an Iterative Context Using Bayesian Histogram Filter (ICUBHF) by introducing an attractor-driven behavior model. This addition enables the anticipation of the monitored vehicles' behavior by comparing the likelihood of different behavior alternatives. Surveys of different comparison measures and of related research are provided as well. Finally, the ICUBHF is evaluated in different real-world settings. The evaluation results confirm that the ICUBHF approach is able to track a vehicle and anticipate its behavior in a real-world intersection scenario. In conclusion, we outline possible improvements necessary to create a productive ADAS application that deals with arbitrary real-world intersection scenarios. Such a system would allow an ADAS to work in complex urban scenarios in which it could track other vehicles and anticipate their behavior.
This dissertation develops a probabilistic system for tracking the position of vehicles and anticipating their behavior for use in driver assistance systems. In order to intervene in critical driving situations, driver assistance systems (Advanced Driver Assistance Systems) require knowledge of the current and future positions of the surrounding vehicles. The challenge here is handling the uncertainties in driver behavior and in the sensor measurements. Sensors always deliver a noisy image of reality, and driving behavior cannot be predicted exactly. A driver assistance system needs reliable data and cannot work directly with this noisy sensor information. For this reason, a probability-based internal model of the environment has to be built, which combines all sensor information in a mathematically optimal way and merges the knowledge gathered at different points in time as well as possible. With the help of this probabilistic model of reality, a driver assistance system can take the uncertainties of sensor perception into account during action planning. In contrast to most current approaches in the vehicle tracking field, this work represents the uncertainty in the positions of the individual surrounding vehicles with a probabilistic grid, also known as a Bayesian Histogram Filter (BHF). This representation can handle multi-modal probability distributions by storing the probability in individual cells of the histogram and propagating it through the grid using probabilistic rules that simulate vehicle kinematics and driver behavior. First, we treat the probabilistic representation of vehicle position and velocity by the histogram cells, as well as the models that propagate the probabilities through the grid. Next, ego-motion compensation is covered, as well as the use of the representation as a Bayesian filter. Furthermore, we address the errors that arise from the discrete representation. By extending it with an attractor concept that models goal-directed driver behavior, the BHF is further developed into an ICUBHF (Iterative Context Using Bayesian Histogram Filter). This enables anticipation of the behavior of the observed vehicle by comparing the probabilities of different behavior alternatives. In addition, we give an overview of different comparison measures and of the current state of research. Finally, the ICUBHF is tested in various real-world scenarios. The evaluation results confirm that the ICUBHF approach is able to track vehicles passing through intersections and to anticipate their behavior at the same time. In the conclusion, we outline possible improvements needed to use the ICUBHF in a production driver assistance system that can handle arbitrary real-world intersection scenarios. Such a system would allow driver assistance systems to track other vehicles and anticipate their behavior in complex urban scenarios.
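A toy one-dimensional version of the grid representation described above (probability mass per cell, a predict step that shifts and blurs it with a motion model, and an update step that reweights it with a measurement likelihood) is sketched below; the cell size, noise values, and scenario are illustrative assumptions, not the thesis's ICUBHF.

```python
# Toy 1-D Bayesian histogram filter: probability mass per cell, a predict step that
# shifts/diffuses it with a motion model, and an update step that reweights it with
# a measurement likelihood. Parameters are illustrative only.
import numpy as np

N_CELLS, CELL_SIZE = 100, 0.5          # 50 m corridor, 0.5 m cells

def predict(belief, v, dt, motion_noise=0.05):
    """Shift the histogram by v*dt and blur it to model kinematic uncertainty."""
    shift = v * dt / CELL_SIZE
    idx = np.arange(N_CELLS)
    new = np.zeros(N_CELLS)
    for i, p in enumerate(belief):                 # spread each cell's mass
        centre = i + shift
        w = np.exp(-0.5 * ((idx - centre) / (motion_noise / CELL_SIZE + 1.0)) ** 2)
        new += p * w / w.sum()
    return new

def update(belief, z, meas_noise=1.0):
    """Reweight cells by a Gaussian likelihood of the position measurement z."""
    centres = (np.arange(N_CELLS) + 0.5) * CELL_SIZE
    lik = np.exp(-0.5 * ((centres - z) / meas_noise) ** 2)
    post = belief * lik
    return post / post.sum()

if __name__ == "__main__":
    belief = np.full(N_CELLS, 1.0 / N_CELLS)       # uniform prior over the corridor
    true_pos, v, dt = 5.0, 8.0, 0.1                # vehicle at 5 m driving 8 m/s
    for _ in range(10):
        true_pos += v * dt
        belief = predict(belief, v, dt)
        belief = update(belief, true_pos + np.random.randn() * 1.0)
    print("MAP position:", (np.argmax(belief) + 0.5) * CELL_SIZE, "m")
```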

    The City, the Journey, Tourism: Perception, Production and Transformation

    The city as a destination of the journey, in its long evolution throughout history: a basic human need, an event aimed at knowledge, education, business and trade, and military or religious conquest, but also tied to exoduses undertaken for mere physical or spiritual salvation. Set in one of the world's most celebrated historic cities, the cradle of Greek antiquity, of myth and of beauty, a timeless destination for journeys of culture and leisure and today, more than ever, strongly committed to preserving and enhancing its own identity, this collection of essays aims to offer, in the tradition of AISU studies, a further opportunity for reflection and exchange among the many disciplines related to urban history.