5,307 research outputs found

    Analysis of the floating car data of Turin public transportation system: first results

    Global Navigation Satellite System (GNSS) sensors nowadays represent a mature, low-cost and efficient technology to collect large spatio-temporal datasets (Geo Big Data) of vehicle movements in urban environments. However, to extract mobility information from such Floating Car Data (FCD), specific analysis methodologies are required. In this work, the first attempts to analyse the FCD of the Turin public transportation system are presented. Specifically, a preliminary methodology was implemented, in view of an automatic and possibly real-time impedance map generation. The FCD acquired by all the vehicles of the Gruppo Torinese Trasporti (GTT) company in April 2017 were processed to compute their velocities, and a visualization approach based on the OSMnx library was adopted. Furthermore, a preliminary temporal analysis was carried out, showing higher velocities on weekend days and during off-peak hours, as could be expected. Finally, a method to assign the velocities to the line network topology was developed and some tests were carried out.
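
    As a rough illustration of the velocity-computation step described above, the sketch below assumes the FCD records are plain timestamped GPS fixes; the file name, column names and haversine helper are hypothetical, not the actual GTT data schema, while the Turin street network is retrieved with the OSMnx library mentioned in the abstract.

```python
import math
import pandas as pd
import osmnx as ox

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical FCD table: one GPS fix per row (vehicle_id, timestamp, lat, lon).
fcd = pd.read_csv("gtt_fcd_april_2017.csv", parse_dates=["timestamp"])
fcd = fcd.sort_values(["vehicle_id", "timestamp"])

# Velocity between consecutive fixes of the same vehicle (km/h).
prev = fcd.groupby("vehicle_id").shift(1)
dist_m = [
    haversine_m(a, b, c, d) if not math.isnan(a) else float("nan")
    for a, b, c, d in zip(prev["lat"], prev["lon"], fcd["lat"], fcd["lon"])
]
dt_s = (fcd["timestamp"] - prev["timestamp"]).dt.total_seconds()
fcd["speed_kmh"] = pd.Series(dist_m, index=fcd.index) / dt_s * 3.6

# Street network of Turin for map-based visualization (OSMnx).
graph = ox.graph_from_place("Turin, Italy", network_type="drive")
ox.plot_graph(graph, node_size=0)
```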

    Attentional Guidance from Multiple Working Memory Representations: Evidence from Eye Movements

    Recent studies have shown that the representation of an item in visual working memory (VWM) can bias the deployment of attention to stimuli in the visual scene possessing the same features. When multiple item representations are simultaneously held in VWM, it is still controversial whether these representations, especially those held in a non-prioritized or accessory status, are able to bias attention. In the present study we adopted an eye-tracking technique to shed light on this issue. In particular, we implemented a manipulation aimed at prioritizing one of the VWM representations to an active status, and tested whether attention could be guided by both the prioritized and the accessory representations when they reappeared as distractors in a visual search task. Notably, in Experiment 1, an analysis of first fixation proportions (FFP) revealed that both the prioritized and the accessory representations were able to capture attention, suggesting a significant attentional guidance effect. However, such an effect was not present in manual response times (RT). Most critically, in Experiment 2, we used a more robust experimental design controlling for different factors that might have played a role in shaping these findings. The results showed evidence for attentional guidance from the accessory representation in both manual RTs and FFPs. Interestingly, FFPs showed a stronger attentional bias for the prioritized representation than for the accessory representation across experiments. The overall findings suggest that multiple VWM representations, even the accessory representation, can simultaneously interact with visual attention.
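
    For concreteness, a minimal sketch of how the first fixation proportion (FFP) and mean manual RT measures mentioned above could be summarised from trial-level eye-tracking output; the column names and toy values are illustrative, not the authors' actual data.

```python
import pandas as pd

# Hypothetical trial table: one row per trial, with the distractor condition
# (prioritized / accessory / neutral) and whether the first fixation landed
# on the memory-matching distractor.
trials = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "condition": ["prioritized", "accessory", "neutral"] * 2,
    "first_fix_on_distractor": [1, 1, 0, 1, 0, 0],
    "rt_ms": [612, 645, 630, 587, 601, 615],
})

# First fixation proportion (FFP) and mean manual RT per condition.
summary = trials.groupby("condition").agg(
    ffp=("first_fix_on_distractor", "mean"),
    mean_rt_ms=("rt_ms", "mean"),
)
print(summary)
```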

    Terrain classification by cluster analysis

    Digital terrain modelling can be carried out with different methods belonging to two principal categories: deterministic methods (e.g. polynomial and spline function interpolation, Fourier spectra) and stochastic methods (e.g. least squares collocation and fractals, i.e. the concept of self-similarity in probability). To reach good results, both the first and the second kind of methods need some suitable initial information, which can be gained by a preprocessing of the data named terrain classification. In fact, the deterministic methods require knowledge of the roughness of the terrain, related to the density of the data (elevations, deformations, etc.) used for the interpolation, while the stochastic methods require knowledge of the autocorrelation function of the data. Moreover, it may be useful or even necessary to split up the area under consideration into subareas that are homogeneous according to some parameters, for different kinds of reasons (an initial set of data too large to be processed together; very important discontinuities or singularities; etc.). Last but not least, it may be worthwhile to test the type of distribution (normal or non-normal) of the subsets obtained by the preceding selection, because the statistical properties of the normal distribution are very important (e.g., least squares linear estimates coincide with maximum likelihood and minimum variance ones).
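
    A minimal sketch of the kind of preprocessing described above, under stated assumptions: local roughness and slope are used as hypothetical classification features, k-means stands in for the generic cluster analysis, and each resulting subset is tested for normality; none of this reproduces the paper's exact procedure.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical terrain samples: for each grid cell, a local roughness measure
# (e.g. standard deviation of elevation) and a mean slope; the features and
# the use of k-means are illustrative assumptions, not the paper's method.
roughness = np.concatenate([rng.normal(0.5, 0.1, 200), rng.normal(3.0, 0.5, 200)])
slope = np.concatenate([rng.normal(2.0, 0.5, 200), rng.normal(15.0, 3.0, 200)])
features = np.column_stack([roughness, slope])

# Split the area into homogeneous subareas.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Test each subset's roughness distribution for normality, since least-squares
# estimates are optimal under the normal assumption.
for k in np.unique(labels):
    stat, p = stats.normaltest(roughness[labels == k])
    print(f"cluster {k}: n={np.sum(labels == k)}, normality p-value={p:.3f}")
```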

    Upgrade of the FOSS DATE plug-in: Implementation of a new radargrammetric DSM generation capability

    Synthetic Aperture Radar (SAR) satellite systems may give an important contribution in terms of Digital Surface Model (DSM) generation, considering their complete independence from logistic constraints on the ground and from weather conditions. In recent years, the availability of very high resolution SAR data (up to 20 cm Ground Sample Distance) has given a new impulse to radargrammetry and allowed new applications and developments. Besides, to date, only a few of the software packages aimed at radargrammetric applications are free and open source. It is in this context that it was decided to widen the capabilities of the DATE (Digital Automatic Terrain Extractor) plug-in and to include the possibility to use SAR imagery for DSM stereo reconstruction (i.e. radargrammetry), in addition to the optical workflow already developed. DATE is a Free and Open Source Software (FOSS) developed at the Geodesy and Geomatics Division, University of Rome "La Sapienza", and conceived as an OSSIM (Open Source Software Image Map) plug-in. It has been developed since May 2014 in the framework of the 2014 Google Summer of Code, having as its early purpose a fully automatic DSM generation from high resolution optical satellite imagery acquired by the most common sensors. Here, the results achieved through this new capability applied to two stacks (one ascending and one descending) of three TerraSAR-X images each, acquired over the Trento (Northern Italy) test field, are presented. The global accuracies achieved are around 6 metres. These first results are promising, and further analyses are expected for a more complete assessment of DATE's application to SAR imagery.
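
    The 6 m global accuracy quoted above comes from comparing the generated DSM with reference data; a minimal sketch of such a vertical accuracy check is given below, assuming two already co-registered elevation grids (the file names are hypothetical).

```python
import numpy as np

# Hypothetical, already co-registered elevation grids (metres): the DSM
# generated by DATE and a reference DSM over the Trento test field.
dsm = np.load("date_radargrammetric_dsm.npy")
reference = np.load("reference_dsm.npy")

diff = dsm - reference
diff = diff[np.isfinite(diff)]          # drop no-data cells

bias = diff.mean()
rmse = np.sqrt(np.mean(diff ** 2))
le90 = np.percentile(np.abs(diff), 90)  # linear error at 90% confidence

print(f"bias={bias:.2f} m, RMSE={rmse:.2f} m, LE90={le90:.2f} m")
```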

    Monitoring the impact of land cover change on surface urban heat island through Google Earth Engine. Proposal of a global methodology, first applications and problems

    All over the world, the rapid urbanization process is challenging the sustainable development of our cities. In 2015, the United Nations highlighted, in Goal 11 of the SDGs (Sustainable Development Goals), the importance to "Make cities inclusive, safe, resilient and sustainable". In order to monitor progress regarding SDG 11, there is a need for proper indicators representing different aspects of city conditions, including Land Cover (LC) changes and the urban climate with its most distinct feature, the Urban Heat Island (UHI). One of the aspects of the UHI is the Surface Urban Heat Island (SUHI), which has been investigated through airborne and satellite remote sensing over many years. The purpose of this work is to show the present potential of Google Earth Engine (GEE) to process the huge and continuously increasing amount of free satellite Earth Observation (EO) Big Data for long-term, wide spatio-temporal monitoring of the SUHI and its connection with LC changes. A large-scale spatio-temporal procedure was implemented under GEE, also benefiting from the already established Climate Engine (CE) tool to extract the Land Surface Temperature (LST) from Landsat imagery, and the simple Detrended Rate Matrix indicator was introduced to globally represent the net effect of LC changes on the SUHI. The implemented procedure was successfully applied to six metropolitan areas in the U.S., and a general increase of the SUHI due to urban growth was clearly highlighted. GEE indeed allowed us to process more than 6000 Landsat images acquired over the period 1992-2011, performing a long-term, wide spatio-temporal study of SUHI vs. LC change monitoring. The present feasibility of the proposed procedure and the encouraging results obtained, although preliminary and requiring further investigation (calibration problems related to LST determination from Landsat imagery were evidenced), pave the way for a possible global service on SUHI monitoring, able to supply valuable indications for an increasingly sustainable urban planning of our cities.
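
    A minimal sketch of how a Landsat land surface temperature series can be pulled from Google Earth Engine with the Python API, as a rough analogue of the LST extraction step described above (the paper relies on the Climate Engine tool); the area of interest is purely illustrative, and the scale factors are those published for the Landsat Collection 2 Level-2 surface temperature product.

```python
import ee

ee.Initialize()

# Hypothetical area of interest (coordinates are illustrative, not the paper's test areas).
aoi = ee.Geometry.Point([-83.0458, 42.3314]).buffer(30000)

# Landsat 5 Collection 2 Level-2 imagery over the study period.
collection = (
    ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
    .filterBounds(aoi)
    .filterDate("1992-01-01", "2011-12-31")
    .filter(ee.Filter.lt("CLOUD_COVER", 20))
)

def to_lst_celsius(img):
    # Scale factors from the Collection 2 Level-2 product documentation.
    lst = img.select("ST_B6").multiply(0.00341802).add(149.0).subtract(273.15)
    return lst.rename("LST").copyProperties(img, ["system:time_start"])

lst_series = collection.map(to_lst_celsius)

# Mean LST over the whole period, reduced over the area of interest.
mean_lst = lst_series.mean().reduceRegion(
    reducer=ee.Reducer.mean(), geometry=aoi, scale=120, maxPixels=1e9
)
print(mean_lst.getInfo())
```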

    High resolution satellite imagery orientation accuracy assessment by leave-one-out method: accuracy index selection and accuracy uncertainty

    Leave-one-out cross-validation (LOOCV) was recently applied to the evaluation of High Resolution Satellite Imagery (HRSI) orientation accuracy, and it has proven to be an effective alternative to the more common Hold-Out Validation (HOV), in which ground points are split into two sets: Ground Control Points (GCPs), used for the orientation model estimation, and Check Points (CPs), used for the model accuracy assessment. By contrast, LOOCV applied to HRSI implies the iterative application of the orientation model using all the known ground points as GCPs except one, different in each iteration, which is used as a CP. In every iteration the residual between the imagery-derived coordinates and the CP coordinates (the prediction error of the model on the CP coordinates) is calculated; the overall spatial accuracy achievable from the oriented image may be estimated by computing the usual RMSE or, better, a robust accuracy index like the mAD (median Absolute Deviation) of the prediction errors over all the iterations. In this way it is possible to overcome some drawbacks of the HOV: LOOCV is a reliable and robust method, not dependent on a particular set of CPs or on possible outliers, and it allows us to use each known ground point both as a GCP and as a CP, capitalising on all the available ground information. This is a crucial issue in current situations, when the number of GCPs to be collected must be reduced as much as possible for obvious budget reasons. The fundamental matter to deal with was to assess how well the LOOCV indexes (mAD and RMSE) represent the overall accuracy, that is, how stable and how close they are to the corresponding HOV RMSE assumed as reference. However, in the first tests the index comparison was performed in a qualitative way, neglecting their uncertainty. In this work the analysis has been refined on the basis of Monte Carlo simulations, starting from the actual accuracy of ground point and image coordinates, estimating the desired accuracy indexes (e.g. mAD and RMSE) in several trials, computing their uncertainty (standard deviation) and accounting for it in the comparison. Tests were performed on a QuickBird Basic image, implementing an ad hoc procedure within the SISAR software developed by the Geodesy and Geomatics Team at the Sapienza University of Rome. The LOOCV method with accuracy evaluated by mAD seemed promising and useful for practical cases.
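
    A minimal sketch of the leave-one-out scheme described above, with a simple affine transformation standing in for the rigorous orientation model actually implemented in SISAR, and synthetic ground/image coordinates; the mAD is computed here around the median of the prediction errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground points (X, Y) and corresponding image coordinates, generated
# from a simple affine model plus noise; this stands in for the real orientation model.
n = 20
ground = rng.uniform(0, 1000, size=(n, 2))
true_params = np.array([[0.5, 0.01], [-0.02, 0.5]])
image = ground @ true_params.T + np.array([100.0, 200.0]) + rng.normal(0, 0.5, (n, 2))

def fit_affine(gcp_ground, gcp_image):
    """Least-squares estimate of a 2D affine transform ground -> image."""
    A = np.hstack([gcp_ground, np.ones((len(gcp_ground), 1))])
    coeffs, *_ = np.linalg.lstsq(A, gcp_image, rcond=None)
    return coeffs

errors = []
for i in range(n):                       # leave point i out as the check point
    mask = np.arange(n) != i
    coeffs = fit_affine(ground[mask], image[mask])
    pred = np.append(ground[i], 1.0) @ coeffs
    errors.append(np.linalg.norm(pred - image[i]))

errors = np.array(errors)
rmse = np.sqrt(np.mean(errors ** 2))
mad = np.median(np.abs(errors - np.median(errors)))   # median absolute deviation
print(f"LOOCV RMSE = {rmse:.2f} px, mAD = {mad:.2f} px")
```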

    Maturity Models in Industrial Internet: a Review

    The introduction of assembly lines in industrial plants marked the beginning of the third industrial revolution. The support of information technology has enabled continuous progress, up to the digitalisation of processes. In this context, the further innovation characterised by the introduction of Cyber-Physical Systems and other enabling technologies has allowed the fourth industrial revolution. Proposed by the German government, Industry 4.0 has appealed to both researchers and practitioners. Since the appearance of the term Industry 4.0, the linked term Industrial Internet has been introduced to indicate the technology stack and knowledge management required by Industry 4.0. The Industrial Internet makes a factory smart by applying advanced information and communication systems and future-oriented technologies, as well as new principles of knowledge management. Undeniably, such a system introduces greater complexity in terms of technologies, knowledge and socio-cultural aspects. Companies are often unprepared to deal with innovation issues, because they lack knowledge and competences, they are not culturally prepared for the related novelties, and especially because they lack the necessary technological prerequisites to develop the appropriate technology stack. From this perspective, different maturity models have been developed, in both academic and technical environments, to support companies in understanding their position within the paradigm of the Industrial Internet. Starting from a quantitative review of the maturity models proposed in the general literature, this article develops a qualitative review of the models applied to Industry 4.0, characterising all relevant models and proposing future perspectives to improve existing models and develop new ones.

    On-device modeling of user's social context and familiar places from smartphone-embedded sensor data

    Context modeling and recognition are crucial for adaptive mobile and ubiquitous computing. Context-awareness in mobile environments relies on prompt reactions to context changes. However, current solutions focus on limited context information processed on centralized architectures, risking privacy leakage and lacking personalization. On-device context modeling and recognition are emerging research trends, addressing these concerns. Social interactions and visited locations play significant roles in characterizing daily life scenarios. This paper proposes an unsupervised and lightweight approach to model the user's social context and locations directly on the mobile device. Leveraging the ego-network model, the system extracts high-level, semantic-rich context features from smartphone-embedded sensor data. For the social context, the approach utilizes data on physical and cyber social interactions among users and their devices. Regarding location, it prioritizes modeling the familiarity degree of specific locations over raw location data, such as GPS coordinates and proximity devices. The effectiveness of the proposed approach is demonstrated through three sets of experiments, employing five real-world datasets. These experiments evaluate the structure of social and location ego networks, provide a semantic evaluation of the proposed models, and assess mobile computing performance. Finally, the relevance of the extracted features is showcased by the improved performance of three machine learning models in recognizing daily-life situations. Compared to using only features related to physical context, the proposed approach achieves a 3% improvement in AUROC, 9% in Precision, and 5% in Recall.
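
    A minimal sketch of the ego-network idea underlying the social context model described above: alters are ranked by interaction frequency and split into concentric layers of decreasing contact intensity. The interaction log, layer sizes and layering rule are illustrative assumptions, not the paper's on-device algorithm.

```python
from collections import Counter

# Hypothetical interaction log: one entry per detected social interaction
# (e.g. physical proximity or cyber contact) with another user's device.
interactions = [
    "alice", "alice", "alice", "alice", "bob", "bob", "bob",
    "carol", "carol", "dave", "erin",
]

# Ego network: alters ranked by contact frequency.
freq = Counter(interactions)
ranked = freq.most_common()

# Split alters into concentric layers of decreasing intimacy; the layer sizes
# below are purely illustrative.
layer_sizes = [1, 2, 5]
layers, start = [], 0
for size in layer_sizes:
    layers.append([alter for alter, _ in ranked[start:start + size]])
    start += size

for i, layer in enumerate(layers, 1):
    print(f"layer {i}: {layer}")
```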

    Computational LEED: computational thinking strategies and Visual Programming Languages to support environmental design and LEED credits achievement

    Since environmental and energy issues and challenges continue to emerge as key global concerns, Green Building Certification Systems are becoming increasingly relevant in the construction industry. In this regard, LEED (Leadership in Energy and Environmental Design) is considered one of the most widely recognized environmental assessment methods used globally in the construction industry today. However, due to the high level of complexity of the LEED system, the tools usually used to verify the achievement of the credits lack “design friendliness” and hardly communicate effectively with the conventional tools used by architects and engineers (e.g. CAD, BIM). This makes it difficult to fully take into account, especially at the early design stage, the many interconnected aspects that contribute to the green certification, with issues consequently arising in the design validation and/or construction phases, resulting in time delays and cost increases. The application of innovative problem-solving methods, such as computational thinking, together with coding techniques, represents an effective way to deal with this issue. This kind of methodology, in fact, allows the requirements of a specific LEED credit to be digitally parametrised and flexibly incorporated into a “designer friendly” working environment. In particular, Visual Programming Languages (VPLs), thanks to their simplicity of use, allow architects and engineers to develop algorithms and thus combine their technical knowledge in the field of environmental design with computer programming skills, useful to improve their tools and keep them constantly updated. The aim of this paper is to illustrate a methodology through which, by merging computational thinking strategies with VPL tools, it is possible to keep under control, in the same working environment, all the parameters required to verify in real time the achievement of LEED credits. To demonstrate the flexibility of the approach, dedicated tools developed for the verification of some specific credits at different scales – neighbourhood and building – are illustrated as operational examples of the proposed methodology.
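
    The paper works with Visual Programming Languages (node-based environments); purely as a text-based analogue, the sketch below encodes a single hypothetical credit requirement as a parametric check that can be re-evaluated in real time whenever the design changes. The parameter names and thresholds are illustrative, not actual LEED requirements.

```python
from dataclasses import dataclass

@dataclass
class DesignState:
    """Parameters exposed by the parametric model (illustrative only)."""
    site_area_m2: float
    open_space_m2: float
    vegetated_space_m2: float

def check_open_space_credit(d: DesignState) -> dict:
    # Hypothetical rule, loosely modelled on an "open space" type credit:
    # a minimum share of the site as open space, a minimum share of it vegetated.
    open_ratio = d.open_space_m2 / d.site_area_m2
    veg_ratio = d.vegetated_space_m2 / max(d.open_space_m2, 1e-9)
    achieved = open_ratio >= 0.30 and veg_ratio >= 0.25
    return {"open_ratio": round(open_ratio, 2),
            "vegetated_ratio": round(veg_ratio, 2),
            "credit_achieved": achieved}

# Re-evaluated every time the designer changes the parametric model.
print(check_open_space_credit(DesignState(10000, 3500, 1200)))
```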

    Recommender Systems for Online and Mobile Social Networks: A survey

    Recommender Systems (RS) currently represent a fundamental tool in online services, especially with the advent of Online Social Networks (OSN). In this case, users generate huge amounts of content and can be quickly overloaded by useless information. At the same time, social media represent an important source of information for characterizing contents and users' interests. RS can exploit this information to further personalize suggestions and improve the recommendation process. In this paper we present a survey of Recommender Systems designed and implemented for Online and Mobile Social Networks, highlighting how the use of social context information improves the recommendation task, and how standard algorithms must be enhanced and optimized to run in fully distributed environments, such as opportunistic networks. We describe the advantages and drawbacks of these systems in terms of algorithms, target domains, evaluation metrics and performance evaluations. Finally, we present some open research challenges in this area.
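
    As a point of reference for the algorithm families the survey covers, a minimal sketch of one standard baseline (user-based collaborative filtering with cosine similarity) that social-context-aware recommenders typically extend; the rating matrix is toy data and the neighbourhood size is arbitrary.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = not rated).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)                  # co-rated items only
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(user, item, k=2):
    # Weighted average of the k most similar users who rated the item.
    sims = np.array([cosine_sim(R[user], R[u]) if u != user and R[u, item] > 0 else 0.0
                     for u in range(len(R))])
    top = np.argsort(sims)[::-1][:k]
    if sims[top].sum() == 0:
        return 0.0
    return float(sims[top] @ R[top, item] / sims[top].sum())

# Predicted rating of user 0 for item 2 from the most similar neighbours.
print(round(predict(0, 2), 2))
```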