
    The physics of angular momentum radio

    Wireless communications, radio astronomy and other radio science applications are predominantly implemented with techniques built on top of the electromagnetic linear momentum (Poynting vector) physical layer. As a supplement and/or alternative to this conventional approach, techniques rooted in the electromagnetic angular momentum physical layer have been advocated, and promising results from proof-of-concept radio communication experiments using angular momentum were recently published. This sparingly exploited physical observable describes the rotational (spinning and orbiting) properties of the electromagnetic fields and the rotational dynamics of the pertinent charge and current densities. In order to facilitate the exploitation of angular momentum techniques in real-world implementations, we present a systematic, comprehensive theoretical review of the fundamental physical properties of the electromagnetic angular momentum observable. Starting from an overview that puts it into its physical context among the other Poincaré invariants of the electromagnetic field, we describe the multi-mode quantized character and other physical properties that set electromagnetic angular momentum apart from electromagnetic linear momentum. These properties allow, among other things, a more flexible and efficient utilization of the radio frequency spectrum. Implementation aspects are discussed and illustrated by examples based on analytic and numerical solutions.
    Comment: Fixed LaTeX rendering errors due to inconsistencies between arXiv's LaTeX machine and texlive in OpenSuSE 13.
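    For orientation (a standard textbook expression added here for context, not quoted from the paper), the observable in question is the classical field angular momentum, written as a volume integral of the moment of the linear momentum density and conventionally, if gauge-dependently, split into spin and orbital parts:

```latex
\begin{align}
  \mathbf{J} &= \varepsilon_0 \int \mathbf{x} \times \bigl(\mathbf{E} \times \mathbf{B}\bigr)\,\mathrm{d}^3x \\
             &= \underbrace{\varepsilon_0 \int \mathbf{E} \times \mathbf{A}\,\mathrm{d}^3x}_{\text{spin (SAM)}}
              + \underbrace{\varepsilon_0 \int \sum_i E_i\,\bigl(\mathbf{x} \times \nabla\bigr) A_i\,\mathrm{d}^3x}_{\text{orbital (OAM)}}
\end{align}
```

    The multi-mode quantized character mentioned in the abstract is commonly discussed in terms of the discrete azimuthal mode index of the orbital (OAM) part.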

    A high-resolution photogrammetric workflow based on focus stacking for the 3D modeling of small Aegean inscriptions

    Any attempt at decipherment and language identification of the scripts from the Aegean dating to the second millennium BCE (namely Cretan Hieroglyphic, Linear A, and Cypro-Minoan) has relied, until today, on traditional catalogues of inscriptions, consisting of incomplete or subjective 2D representations, such as photographs and hand-drawn copies, which are not suitable for documenting such three-dimensional writing systems. In contrast, 3D models of the inscribed media allow for an accurate and objective “autopsy” of the entire surface of the inscriptions. In this context, this work presents an efficient, accurate, high-resolution, and high-quality texture photogrammetric workflow based on focus-stacked macro images, designed for the 3D modeling of small Aegean inscriptions, to properly reconstruct their geometry and to enhance the identification of their signs, making their transcription as unbiased as possible. The pipeline we propose also benefits from a pre-processing stage to remove any coloration difference from the images, and a reliable and simple 3D scaling procedure. We tested this workflow on six inscribed artifacts (two in Cretan Hieroglyphic, three in Linear A, one of uncertain affiliation), whose average size ranges approximately from 1 to 3 cm. Our results show that this workflow achieved an accuracy of a few hundredths of a mm, comparable to the technical specifications of standard commercial 3D scanners. Moreover, the high 3D density we obtained (corresponding to the average edge length of the 3D model mesh), up to ≈ 30 ”m, allowed us to reconstruct even the smallest details of the inscriptions, both in the mesh and in the texture layer of the 3D models.
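    As a rough illustration of the focus-stacking step such a workflow builds on (a generic sketch using OpenCV and NumPy, not the authors' actual implementation; file names and parameters are made up), a per-pixel sharpness criterion can be used to fuse a set of pre-aligned macro images into a single all-in-focus frame:

```python
import glob

import cv2
import numpy as np

def focus_stack(image_paths, blur_ksize=5, lap_ksize=5):
    """Fuse pre-aligned images of the same scene, keeping at each pixel the
    source image with the strongest local sharpness (Laplacian response)."""
    images = [cv2.imread(p) for p in image_paths]
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
        sharpness.append(np.abs(cv2.Laplacian(gray, cv2.CV_64F, ksize=lap_ksize)))
    best = np.argmax(np.stack(sharpness), axis=0)   # index of the sharpest image per pixel
    stack = np.stack(images)                        # (N, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                  # (H, W, 3) all-in-focus composite

# Hypothetical usage on one macro stack of an inscribed artifact
fused = focus_stack(sorted(glob.glob("stack_01/*.tif")))
cv2.imwrite("stack_01_fused.tif", fused)
```

    In a real pipeline the frames would first be registered to compensate for the slight magnification change between focus steps, and a colour-equalization pre-processing stage such as the one the abstract mentions would run before fusion.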

    Monitoring urban heat island through Google Earth Engine. Potentialities and difficulties in different cities of the United States

    The aim of this work is to exploit the large-scale analysis capabilities of the Google Earth Engine platform in order to investigate the temporal variations of the Urban Heat Island (UHI) phenomenon as a whole. An intuitive methodology implementing a large-scale correlation analysis between Land Surface Temperature (LST) and land cover alterations was thus developed. The results obtained for the Phoenix metropolitan area are promising and show how urbanization heavily affects the magnitude of the UHI effects, with significant increases in LST. The proposed methodology is therefore able to efficiently monitor the UHI phenomenon.
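    A minimal sketch of the kind of Earth Engine query such an analysis builds on (using the Python ee API and the MODIS MOD11A2 LST product as an assumed data source; the dataset, region, and years are illustrative and not taken from the paper):

```python
import ee

ee.Initialize()

# Illustrative region: a rectangle roughly around Phoenix, AZ (not the paper's exact study area)
region = ee.Geometry.Rectangle([-112.4, 33.2, -111.6, 33.8])

def mean_summer_lst_celsius(year):
    """Mean daytime land surface temperature over the region for one summer."""
    lst = (ee.ImageCollection('MODIS/061/MOD11A2')   # 8-day composite LST product
           .filterDate(f'{year}-06-01', f'{year}-09-01')
           .select('LST_Day_1km')
           .mean()
           .multiply(0.02)       # apply the product's scale factor (-> Kelvin)
           .subtract(273.15))    # Kelvin -> Celsius
    stats = lst.reduceRegion(reducer=ee.Reducer.mean(), geometry=region, scale=1000)
    return stats.get('LST_Day_1km').getInfo()

for year in (2003, 2010, 2017):
    print(year, round(mean_summer_lst_celsius(year), 2), 'deg C')
```

    Correlating such LST trends with land cover change would then require a second collection (e.g. a land cover product) aggregated over the same region and period, which is the step the abstract describes at a high level.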

    ANALYSIS OF THE FLOATING CAR DATA OF TURIN PUBLIC TRANSPORTATION SYSTEM: FIRST RESULTS

    Global Navigation Satellite System (GNSS) sensors represent nowadays a mature, low-cost, and efficient technology to collect large spatio-temporal datasets (Geo Big Data) of vehicle movements in urban environments. However, to extract mobility information from such Floating Car Data (FCD), specific analysis methodologies are required. In this work, the first attempts to analyse the FCD of the Turin public transportation system are presented. Specifically, a preliminary methodology was implemented, in view of an automatic and possibly real-time impedance map generation. The FCD acquired by all the vehicles of the Gruppo Torinese Trasporti (GTT) company in the month of April 2017 were thus processed to compute their velocities, and a visualization approach based on the OSMnx library was adopted. Furthermore, a preliminary temporal analysis was carried out, showing higher velocities on weekend days and during off-peak hours, as could be expected. Finally, a method to assign the velocities to the line network topology was developed and some tests were carried out.
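    A toy sketch of the velocity-computation step described above (pandas/NumPy, with made-up file and column names; the actual GTT data format and the authors' processing chain are not detailed in the abstract):

```python
import numpy as np
import pandas as pd

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between consecutive GNSS fixes."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

# Hypothetical FCD table: one GNSS fix per row (vehicle_id, timestamp, lat, lon)
fcd = pd.read_csv('gtt_fcd_2017_04.csv', parse_dates=['timestamp'])
fcd = fcd.sort_values(['vehicle_id', 'timestamp'])

prev = fcd.groupby('vehicle_id').shift(1)                 # previous fix of the same vehicle
dist_m = haversine_m(prev['lat'], prev['lon'], fcd['lat'], fcd['lon'])
dt_s = (fcd['timestamp'] - prev['timestamp']).dt.total_seconds()

fcd['speed_kmh'] = 3.6 * dist_m / dt_s
fcd = fcd[(dt_s > 0) & (fcd['speed_kmh'] < 120)]          # drop gaps and obvious GNSS outliers
```

    Aggregating these speeds by hour of day and day of week reproduces the kind of temporal analysis mentioned above, and joining them to an OSMnx street graph supports the map-based visualization and the impedance-map generation the authors aim at.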

    3D HIGH-QUALITY MODELING OF SMALL AND COMPLEX ARCHAEOLOGICAL INSCRIBED OBJECTS: RELEVANT ISSUES AND PROPOSED METHODOLOGY

    3D modelling of inscribed archaeological finds (such as tablets or small objects) has to consider issues related to the correct acquisition and reading of ancient inscriptions, whose size and degree of conservation may vary greatly, in order to meet the requirements for visual inspection and analysis of the signs. In this work, photogrammetry and laser scanning were tested in order to find the optimal sensors and settings for the complete 3D reconstruction of such inscribed archaeological finds, paying specific attention to the final geometric accuracy and to the operative feasibility in terms of required sensors and necessary time. Several 3D modelling tests were thus carried out on four replicas of inscribed objects, characterized by different sizes, materials, and epigraphic peculiarities. Specifically, in relation to photogrammetry, different cameras and lenses were used, and a robust acquisition setup, able to guarantee a correct and automatic alignment of images during the photogrammetric process, was identified. The focus stacking technique was also investigated. The Canon EOS 1200D camera equipped with prime lenses and the iPad camera showed the best and the worst accuracy, respectively. From an overall geometric point of view, the 50 mm and 100 mm lenses achieved very similar results, but the reconstruction of the smallest details with the 50 mm lens was not adequate. On the other hand, the acquisition time for the 50 mm lens was considerably lower than for the 100 mm one. In relation to laser scanning, the ScanRider 1.2 model was used. The 3D models it produced (in less time than photogrammetry) clearly highlight how this scanner is able to reconstruct even the high frequencies at high resolution. However, these models are not provided with texture. For this reason, a robust procedure for integrating the texture of the photogrammetric models with the mesh of the laser scanning models was also developed.
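    One simple, generic way to quantify the resolution differences discussed above (a sketch using trimesh and NumPy, not the evaluation protocol actually used in the work; file names are hypothetical) is the mean edge length of the reconstructed mesh, a proxy for the smallest surface detail a model can represent:

```python
import numpy as np
import trimesh

def mean_edge_length(mesh_path):
    """Mean edge length of a triangle mesh (in the mesh's own units, e.g. mm)."""
    mesh = trimesh.load(mesh_path, force='mesh')
    edges = mesh.edges_unique                     # (E, 2) indices into the vertex array
    v = mesh.vertices
    return np.linalg.norm(v[edges[:, 0]] - v[edges[:, 1]], axis=1).mean()

# Hypothetical comparison of a photogrammetric and a laser-scanned model of the same replica
for name in ('tablet_photogrammetry.ply', 'tablet_scanrider.ply'):
    print(name, f'{mean_edge_length(name):.4f}')
```

    Comparing such per-model statistics, together with cloud-to-cloud distances against a reference scan, is a common way to substantiate statements about geometric accuracy and high-frequency detail.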

    Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images

    Studies have demonstrated the feasibility of late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae of myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed-thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges
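    For context, the two families of fixed-thresholding baselines named above are simple to state; here is a minimal NumPy sketch (assuming 1D arrays of myocardial voxel intensities and of a manually chosen remote, healthy region; the benchmark's exact implementation may differ):

```python
import numpy as np

def nsd_mask(myo, remote, n=3.0):
    """n-SD rule: enhanced voxels are those brighter than mean + n*std
    of a remote (healthy) myocardial region."""
    return myo > remote.mean() + n * remote.std()

def fwhm_mask(myo):
    """FWHM rule (one common formulation): enhanced voxels are those brighter
    than half of the maximum myocardial intensity; some variants subtract a
    remote baseline before halving."""
    return myo > 0.5 * myo.max()
```

    The sensitivity of the n-SD rule to the choice of remote region and of n illustrates the kind of pitfall such fixed rules can exhibit.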

    Rapid automatic segmentation of abnormal tissue in late gadolinium enhancement cardiovascular magnetic resonance images for improved management of long-standing persistent atrial fibrillation

    Background: Atrial fibrillation (AF) is the most common heart rhythm disorder. For late gadolinium enhancement cardiovascular magnetic resonance (LGE CMR) to improve AF management, accurate enhancement segmentation must be readily available. However, computer-aided segmentation of enhancement in LGE CMR of AF is still an open question. Additionally, the number of centres that have reported successful application of LGE CMR to guide clinical AF strategies remains low, and the debate on LGE CMR's diagnostic ability for AF continues. The aim of this study is to propose a method that reliably distinguishes enhanced (abnormal) from non-enhanced (healthy) tissue within the left atrial wall of (pre-ablation and 3 months post-ablation) LGE CMR data-sets from long-standing persistent AF patients studied at our centre. Methods: Enhancement segmentation was achieved by employing thresholds benchmarked against the statistics of the whole left atrial blood-pool (LABP). A test-set cross-validation mechanism was applied to determine the input feature representation and algorithm that best predict enhancement threshold levels. Results: Global normalized intensity threshold levels T_PRE = 1 1/4 (1.25) and T_POST = 1 5/8 (1.625) were found to segment enhancement in data-sets acquired pre-ablation and at 3 months post-ablation, respectively. The segmentation results were corroborated by visual inspection of LGE CMR brightness levels and one endocardial bipolar voltage map. The measured extent of pre-ablation fibrosis fell within the normal range for the specific arrhythmia phenotype. Three-dimensional volume renderings of segmented post-ablation enhancement emulated the expected ablation lesion patterns. By comparing our technique with related approaches that proposed different threshold levels for segmenting enhancement in LGE CMR data-sets of AF patients (although they also relied on reference regions from within the LABP), we illustrated that the cut-off levels employed by other centres may not be usable for clinical studies performed in our centre. Conclusions: The proposed technique has great potential for successful employment in AF management within our centre. It provides a highly desirable validation of the LGE CMR technique for AF studies. Inter-centre differences in the CMR acquisition protocol and image analysis strategy inevitably impede the selection of a universally optimal algorithm for segmentation of enhancement in AF studies.
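    The core operation described in the Methods above can be sketched as follows (NumPy; note that normalizing by the LABP mean is an assumption on my part, since the abstract only states that thresholds were benchmarked against LABP statistics, and all data in the example are synthetic stand-ins):

```python
import numpy as np

def segment_enhancement(wall, labp, threshold):
    """Label left atrial wall voxels as enhanced when their intensity, normalized by
    the mean left atrial blood-pool (LABP) intensity, exceeds a global threshold.
    (Normalization by the LABP mean is an assumption; the abstract only says the
    thresholds were benchmarked against LABP statistics.)"""
    return (wall / labp.mean()) > threshold

rng = np.random.default_rng(0)
# Synthetic intensity samples standing in for real pre-ablation scan data
labp_pre = rng.normal(100.0, 10.0, size=2000)     # blood-pool voxels
wall_pre = rng.normal(110.0, 25.0, size=5000)     # atrial-wall voxels

# Pre-ablation cut-off from the abstract is 1.25; the post-ablation cut-off is 1.625
pre_mask = segment_enhancement(wall_pre, labp_pre, threshold=1.25)
print('enhanced fraction:', pre_mask.mean())
```

    The same routine with the post-ablation threshold would be applied to the 3-month follow-up data-sets.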
    • 

    corecore