33 research outputs found

    Oncology and complications.

    This collection of cases describes some unusual urological tumors and complications related to urological tumors and their treatment. Case 1: A case of uretero-arterial fistula in a patient with long-term ureteral stenting for an oncological ureteral stricture, and a second case associated with retroperitoneal fibrosis, are described. Abdominal CT, pyelography, and cystoscopy were useful for locating the origin of the bleeding. Angiography is useful for confirming the diagnosis and for subsequent placement of an endovascular prosthesis, which represents a safe approach with reduced post-procedural complications. Case 2: A patient who developed interstitial pneumonitis during a cycle of intravesical BCG instillations for urothelial cancer. The patient was hospitalized for more than two weeks in a COVID ward for suspected COVID-19 pneumonia, but showed no evidence of SARS-CoV-2 infection during his hospital stay. Case 3: A young man with a functional urinary bladder paraganglioma who was successfully managed with complete removal of the tumor, leaving the urinary bladder intact. Case 4: A 61-year-old man with muscle-invasive bladder cancer who was admitted for radical cystectomy and, on the eighth postoperative day, developed microangiopathic hemolytic anemia and thrombocytopenia, which clinically defines thrombotic microangiopathy.

    Earth observation : An integral part of a smart and sustainable city

    Over the course of the 21st century, in which the urbanization process of the previous century continues unabated, the novel smart city concept has rapidly evolved and now encompasses the broader aspect of sustainability. Concurrently, there has been a sea change in the domain of Earth observation (EO), where scientific and technological breakthroughs are accompanied by a paradigm shift in the provision of open and free data. While the urban and EO communities share the end goal of achieving sustainability, cities still lack an understanding of the value EO can bring in this direction, as well as a consolidated framework for tapping the full potential of EO and integrating it into their operational modus operandi. The “SMart URBan Solutions for air quality, disasters and city growth” H2020 project (SMURBS/ERA-PLANET) sits at this scientific and policy crossroad and, by creating bottom-up EO-driven solutions against an array of environmental urban pressures and by expanding the network of engaged and exemplary smart cities that push the state of the art in EO uptake, brings the ongoing international discussion of EO for sustainable cities closer to home and contributes to this discussion. This paper advocates for EO as an integral part of a smart and sustainable city and aspires to lead by example. To this end, it documents the project's impacts, ranging from the grander policy fields to an evolving portfolio of smart urban solutions and everyday city operations, as well as the cornerstones for successful EO integration. Drawing a parallel with the utilization of EO in supporting several aspects of the 2030 Agenda for Sustainable Development, it aspires to be a point of reference for upcoming endeavors of city stakeholders and the EO community alike, to tread together, beyond traditional monitoring or urban planning, and to lay the foundations for urban sustainability. Peer reviewed.

    Methodological proposal for the identification of marginal lands with remote sensing-derived products and ancillary data

    The concept of marginal land (ML) is dynamic and depends on various factors related to the environment, climate, scale, culture, and economic sector. The current methods for identifying ML are diverse; they employ multiple parameters and variables derived from land use and land cover, and mostly reflect specific management purposes. This paper presents a methodological approach for the identification of marginal lands using remote sensing products and ancillary data, validated on samples from four European countries (Germany, Spain, Greece, and Poland). The proposed methodology combines land use and land cover data sets as excluding indicators (forest, croplands, protected areas, impervious areas, land-use change, water bodies, and permanent snow areas) with environmental constraints as marginality indicators: (i) physical soil properties, such as slope gradient, erosion, soil depth, soil texture, and percentage of coarse soil texture fragments; (ii) climatic factors, e.g., the aridity index; (iii) chemical soil properties, including soil pH, cation exchange capacity, contaminants, and toxicity, among others. This provides a common vision of marginality that integrates a multidisciplinary approach. To determine the ML, the excluding indicators were first analyzed to delimit the areas with a defined land use. Then, for each marginality indicator, thresholds beyond which land productivity progressively decreases were determined. Finally, the marginality indicator layers were combined in Google Earth Engine. The result was categorized into 3 levels of ML productivity: high productivity, low productivity, and potentially unsuitable land. The results obtained indicate that the percentage of marginal land per country is 11.64% in Germany, 19.96% in Spain, 18.76% in Greece, and 7.18% in Poland. The overall accuracies obtained per country were 60.61% for Germany, 88.87% for Spain, 71.52% for Greece, and 90.97% for Poland. This research has been funded by the European Commission through the H2020-MSCA-RISE-2018 MAIL project (grant 823805) and by the Fondo de Garantía Juvenil en I+D+i of the Spanish Ministry of Labour and Social Economy. Torralba, J.; Ruiz, L.; Georgiadis, C.; Patias, P.; Gómez-Conejo, R.; Verde, N.; Tassapoulou, M. ... (2021). Methodological proposal for the identification of marginal lands with remote sensing-derived products and ancillary data. In: Proceedings of the 3rd Congress in Geomatics Engineering. Editorial Universitat Politècnica de València, 248-257. https://doi.org/10.4995/CiGeo2021.2021.12729
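    The workflow described above (mask out excluded land uses, threshold each marginality indicator, then combine the layers) can be sketched with the Google Earth Engine Python API. The snippet below is a minimal, hedged illustration only: the asset IDs, band choices, and threshold values are assumptions for demonstration, not the indicators or thresholds used in the paper.

```python
# Minimal sketch (not the paper's implementation) of combining excluding and
# marginality indicator layers in Google Earth Engine via the Python API.
# The asset IDs, bands and thresholds below are illustrative assumptions.
import ee

ee.Initialize()

# Marginality indicators (examples): slope gradient from SRTM, topsoil pH from OpenLandMap.
slope = ee.Terrain.slope(ee.Image("USGS/SRTMGL1_003"))
soil_ph = ee.Image("OpenLandMap/SOL/SOL_PH-H2O_USDA-4C1A2A_M/v02").select(0).divide(10)

# Excluding indicator (example): mask out frequently inundated areas (JRC surface water).
water_occurrence = ee.Image("JRC/GSW1_4/GlobalSurfaceWater").select("occurrence")
not_excluded = water_occurrence.unmask(0).lt(50)

# Flag pixels that exceed an assumed threshold of decreasing productivity.
steep = slope.gt(15)       # slope threshold in degrees (assumed)
acidic = soil_ph.lt(5.5)   # soil pH threshold (assumed)

# Sum the flags and keep only non-excluded land; 0 = high productivity,
# 1 = low productivity, 2 = potentially unsuitable (assumed mapping).
ml_class = steep.add(acidic).updateMask(not_excluded).rename("ML_class")
```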

    Unpublished Mediterranean and Black Sea records of marine alien, cryptogenic, and neonative species

    To enrich spatio-temporal information on the distribution of alien, cryptogenic, and neonative species in the Mediterranean and the Black Sea, a collective effort by 173 marine scientists was made to provide unpublished records and make them open access to the scientific community. Through this effort, we collected and harmonized a dataset of 12,649 records. It includes 247 taxa, of which 217 are Animalia, 25 Plantae and 5 Chromista, from 23 countries surrounding the Mediterranean and the Black Sea. Chordata was the most abundant taxonomic group, followed by Arthropoda, Mollusca, and Annelida. In terms of species records, Siganus luridus, Siganus rivulatus, Saurida lessepsianus, Pterois miles, Upeneus moluccensis, Charybdis (Archias) longicollis, and Caulerpa cylindracea were the most numerous. The temporal distribution of the records ranges from 1973 to 2022, with 44% of the records in 2020–2021. Lethrinus borbonicus is reported for the first time in the Mediterranean Sea, while Pomatoschistus quagga, Caulerpa cylindracea, Grateloupia turuturu, and Misophria pallida are first records for the Black Sea; Kapraunia schneideri is recorded for the second time in the Mediterranean and for the first time in Israel; Prionospio depauperata and Pseudonereis anomala are reported for the first time from the Sea of Marmara. Many first country records are also included, namely: Amathia verticillata (Montenegro), Ampithoe valida (Italy), Antithamnion amphigeneum (Greece), Clavelina oblonga (Tunisia and Slovenia), Dendostrea cf. folium (Syria), Epinephelus fasciatus (Tunisia), Ganonema farinosum (Montenegro), Macrorhynchia philippina (Tunisia), Marenzelleria neglecta (Romania), Paratapes textilis (Tunisia), and Botrylloides diegensis (Tunisia). Peer reviewed.

    ActiveHuman Part 2

    This is Part 2/2 of the ActiveHuman dataset. Part 1 can be found at https://zenodo.org/record/8359766.

    Dataset description
    ActiveHuman was generated using Unity's Perception package. It consists of 175,428 RGB images and their semantic segmentation counterparts taken at different environments, lighting conditions, camera distances and angles. In total, the dataset contains images for 8 environments, 33 humans, 4 lighting conditions, 7 camera distances (1 m-4 m) and 36 camera angles (0-360 degrees at 10-degree intervals). The dataset does not include images at every single combination of available camera distances and angles, since for some values the camera would collide with another object or go outside the confines of an environment. As a result, some combinations of camera distances and angles do not exist in the dataset.
    Alongside each image, 2D Bounding Box, 3D Bounding Box and Keypoint ground truth annotations are also generated via Labelers and are stored as a JSON-based dataset. Labelers are scripts responsible for capturing ground truth annotations for each captured image or frame. Keypoint annotations follow the COCO format defined by the COCO keypoint annotation template offered in the Perception package.

    Folder configuration
    The dataset consists of 3 folders:
    - JSON Data: contains all the generated JSON files.
    - RGB Images: contains the generated RGB images.
    - Semantic Segmentation Images: contains the generated semantic segmentation images.

    Essential terminology
    - Annotation: recorded data describing a single capture.
    - Capture: one completed rendering process of a Unity sensor, which stored the rendered result to data files (e.g., PNG, JPG).
    - Ego: object or person to which a collection of sensors is attached (e.g., if a drone has a camera attached to it, the drone is the ego and the camera is the sensor).
    - Ego coordinate system: coordinates with respect to the ego.
    - Global coordinate system: coordinates with respect to the global origin in Unity.
    - Sensor: device that captures the dataset (in this instance, a camera).
    - Sensor coordinate system: coordinates with respect to the sensor.
    - Sequence: time-ordered series of captures; useful for video capture, where the time-order relationship of two captures is vital.
    - UUID: Universally Unique Identifier; a unique hexadecimal identifier that can represent an individual instance of a capture, ego, sensor, annotation, labeled object or keypoint, or keypoint template.

    Dataset data
    The dataset includes 4 types of JSON annotation files:

    - annotation_definitions.json: contains annotation definitions for all active Labelers of the simulation, stored in an array. Each entry is a collection of key-value pairs that describes a particular type of annotation and how its data maps back to labels or objects in the scene. Each entry contains:
      - id: integer identifier of the annotation's definition.
      - name: annotation name (e.g., keypoints, bounding box, bounding box 3D, semantic segmentation).
      - description: description of the annotation's specifications.
      - format: format of the file containing the annotation specifications (e.g., json, PNG).
      - spec: format-specific specifications for the annotation values generated by each Labeler.
      Most Labelers generate different annotation specifications in the spec key-value pair:
      - BoundingBox2DLabeler/BoundingBox3DLabeler: label_id (integer identifier of a label) and label_name (string identifier of a label).
      - KeypointLabeler: template_id (keypoint template UUID), template_name (name of the keypoint template), key_points (array of the joints defined by the keypoint template, each with label, index, color as RGBA values, and color_code as a hex color code), and skeleton (array of the skeleton connections defined by the keypoint template; each connection links two joints and has label1, label2, joint1, joint2, color as RGBA values, and color_code as a hex color code).
      - SemanticSegmentationLabeler: label_name (string identifier of a label), pixel_value (RGBA values of the label), and color_code (hex color code of the label).

    - captures_xyz.json: each of these files contains an array of the ground truth annotations generated by each active Labeler for each capture, together with metadata describing the state of each active sensor in the scene. Each array entry contains:
      - id: UUID of the capture.
      - sequence_id: UUID of the sequence.
      - step: index of the capture within a sequence.
      - timestamp: timestamp (in ms) since the beginning of a sequence.
      - sensor: properties of the sensor, i.e., sensor_id (sensor UUID), ego_id (ego UUID), modality (e.g., camera, radar), translation (3D vector with the sensor's position in meters with respect to the global coordinate system), rotation (quaternion with the sensor's orientation with respect to the ego coordinate system), camera_intrinsic (matrix containing, if it exists, the camera's intrinsic calibration), and projection (projection type used by the camera, e.g., orthographic or perspective).
      - ego: attributes of the ego, i.e., ego_id (ego UUID), translation (3D vector with the ego's position in meters with respect to the global coordinate system), rotation (quaternion with the ego's orientation), velocity (3D vector with the ego's velocity in meters per second), and acceleration (3D vector with the ego's acceleration in meters per second squared).
      - format: format of the file captured by the sensor (e.g., PNG, JPG).
      - annotations: key-value pair collections, one for each active Labeler, with id (annotation UUID), annotation_definition (integer identifier of the annotation's definition), filename (name of the file generated by the Labeler; present only for Labelers that generate an image), and values (list of key-value pairs containing the annotation data for the current Labeler).
      Each Labeler generates different annotation specifications in the values key-value pair:
      - BoundingBox2DLabeler: label_id (integer identifier of a label), label_name (string identifier of a label), instance_id (UUID of one instance of an object; objects with the same label visible in the same capture have different instance_id values), x and y (position of the 2D bounding box on the X and Y axes), and the width and height of the 2D bounding box.
      - BoundingBox3DLabeler: label_id, label_name and instance_id as above, translation (3D vector with the location of the center of the 3D bounding box, in meters, with respect to the sensor coordinate system), size (3D vector with the size of the 3D bounding box in meters), rotation (quaternion with the orientation of the 3D bounding box), velocity (3D vector with the velocity of the 3D bounding box in meters per second), and acceleration (3D vector with the acceleration of the 3D bounding box in meters per second squared).
      - KeypointLabeler: label_id (integer identifier of a label), instance_id (UUID of one instance of a joint; keypoints with the same joint label visible in the same capture have different instance_id values), template_id (UUID of the keypoint template), pose (pose label for that particular capture), and keypoints (array with one element per keypoint in the keypoint template file, each with the index of the keypoint in the template file, the x and y pixel coordinates of the keypoint, and the state of the keypoint).
      The SemanticSegmentationLabeler does not contain a values list.

    - egos.json: contains collections of key-value pairs for each ego: id (UUID of the ego) and description (description of the ego).

    - sensors.json: contains collections of key-value pairs for all sensors of the simulation: id (UUID of the sensor), ego_id (UUID of the ego to which the sensor is attached), modality (modality of the sensor, e.g., camera, radar, sonar), and description (description of the sensor).

    Image names
    The RGB and semantic segmentation images share the same naming convention; semantic segmentation filenames additionally start with the string Semantic_. Each RGB image is named "e_h_l_d_r.jpg", where:
    - e denotes the id of the environment.
    - h denotes the id of the person.
    - l denotes the id of the lighting condition.
    - d denotes the camera distance at which the image was captured.
    - r denotes the camera angle at which the image was captured.
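    As a hedged illustration of how the naming convention and the captures JSON files described above might be consumed, the following Python sketch indexes images by their filename fields and reads the annotations of one capture. The dataset root, the captures filename pattern, and the top-level "captures" key are assumptions, not guarantees from the dataset documentation.

```python
# Illustrative sketch (not an official loader) for ActiveHuman: index images by the
# "e_h_l_d_r.jpg" convention and read the annotations of one capture.
# The dataset root, the captures filename pattern and the top-level "captures"
# key are assumptions based on the description above.
import json
from pathlib import Path

def parse_image_name(path: Path) -> dict:
    """Split 'e_h_l_d_r.jpg' (or a 'Semantic_'-prefixed counterpart) into its fields."""
    stem = path.stem.removeprefix("Semantic_")
    e, h, l, d, r = stem.split("_")
    return {"environment": e, "human": h, "lighting": l,
            "distance_m": float(d), "angle_deg": float(r)}

root = Path("ActiveHuman")  # assumed dataset root
index = [parse_image_name(p) for p in (root / "RGB Images").glob("*.jpg")]
print(f"indexed {len(index)} RGB images")

# Read one captures file and list which annotation definitions its first capture has.
captures_file = next((root / "JSON Data").glob("captures_*.json"))  # assumed pattern
with open(captures_file) as f:
    captures = json.load(f)["captures"]  # assumed top-level key
first = captures[0]
print(first["id"], [a["annotation_definition"] for a in first["annotations"]])
```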

    An evaluation and performance comparison of different approaches for data stream processing

    In recent years, the demand for faster data processing and real-time analysis and reporting has grown substantially. Social networks, the Internet of Things, and trading are, among others, use cases where data stream processing is of vital importance. This has led to the emergence of several distributed computing frameworks that can be successfully exploited for data stream processing purposes. This project aims to examine a number of them, their architecture, and their key features. First, the available open source frameworks were identified and studied. Based on the approach they follow, two of them were selected to be analyzed further and presented in detail. In the final part of the project, a telemetry data monitoring application was implemented using both frameworks on a computing cluster. The aim of that experiment was to illustrate how these two different approaches perform in terms of exploiting the cluster's resources as they scale out.

    ActiveFace

    Dataset description
    ActiveFace is a synthetic face image dataset generated using Unity's Perception package. It consists of 175,428 face images taken at different environments, lighting conditions, camera distances and angles. In total, the dataset contains images for 8 environments, 33 humans, 4 lighting conditions, 7 camera distances (1 m-4 m) and 36 camera angles (0-360 degrees at 10-degree intervals). The dataset does not include images at every single combination of available camera distances and angles, since for some values the camera would collide with another object or go outside the confines of an environment. As a result, some combinations of camera distances and angles do not exist in the dataset.

    How to download
    The dataset can be downloaded at https://cicloud.csd.auth.gr/owncloud/s/OG6Bkgf9Hn5kpT9.

    Folder configuration
    The dataset consists of 33 main folders, each containing all the face images for one human. Each main folder consists of 32 subfolders, each containing that person's face images for one combination of environment and lighting condition. Each subfolder is named "x_y", where "x" denotes the id of the environment and "y" denotes the id of the lighting condition.

    Naming conventions
    Each image is named "e_h_l_d_r.jpg", where:
    - e denotes the id of the environment.
    - h denotes the id of the person.
    - l denotes the id of the lighting condition.
    - d denotes the camera distance at which the image was captured.
    - r denotes the camera angle at which the image was captured.
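    A hedged sketch of how the folder layout and naming convention above could be traversed is given below; the dataset root, the numeric id formats, and the choice of camera angle 0 as "frontal" are assumptions for illustration only.

```python
# Hedged sketch for walking the assumed ActiveFace layout
# <root>/<human>/<env>_<light>/e_h_l_d_r.jpg and selecting images of one person
# at a given camera distance and angle. Root path and id formats are assumptions.
from pathlib import Path

def select_images(root: str, human_id: str, distance: str = "1", angle: str = "0"):
    selected = []
    for subfolder in Path(root, human_id).iterdir():
        if not subfolder.is_dir():
            continue
        env_id, light_id = subfolder.name.split("_")   # subfolders are named "x_y"
        for img in subfolder.glob("*.jpg"):
            e, h, l, d, r = img.stem.split("_")
            if d == distance and r == angle:
                selected.append((env_id, light_id, img))
    return selected

print(len(select_images("ActiveFace", "1")))  # hypothetical root folder and human id
```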

    Editorial of Special Issue “Remote Sensing for Land Cover/Land Use Mapping at Local and Regional Scales”

    More than ever, there is a need from policy and decision makers, national governments, non-governmental organizations, international initiatives, scientists, and individual citizens for timely and accurate spatially-explicit information on Earth’s physical surface cover and the socio-economic function of land at multiple scales [...]

    Photogrammetric surveying forests and woodlands with UAVs: Techniques for automatic removal of vegetation and Digital Terrain Model production for hydrological applications

    The purpose of this study is the photogrammetric survey of a forested area using Unmanned Aerial Vehicles (UAVs) and the estimation of the Digital Terrain Model (DTM) of the area based on the photogrammetrically produced Digital Surface Model (DSM). Furthermore, through the classification of the height difference between the DSM and the DTM, a vegetation height model is estimated and a vegetation type map is produced. Finally, the generated DTM was used in a hydrological analysis study in order to determine its suitability compared to the use of the DSM. The selected study area was the forest of Seih-Sou (Thessaloniki). The DTM extraction methodology applies classification and filtering of point clouds and aims at the production of a surface model that includes only terrain points. The method yielded a DTM which functioned satisfactorily as a basis for the hydrological analysis. Also, by classifying the DSM-DTM difference, a vegetation height model was generated. For the photogrammetric survey, 495 aerial images were used, taken by a UAV from a height of ~200 m. A total of 44 ground control points were measured with an accuracy of 5 cm. The accuracy of the aerial triangulation was approximately 13 cm. The produced dense point cloud counted 146,593,725 points.
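    As a rough illustration of the DSM-minus-DTM step described above, the following Python sketch derives a vegetation height model from two co-registered rasters and bins it into height classes. The file names, the use of rasterio, and the class breaks are assumptions, not the study's actual parameters.

```python
# Rough illustration (not the study's workflow) of the DSM-minus-DTM step:
# derive a vegetation height model and bin it into height classes with rasterio.
# File names and class breaks are placeholder assumptions; rasters must be co-registered.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile

veg_height = dsm - dtm                    # vegetation/canopy height model
veg_height[veg_height < 0] = 0            # clamp small negative differences

# Assumed class breaks: <0.5 m ground/low, 0.5-2 m shrubs, 2-10 m low trees, >10 m tall trees.
classes = np.digitize(veg_height, bins=[0.5, 2.0, 10.0]).astype("uint8")

profile.update(dtype="uint8", count=1, nodata=None)
with rasterio.open("vegetation_classes.tif", "w", **profile) as dst:
    dst.write(classes, 1)
```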