371 research outputs found

    Modern methods for the automatic detection of rectangular objects in images

    Low-level and high-level feature extraction methods, together with algorithms for forming a rectangular object in the image, were considered. Two approaches were explored: an object detection algorithm based on correlation analysis, and an algorithm that applies the Canny edge detector, detects lines with the Hough and Radon transforms, and then, depending on the properties of the object, combines the lines into a rectangular region. The algorithms were tested on a set of 1000 passports for the task of accurately detecting the edges of the photograph.
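
    The second approach described above (Canny edges, Hough line detection, then grouping lines into a rectangular region) can be sketched with standard image-processing tools. Below is a minimal sketch assuming OpenCV and NumPy; the thresholds and the simple axis-aligned grouping are illustrative and not the authors' exact procedure.

```python
import cv2
import numpy as np

def detect_rectangle(gray):
    """Sketch: Canny edges -> probabilistic Hough lines -> one rectangle."""
    # 1. Edge map (thresholds are illustrative).
    edges = cv2.Canny(gray, 50, 150)

    # 2. Probabilistic Hough transform for line segments.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return None

    # 3. Keep near-horizontal and near-vertical segments, since the target
    #    (e.g. a photo inside a document page) is roughly axis-aligned.
    pts = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
        if angle < 10 or abs(angle - 90) < 10 or angle > 170:
            pts.extend([(x1, y1), (x2, y2)])
    if not pts:
        return None

    # 4. Combine the retained segments into a single rectangular region.
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)   # x_min, y_min, x_max, y_max

# Usage (hypothetical file name):
# rect = detect_rectangle(cv2.imread("passport_page.png", cv2.IMREAD_GRAYSCALE))
```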

    X-ray microtomography measurements of bioactive glass scaffolds in rabbit femur samples at multiple stages of bone regeneration : reduction of image artefacts and a preliminary segmentation

    A series of x-ray microtomography (micro-CT) measurements was performed on a set of rabbit femur bone samples containing artificial scaffolds of bioactive glass BAG-S53P4, implanted into an intentionally induced defect, i.e. a gap, in the femur. The scaffolds, some additionally enveloped in PLGA, were supportive structures composed of small granules of bioactive glass, intended to enhance, stimulate and guide the healing and regeneration of bone. The 34 samples were harvested from the rabbits at three different stages of healing and bone regeneration: 2 weeks, 4 weeks and 8 weeks. In addition to 27 samples that contained scaffolds of BAG-S53P4 or BAG-S53P4-PLGA, which had been implanted into the femur of a rabbit, 3 scaffolds of BAG-S53P4(-PLGA) that were not implanted and 7 control samples containing inert PMMA implants were also included in the measurements for comparison. During the healing process the bioactive glass granules are gradually dissolved into the surrounding bodily fluids and a thin reaction layer composed of silica gel forms on the surfaces of the granules. Subsequently, an additional surface layer composed of HCA, a material that closely resembles natural hydroxyapatite, is formed on the granules. As the healing process to regenerate the bone in the gap progresses, a complex three-dimensional network of newly formed trabecular bone grows in between the granules, attaching onto the surface layers and eventually enveloping the gradually dissolving granules entirely. Ultimately, the scaffold is intended to degrade completely, and a structure of regenerated, remodeled cortical bone is expected to form in the volume of the initial defect. As the thicknesses of both the surface layers of the granules and the individual trabeculae of the newly formed bone are in the micrometre range, x-ray microtomography was employed to evaluate and assess the complex three-dimensional structure, consisting of trabecular bone intertwined with granules at varying stages of dissolution. By evaluating the rate of formation of these structures at three different stages, i.e. time points, of regeneration, valuable information on the effectiveness of the bioactive glass BAG-S53P4(-PLGA) for the regeneration of bone defects can be obtained. The measurements were performed at the University of Helsinki’s Laboratory of Microtomography using its Nanotom apparatus at 80 kV voltage, 150 µA current and a voxel size of 15 micrometres. 1000 projection images per sample were used in 37 reconstructions utilizing the FBP algorithm. Subsequent image processing to analyze and compare the samples was conducted using ImageJ. A procedure to reduce image artefacts caused by metal parts in the samples was developed, utilizing Gaussian filtering, as well as a preliminary image segmentation scheme based on morphological filtering, to automatically separate the bone from the granules and their surface layers.
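
    As a rough illustration of the artefact-reduction and segmentation steps described above (the actual processing was done in ImageJ), the sketch below applies Gaussian smoothing followed by multi-Otsu thresholding and morphological opening to a single reconstructed slice. This is a minimal sketch assuming scikit-image; the filter sizes, and the assumption that granules are the brightest phase and bone the intermediate one, are illustrative and not the thesis protocol.

```python
from skimage import filters, morphology

def preliminary_segmentation(slice_img, sigma=2.0, opening_radius=2):
    """Sketch: damp artefacts with a Gaussian, then separate phases.

    Assumes (illustratively) that glass granules are the brightest phase
    and newly formed bone has intermediate grey values.
    """
    # Gaussian filtering to suppress high-frequency metal-artefact streaks.
    smoothed = filters.gaussian(slice_img, sigma=sigma)

    # Two thresholds split background < bone < granules (multi-Otsu).
    t_low, t_high = filters.threshold_multiotsu(smoothed, classes=3)
    bone = (smoothed > t_low) & (smoothed <= t_high)
    granules = smoothed > t_high

    # Morphological opening removes small spurious islands in each mask.
    selem = morphology.disk(opening_radius)
    bone = morphology.binary_opening(bone, selem)
    granules = morphology.binary_opening(granules, selem)
    return bone, granules
```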

    Analysis of tomographic images


    Update urban basemap by using the LiDAR mobile mapping system : the case of Abu Dhabi municipal system

    Basemaps are the main resource used in urban planning and in building and infrastructure asset management. These maps are used by citizens and by private and public stakeholders. Therefore, accurate, up-to-date reference geoinformation is needed to provide a good service. In general, basemaps have been updated by aerial photogrammetry or field surveying, but these methods are not always possible and alternatives need to be sought. Current limitations and challenges that face traditional field surveys include areas with extreme weather, deserts or arctic environments, and flight restrictions due to proximity to other countries when no agreement is in place. In such cases, alternatives for large-scale mapping are required. This thesis proposes the use of a mobile mapping system (MMS) to update urban basemaps. Most urban features can be extracted from the point cloud using commercial software or open libraries. However, there are exceptions: manhole covers, and elements that remain hidden even in captures from different perspectives, most commonly building corners. Therefore, the main objective of this study was to establish a methodology for extracting manholes automatically and for completing hidden corners of buildings, so that urban basemaps can be updated. The algorithm developed to extract manholes is based on time, intensity and shape detection parameters, whereas additional information from satellite images is used to complete buildings. Each municipality knows the materials and dimensions of its manholes. Taking advantage of this knowledge, the point cloud is filtered to classify points according to the set of intensity values associated with the manhole material. From the classified points, the minimum bounding rectangles (MBR) are obtained and finally the shape is adjusted and drawn. We use satellite imagery to digitize the layout of building footprints with automated software tools. Then, the visible corners of buildings from the LiDAR point cloud are imported and a fitting process is performed by comparing them with the corners of the building from the satellite image. Two methods are evaluated to establish which is the most suitable for adjustment in these conditions. In the first method, the differences in the X and Y directions are measured at the corners where both LiDAR and satellite data are available, and the shift is computed as the average of the offsets. In the second method, a Helmert 2D transformation is applied. MMS relies on Global Navigation Satellite Systems (GNSS) and Inertial Measurement Units (IMU) to georeference the point clouds. Their accuracy depends on the acquisition environment. In this study, the influence of the urban pattern is analysed in three zones with varied urban characteristics: buildings of different heights, open areas, and areas with low and high levels of urbanization. To evaluate the efficiency of the proposed algorithms, three areas with varying urban patterns were chosen in Abu Dhabi. In these areas, 3D urban elements (light poles, street signs, etc.) were automatically extracted using commercial software. The proposed algorithms were applied to the manholes and buildings. The completeness and correctness ratios and the geometric accuracy were calculated for all urban elements in the three areas. The best success rates (>70%) were for light poles, street signs and road curbs, regardless of the height of the buildings. The worst rate was obtained for the same features in peri-urban areas, due to high vegetation.
In contrast, the best results for trees were found in these areas. Our methodology demonstrates the great potential and efficiency of mobile LiDAR technology in updating basemaps, a process that is required to achieve standard accuracy in large-scale maps. The cost and the time required for the proposed methodology were calculated and compared with the traditional method. It was found that mobile LiDAR could be a standard, cost-efficient procedure for updating maps.
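
    The two corner-adjustment methods compared above, the average XY offset and the Helmert 2D (four-parameter similarity) transformation, can be written down as a small least-squares sketch. This is a minimal sketch assuming NumPy and already-matched corner pairs; the sample coordinates are hypothetical and this is not the thesis implementation.

```python
import numpy as np

def mean_offset(src, dst):
    """Method 1: average the X/Y differences at matched corners."""
    return (dst - src).mean(axis=0)          # (dx, dy)

def helmert_2d(src, dst):
    """Method 2: 4-parameter Helmert (similarity) transform dst ~ s*R*src + t.

    Solved linearly for a = s*cos(rot), b = s*sin(rot), tx, ty.
    """
    x, y = src[:, 0], src[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    # Rows for x': [x, -y, 1, 0]; rows for y': [y, x, 0, 1].
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                   np.column_stack([y,  x, zeros, ones])])
    obs = np.concatenate([dst[:, 0], dst[:, 1]])
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, obs, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), tx, ty   # scale, rotation, shift

# Hypothetical matched corners (LiDAR-visible vs. satellite footprint):
src = np.array([[10.0, 5.0], [40.0, 5.0], [40.0, 25.0]])
dst = src * 1.001 + np.array([0.35, -0.20])
print(mean_offset(src, dst))
print(helmert_2d(src, dst))
```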

    The use of contextual techniques and textural analysis of satellite imagery in geological studies of arid regions

    This thesis examines the problem of extracting spatial information (context and texture) of use to the geologist from satellite imagery. Part of the Arabian Shield was chosen as the study area. Two new contextual techniques, (a) Ripping Membrane and (b) Rolling Ball, were developed and examined in this study. Both new context-based techniques proved to be excellent tools for visual detection and analysis of lineaments, and were clearly better than the 'traditional' spatial filtration technique. This study revealed structural lineaments, mostly mapped for the first time, which are clearly related to the regional tectonic history of the area. Contextual techniques were used to perform image segmentation. Two different image segmentation methods were developed and examined in this study: automatic watershed segmentation and the ripping membrane/Laserscan system method (used here for the first time). The second method produced high-accuracy results for four selected test sites. A new automatic lineament extraction method using the above contextual techniques was developed. The aim of the method was to produce an automatic lineament map and the azimuth direction of these lineaments in each rock type, as defined by the segmented regions. 75-85% of the visually traced lineaments were extracted by the automatic method. The automatic method appears to give a dominant trend slightly different (10° to 15°) from the visually determined trend. It was demonstrated that not all the different types of rock could be discriminated using the spectral image enhancement techniques (band ratio, principal components and decorrelation stretch). Therefore, the spatial grey level dependency matrix (SGLDM) was used to produce a texture feature image, which would enable distinctions to be made and overcome the limitations of spectral enhancement techniques. The SGLDM did not produce any useful texture features that could discriminate between every rock type in the selected test sites. It did, however, show acceptable texture discrimination between some rock types. The remote sensing data examined in this thesis were Landsat (Multispectral Scanner, Thematic Mapper), SPOT, and Shuttle Imaging Radar (SIR-B) imagery.
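
    The spatial grey level dependency matrix used above is more commonly known today as the grey-level co-occurrence matrix (GLCM). The sketch below shows how per-window SGLDM statistics can be turned into a coarse texture feature image; it is a minimal sketch assuming scikit-image, and the window size, distances and chosen statistics are illustrative rather than those used in the thesis.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def sgldm_features(window):
    """SGLDM/GLCM statistics for one uint8 image window."""
    glcm = graycomatrix(window,
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # Average each statistic over the four directions.
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

def texture_feature_image(band, win=33, step=16, prop="contrast"):
    """Slide a window over one uint8 band to build a coarse texture image."""
    rows = np.arange(0, band.shape[0] - win, step)
    cols = np.arange(0, band.shape[1] - win, step)
    out = np.zeros((rows.size, cols.size))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            out[i, j] = sgldm_features(band[r:r + win, c:c + win])[prop]
    return out
```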

    Data and knowledge engineering for medical image and sensor data


    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained based on observations using our primary senses alone. Often this is because their originating cause is either too small, too far away, or in other ways obstructed. To put it in other words: it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research has to be conducted in order to improve our understanding of even the most basic effects. In this thesis, we are going to present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We are able to extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.

    Robust Magnetic Resonance Imaging of Short T2 Tissues

    Tissues with short transverse relaxation times are defined as ‘short T2 tissues’, and they often appear dark on images generated by conventional magnetic resonance imaging techniques. Common short T2 tissues include tendons, meniscus, and cortical bone. Ultrashort Echo Time (UTE) pulse sequences can provide morphologic contrasts and quantitative maps for short T2 tissues by reducing the echo time to the system minimum (e.g., less than 100 µs). Therefore, UTE sequences have become a powerful imaging tool for visualizing and quantifying short T2 tissues in many applications. In this work, we developed a new Flexible Ultra Short time Echo (FUSE) pulse sequence employing a total of thirteen acquisition features with adjustable parameters, including optimized radiofrequency pulses, trajectories, choice of two or three dimensions, and multiple long-T2 suppression techniques. Together with the FUSE sequence, an improved analytical density correction and an auto-deblurring algorithm were incorporated as part of a novel reconstruction pipeline for reducing imaging artifacts. Firstly, we evaluated the FUSE sequence using a phantom containing short T2 components. The results demonstrated that varying the UTE acquisition methods, improving the density correction functions and improving the deblurring algorithm could reduce the various artifacts, improve the overall signal, and enhance short T2 contrast. Secondly, we applied the FUSE sequence in bovine stifle joints (similar to the human knee) for morphologic imaging and quantitative assessment. The results showed that it was feasible to use the FUSE sequence to create morphologic images that isolate signals from the various knee joint tissues and to carry out comprehensive quantitative assessments, using the meniscus as a model, including the mapping of longitudinal relaxation (T1) times, quantitative magnetization transfer parameters, and effective transverse relaxation (T2*) times. Lastly, we utilized the FUSE sequence to image the human skull to evaluate its feasibility for synthetic computed tomography (CT) generation and radiation treatment planning. The results demonstrated that radiation treatment plans created using the FUSE-based synthetic CT and traditional CT data gave comparable dose calculations, with a mean dose difference of less than one percent. In summary, this thesis clearly demonstrated the need for the FUSE sequence and its potential for robustly imaging short T2 tissues in various applications.
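
    The quantitative T2* mapping mentioned above is usually based on fitting a mono-exponential decay S(TE) = S0·exp(-TE/T2*) to multi-echo magnitude data. The sketch below fits that model for a single voxel; it is a minimal, generic sketch assuming NumPy and SciPy, with hypothetical echo times, and it is not the FUSE reconstruction or fitting pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, t2star):
    """Mono-exponential decay model S(TE) = S0 * exp(-TE / T2*)."""
    return s0 * np.exp(-te / t2star)

def fit_t2star(te_ms, signals):
    """Fit T2* (ms) for one voxel from multi-echo magnitude signals."""
    p0 = (signals[0], 1.0)                     # initial guess: S0, T2* = 1 ms
    popt, _ = curve_fit(mono_exp, te_ms, signals, p0=p0,
                        bounds=([0, 0.01], [np.inf, 100.0]))
    return popt[1]                             # T2* in milliseconds

# Hypothetical UTE echo times (ms) and synthetic signals for a short-T2* tissue:
te = np.array([0.05, 0.5, 1.5, 3.0, 5.0])
sig = mono_exp(te, 100.0, 0.8)                 # true T2* = 0.8 ms
print(fit_t2star(te, sig))                     # ~0.8
```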

    CELLmicrocosmos - Integrative cell modeling at the  molecular, mesoscopic and functional level

    Sommer B. CELLmicrocosmos - Integrative cell modeling at the molecular, mesoscopic and functional level. Bielefeld: Bielefeld University; 2012. The modeling of cells is an important application area of Systems Biology. In the context of this work, three cytological levels are defined: the mesoscopic, the molecular and the functional level. A number of quite diverse related approaches, which can be categorized into these disciplines, are introduced during this work, but none of them covers all areas. In this work, the combination of all three aforementioned cytological levels is presented, realized by the CELLmicrocosmos project, combining and extending different Bioinformatics-related methods. The mesoscopic level is covered by CellEditor, a simple tool to generate eukaryotic or prokaryotic cell models. These are based on cell components represented by three-dimensional shapes. Different methods to generate these shapes, partly using external tools such as Amira, 3ds Max and/or Blender, are discussed: abstract, interpretative, 3D-microscopy-based and molecular-structure-based cell component modeling. To communicate with these tools, CellEditor provides import as well as export capabilities based on the VRML97 format. In addition, different cytological coloring methods are discussed which can be applied to the cell models. MembraneEditor operates at the molecular level. This tool solves heterogeneous Membrane Packing Problems by distributing lipids on rectangular areas using collision detection. It provides fast and intuitive methods supporting a wide range of different application areas based on the PDB format. Moreover, a plugin interface enables the use of custom algorithms. In the context of this work, a high-density-generating lipid packing algorithm is evaluated: The Wanderer. The semi-automatic integration of proteins into the membrane is enabled by using data from the OPM and PDBTM databases. In contrast to the aforementioned structural levels, the third level covers the functional aspects of the cell. Here, protein-related networks or data sets can be imported and mapped into the previously generated cell models using the PathwayIntegration. For this purpose, data integration methods are applied, represented by the data warehouse DAWIS-M.D., which includes a number of established databases. This information is enriched by text-mining data acquired from the ANDCell database. The localization of proteins is supported by different tools like the interactive Localization Table and the Localization Charts. The correlation of partly multi-layered cell components with protein-related networks is covered by the Network Mapping Problem. A special implementation of the ISOM layout is used for this purpose. Finally, a first approach to combine all these interrelated levels is presented: CellExplorer, which integrates CellEditor as well as PathwayIntegration and imports structures generated with MembraneEditor. For this purpose, the shape-based cell components can be correlated with networks as well as molecular membrane structures using Membrane Mapping. It is shown that the tools discussed here can be applied to scientific as well as educational tasks: educational cell visualization, initial membrane modeling for molecular simulations, analysis of interrelated protein sets, and cytological disease mapping. These are supported by the user-friendly combination of Java, Java 3D and Web Start technology.
In the last part of this thesis the future of Integrative Cell Modeling is discussed. While the approaches discussed here basically represent three-dimensional snapshots of the cell, prospective approaches have to be extended into the fourth dimension: time.
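
    The Membrane Packing Problem addressed by MembraneEditor, distributing lipids over a rectangular area while avoiding collisions, can be illustrated with a toy placement loop. This is a minimal sketch in Python (MembraneEditor itself is Java-based) that reduces lipids to circles of a single radius and uses simple rejection sampling, which is far cruder than the actual packing algorithms such as The Wanderer.

```python
import math
import random

def pack_lipids(width, height, radius, n_target, max_tries=20000, seed=1):
    """Toy 2D packing: place non-overlapping circles ('lipids') in a rectangle."""
    rng = random.Random(seed)
    placed = []
    tries = 0
    while len(placed) < n_target and tries < max_tries:
        tries += 1
        x = rng.uniform(radius, width - radius)
        y = rng.uniform(radius, height - radius)
        # Collision detection: reject if the candidate overlaps any placed lipid.
        if all(math.hypot(x - px, y - py) >= 2 * radius for px, py in placed):
            placed.append((x, y))
    return placed

membrane = pack_lipids(width=100.0, height=100.0, radius=4.0, n_target=40)
print(len(membrane), "lipids placed")
```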

    The Talking Heads experiment: Origins of words and meanings

    The Talking Heads Experiment, conducted in the years 1999-2001, was the first large-scale experiment in which open populations of situated embodied agents created a new shared vocabulary by playing language games about real-world scenes in front of them. The agents could teleport to different physical sites in the world through the Internet. Sites in Antwerp, Brussels, Paris, Tokyo, London, Cambridge and several other locations were linked into the network. Humans could interact with the robotic agents either on site or remotely through the Internet and thus influence the evolving ontologies and languages of the artificial agents. The present book describes in detail the motivation, the cognitive mechanisms used by the agents, the various installations of the Talking Heads, the experimental results that were obtained, and the interaction with humans. It also provides a perspective on what happened in the field after these initial groundbreaking experiments. The book is invaluable reading for anyone interested in the history of agent-based models of language evolution and the future of Artificial Intelligence.
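
    The language games at the heart of the experiment follow the naming-game pattern: a speaker names an object; on success both agents discard competing words, on failure the hearer adopts the speaker's word. The sketch below is a toy simulation of that pattern under those simplifying assumptions; it ignores the perceptual grounding, cameras and evolving ontologies that made the Talking Heads experiment distinctive.

```python
import random

def naming_game(n_agents=20, n_objects=5, rounds=5000, seed=0):
    """Toy naming game: agents converge on one word per object."""
    rng = random.Random(seed)
    # Each agent keeps a set of candidate words per object.
    vocab = [[set() for _ in range(n_objects)] for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        obj = rng.randrange(n_objects)
        if not vocab[speaker][obj]:                 # speaker invents a new word
            vocab[speaker][obj].add("w%d" % rng.randrange(10**6))
        word = rng.choice(sorted(vocab[speaker][obj]))
        if word in vocab[hearer][obj]:              # success: both align on it
            vocab[speaker][obj] = {word}
            vocab[hearer][obj] = {word}
        else:                                       # failure: hearer adopts it
            vocab[hearer][obj].add(word)
    return vocab

v = naming_game()
print([sorted(a[0]) for a in v[:3]])   # first three agents' words for object 0
```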