
    A non-invasive technique for burn area measurement

    A reliable and accurate method for assessing the surface area of burn wounds is needed in burn care and treatment. The percentage of body surface area burned is of critical importance in evaluating fluid replacement amounts and nutritional support during the first 24 hours of postburn therapy. We have developed a noninvasive, inexpensive technique that measures burn areas accurately. Our imaging system is based on a technique known as structured light. Most structured light computer imaging systems, including ours, use triangulation to determine the location of points in three dimensions as the intersection of two lines: a ray of light originating from the structured light projector and the line of sight determined by the location of the image point in the camera plane. The geometry used to determine 3D location by triangulation is identical to the geometry of other stereo-based vision systems, including the human vision system. Our system projects a square grid pattern from a 35 mm slide onto the patient. The grid on the slide is composed of uniformly spaced orthogonal stripes that can be indexed by row and column. Each slide also has square markers placed between the lines of the grid, in both the horizontal and vertical directions, at the center of the slide. Our system locates intersections of the projected grid stripes in the camera image and determines the 3D location of the corresponding points on the body by triangulation. Four steps are necessary to reconstruct the 3D locations of points on the surface of the skin: camera and projector calibration; image processing to locate the grid intersections in the camera image; grid labeling to establish the correspondence between projected and imaged intersections; and triangulation to determine three-dimensional position. Three steps are required to segment the burned portion of the image: edge detection to find the strongest edges of the region; edge following to form a closed boundary; and region filling to identify the burn region. After combining the reconstructed 3D locations and the segmented image, numerical analysis and geometric modeling techniques are used to calculate the burn area: cubic spline interpolation, bicubic surface patches, and Gaussian quadrature double integration. The accuracy of this technique has been demonstrated. Its benefits are, first, that no assumptions about the shape of the human body are required and, second, that neither the Rule-of-Nines nor the weight and height of the patient is needed. The technique can be used for any human body shape, regardless of weight, proportion, size, sex, or skin pigmentation. The low cost, intuitive method, and demonstrated efficiency of this computer imaging technique make it a desirable alternative to current methods and provide the burn care specialist with a sterile, safe, and effective diagnostic tool for assessing and investigating burn areas.
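    The triangulation step the abstract describes amounts to intersecting two rays that, with real calibration noise, rarely meet exactly. A minimal numpy sketch of that step is below; it assumes the camera and projector calibration has already produced ray origins and directions, and it returns the midpoint of the shortest segment between the two rays.

```python
import numpy as np

def triangulate(cam_origin, cam_dir, proj_origin, proj_dir):
    """Closest-point triangulation of two (possibly skew) rays:
    the camera line of sight and the projector ray through a
    grid intersection. All inputs are 3-vectors."""
    d1 = cam_dir / np.linalg.norm(cam_dir)
    d2 = proj_dir / np.linalg.norm(proj_dir)
    w0 = cam_origin - proj_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # ~0 when the rays are parallel
    t = (b * e - c * d) / denom      # parameter along the camera ray
    s = (a * e - b * d) / denom      # parameter along the projector ray
    p1 = cam_origin + t * d1
    p2 = proj_origin + s * d2
    return 0.5 * (p1 + p2)           # midpoint absorbs calibration noise
```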

    Macroscale multimodal imaging reveals ancient painting production technology and the vogue in Greco-Roman Egypt.

    Macroscale multimodal chemical imaging combining hyperspectral diffuse reflectance (400-2500 nm), luminescence (400-1000 nm), and X-ray fluorescence (XRF, 2-25 keV) data is uniquely equipped for noninvasive characterization of heterogeneous complex systems such as paintings. Here we present the first application of multimodal chemical imaging to analyze the production technology of an 1,800-year-old painting, one of the oldest surviving encaustic ("burned in") paintings in the world. Co-registration of the data cubes from these three hyperspectral imaging modalities enabled the comparison of reflectance, luminescence, and XRF spectra at each pixel in the image for the entire painting. By comparing the molecular and elemental spectral signatures at each pixel, this fusion of the data allowed for a more thorough identification and mapping of the painting's constituent organic and inorganic materials, revealing key information on the selection of raw materials, the production sequence, and the fashion aesthetics and chemical arts practiced in Egypt in the second century AD.
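    Once the three data cubes share a common pixel grid, the per-pixel fusion is conceptually simple: stack the modalities into one long spectral signature per pixel so molecular (reflectance, luminescence) and elemental (XRF) evidence can be queried together. The sketch below illustrates this with synthetic arrays; the cube shapes and band counts are invented stand-ins, not the instrument's actual dimensions.

```python
import numpy as np

# Hypothetical co-registered data cubes on a common H x W pixel grid.
H, W = 256, 256
reflectance  = np.random.rand(H, W, 420)    # 400-2500 nm bands
luminescence = np.random.rand(H, W, 120)    # 400-1000 nm bands
xrf          = np.random.rand(H, W, 1024)   # 2-25 keV energy channels

# Fuse along the spectral axis: one combined signature per pixel.
fused = np.concatenate([reflectance, luminescence, xrf], axis=-1)

# Example query: the full multimodal signature at one pixel of interest,
# ready for comparison against reference material signatures.
y, x = 128, 64
pixel_signature = fused[y, x]               # shape: (420 + 120 + 1024,)
```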

    Automated and robust geometric and spectral fusion of multi-sensor, multi-spectral satellite images

    Earth observation satellite data acquired in recent years and decades provide an ideal basis for accurate long-term monitoring and mapping of the Earth's surface and atmosphere. However, the vast diversity of sensor characteristics often prevents synergetic use. Hence, there is an urgent need to combine heterogeneous multi-sensor data into geometrically and spectrally harmonized time series of analysis-ready satellite data. This dissertation provides a mainly methodological contribution by presenting two newly developed open-source algorithms for sensor fusion, both thoroughly evaluated, tested, and validated in practical applications. AROSICS, a novel algorithm for multi-sensor image co-registration and geometric harmonization, provides robust, automated detection and correction of positional shifts and aligns the data to a common coordinate grid. The second algorithm, SpecHomo, was developed to unify differing spectral sensor characteristics. It relies on separate material-specific regressors for different land cover classes, enabling higher transformation accuracies and the estimation of unilaterally missing spectral bands. Building on these algorithms, a third study investigated the added value of synthesized red edge bands and of dense time series, both enabled by sensor fusion, for estimating burn severity and mapping fire damage from Landsat. The results illustrate the effectiveness of the developed algorithms in reducing multi-sensor, multi-temporal data inconsistencies and demonstrate the added value of geometric and spectral harmonization for subsequent products. Synthesized red edge information proved valuable for retrieving vegetation-related parameters such as burn severity. Moreover, using sensor fusion to combine multi-sensor time series was shown to offer great potential for more accurate monitoring and mapping of quickly evolving environmental processes.
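    The core idea behind SpecHomo, as summarized here, is one regressor per land-cover class mapping source-sensor spectra to target-sensor spectra, which can also synthesize bands the source sensor lacks (e.g. red edge). Below is a toy sketch of that idea using scikit-learn; the arrays, band counts, and class labels are synthetic placeholders, not SpecHomo's actual implementation or training data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_px, n_src_bands, n_tgt_bands, n_classes = 5000, 6, 10, 4

src = rng.random((n_px, n_src_bands))    # e.g. Landsat-like spectra
tgt = rng.random((n_px, n_tgt_bands))    # e.g. Sentinel-2-like spectra
cls = rng.integers(0, n_classes, n_px)   # land-cover class per pixel

# Train one material-specific regressor per land-cover class.
models = {c: LinearRegression().fit(src[cls == c], tgt[cls == c])
          for c in range(n_classes)}

# Predict: route each pixel through the regressor of its class. Target
# bands absent from the source sensor are thereby synthesized.
pred = np.empty((n_px, n_tgt_bands))
for c, model in models.items():
    pred[cls == c] = model.predict(src[cls == c])
```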

    Automatic mapping of burned areas using Landsat 8 time-series images in Google Earth engine: a case study from Iran

    Due to natural conditions and inappropriate management responses, large parts of the plains and forests in Iran have burned in recent years. Given the increasing availability of open-access satellite images and open-source software packages, we developed a fast and cost-effective remote sensing methodology for characterizing burned areas across the entire country of Iran. We mapped the fire-affected areas using a post-classification supervised method and Landsat 8 time-series images. To this end, the Google Earth Engine (GEE) and Google Colab computing services were used to facilitate the downloading and processing of images and to allow effective implementation of the algorithms. In total, 13 spectral indices were calculated from the Landsat 8 images and added to the nine original Landsat 8 bands. Training polygons of burned and unburned areas were accurately delineated using information from the Iranian Space Agency (ISA), Sentinel-2 images, and Fire Information for Resource Management System (FIRMS) products. A combination of Genetic Algorithm (GA) and Neural Network (NN) approaches was then used to select 19 optimal features out of the 22 bands. The 19 optimal bands were subsequently fed to two classifiers, NN and Random Forest (RF), over the timespans of 1 January 2019 to 30 December 2020 and of 1 January 2021 to 30 September 2021. Overall classification accuracies of 94% and 96% were obtained for these two classifiers, respectively. The omission and commission errors of both classifiers were also less than 10%, indicating the promising capability of the proposed methodology in detecting burned areas. To detect the burned areas caused by the 2021 wildfire, the image differencing method was used as well. The resultant models were finally compared to the MODIS fire products over 10 sampled polygons of the burned areas. Overall, the models detected the burned areas with high accuracy in terms of shape and perimeter, which can further inform prevention strategies for endangered biodiversity.
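    A hedged GEE sketch of this kind of workflow follows: a Landsat 8 surface reflectance time series, one added burn-sensitive index (NBR, as an example of the 13 indices), and a random forest classifier. The asset ID for the training polygons, the 'burned' label property, the point location, and the band selection are illustrative assumptions, not the authors' exact configuration.

```python
import ee
ee.Initialize()

# Landsat 8 surface reflectance over one of the study timespans.
l8 = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
        .filterDate('2019-01-01', '2020-12-30')
        .filterBounds(ee.Geometry.Point(51.4, 32.6)))  # example point in Iran

def add_nbr(img):
    # Normalized Burn Ratio from NIR (SR_B5) and SWIR2 (SR_B7).
    return img.addBands(img.normalizedDifference(['SR_B5', 'SR_B7'])
                           .rename('NBR'))

composite = l8.map(add_nbr).median()

# Hypothetical FeatureCollection of burned/unburned training polygons
# with a 0/1 'burned' property.
training = ee.FeatureCollection('users/example/burn_training_polygons')
samples = composite.sampleRegions(collection=training,
                                  properties=['burned'], scale=30)

bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7', 'NBR']
rf = ee.Classifier.smileRandomForest(100).train(samples, 'burned', bands)
burn_map = composite.select(bands).classify(rf)
```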

    Estimating the granularity coefficient of a Potts-Markov random field within an MCMC algorithm

    This paper addresses the problem of estimating the Potts parameter β jointly with the unknown parameters of a Bayesian model within a Markov chain Monte Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem because performing inference on β requires computing the intractable normalizing constant of the Potts model. In the proposed MCMC method, the estimation of β is conducted using a likelihood-free Metropolis-Hastings algorithm. Experimental results obtained for synthetic data show that estimating β jointly with the other unknown parameters leads to estimation results that are as good as those obtained with the actual value of β. On the other hand, assuming that the value of β is known can degrade estimation performance significantly if this value is incorrect. To illustrate the interest of this method, the proposed algorithm is successfully applied to real two-dimensional SAR and three-dimensional ultrasound images.
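    The reason the normalizing constant is intractable is that it sums exp(β s(z)) over all K^(HW) label fields z, where s(z) counts like-labelled neighbor pairs. One well-known likelihood-free Metropolis-Hastings construction (the exchange-algorithm style, not necessarily the authors' exact scheme) sidesteps this by drawing an auxiliary field w from the Potts model at the proposed β, so the unknown constants cancel in the acceptance ratio: log α = (β' − β)(s(x) − s(w)). A minimal sketch under those assumptions:

```python
import numpy as np

def s(z):
    """Potts sufficient statistic: like-labelled neighbour pairs."""
    return np.sum(z[:, :-1] == z[:, 1:]) + np.sum(z[:-1, :] == z[1:, :])

def sample_potts(beta, shape, K, sweeps, rng):
    """Approximate draw from a K-state Potts field via Gibbs sweeps."""
    H, W = shape
    z = rng.integers(0, K, size=shape)
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                counts = np.zeros(K)
                for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                    if 0 <= ni < H and 0 <= nj < W:
                        counts[z[ni, nj]] += 1
                p = np.exp(beta * counts)
                z[i, j] = rng.choice(K, p=p / p.sum())
    return z

def estimate_beta(x, n_iter=500, step=0.05, seed=0):
    """Likelihood-free MH for beta given an observed label field x."""
    rng = np.random.default_rng(seed)
    K = int(x.max()) + 1
    beta, s_x, trace = 0.5, s(x), []
    for _ in range(n_iter):
        beta_new = beta + step * rng.standard_normal()
        if beta_new > 0:
            # Auxiliary field cancels the intractable constants.
            w = sample_potts(beta_new, x.shape, K, sweeps=20, rng=rng)
            log_alpha = (beta_new - beta) * (s_x - s(w))
            if np.log(rng.random()) < log_alpha:
                beta = beta_new
        trace.append(beta)
    return np.array(trace)
```

    In the paper's setting this update would sit inside a larger Gibbs sampler that alternates between the label field, the model's other unknowns, and β; the sketch isolates the β step only.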

    A high resolution full-field range imaging system for robotic devices

    There has been considerable effort by many researchers to develop a high-resolution full-field range imaging system. Traditionally these systems rely on a homodyne technique that modulates the illumination source and the camera shutter at some high frequency. Such systems tend to need recalibration to account for changing ambient light conditions, and generally cannot provide better than single-centimeter range resolution, even then over a range of only a few meters. We present a system, tested to the proof-of-concept stage, that is being developed for use on a range of mobile robots. The system has the potential for real-time, sub-millimeter range resolution, with minimal power and space requirements.
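    Homodyne full-field systems of this kind typically recover range from the phase shift of the returned modulation. As background to the abstract (not this paper's specific hardware), the standard four-bucket estimator takes four per-pixel intensity samples with the shutter gain shifted by 0, 90, 180, and 270 degrees relative to the illumination:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def amcw_range(a0, a1, a2, a3, f_mod):
    """Per-pixel range from four phase-stepped intensity images
    a0..a3 (numpy arrays) under modulation frequency f_mod (Hz)."""
    phase = np.arctan2(a3 - a1, a0 - a2)      # phase of the returned signal
    phase = np.mod(phase, 2 * np.pi)          # fold into [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)    # range within ambiguity interval

# e.g. 10 MHz modulation gives an unambiguous range of c / (2 f) = ~15 m:
# distances = amcw_range(a0, a1, a2, a3, f_mod=10e6)
```

    The ratio in arctan2 cancels constant ambient offset and common gain, which is why phase-stepped measurement is preferred over raw intensity.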