
    Trying to break new ground in aerial archaeology

    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach which has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power have emerged, their potential has not yet been fully exploited in order to re-invent and highly optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. In this paper, a possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. 
In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that with current technology it is feasible to do so. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns that are currently created by airborne archaeological prospection.

    SusOrganic - Development of quality standards and optimised processing methods for organic produce - Final report

    The SusOrganic project aimed to develop improved drying and cooling/freezing processes for organic products in terms of sustainability and objective product quality criteria. Initially, the consortium focused on a predefined set of products to investigate (fish, meat, fruits and vegetables). Contacting participants in the fruit and vegetable sector showed that there is little perceived need for changes to improve the processes. At the same time, it became clear that hop and herb producers (drying) face several challenges in terms of product quality and the cost of drying processes. Therefore, the range of products was extended accordingly. The results of a consumer survey conducted as part of the project showed clearly that consumers trust the organic label, but also tend to confuse the term organic with regional or fair trade. Further, the primary production on farm, and not the processing, is explicitly included in consumers' evaluation of sustainability. Appearance of organic products was found to be one of the least important quality criteria or attributes regarding buying decisions. However, there are indications that an imperfect appearance can be a quality attribute for consumers, as the product is then perceived to be processed without artificial additives. Regarding drying operations, small-scale producers in the organic sector often work with old and/or modified techniques and technologies, which frequently leads to inefficient drying processes due to high energy consumption and decreased product quality. Inappropriate air volume flow and distribution often cause inefficient removal of moisture from the product and heterogeneous drying throughout the bulk. Guidelines for improving the physical setup of existing driers as well as designs for new drying operations, including novel drying strategies, were developed. 
Besides chilling and freezing, the innovative idea of superchilling was included in the project. The superchilled cold chain is only a few degrees colder than the conventional refrigeration chain but has a significant impact on preservation characteristics due to shock frosting of the outer layer of the product and the subsequent distribution of very small ice crystals throughout the product during storage. Superchilling of organically farmed salmon eliminated the demand for ice during transport, resulting in both a reduction of energy costs and a better value-chain performance in terms of carbon footprint. This is mainly due to the significantly reduced transport volume and weight once ice is no longer needed. The product quality is not different, but the shelf life is extended compared to chilled fish. This means that the high quality of organic salmon can be maintained over a longer period, which can be helpful, e.g., for reaching distant markets. The same trend was found for superchilled organic meat products such as pork and chicken. The consortium also developed innovative non-invasive measurement and control systems and improved drying strategies and systems for fruits, vegetables, herbs, hops and meat. These systems are based on changes occurring inside the product and therefore require strategies for observing the product during the drying process. Through auditing campaigns as well as pilot-scale drying tests, it has been possible to develop optimisation strategies for both herb and hop commodities, which can help reduce microbial spoilage and retain higher levels of volatile product components whilst reducing energy demands. These results can be applied, with modifications, to the other commodities under investigation. 
The environmental and cost performance of superchilling of salmon and of drying of meat, fruit and vegetables were also investigated, and the findings indicated that both superchilling and drying could improve the sustainability of organic food value chains, especially in the case of distant markets. An additional outcome of the project, beyond the original scope, was the development of a non-invasive, visual-sensor-based detection system for authenticity checks of meat products, distinguishing fresh from pre-frozen meat.

    Efficiently Tracking Homogeneous Regions in Multichannel Images

    We present a method for tracking Maximally Stable Homogeneous Regions (MSHR) in images with an arbitrary number of channels. MSHR are conceptually very similar to Maximally Stable Extremal Regions (MSER) and Maximally Stable Color Regions (MSCR), but can also be applied to hyperspectral and color images while remaining extremely efficient. The presented approach makes use of the edge-based component-tree, which can be calculated in linear time. In the tracking step, the MSHR are localized by matching them to the nodes of the component-tree. We use rotationally invariant region and gray-value features that can be calculated from first- and second-order moments at low computational complexity. Furthermore, we use a weighted feature vector to improve the data association in the tracking step. The algorithm is evaluated on a collection of different tracking scenes from the literature. Furthermore, we present two different applications: 2D object tracking and the 3D segmentation of organs. Comment: to be published in the ICPRS 2017 proceedings
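    The rotation-invariant moment features mentioned in the abstract can be illustrated with a small sketch. This is not the authors' code: it simply shows how first-order moments give a region's centroid and how the eigenvalues of the second-order (covariance) moments are invariant under rotation, which is what makes such features cheap and robust for matching regions between frames.

    ```python
    import numpy as np

    def region_moment_features(mask):
        """Illustrative moment-based region descriptor for a binary mask.

        Returns the area (zeroth moment) and the two eigenvalues of the
        central second-order moment (covariance) matrix, which are
        invariant to region rotation."""
        ys, xs = np.nonzero(mask)
        area = xs.size
        cx, cy = xs.mean(), ys.mean()              # first-order moments -> centroid
        mu20 = ((xs - cx) ** 2).mean()             # central second-order moments
        mu02 = ((ys - cy) ** 2).mean()
        mu11 = ((xs - cx) * (ys - cy)).mean()
        # eigenvalues of [[mu20, mu11], [mu11, mu02]] via the closed form
        half_trace = (mu20 + mu02) / 2
        root = np.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
        return area, half_trace + root, half_trace - root
    ```

    Because the eigenvalues do not change when a region is rotated, the same region produces (nearly) the same feature vector in consecutive frames, which supports the weighted data association described above.
    
    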

    Tree species identification in an urban environment using a data fusion approach

    This thesis explores a data fusion approach combining hyperspectral, LiDAR, and multispectral data to classify tree species in an urban environment. The study area is the campus of the University of Northern Iowa. In order to use the data fusion approach, a wide variety of data was incorporated into the classification. These data include: a four-band Quickbird image from April 2003 with 0.6m spatial resolution, a 24-band AISA hyperspectral image from July 2004 with 2m spatial resolution, a 63-band AISA Eagle hyperspectral image from October 2006 with 1m spatial resolution, a high-resolution, multiple-return LiDAR data set from April 2006 with sub-meter posting density, spectrometer data gathered in the field, and a database containing the location and type of every tree in the study area. The elevation data provided by the LiDAR was fused with the imagery in eCognition Professional. The LiDAR data was used to refine class rules by defining trees as objects with elevation greater than 3 meters. Classes included honey locust, white pine, crab apple, sugar maple, white spruce, American basswood, pin oak and ash. Results indicate that fusing LiDAR data with this imagery increased overall classification accuracy for all datasets. Overall classification accuracy with the October 2006 hyperspectral data and LiDAR was 93%. Increases in overall accuracy ranged from 12 to 24% over classifications based on spectral imagery alone. Further, in this study, hyperspectral data with higher spatial resolution provided increased classification accuracy. The limitations of the study included a LiDAR data set that was acquired slightly before the leaves had matured; this affected the shape and extent of the trees based on their LiDAR returns. The July 2004 hyperspectral data set was difficult to georectify at its 2m resolution, which may have resulted in some minor misalignment between the LiDAR and the July 2004 hyperspectral data. 
Future directions of the study include developing a classification scheme using a Classification And Regression Tree, utilizing all of the LiDAR returns in a classification instead of just the first and fourth returns, and examining an additional LiDAR-derived data set with estimated tree locations.
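    The elevation rule described in the abstract (trees as objects more than 3 meters above ground) can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's eCognition rule set: the function name and the idea of differencing a surface model against a terrain model to get normalised height are assumptions about how such a rule is typically implemented.

    ```python
    import numpy as np

    def tree_mask_from_lidar(dsm, dtm, min_height_m=3.0):
        """Keep pixels at least `min_height_m` above ground as tree candidates.

        dsm: LiDAR digital surface model (top of canopy/buildings), metres.
        dtm: digital terrain model (bare earth), metres.
        Returns a boolean mask of candidate tree objects."""
        height_above_ground = dsm - dtm          # normalised height
        return height_above_ground >= min_height_m
    ```

    In a rule-based classifier, such a mask would then be intersected with the spectral classification so that ground-level vegetation (lawns, shrubs) is not confused with tree canopy.
    
    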

    Web-Based Visualization of Very Large Scientific Astronomy Imagery

    Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high-performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating-point data at terabyte scales, with the ability to precisely adjust image settings in real time. The proposed clients are light-weight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch- and mouse-based devices. We assess the performance of the system and show that a single server can comfortably handle more than a hundred simultaneous users accessing full-precision 32-bit astronomy data. Comment: Published in Astronomy & Computing. IIPImage server available from http://iipimage.sourceforge.net . Visiomatic code and demos available from http://www.visiomatic.org
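    Remote viewers of this kind typically serve a multi-resolution tile pyramid so the client only ever downloads the pixels it displays. The sketch below shows the access pattern only; the function name, parameters, and the naive strided downsampling are illustrative assumptions, not the IIPImage API (a real server precomputes properly filtered pyramid levels and applies contrast/stretch adjustments server-side).

    ```python
    import numpy as np

    def extract_tile(image, level, tx, ty, tile=256):
        """Return tile (tx, ty) of a full-precision image at pyramid `level`.

        Level 0 is full resolution; each level halves the resolution.
        Strided slicing stands in for a real (filtered) pyramid."""
        step = 2 ** level
        reduced = image[::step, ::step]        # naive pyramid level
        y0, x0 = ty * tile, tx * tile
        return reduced[y0:y0 + tile, x0:x0 + tile]
    ```

    Because each request touches only one small window of the array, a terabyte-scale image never needs to be sent (or even fully read) to serve an interactive pan-and-zoom session.
    
    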

    Remote sensing of coastal vegetation in the Netherlands and Belgium

    Vegetation maps are frequently used in conservation planning and evaluation. Monitoring commitments, among others in relation to the European Habitats Directive, increase the need for efficient mapping tools. This paper explores methods of vegetation mapping with particular attention to automated classification of remotely sensed images. Characteristics of two main types of imagery are discussed: very high spatial resolution false-colour images on the one hand and hyperspectral images on the other. The first type has proved its qualities for mapping of - mainly - vegetation structure in dunes and salt marshes. Hyperspectral imagery enables greater thematic detail but encounters more technical problems.

    The future of Earth observation in hydrology

    In just the past 5 years, the field of Earth observation has progressed beyond the offerings of conventional space-agency-based platforms to include a plethora of sensing opportunities afforded by CubeSats, unmanned aerial vehicles (UAVs), and smartphone technologies that are being embraced by both for-profit companies and individual researchers. Over the previous decades, space agency efforts have brought forth well-known and immensely useful satellites such as the Landsat series and the Gravity Recovery and Climate Experiment (GRACE) system, with costs typically of the order of 1 billion dollars per satellite and with concept-to-launch timelines of the order of 2 decades (for new missions). More recently, the proliferation of smartphones has helped to miniaturize sensors and energy requirements, facilitating advances in the use of CubeSats that can be launched by the dozens, while providing ultra-high (3-5 m) resolution sensing of the Earth on a daily basis. Start-up companies that did not exist a decade ago now operate more satellites in orbit than any space agency, and at costs that are a mere fraction of traditional satellite missions. With these advances come new space-borne measurements, such as real-time high-definition video for tracking air pollution, storm-cell development, flood propagation, precipitation monitoring, or even for constructing digital surfaces using structure-from-motion techniques. Closer to the surface, measurements from small unmanned drones and tethered balloons have mapped snow depths and floods, and estimated evaporation at sub-metre resolutions, pushing back on spatio-temporal constraints and delivering new process insights. 
At ground level, precipitation has been measured using signal attenuation between antennae mounted on cell phone towers, while the proliferation of mobile devices has enabled citizen scientists to catalogue photos of environmental conditions, estimate daily average temperatures from battery state, and sense other hydrologically important variables such as channel depths using commercially available wireless devices. Global internet access is being pursued via high-altitude balloons, solar planes, and hundreds of planned satellite launches, providing a means to exploit the "internet of things" as an entirely new measurement domain. Such global access will enable real-time collection of data from billions of smartphones or from remote research platforms. This future will produce petabytes of data that can only be accessed via cloud storage and will require new analytical approaches to interpret. The extent to which today's hydrologic models can usefully ingest such massive data volumes is unclear. Nor is it clear whether this deluge of data will be usefully exploited, either because the measurements are superfluous, inconsistent, not accurate enough, or simply because we lack the capacity to process and analyse them. What is apparent is that the tools and techniques afforded by this array of novel and game-changing sensing platforms present our community with a unique opportunity to develop new insights that advance fundamental aspects of the hydrological sciences. To accomplish this will require more than just an application of the technology: in some cases, it will demand a radical rethink on how we utilize and exploit these new observing systems.
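    The cell-tower technique mentioned above rests on a simple relationship: rain-induced specific attenuation k (in dB/km) along a microwave link follows a power law k = a * R**b in the rain rate R (mm/h), so measuring the extra path loss lets one invert for rainfall. The sketch below is a minimal illustration of that inversion; the function name is made up, and the coefficients a and b are placeholders (in practice they depend on link frequency and polarisation, cf. ITU-R P.838).

    ```python
    def rain_rate_from_link(attenuation_db, length_km, a=0.33, b=1.0):
        """Invert the k-R power law k = a * R**b for a microwave link.

        attenuation_db: rain-induced excess attenuation over the link (dB).
        length_km: link path length (km).
        Returns the path-averaged rain rate in mm/h."""
        k = attenuation_db / length_km       # specific attenuation, dB/km
        return (k / a) ** (1.0 / b)          # rain rate, mm/h
    ```

    With b close to 1 at common link frequencies, the retrieval is nearly linear in the measured attenuation, which is one reason commercial microwave links make surprisingly good opportunistic rain gauges.
    
    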

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning and have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex datasets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".