
    An active vision system for tracking and mosaicking on UAV.

    Get PDF
    Lin, Kai Wun. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (leaves 120-127). Abstracts in English and Chinese. Contents:

    Abstract
    Acknowledgement
    Chapter 1  Introduction
        1.1  Overview of the UAV Project
        1.2  Challenges on Vision System for UAV
        1.3  Contributions of this Work
        1.4  Organization of Thesis
    Chapter 2  Image Sensor Selection and Evaluation
        2.1  Image Sensor Overview
            2.1.1  Comparing Sensor Features and Performance
            2.1.2  Rolling Shutter vs. Global Shutter
        2.2  Sensor Evaluation through USB Peripheral
            2.2.1  Interfacing Image Sensor and USB Controller
            2.2.2  Image Sensor Configuration
        2.3  Image Data Transmitting and Processing
            2.3.1  Data Transfer Mode and Buffering on USB Controller
            2.3.2  Demosaicking of Bayer Image Data
        2.4  Splitting Images and Exposure Problem
            2.4.1  Buffer Overflow on USB Controller
            2.4.2  Image Luminance and Exposure Adjustment
    Chapter 3  Embedded System for Vision Processing
        3.1  Overview of the Embedded System
            3.1.1  TI OMAP3530 Processor
            3.1.2  Gumstix Overo Fire Computer-on-Module
        3.2  Interfacing Camera Module to the Embedded System
            3.2.1  Image Signal Processing Subsystem
            3.2.2  Camera Module Adapting Board
            3.2.3  Image Sensor Driver and Program Development
        3.3  View-stabilizing Biaxial Camera Platform
            3.3.1  The New Camera System
            3.3.2  View-stabilizing Pan-tilt Platform
        3.4  Overall System Architecture and UAV Integration
    Chapter 4  Target Tracking and Geo-locating
        4.1  Camera Calibration
            4.1.1  The Perspective Camera Model
            4.1.2  Camera Lens Distortions
            4.1.3  Calibration Toolbox and Results
        4.2  Selection of Object Features and Trackers
            4.2.1  Harris Corner Detection
            4.2.2  Color Histogram
            4.2.3  KLT and Mean-shift Tracker
        4.3  Target Auto-centering
            4.3.1  Formulation of the PID Controller
            4.3.2  Control Gain Settings and Tuning
        4.4  Geo-locating of Tracked Target
            4.4.1  Coordinate Frame Transformation
            4.4.2  Depth Estimation and Target Locating
        4.5  Results and Discussion
    Chapter 5  Real-time Aerial Mosaic Building
        5.1  Motion Model Selection
            5.1.1  Planar Perspective Motion Model
        5.2  Feature-based Image Alignment
            5.2.1  Image Preprocessing
            5.2.2  Feature Extraction and Matching
            5.2.3  Image Alignment using RANSAC Algorithm
        5.3  Image Composition
            5.3.1  Image Blending with Distance Map
            5.3.2  Overall Stitching Process
        5.4  Mosaic Simulation using Google Earth
        5.5  Results and Discussion
    Chapter 6  Conclusion and Further Work
    Appendix A  System Schematics
    Appendix B  Image Sensor Sensitivity
    Bibliography
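
    Of the techniques listed above, the target auto-centering of Section 4.3.1 ("Formulation of the PID Controller") lends itself to a compact illustration: a PID controller steers the pan-tilt platform so the tracked target stays at the image centre. The thesis's actual formulation and gain values are not reproduced in this record, so the following Python sketch is only a generic discrete-time PID on the pixel error; the gains, frame rate, and rate-command interface are illustrative assumptions.

        # Generic discrete PID for target auto-centering on a pan-tilt unit.
        # Gains, frame rate, and the rate-command interface are illustrative
        # assumptions, not the values or API used in the thesis.
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, error):
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # One controller per axis: the pixel offset of the tracked target from
        # the image centre drives the pan/tilt rates that re-centre it.
        pan_pid = PID(kp=0.08, ki=0.01, kd=0.02, dt=1 / 30)   # assuming 30 Hz video
        tilt_pid = PID(kp=0.08, ki=0.01, kd=0.02, dt=1 / 30)

        def centering_step(target_px, image_size=(640, 480)):
            cx, cy = image_size[0] / 2, image_size[1] / 2
            pan_rate = pan_pid.update(target_px[0] - cx)    # horizontal error
            tilt_rate = tilt_pid.update(target_px[1] - cy)  # vertical error
            return pan_rate, tilt_rate  # commands for the platform's servos

    Gain tuning of the kind covered in Section 4.3.2 would replace the placeholder values above.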

    Review article: The use of remotely piloted aircraft systems (RPASs) for natural hazards monitoring and management

    Get PDF
    The number of scientific studies that consider possible applications of remotely piloted aircraft systems (RPASs) for managing the effects of natural hazards and identifying the resulting damage has increased strongly over the last decade. Within the scientific community the use of these systems is no longer a novelty, but a deeper analysis of the literature reveals a lack of codified, complex methodologies that can be used not only for scientific experiments but also for routine emergency operations. RPASs can acquire ultra-high-resolution images on demand, which can be used to identify active processes such as landslides or volcanic activity, as well as to assess the effects of earthquakes, wildfires and floods. In this paper, we present a review of published literature describing experimental methodologies developed for the study and monitoring of natural hazards.

    A New Coastal Crawler Prototype to Expand the Ecological Monitoring Radius of OBSEA Cabled Observatory

    Get PDF
    The use of marine cabled video observatories with multiparametric environmental data collection capability is becoming relevant for ecological monitoring strategies. Their ecosystem surveying can be carried out in real time, remotely, and continuously, over consecutive days, seasons, and even years. Unfortunately, as most observatories perform such monitoring with fixed cameras, the ecological value of their data is limited to a narrow field of view, possibly not representative of the local habitat heterogeneity. Docked mobile robotic platforms could be used to extend data collection to larger, and hence more ecologically representative, areas. Among the various state-of-the-art underwater robotic platforms available, benthic crawlers are excellent candidates to perform ecological monitoring tasks in combination with cabled observatories. Although they are normally used in the deep sea, their high positioning stability, low acoustic signature, and low energy consumption, especially during stationary phases, make them suitable for coastal operations. In this paper, we present the integration of a benthic crawler into a coastal cabled observatory (OBSEA) to extend its monitoring radius and collect more ecologically representative data. The monitoring radius was extended by remotely operating the crawler to drive back and forth along specific transects while recording videos with the onboard cameras. The ecological relevance of the monitoring-radius extension was demonstrated by performing a visual census of the species observed with the crawler’s cameras in comparison to the observatory’s fixed cameras, revealing non-negligible differences. Additionally, the videos recorded from the crawler’s cameras during the transects were used to demonstrate an automated photo-mosaic of the seabed for the first time on this class of vehicles. In the present work, the crawler travelled up to 40 m away from the OBSEA, extending the monitored field of view (FOV) and covering an area approximately 230 times larger than that of OBSEA’s fixed camera. The analysis of the videos obtained from the crawler’s and the observatory’s cameras revealed differences in the species observed. Future implementation scenarios are also discussed in relation to mission autonomy to perform imaging across spatial heterogeneity gradients around the OBSEA.
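
    The automated photo-mosaicking mentioned above can be illustrated with a short sketch. The paper's own processing pipeline is not described in this abstract, so the code below is only a minimal assumed workflow built on OpenCV's high-level stitcher; the video filename, the sampling interval, and the choice of SCANS mode (which models a roughly planar scene such as a downward-viewed seabed) are all assumptions.

        import cv2

        # Sample frames from a transect video, then stitch them into a mosaic.
        # The filename and the 1-frame-per-second sampling are illustrative.
        cap = cv2.VideoCapture("crawler_transect.mp4")
        frames, i = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % 30 == 0:           # keep ~1 frame/s of 30 fps video so that
                frames.append(frame)  # consecutive frames still overlap well
            i += 1
        cap.release()

        # SCANS mode suits planar scenes imaged from above, a reasonable model
        # for a downward-looking camera driving over the seabed.
        stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
        status, mosaic = stitcher.stitch(frames)
        if status == cv2.Stitcher_OK:
            cv2.imwrite("seabed_mosaic.png", mosaic)
        else:
            print("stitching failed with status", status)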

    Human robot interaction in a crowded environment

    No full text
    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body, such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we propose a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them; if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, the robot can analyse further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
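
    The Bayesian combination of visual cues described above can be sketched in a few lines. The thesis's actual network is richer and adapts through contextual feedback, so the following is only a naive-Bayes-style fusion under an assumed conditional independence of cues, with made-up likelihood values.

        # Naive-Bayes-style fusion of visual cues for deciding whether a person
        # is addressing the robot. All likelihood values are illustrative
        # assumptions; the thesis's actual Bayesian network is more elaborate.
        def fuse_cues(prior, likelihoods):
            """P(engaged | cues), assuming conditionally independent cues."""
            p_engaged, p_not = prior, 1.0 - prior
            for p_cue_if_engaged, p_cue_if_not in likelihoods:
                p_engaged *= p_cue_if_engaged
                p_not *= p_cue_if_not
            return p_engaged / (p_engaged + p_not)

        # Each cue: (P(cue | engaged), P(cue | not engaged))
        cues = [
            (0.9, 0.3),  # face oriented toward the robot
            (0.7, 0.1),  # waving gesture detected near that face
            (0.6, 0.4),  # person stationary rather than walking past
        ]
        print(fuse_cues(prior=0.2, likelihoods=cues))  # posterior ≈ 0.89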

    Implementation and improvement of an unmanned aircraft system for precision farming purposes

    Get PDF
    Precision farming (PF) is an agricultural concept that accounts for within-field variability by gathering spatial and temporal information with modern sensing technology and performing variable, targeted treatments on a scale smaller than the field scale. PF research quickly recognized the possible benefits that unmanned aerial vehicles (UAVs) can add to the site-specific management of farms. As UAVs are flexible carrier platforms, they can be equipped with a range of different sensing devices and used in a variety of close-range remote sensing scenarios. Most frequently, UAVs are utilized to gather actual in-season canopy information with imaging sensors that are sensitive to reflected electromagnetic radiation in the visual (VIS) and near-infrared (NIR) spectrum. They are generally used to infer the crops' biophysical and biochemical parameters to support farm management decisions. A current disadvantage of UAVs is that they are not designed to interact with their attached sensor payload. This leads to the need for intensive data post-processing and rules out real-time scenarios in which UAVs could directly transfer information to field machinery or robots. In consequence, this thesis focused on the development of a smart unmanned aircraft system (UAS), which in the thesis context was regarded as a combination of a UAV carrier platform, an on-board central processing unit for sensor control and data processing, and a remotely connected ground control station. The ground control station was to provide flight mission control and the standardized distribution of sensor data through a sensor data infrastructure, serving as a data basis for a farm management information system (FMIS). The UAS was intended to operate as a flexible monitoring tool for in-season above-ground biomass and nitrogen content estimation as well as crop yield prediction. Therefore, the selection, development, and validation of appropriate imaging sensors and processing routines were key to proving the usability of the UAS in PF scenarios. The individual objectives were (i) to implement an advanced UAV for PF research, providing the possibility of remotely controlled and automatic flight mission execution, (ii) to improve the developed UAV into a UAS by implementing sensor control, data processing, and communication functionalities, (iii) to select and develop appropriate sensor systems for yield prediction and nitrogen fertilization strategies, (iv) to integrate the sensor systems into the UAS and test their performance in example use cases, and (v) to embed the UAS into a standardized sensor data infrastructure for data storage and usage in PF applications.
    This work demonstrated the successful development of a custom rotary-wing UAV carrier platform with an embedded central processing unit. A modular software framework was developed with the ability to control any kind of sensor payload in real time. Sensors can be triggered, and their measurements are retrieved, fused with the carrier's navigation information, logged, and broadcast to a ground control station. The setup was used as a basis for further research focusing on information generation through sophisticated data processing. For a first application, predicting the grain yield of corn (Zea mays L.), a simple RGB camera was selected to acquire a set of aerial imagery of early- and mid-season corn crops. Orthoimages were processed at different ground resolutions and computed into simple vegetation indices (VIs) for crop/non-crop classification. In addition, crop surface models (CSMs) were generated to estimate crop heights. Linear regressions were performed with corn grain yield as the dependent variable and crop height and crop coverage as independent variables. The analysis showed the best prediction results, a relative root mean square error (RMSE) of 8.8 %, at mid-season growth stages and a ground resolution of 4 cm px⁻¹. Moreover, the results indicate that, with ongoing canopy closure and increasing homogeneity, high ground resolutions and crop/non-crop classification become less and less important.
    For the estimation of above-ground biomass and nitrogen content in winter wheat (Triticum aestivum L.), a programmable multispectral camera was developed. It is based on an industrial multi-sensor camera equipped with bandpass filters to measure four narrow wavelength bands in the so-called red-edge region. This region is the transition zone between the VIS and NIR spectrum and is known to be sensitive to leaf chlorophyll content and the structural state of the plant. It is often used to estimate biomass and nitrogen content with the help of the normalized difference vegetation index (NDVI) and the red-edge inflection point (REIP). The camera system was designed to measure ambient light conditions during the flight mission and set appropriate image acquisition times that guarantee images with high contrast. It is fully programmable and can be further developed into a real-time image processing system. The analysis relies on semi-automatic orthoimage processing. The NDVI orthoimages were analyzed for their correlation with biomass by means of simple linear regression. These models proved able to estimate biomass for all measurements, with RMSEs of 12.3 % to 17.6 %. The REIP was used to infer nitrogen content and showed good results, with RMSEs of 7.6 % to 11.7 %. Both NDVI and REIP were also tested for their in-season grain yield prediction ability (RMSE = 9.0 % to 12.1 %), and grain protein content could be modeled with the REIP, except for low-fertilized wheat plots.
    The last part of the thesis comprised the development of a standardized sensor data infrastructure as a first step toward holistic farm management. The UAS was integrated into a real-time sensor data acquisition network with standardized database storage capabilities. The infrastructure was based on open source software and the geo-data standards of the Open Geospatial Consortium (OGC). A prototype implementation was tested with four exemplary sensor systems and proved able to acquire, log, visualize, and store the sensor data on the fly in a standardized database via a sensor observation service. The setup is scalable to scenarios where a multitude of sensors, databases, and web services interact with each other to exchange and process data. This thesis demonstrates the successful prototype implementation of a smart UAS and a sensor data infrastructure offering real-time data processing functionality. The UAS is equipped with appropriate sensor systems for agricultural crop monitoring and has the potential to be used in real-world scenarios.
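
    The NDVI and REIP named above are simple band arithmetic, so a brief sketch may help. The four red-edge band centres and the linear-interpolation REIP formula below follow the widely used four-point method of Guyot and Baret; they are assumptions for illustration, not necessarily the exact bands or processing implemented in the thesis camera.

        import numpy as np

        # NDVI and red-edge inflection point (REIP) from four narrow bands.
        # Band centres (670, 700, 740, 780 nm) and the REIP interpolation are
        # the common four-point scheme, assumed here for illustration.
        def ndvi(nir, red):
            return (nir - red) / (nir + red)

        def reip(r670, r700, r740, r780):
            # Reflectance at the inflection point is approximated by the mean
            # of the two shoulders; its wavelength is found by linear
            # interpolation between the 700 and 740 nm bands.
            r_inflection = (r670 + r780) / 2.0
            return 700.0 + 40.0 * (r_inflection - r700) / (r740 - r700)

        # Per-pixel reflectance arrays (values purely illustrative):
        r670, r700 = np.array([0.05]), np.array([0.12])
        r740, r780 = np.array([0.38]), np.array([0.45])
        print(ndvi(r780, r670))              # -> [0.8]
        print(reip(r670, r700, r740, r780))  # -> [720.] nm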

    Remote Sensing

    Get PDF
    This dual conception of remote sensing brought us to the idea of preparing two different books: in addition to the first book, which presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.

    Transforming scientific research and development in precision agriculture : the case of hyperspectral sensing and imaging : a thesis presented in partial fulfilment of the requirements for the degree of Doctor in Philosophy in Agriculture at Massey University, Manawatū, New Zealand. EMBARGOED until 30 September 2023.

    Get PDF
    Embargoed until 30 September 2023. There has been increasing social and academic debate in recent times surrounding the arrival of agricultural big data. Capturing and responding to real-world variability is a defining objective of the rapidly evolving field of precision agriculture (PA). While data have been central to knowledge-making in the field since its inception in the 1980s, research has largely operated in a data-scarce environment, constrained by time-consuming and expensive data collection methods. While there is a rich tradition of studying scientific practice within laboratories in other fields, PA researchers have rarely been the explicit focal point of detailed empirical studies, especially in the laboratory setting. The purpose of this thesis is to contribute new knowledge of the influence of big data technologies through an ethnographic exploration of a working PA laboratory. The researcher spent over 30 months embedded as a participant observer of a small PA laboratory, where researchers work with nascent data-rich remote sensing technologies. To address the research question, "How do the characteristics of technological assemblages affect PA research and development?", the ethnographic case study systematically identifies and responds to the challenges and opportunities faced by the science team as they adapt their scientific processes and resources to refine value from a new data ecosystem. The study describes the ontological characteristics of the airborne hyperspectral sensing and imaging data employed by PA researchers. Observations of the researchers at work reveal a previously undescribed shift in the science process, where effort moves from the planning and performance of the data collection stage to the data processing and analysis stage. The thesis develops an argument that changing data characteristics are central to this shift in the scientific method researchers employ to refine knowledge and value from research projects. Importantly, the study reveals that while researchers are working in a rapidly changing environment, there is little reflection on the implications of these changes for the practice of science-making. The study also identifies a disjunction between how science is done in the field and what is reported. We discover that the practices that provide disciplinary ways of doing science are not established in this field, and that moments to learn are siloed because of the constraints that commercial structures imposed in this case study of contemporary PA research.