
    Geo-correction of high-resolution imagery using fast template matching on a GPU in emergency mapping contexts

    The increasing availability of satellite imagery acquired from existing and new sensors allows a wide variety of new applications that depend on the use of diverse spectral and spatial resolution data sets. One of the pre-conditions for the use of hybrid image data sets is a consistent geo-correction capacity. We demonstrate how a novel fast template matching approach implemented on a Graphics Processing Unit (GPU) allows us to accurately and rapidly geo-correct imagery in an automated way. The key difference from existing geo-correction approaches, which do not use a GPU, is the possibility of matching large source image segments (8192 by 8192 pixels) with relatively large templates (512 by 512 pixels). Our approach is sufficiently robust to allow for the use of various reference data sources. The need for accelerated processing is relevant in our application context, which relates to mapping activities in the European Copernicus emergency management service. Our new method is demonstrated over an area north-west of Valencia (Spain) for a large forest fire event in July 2012. We use DEIMOS-1 and RapidEye imagery for the delineation of the burnt fire scar extent. Automated geo-correction of each full-resolution image set takes approximately one minute. The reference templates are taken from the TerraColor data set and the Spanish national ortho-imagery database, through the use of dedicated web map services (WMS). Geo-correction results are compared to the vector sets derived in the related Copernicus emergency service activation request.
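    The core operation, matching a reference template against a source image by normalised cross-correlation, can be sketched on the CPU with NumPy. The function below is an illustrative brute-force version, not the authors' implementation; their GPU variant evaluates 512 x 512 templates against 8192 x 8192 source segments in parallel.

```python
import numpy as np

def match_template_ncc(image, template):
    """Locate `template` in `image` by normalised cross-correlation.

    Returns ((row, col), score) for the best-matching top-left corner.
    Brute-force CPU sketch; a GPU implementation would evaluate all
    candidate offsets in parallel (or use FFT-based correlation).
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            if denom == 0:
                continue  # flat patch: correlation undefined
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

    The returned offset, compared against the template's known geographic position, yields the shift needed to geo-correct the source image.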

    Generalized Model to Enable Zero-shot Imitation Learning for Versatile Robots

    The rapid advancement of Deep Learning (DL), especially Reinforcement Learning (RL) and Imitation Learning (IL), has positioned it as a promising approach for a multitude of autonomous robotic systems. However, current methodologies are predominantly constrained to singular setups, necessitating substantial data and extensive training periods. Moreover, these methods have exhibited suboptimal performance in tasks requiring long-horizon maneuvers, such as Radio Frequency Identification (RFID) inventory, where a robot requires thousands of steps to complete the task. In this thesis, we address these challenges by presenting the Cross-modal Reasoning Model (CMRM), a novel zero-shot Imitation Learning policy, to tackle long-horizon robotic tasks. The RFID inventory task is a typical long-horizon robotic task that can be formulated as a Partially Observable Markov Decision Process (POMDP): the robot must recall previous actions and reason from current environmental observations to optimize its strategy. To this end, CMRM has been designed with a two-stream flow structure to extract abstract information concealed in environmental observations and subsequently generate robot actions by reasoning over structural and temporal features from historical and current observations. Extensive experiments on a virtual platform and in a mock-up real store were conducted to evaluate the proposed CMRM. Experimental results demonstrate that CMRM is capable of performing RFID inventory tasks in unstructured environments with complex layouts and provides competitive accuracy that surpasses previous methods and manual inventory. To facilitate the training and assessment of CMRM, we constructed a Unity3D-based virtual platform that can be configured into various environments, such as an apparel store. This platform offers photo-realistic objects and precise physical properties (gravity, appearance, and more) to provide close-to-real environments for training and testing robots. The robot, once trained, was then deployed in an actual retail environment to perform RFID inventory tasks. This approach effectively bridges the "reality gap", enabling the robot to perform the RFID inventory task seamlessly in both virtual and real-world settings, thereby demonstrating zero-shot generalization capabilities.
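    The POMDP framing above means the policy must condition on a history of observations rather than the current one alone. The toy sketch below illustrates only that idea; the class name, window size, and linear weights are placeholders, not the actual learned two-stream CMRM network.

```python
import numpy as np
from collections import deque

class HistoryPolicy:
    """Toy history-conditioned policy for a POMDP: the action depends on
    a rolling window of past observations, not only the current one.
    The random linear weights stand in for a trained network."""

    def __init__(self, obs_dim, n_actions, history=4, seed=0):
        self.history = deque(maxlen=history)
        self.obs_dim = obs_dim
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((history * obs_dim, n_actions))

    def act(self, obs):
        self.history.append(np.asarray(obs, dtype=float))
        # Zero-pad until the history window fills up.
        pad = self.history.maxlen - len(self.history)
        stacked = np.concatenate(
            [np.zeros(pad * self.obs_dim)] + list(self.history))
        return int(np.argmax(stacked @ self.W))
```

    A learned model replaces the random weights, but the interface is the same: each call folds the new observation into the remembered history before choosing an action.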

    Towards Intelligent Runtime Framework for Distributed Heterogeneous Systems

    Scientific applications strive for increased memory and computing performance, requiring massive amounts of data and time to produce results. Applications utilize large-scale, parallel computing platforms with advanced architectures to accommodate their needs. However, developing performance-portable applications for modern, heterogeneous platforms requires substantial effort and expertise in both the application and systems domains. This is especially relevant for unstructured applications whose workflow is not statically predictable due to their heavily data-dependent nature. One possible solution to this problem is the introduction of an intelligent Domain-Specific Language (iDSL) that transparently helps to maintain correctness, hides the idiosyncrasies of low-level hardware, and scales applications. An iDSL includes domain-specific language constructs, a compilation toolchain, and a runtime providing task scheduling, data placement, and workload balancing across and within heterogeneous nodes. In this work, we focus on the runtime framework. We introduce a novel design and extension of a runtime framework, the Parallel Runtime Environment for Multicore Applications. In response to ever-increasing intra- and inter-node concurrency, the runtime system supports efficient task scheduling and workload balancing at both levels while allowing the development of custom policies. Moreover, the new framework provides abstractions supporting the utilization of heterogeneous distributed nodes consisting of CPUs and GPUs and is extensible to other devices. We demonstrate that by utilizing this work, an application (or the iDSL) can scale its performance on heterogeneous exascale-era supercomputers with minimal effort. A future goal for this framework (out of the scope of this thesis) is integration with machine learning to further improve its decision-making and performance. As a bridge to this goal, since the framework is under development, we experiment with data from nuclear physics particle accelerators and demonstrate the significant improvements achieved by utilizing machine learning in the hit-based track reconstruction process.
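    The scheduling problem the runtime solves can be illustrated with a greedy earliest-finish-time list scheduler over devices with different per-task costs. This is a deliberately minimal sketch, not the framework's actual policy, which also rebalances work dynamically within and across nodes.

```python
def schedule_tasks(tasks, devices):
    """Greedy list scheduling of independent tasks onto heterogeneous
    devices (e.g. CPUs and GPUs with different costs per task).

    tasks: {task_name: {device_name: cost}}; devices: list of names.
    Returns {task_name: device_name}.
    """
    free_at = {d: 0.0 for d in devices}  # time each device becomes idle
    assignment = {}
    # Schedule the most expensive tasks first (classic heuristic).
    for task in sorted(tasks, key=lambda t: -min(tasks[t].values())):
        # Pick the device where this task would finish earliest.
        dev = min(devices,
                  key=lambda d: free_at[d] + tasks[task].get(d, float("inf")))
        assignment[task] = dev
        free_at[dev] += tasks[task][dev]
    return assignment
```

    A GPU-friendly task lands on the GPU unless the GPU queue is already long enough that the CPU would finish it sooner, which is the essence of workload balancing across heterogeneous devices.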

    FPGA-based smart camera mote for pervasive wireless network

    Smart camera networks raise challenging issues in many fields of research, including vision processing, communication protocols, distributed algorithms and power management. The ever-increasing resolution of image sensors entails huge amounts of data, far exceeding the bandwidth of current networks and thus forcing smart camera nodes to process raw data into useful information. Consequently, on-board processing has become a key issue for the expansion of such networked systems. In this context, FPGA-based platforms, supporting massive, fine-grain data parallelism, offer large opportunities. Besides, the concept of a middleware, providing services for networking, data transfer, dynamic loading or hardware abstraction, has emerged as a means of harnessing the hardware and software complexity of smart camera nodes. In this paper, we prospect the development of a new kind of smart camera, wherein FPGAs provide high-performance processing and general-purpose processors support middleware services. In this approach, FPGA devices can be reconfigured at run-time through the network, both on explicit user request and by transparent middleware decision. An embedded real-time operating system is in charge of the communication layer, and thus can autonomously decide to use a part of the FPGA as an available processing resource. The classical programmability issue, a significant obstacle when dealing with FPGAs, is addressed by resorting to a domain-specific high-level programming language (CAPH) for describing operations to be implemented on FPGAs.
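    The bandwidth argument, processing raw frames into compact information on the node, can be made concrete with a frame-differencing sketch: instead of streaming full frames, the node transmits only a bounding box of changed pixels, or nothing at all. This illustrative Python version models what an FPGA pipeline would do per pixel-stream; the threshold and box encoding are assumptions, not the paper's design.

```python
import numpy as np

def motion_event(prev, curr, threshold=30):
    """Return the bounding box (r0, c0, r1, c1) of pixels that changed
    by more than `threshold` between two frames, or None if nothing
    changed. Transmitting four integers instead of a raw frame is the
    bandwidth saving that motivates on-board processing.
    """
    diff = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    if not diff.any():
        return None  # nothing worth transmitting
    rows = np.where(diff.any(axis=1))[0]
    cols = np.where(diff.any(axis=0))[0]
    return (int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1]))
```

    For a megapixel sensor this reduces a multi-megabyte frame to a handful of bytes whenever the scene is static or changes locally.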

    Towards safer mining: the role of modelling software to find missing persons after a mine collapse

    Purpose. The purpose of the study is to apply science and technology to determine the most likely location of a container in which three miners were trapped after the Lily Mine disaster. Following the collapse of the crown pillar at Lily Mine in South Africa on 5 February 2016, there was a national outcry to find the three miners who were trapped in a surface container lamp room that disappeared into the sinkhole formed during the surface collapse. Methods. During a visit to Lily Mine on 9 March, the Witwatersrand Mining Institute suggested a two-way strategy going forward: first, to test temporal 3D modelling software to locate the container, which is the subject of this paper, and second, to use scientific measurement and testing technologies. The overall methodology was first to ask academia and research entities within the University to supply the WMI with ideas, which were compiled into a list as responses came in. These were scrutinized, and literature was gathered for a conceptual study of which ideas were likely to work. The software screening and preliminary testing of such software are discussed in this article. Findings. For modelling purposes the collapse was divided into three distinct phases: sinkhole failure, failure of the western slope, and slip hazard on the southern slopes. Software able to simulate the container's movement in the first two phases was identified, and modelling in ParaView indicated the likely present position of the container; the southern slope was analysed in ArcGIS, yielding slope hazard maps for the area as well as underground rescue maps with evacuation routes. Software modelling is thus likely to locate the present position of the container, but accurate data and a combination of different advanced software packages are required, at tremendous cost. Originality. This paper presents original work on how software technology can be used to locate missing miners. Practical implications. The two approaches were unlikely to recover the miners alive because of the considerable time interval, but they will alert the rescue team and mine workers when they come into close proximity to the container. The results of the article were obtained without the support of any project or funding.

    Sensor management for enhanced catalogue maintenance of resident space objects


    A Networked Dataflow Simulation Environment for Signal Processing and Data Mining Applications

    In networked signal processing systems, dataflow graphs can be used to describe the processing on individual network nodes. However, to analyze the correctness and performance of these systems, designers must understand the interactions across these individual "node-level" dataflow graphs, as they communicate across the network, in addition to the characteristics of the individual graphs. In this thesis, we present a novel simulation environment, called the NS-2/TDIF SIMulation environment (NT-SIM). NT-SIM provides integrated co-simulation of networked systems and combines the network analysis capabilities provided by the Network Simulator (ns) with the scheduling capabilities of a dataflow-based framework, thereby providing novel features for more comprehensive simulation of networked signal processing systems. Through a novel integration of advanced tools for network and dataflow graph simulation, our NT-SIM environment allows comprehensive simulation and analysis of networked systems. We present two case studies that concretely demonstrate the utility of NT-SIM in the contexts of heterogeneous signal processing and data mining system design.
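    The node-level dataflow semantics can be sketched with a toy homogeneous actor model: an actor fires when every input FIFO holds a token, consuming one token per input and producing one per output. NT-SIM's actual scheduling, timing model, and coupling to the ns network simulator are far richer; the classes below are illustrative only.

```python
from collections import deque

class Actor:
    """Dataflow actor with one-token-per-port firing semantics."""

    def __init__(self, func, n_inputs):
        self.func = func
        self.inputs = [deque() for _ in range(n_inputs)]  # input FIFOs
        self.outputs = []  # (downstream_actor, input_port) edges

    def try_fire(self):
        """Fire once if every input FIFO is non-empty; return whether fired."""
        if not all(self.inputs):
            return False
        args = [q.popleft() for q in self.inputs]
        result = self.func(*args)
        for actor, port in self.outputs:
            actor.inputs[port].append(result)
        return True

def run(actors, rounds=100):
    """Repeatedly fire any enabled actor until none can fire."""
    for _ in range(rounds):
        if not any(a.try_fire() for a in actors):
            break
```

    In a networked setting, the edges between actors on different nodes would be routed through the simulated network rather than an in-memory deque, which is precisely the integration NT-SIM provides.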

    Adaptive On-Board Signal Compression for SAR Using Machine Learning Methods

    Satellites with synthetic aperture radar (SAR) payloads are growing in popularity, with a number of new institutional missions and commercial constellations launched or in planning. As an active instrument operating in the microwave region of the electromagnetic spectrum, SAR provides a number of unique advantages over passive optical instruments, in that it can image in all weather conditions and at night. This allows dense time-series to be built up over areas of interest, which are useful in a variety of Earth observation applications. The polarisation and phase information that can be captured also allows for unique applications not possible at optical frequencies. The data volume of SAR captures is growing due to developments in modern high-resolution multi-modal SAR. Instruments with higher spatial resolution, wider swaths, multiple beams, multiple frequencies and more polarisation channels are being launched. Miniaturization and the deployment of SAR constellations are bringing improved revisit times. All of these developments drive an increase in operational cost due to the increased data downlink required. These factors will make on-board data compression more crucial to overall system performance, especially in large-scale constellations. The current deployed state of the art in on-board compression for SAR space-borne payloads is Block Adaptive Quantization (BAQ) and variations such as Flexible BAQ, Entropy Constrained BAQ and Flexible Dynamic BAQ. Craft Prospect is working on an evolution of these techniques in which machine learning will be used to identify signals based on dynamics and features of the received signal, with this edge processing allowing the tagging of raw data. These tags can then be used to better adjust the compression parameters to fit the local optimum in the acquired data. We present the results of a survey of available raw SAR data, which was used to inform a selection of applications and frequencies for further study.
    Following this, we present a comparison of a number of SAR compression algorithms, down-selected using trade-off metrics such as the bands/applications they can be applied to and various complexity measures. We then show an assessment of AI/ML feasibility and capabilities, with the improvements assessed on mission examples characterised by the SAR modes and architecture for specific SAR applications. Finally, future hardware feasibility and capability is assessed, targeting a smallsat SAR mission, with a high-level roadmap developed to progress the concept toward this goal.
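    The core of BAQ is simple: split the raw signal into blocks, estimate each block's scale, and quantize samples to a few bits relative to that scale. The sketch below keeps only that idea; flight implementations (FDBAQ and relatives) use optimized Lloyd-Max levels and entropy coding, and the sigma-range mapping here is an illustrative choice, not any mission's parameters.

```python
import numpy as np

def baq_compress(signal, block=64, bits=3):
    """Minimal Block Adaptive Quantization sketch: per block, estimate
    the signal scale (std), then quantize to `bits` bits, mapping
    roughly +/- 2 sigma onto the available code range."""
    levels = 2 ** bits
    codes, scales = [], []
    for i in range(0, len(signal), block):
        b = np.asarray(signal[i:i + block], dtype=float)
        scale = float(b.std()) or 1.0  # avoid divide-by-zero on flat blocks
        q = np.clip(np.round(b / scale * (levels / 4) + levels / 2),
                    0, levels - 1).astype(int)
        codes.append(q)
        scales.append(scale)
    return codes, scales

def baq_decompress(codes, scales, bits=3):
    """Invert the mapping: recover approximate samples from codes + scales."""
    levels = 2 ** bits
    out = [(q - levels / 2) / (levels / 4) * scale
           for q, scale in zip(codes, scales)]
    return np.concatenate(out)
```

    Because each block carries its own scale, the quantizer adapts to the locally varying signal power of raw SAR echoes; the ML extension described above would additionally tag blocks so the bit allocation can follow scene content.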