
    Development of a New Framework for Distributed Processing of Geospatial Big Data

    Geospatial technology still lacks "out of the box" distributed processing solutions suited to the volume and heterogeneity of geodata, particularly for use cases requiring a rapid response. Moreover, most current distributed computing frameworks have important limitations that hinder transparent and flexible control of processing (and/or storage) nodes and of the distribution of data chunks. We investigated the design of distributed processing systems and existing solutions related to Geospatial Big Data. This research area is highly dynamic in terms of new developments and the re-use of existing solutions (that is, the re-use of certain modules to implement further specific developments), with new implementations continuously emerging in areas such as disaster management, environmental monitoring and earth observation. The distributed processing of raster data sets is the focus of this paper, as we believe that the problem of raster data partitioning is far from trivial: a number of tiling and stitching requirements need to be addressed to fulfil the needs of efficient image processing beyond the pixel level. We compare the terms Big Data, Geospatial Big Data and traditional Geospatial Data in order to clarify the typical differences, to compare their storage and processing backgrounds for different data representations, and to categorize the common processing systems from the perspective of distributed raster processing. This clarification is necessary because these data classes behave differently on the processing side, and particular processing solutions need to be developed according to their characteristics. Furthermore, we contrast parallel and distributed computing, since the two terms are often used interchangeably and incorrectly. We also briefly assess the widely known MapReduce paradigm in the context of geospatial applications. The second half of the article reports on a new processing framework initiative, currently at the concept and early development stages, which aims to process raster, vector and point cloud data in a distributed IT ecosystem. The developed system is modular, imposes no restrictions on the programming language environment, and can execute scripts written in any development language (e.g. Python, R or C#).
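
    The tiling problem mentioned above is easy to underestimate: any neighbourhood operation (filtering, morphology, segmentation) needs pixels that belong to adjacent tiles, so naive partitioning changes the result along tile borders. As a minimal sketch of the standard remedy (not code from the paper; the tile size, halo width and 3x3 mean filter are illustrative assumptions), each tile is processed with a one-pixel halo that is cropped away before stitching:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def process_tiled(raster, tile=256, halo=1):
        """Apply a 3x3 mean filter tile by tile with a halo, then stitch the cores."""
        out = np.empty_like(raster, dtype=float)
        h, w = raster.shape
        for r0 in range(0, h, tile):
            for c0 in range(0, w, tile):
                r1, c1 = min(r0 + tile, h), min(c0 + tile, w)
                # expand the tile by the halo, clipped to the raster bounds
                rs, cs = max(r0 - halo, 0), max(c0 - halo, 0)
                re, ce = min(r1 + halo, h), min(c1 + halo, w)
                block = uniform_filter(raster[rs:re, cs:ce].astype(float), size=3)
                # crop the halo away before stitching the tile core back in
                out[r0:r1, c0:c1] = block[r0 - rs:block.shape[0] - (re - r1),
                                          c0 - cs:block.shape[1] - (ce - c1)]
        return out

    raster = np.random.rand(1000, 1200)
    # the haloed tiles reproduce the full-raster filter result seamlessly
    assert np.allclose(process_tiled(raster), uniform_filter(raster, size=3))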

    Construction and visualization of a multivariate Lagrange interpolation polynomial with a parallel program

    Construction and visualization of a multivariate Lagrange interpolation polynomial with a parallel program, applied to image upscaling using OpenCV and CUDA.
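
    As a rough illustration of the underlying mathematics (the thesis itself parallelizes the computation with OpenCV and CUDA), the sketch below upscales a grayscale image with separable bivariate Lagrange interpolation over a 4x4 pixel neighbourhood in plain NumPy; the stencil size and this particular formulation are assumptions made for the example, not details taken from the thesis:

    import numpy as np

    def lagrange_weights(x, nodes):
        """Lagrange basis values L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)."""
        w = np.ones(len(nodes))
        for i, xi in enumerate(nodes):
            for j, xj in enumerate(nodes):
                if i != j:
                    w[i] *= (x - xj) / (xi - xj)
        return w

    def upscale(img, factor):
        h, w = img.shape
        out = np.empty((h * factor, w * factor))
        padded = np.pad(img, 2, mode='edge')     # guard band for the 4x4 stencil
        nodes = np.array([-1.0, 0.0, 1.0, 2.0])  # neighbour offsets around each sample
        for oy in range(out.shape[0]):
            for ox in range(out.shape[1]):
                y, x = oy / factor, ox / factor
                iy, ix = int(y), int(x)
                wy = lagrange_weights(y - iy, nodes)
                wx = lagrange_weights(x - ix, nodes)
                patch = padded[iy + 1:iy + 5, ix + 1:ix + 5]  # 4x4 neighbourhood
                out[oy, ox] = wy @ patch @ wx    # separable tensor-product formula
        return out

    small = np.arange(64, dtype=float).reshape(8, 8)
    big = upscale(small, 2)  # 16x16; cubic Lagrange reproduces this linear ramp exactly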

    Sim2Real Grasp Pose Estimation for Adaptive Robotic Applications

    Adaptive robotics plays an essential role in achieving truly co-creative cyber-physical systems. In robotic manipulation tasks, one of the biggest challenges is to estimate the pose of given workpieces. Even though recent deep-learning-based models show promising results, they require an immense dataset for training. In this paper, we propose two vision-based, multi-object grasp-pose estimation models, the MOGPE Real-Time (RT) and the MOGPE High-Precision (HP), as well as a sim2real method based on domain randomization to diminish the reality gap and overcome the data shortage. We achieved an 80% and a 96.67% success rate in a real-world robotic pick-and-place experiment with the MOGPE RT and the MOGPE HP models, respectively. Our framework provides an industrial tool for fast data generation and model training, and requires minimal domain-specific data.
    Comment: Submitted to the 22nd World Congress of the International Federation of Automatic Control (IFAC 2023).
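
    The domain-randomization idea is to scatter the synthetic training scenes so widely that the real world looks like just one more variation. A minimal sketch of what gets randomized per rendered image is given below; every parameter name and range is a hypothetical illustration, not the paper's actual configuration:

    import random

    def sample_scene_params():
        return {
            "workpiece_yaw_deg": random.uniform(0.0, 360.0),   # in-plane rotation of the part
            "workpiece_xy_mm": (random.uniform(-50, 50),
                                random.uniform(-50, 50)),      # translation on the work surface
            "light_intensity": random.uniform(0.3, 1.5),       # relative to a nominal light
            "light_azimuth_deg": random.uniform(0.0, 360.0),
            "background_texture": random.choice(
                ["steel_brushed", "wood", "noise", "concrete"]),  # hypothetical texture ids
            "camera_jitter_px": random.gauss(0.0, 2.0),        # simulated calibration noise
        }

    # one randomized configuration per synthetic image; a renderer would
    # consume these to produce automatically labelled training data
    dataset_params = [sample_scene_params() for _ in range(10_000)]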

    Big Geospatial Data processing in the IQmulus Cloud

    Remote sensing instruments are continuously evolving in terms of spatial, spectral and temporal resolution and hence provide exponentially increasing amounts of raw data. These volumes increase significantly faster than computing speeds. All these techniques record vast amounts of data, yet in different data models and representations; the resulting datasets therefore require harmonization and integration before meaningful information can be derived from them. All in all, huge datasets are available, but raw data is of almost no value if it is not processed, semantically enriched and quality checked. The derived information needs to be transferred and published to all levels of potential users (from decision makers to citizens). Up to now there have been only limited automatic procedures for this; thus, a wealth of information lies latent in many datasets. This paper presents the first achievements of the IQmulus EU FP7 research and development project with respect to the processing and analysis of big geospatial data in the context of flood and waterlogging detection.
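
    The paper does not spell out its detection algorithms at this level, but as an indication of the kind of per-pixel raster analysis such a pipeline distributes, here is a minimal water-masking sketch based on the well-known Normalized Difference Water Index (NDWI); the band inputs and threshold are illustrative assumptions, not the IQmulus processing chain:

    import numpy as np

    def ndwi_water_mask(green, nir, threshold=0.0):
        """NDWI = (green - NIR) / (green + NIR); positive values suggest open water."""
        green, nir = green.astype(float), nir.astype(float)
        ndwi = (green - nir) / np.maximum(green + nir, 1e-9)  # guard against division by zero
        return ndwi > threshold

    green_band = np.random.randint(0, 4096, (512, 512))  # placeholder 12-bit imagery
    nir_band = np.random.randint(0, 4096, (512, 512))
    mask = ndwi_water_mask(green_band, nir_band)
    print(f"flagged {mask.mean():.1%} of pixels as water")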

    Discrete and Continuous Caching Games

    Alpern's Caching Game is played by 2 players. Player 1 is a squirrel, who is hiding his d nuts in n different holes, with the restriction that he can only dig down a distance of 1 metre altogether. After that, Player 2 wants to find all the nuts, and she is allowed to dig down a distance of k metres altogether. Player 2 wins after finding all the nuts; if she fails, then the squirrel wins. We investigate a discrete version of the game, finding strategies and statements for both small and general values of n, d, k. In particular, we answer a question of Pálvölgyi by exhibiting an example where the value of the game can change depending on which nut the squirrel reveals when he has multiple options in the discrete game. We also investigate and invent other continuous versions of the game, one of them having a connection to the Manickam-Miklós-Singhi Conjecture.
    Comment: 22 pages, 8 figures.
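
    Since the discrete game is finite and zero-sum, its value can in principle be computed by linear programming once both players' pure strategies are enumerated (the revelation subtlety above is exactly what makes that enumeration delicate). Below is a generic sketch of the standard LP for the value of a zero-sum matrix game; the 3x3 payoff matrix is a toy placeholder, not one derived from the caching game:

    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_value(A):
        """Value and optimal mixed strategy for the row player, who maximizes."""
        m, n = A.shape
        c = np.zeros(m + 1)
        c[-1] = -1.0                               # maximize v == minimize -v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])  # for each column j: v - (A^T x)_j <= 0
        A_eq = np.zeros((1, m + 1))
        A_eq[0, :m] = 1.0                          # probabilities sum to 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * m + [(None, None)])
        return res.x[-1], res.x[:m]

    A = np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  1.0, -1.0],
                  [ 0.0, -1.0,  1.0]])  # toy payoffs with the hider as row player
    value, strategy = zero_sum_value(A)
    print(value, strategy)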

    Effects of thermal annealing and solvent-induced crystallization on the structure and properties of poly(lactic acid) microfibres produced by high-speed electrospinning

    This research concentrates on the marked discrepancies in the crystalline structure of poly(lactic acid) (PLA) nano- and microfibres achieved by different annealing strategies. PLA nonwoven mats were produced by high-speed electrospinning (HSES). The high-speed production technique allowed the manufacturing of PLA microfibres with diameters of 0.25–8.50 µm at a relatively high yield of 40 g·h⁻¹. The crystalline content of the inherently highly amorphous microfibres was increased by two methods: thermal annealing in an oven at 85°C was compared with immersion in absolute ethanol at 40°C. The morphology of the fibres was examined by scanning electron microscopy (SEM); crystalline forms and thermal properties were assessed using X-ray diffractometry (XRD), Raman spectrometry, differential scanning calorimetry (DSC) and modulated differential scanning calorimetry (MDSC). As a consequence of the 45 min heat treatment, the crystalline fraction increased to 26%, while the solvent treatment resulted in 33% crystallinity. It was found that only disordered α' crystals form during the conventional heat treatment, whereas the ethanol-induced crystallization favours the formation of the ordered α polymorph. In connection with the different crystalline structures, noticeable changes in macroscopic properties such as heat resistance and mechanical behaviour were demonstrated by localized thermomechanical analysis (LTMA) and static tensile tests, respectively.
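
    Crystalline fractions like those quoted above are typically derived from DSC scans via X_c = (ΔH_m − ΔH_cc) / ΔH_m⁰, where ΔH_m is the melting enthalpy, ΔH_cc the cold-crystallization enthalpy, and ΔH_m⁰ the melting enthalpy of a 100% crystalline sample (about 93 J/g for PLA in the literature). A minimal sketch of that arithmetic follows; the enthalpy inputs are illustrative numbers, not measurements from the paper:

    DH_M0_PLA = 93.0  # J/g, literature melting enthalpy of fully crystalline PLA

    def crystallinity(dh_melt, dh_cold_cryst, dh_m0=DH_M0_PLA):
        """Crystalline fraction in percent from DSC enthalpies (J/g)."""
        return 100.0 * (dh_melt - dh_cold_cryst) / dh_m0

    print(crystallinity(dh_melt=42.0, dh_cold_cryst=11.3))  # ~33%, illustrative values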