
    Optimization of Robot Telemonitoring System Software Using Multi-thread Method

    Processor development today focuses on multi-core and multi-processor architectures, which can speed up data processing compared with a single processor core. One of the main ways to reduce data processing time is to use multiple threads. The multi-thread method has been implemented in the robot telemonitoring system, based on a Graphical User Interface (GUI), developed at the Research Center for Electrical Power and Mechatronics, Indonesian Institute of Sciences (LIPI). The parts of the telemonitoring system that require the most processing time are the real-time displays of the thermal and color cameras together with the tracking algorithm used, which is apparent from the thermal camera display being less smooth. Two threads were added so that each camera is processed separately. The C programming language, the OpenCV library, and the Qt Creator Integrated Development Environment (IDE) were used to implement this method in an application program. Experiments show that both camera displays with the tracking algorithm run more quickly, demonstrated by a smoother display and shorter processing time than the sequential program. Measured by CPU time, the sequential program achieves 6 fps for both the color and thermal cameras, while the multi-threaded program (with two added threads) achieves 6 fps for the color camera and 7 fps for the thermal camera. Measured by wall-clock time, the sequential program achieves 6.31579 fps for both cameras, while the multi-threaded program achieves 6.31579 fps for the color camera and 7.5 fps for the thermal camera. The speedup and efficiency obtained between the sequential program and the program with two added threads are 0.84211 and 0.28070, respectively. Keywords: multi-thread, telemonitoring, GUI, Qt Creator, C language
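
    A quick check of the reported figures, assuming (as the numbers suggest) that the speedup is taken as the ratio of the sequential wall-clock frame rate to the multi-threaded thermal-camera frame rate, and the efficiency as that speedup divided by the three threads involved (the main thread plus the two added ones):

    \[
      S = \frac{6.31579}{7.5} \approx 0.84211, \qquad
      E = \frac{S}{3} \approx 0.28070
    \]

    The abstract does not include code, so the following is only a minimal sketch of the two-thread camera pipeline it describes: one capture-and-processing thread per camera, with the latest frame handed back for display. It uses C++ with std::thread and OpenCV rather than the authors' Qt-based C implementation, and the device indices and window names are assumptions.

        // Minimal illustrative sketch (not the authors' code): one capture/processing
        // thread per camera; the main thread only displays the most recent frames.
        #include <atomic>
        #include <mutex>
        #include <thread>
        #include <opencv2/opencv.hpp>

        struct SharedFrame {
            std::mutex m;
            cv::Mat frame;
        };

        std::atomic<bool> running{true};

        void captureLoop(int deviceIndex, SharedFrame& shared) {
            cv::VideoCapture cap(deviceIndex);               // assumed camera index
            cv::Mat local;
            while (running && cap.isOpened() && cap.read(local)) {
                // ... per-camera tracking algorithm would run on `local` here ...
                std::lock_guard<std::mutex> lock(shared.m);
                local.copyTo(shared.frame);                  // publish the latest frame
            }
        }

        int main() {
            SharedFrame color, thermal;
            std::thread t1(captureLoop, 0, std::ref(color));   // assumed color camera
            std::thread t2(captureLoop, 1, std::ref(thermal)); // assumed thermal camera

            while (running) {                                  // display in the main thread
                { std::lock_guard<std::mutex> lock(color.m);
                  if (!color.frame.empty()) cv::imshow("color", color.frame); }
                { std::lock_guard<std::mutex> lock(thermal.m);
                  if (!thermal.frame.empty()) cv::imshow("thermal", thermal.frame); }
                if (cv::waitKey(1) == 27) running = false;     // Esc stops all loops
            }
            t1.join();
            t2.join();
            return 0;
        }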

    Aspects and implementations for accelerating image acquisition in microscopy

    The subject of this thesis is to shorten the execution time of biological experiments performed with fluorescence microscopy. Especially when genome-wide screens are run, a huge number of prepared cells has to be observed. Speeding up the image acquisition therefore has the strongest impact on the time needed to execute the experiments. The approaches presented here are the elimination of the time-consuming focus search, and two parallel microscope systems: one with standard macro-optics and spectral parallelization, and one with highly miniaturized optical components arranged in an array for fast-scanning microscopy.

    Simulating the nonlinear QED vacuum


    Homotopy Based Reconstruction from Acoustic Images


    Massively Parallel Implementation of Computer Graphics Algorithms

    Computer graphics has made great progress since its inception in the 1960s and has become part of everyday life. We can see it all around us, from smartwatches and smartphones, where graphics accelerators are already part of the chips and can render not only interactive menus but also demanding graphical applications, through laptops and personal computers, to high-performance visualization servers and supercomputers that can display demanding simulations in real time. This dissertation focuses on one of the most computationally demanding areas of computer graphics: the computation of global illumination. One of the most widely used methods for simulating global illumination is path tracing, which can be used to visualize, for example, scientific or medical data. Path tracing can be accelerated using multiple graphics accelerators, which is the focus of this work. We present a solution for path tracing of massive scenes on multiple GPUs. Our approach analyzes the memory access pattern of the path tracer and defines how the scene data should be distributed across up to 16 GPUs with minimal performance impact. The key concept is that the parts of the scene with the highest number of memory accesses are replicated on all GPUs. We present two methods for maximizing path-tracing performance when working with partially distributed scene data. Both methods operate at the memory management level, so the path-tracing data structures do not need to be redesigned. We implemented this new out-of-core mechanism in the open-source Blender Cycles path tracer, which we also extended with technologies that support running on supercomputers and can take advantage of all accelerators allocated on multiple nodes. In this work, we also introduce a new service that uses our extended version of the Blender Cycles renderer to simplify submitting and running jobs directly from Blender.
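
    The scene-distribution idea described above lends itself to a simple illustration. The following C++ sketch (an illustrative approximation, not the actual Blender Cycles implementation; the Chunk structure, replication budget, and placement policy are assumptions) replicates the most frequently accessed scene chunks on every GPU up to a byte budget and spreads the remaining chunks across the GPUs to balance memory load:

        #include <algorithm>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct Chunk {
            std::size_t bytes;       // size of the chunk in memory
            std::uint64_t accesses;  // access count measured by the path tracer
        };

        struct Placement {
            std::vector<std::size_t> replicated;           // chunk ids held by every GPU
            std::vector<std::vector<std::size_t>> perGpu;  // chunk ids private to one GPU
        };

        Placement distribute(const std::vector<Chunk>& chunks,
                             std::size_t gpuCount,
                             std::size_t replicationBudgetBytes) {
            // Sort chunk ids by access count, hottest first.
            std::vector<std::size_t> ids(chunks.size());
            for (std::size_t i = 0; i < ids.size(); ++i) ids[i] = i;
            std::sort(ids.begin(), ids.end(), [&](std::size_t a, std::size_t b) {
                return chunks[a].accesses > chunks[b].accesses;
            });

            Placement p;
            p.perGpu.resize(gpuCount);
            std::vector<std::size_t> gpuLoad(gpuCount, 0);
            std::size_t replicatedBytes = 0;

            for (std::size_t id : ids) {
                if (replicatedBytes + chunks[id].bytes <= replicationBudgetBytes) {
                    p.replicated.push_back(id);            // hot chunk: copy to all GPUs
                    replicatedBytes += chunks[id].bytes;
                } else {
                    // Cold chunk: place it on the currently least-loaded GPU.
                    std::size_t g = std::min_element(gpuLoad.begin(), gpuLoad.end())
                                    - gpuLoad.begin();
                    p.perGpu[g].push_back(id);
                    gpuLoad[g] += chunks[id].bytes;
                }
            }
            return p;
        }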

    System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging

    In the past decade, many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, one or more specific problems prevent each of them from being employed effectively or efficiently. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed or optimized reconstruction methods are proposed for these imaging modalities. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be reconstructed more reliably. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency content of the image can theoretically be preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being over-smoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements stabilizes the numerical solution of the decomposition problem, overcoming the disadvantage of the conventional approach, which is extremely sensitive to noise. In the final part, we describe the modified filtered backprojection and iterative image reconstruction algorithms developed specifically for TBCT. Special parallelization strategies are designed to facilitate the use of GPU computing, demonstrating the capability of producing high-quality reconstructed volumetric images at very high computational speed. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
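
    For the dual-energy part, the role of the total-projection-length constraint can be seen from a simplified two-material formulation (an illustrative form under idealized monochromatic assumptions, not necessarily the exact model used in the dissertation):

    \[
    \begin{aligned}
    -\ln\frac{I_L}{I_{0,L}} &= \mu_{1,L}\, t_1 + \mu_{2,L}\, t_2, \\
    -\ln\frac{I_H}{I_{0,H}} &= \mu_{1,H}\, t_1 + \mu_{2,H}\, t_2, \\
    t_1 + t_2 &= T,
    \end{aligned}
    \]

    where $t_1$ and $t_2$ are the basis-material thicknesses along a ray, $\mu_{i,L}$ and $\mu_{i,H}$ the corresponding attenuation coefficients at the low and high energies, $I$ and $I_0$ the transmitted and incident intensities, and $T$ the total projection length. The third equation is the added constraint: it turns a noise-sensitive two-equation inversion into an overdetermined system that can be solved more stably.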