89 research outputs found

    Evaluation of the parallel computational capabilities of embedded platforms for critical systems

    Modern critical systems need higher performance, which the simple architectures used so far cannot deliver. The latest embedded architectures feature multi-cores and GPUs, which can be used to satisfy this need. In this thesis we parallelise relevant applications from multiple critical domains represented in the GPU4S benchmark suite, and we compare the parallel capabilities of candidate platforms for use in critical systems. In particular, we port the open-source GPU4S Bench benchmarking suite to the OpenMP programming model, and we benchmark the candidate embedded heterogeneous multi-core platforms of the H2020 UP2DATE project, the NVIDIA TX2, NVIDIA Xavier, and Xilinx Zynq UltraScale+, in order to drive the selection of the research platform to be used in the next phases of the project. Our results indicate that, in terms of both CPU and GPU performance, the NVIDIA Xavier is the highest-performing platform.
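
    As an illustration of the porting approach described above, below is a minimal sketch of how a benchmark kernel might be parallelised with OpenMP. Matrix multiplication is a building block typical of such suites, but the code, names, and sizes here are illustrative assumptions, not GPU4S Bench source.

```cpp
// Minimal sketch: OpenMP parallelisation of a matrix-multiply kernel.
// Hypothetical example in the spirit of the port described above.
#include <cstddef>
#include <vector>

void matmul(const std::vector<float>& A, const std::vector<float>& B,
            std::vector<float>& C, std::size_t n) {
    // Distribute the rows of the output matrix across CPU cores.
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(n); ++i) {
        for (std::size_t j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (std::size_t k = 0; k < n; ++k)
                acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = acc;
        }
    }
}
```

    Compiled with -fopenmp, this runs the outer loop across all cores; a GPU-offload variant would use `#pragma omp target teams distribute parallel for` instead.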

    Cross platform 3D rendering engines and mobile devices/smartphones

    Master's dissertation in Informatics Engineering. Now more than ever, we live in a cross-platform technological world. We are surrounded by various platforms, each with its own set of advantages and drawbacks. We have reached a point where we can no longer delay the transition of software from one platform to another. This has become increasingly visible with the "rise of the smartphones": their evolution has sparked considerable interest, and thanks to their ubiquitous nature and their CPU and GPU performance, they prove to be very interesting and useful computing devices. The aim of this dissertation is to port the 3D rendering engine Curitiba, developed at the Departamento de Informática of Universidade do Minho and currently targeting Windows, to the second and third most popular platforms, Mac OS X and iOS (iPhone and iPad) respectively, and to create one unified project. Due to incompatibilities presented by the wxWidgets toolkit (2.8.x) on Mac OS X (10.6 and greater), we ported Curitiba to the GNU/Linux platform first, since it is also POSIX compliant. Sadly, the Android platform had to be left out because, at the time, it lacked support for C++'s STL and exceptions. Throughout this dissertation we cover all the challenges faced in transforming Curitiba into cross-platform software and the development of the resulting unified project. Our secondary objective is to replace the traditional keyboard and mouse interactions in a 3D rendering engine by implementing new interaction models which make use of the touch screen and/or the sensors available on the iOS platform.
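
    As a sketch of the kind of interaction model the secondary objective describes, the fragment below maps a one-finger drag on a touch screen to camera rotation, replacing mouse input. All names are hypothetical; none of this is Curitiba's actual API.

```cpp
// Hypothetical sketch: one-finger touch drag rotates the camera,
// standing in for the mouse-look of a desktop 3D engine.
struct Camera { float yaw = 0.0f, pitch = 0.0f; };   // radians
struct TouchEvent { float x = 0.0f, y = 0.0f; };     // screen pixels

class OrbitController {
public:
    void onTouchDown(const TouchEvent& e) { last_ = e; tracking_ = true; }
    void onTouchMove(const TouchEvent& e, Camera& cam) {
        if (!tracking_) return;
        const float kRadiansPerPixel = 0.005f;  // sensitivity tuning
        cam.yaw   += (e.x - last_.x) * kRadiansPerPixel;
        cam.pitch += (e.y - last_.y) * kRadiansPerPixel;
        last_ = e;
    }
    void onTouchUp() { tracking_ = false; }
private:
    TouchEvent last_;
    bool tracking_ = false;
};
```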

    Multiresolution Techniques for Real-Time Visualization of Urban Environments and Terrains

    In recent times we are witnessing a steep increase in the availability of data coming from real-life environments. Nowadays, virtually everyone connected to the Internet has instant access to a tremendous amount of data coming from satellite elevation maps, airborne time-of-flight scanners and digital cameras, street-level photographs, and even cadastral maps. As with other, more traditional types of media such as pictures and videos, users of digital exploration software expect commodity hardware to exhibit good performance for interactive purposes, regardless of the dataset size. In this thesis we propose novel solutions to the problem of rendering large terrain and urban models on commodity platforms, both for local and remote exploration. Our solutions build on the concept of multiresolution representation, where alternative representations of the same data with different accuracy are used to selectively distribute the computational power, and consequently the visual accuracy, where it is most needed, based on the user's point of view. In particular, we introduce an efficient multiresolution data compression technique for planar and spherical surfaces applied to terrain datasets, able to handle huge amounts of information at a planetary scale. We also describe a novel data structure for compact storage and rendering of urban entities such as buildings, allowing real-time exploration of cityscapes from a remote online repository. Moreover, we show how recent technologies can be exploited to transparently integrate virtual exploration and general computer graphics techniques with web applications.
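
    A minimal sketch of the view-dependent refinement idea such multiresolution renderers build on: a node of a hierarchical representation is refined only while its projected error exceeds a pixel tolerance. The data structure and the error metric below are common-practice assumptions, not the thesis' own structures.

```cpp
// Sketch: screen-space-error driven refinement of a terrain quadtree.
// All names and the error metric are illustrative assumptions.
#include <cmath>

struct Node {
    float cx, cy, cz;       // node centre in world space
    float geometricError;   // object-space error of this LOD level
    Node* children[4];      // nullptr at the finest level
};

// Projected error in pixels for a perspective camera (fovY in radians).
float screenSpaceError(const Node& n, float ex, float ey, float ez,
                       float viewportHeight, float fovY) {
    float dx = n.cx - ex, dy = n.cy - ey, dz = n.cz - ez;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    float lambda = viewportHeight / (2.0f * std::tan(fovY * 0.5f));
    return n.geometricError * lambda / dist;
}

void render(const Node& n, float ex, float ey, float ez,
            float vpH, float fovY, float tolerancePx) {
    if (n.children[0] == nullptr ||
        screenSpaceError(n, ex, ey, ez, vpH, fovY) <= tolerancePx) {
        // Coarse enough from this viewpoint: draw this node's patch.
        return;
    }
    for (Node* c : n.children)  // otherwise descend to finer levels
        render(*c, ex, ey, ez, vpH, fovY, tolerancePx);
}
```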

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with even terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded within full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology. With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation, and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.

    Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions; Part I, Introduction (1 Fluorescence Microscopy; 2 Introduction to Visual Processing; 3 A Short Introduction to Cross Reality; 4 Eye Tracking and Gaze-based Interaction); Part II, VR and AR for Systems Biology (5 scenery: VR/AR for Systems Biology; 6 Rendering; 7 Input Handling and Integration of External Hardware; 8 Distributed Rendering; 9 Miscellaneous Subsystems; 10 Future Development Directions); Part III, Case Studies (11 Bionic Tracking: Using Eye Tracking for Cell Tracking; 12 Towards Interactive Virtual Reality Laser Ablation; 13 Rendering the Adaptive Particle Representation; 14 sciview: Integrating scenery into ImageJ2 & Fiji); Part IV, Conclusion (15 Conclusions and Outlook); Backmatter and Appendices (A Questionnaire for VR Ablation User Study; B Full Correlations in VR Ablation Questionnaire; C Questionnaire for Bionic Tracking User Study); List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung.
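
    As a rough illustration of the idea behind Bionic Tracking, the sketch below marches along the gaze ray through a volume and keeps the brightest sample as the estimate of the tracked cell's position at one timepoint. scenery itself targets the Java VM; this fragment, its names, and the nearest-neighbour sampling are assumptions for illustration only.

```cpp
// Sketch: follow the brightest voxel along the user's gaze ray.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct Volume {
    std::vector<std::uint16_t> voxels;  // one timepoint, one channel
    int nx, ny, nz;
    // Nearest-neighbour lookup; p is given in voxel coordinates.
    std::uint16_t sample(const Vec3& p) const {
        int i = int(p.x), j = int(p.y), k = int(p.z);
        if (i < 0 || j < 0 || k < 0 || i >= nx || j >= ny || k >= nz)
            return 0;
        return voxels[(std::size_t(k) * ny + j) * nx + i];
    }
};

// March along the gaze ray; the brightest sample is taken as the
// cell position estimate for this timepoint.
Vec3 trackUnderGaze(const Volume& vol, Vec3 origin, Vec3 dir,
                    float tMax, float step) {
    Vec3 best = origin;
    std::uint16_t bestVal = 0;
    for (float t = 0.0f; t < tMax; t += step) {
        Vec3 p{origin.x + t * dir.x, origin.y + t * dir.y,
               origin.z + t * dir.z};
        std::uint16_t v = vol.sample(p);
        if (v > bestVal) { bestVal = v; best = p; }
    }
    return best;
}
```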

    A novel parallel algorithm for surface editing and its FPGA implementation

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Surface modelling and editing is one of the important subjects in computer graphics. Decades of computer graphics research have addressed both low-level, hardware-related algorithms and high-level, abstract software, and computer graphics has seen success in many application areas, such as multimedia, visualisation, virtual reality, and the Internet. However, the hardware realisation of the OpenGL architecture based on FPGAs (field programmable gate arrays) is beyond the scope of most computer graphics research. It is an uncultivated research area in which the OpenGL pipeline, from hardware through the whole embedded system (ES) up to applications, is implemented in an FPGA chip. This research proposes a hybrid approach investigating both software and hardware methods. It aims to bridge the gap between software and hardware methods and to enhance the overall performance of computer graphics. It consists of four parts: the construction of an FPGA-based ES, a Mesa-based OpenGL implementation for FPGA-based ESs, parallel processing, and a novel algorithm for surface modelling and editing. The FPGA-based ES is built up: in addition to the Nios II soft processor and DDR SDRAM memory, it consists of the LCD display device, frame buffers, a video pipeline, and an algorithm-specific module to support graphics processing. Since no implementation of OpenGL ES is available for FPGA-based ESs, a specific OpenGL implementation based on Mesa is carried out. Because of the limited FPGA resources, the implementation adopts fixed-point arithmetic, which offers faster computation and lower storage than floating-point arithmetic while satisfying the accuracy needs of 3D rendering. Moreover, the implementation includes Bézier-spline curve and surface algorithms to support surface modelling and editing. Pipelined parallelism and co-processors are used to accelerate graphics processing in this research; these two parallelism methods extend traditional computational parallelism to fine-grained parallel tasks in FPGA-based ESs. The novel algorithm for surface modelling and editing, called the Progressive and Mixing Algorithm (PAMA), is proposed and implemented on FPGA-based ESs. Compared with the two main surface editing methods, subdivision and deformation, PAMA eliminates the large storage requirement and computing cost of intermediate processes. With four independent shape parameters, PAMA can be used to freely model and edit the shape of an open or closed surface while globally maintaining zero-order geometric continuity. PAMA can be applied not only to FPGA-based ESs but also to other platforms. With parallel processing, small size, and low costs of computing, storage, and power, the FPGA-based ES provides an effective hybrid solution for surface modelling and editing.
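
    To illustrate two of the ingredients above in combination, here is a sketch of fixed-point arithmetic evaluating one coordinate of a cubic Bézier curve with de Casteljau's algorithm. The Q16.16 split is an assumption made for the example; the thesis states only that fixed-point arithmetic was adopted.

```cpp
// Sketch: Q16.16 fixed-point de Casteljau evaluation of a cubic Bézier.
#include <cstdint>

using fx = std::int32_t;            // Q16.16: 16 integer, 16 fraction bits
constexpr fx FX_ONE = 1 << 16;      // 1.0 in Q16.16

// Multiply via a 64-bit intermediate, then shift back to Q16.16.
constexpr fx fx_mul(fx a, fx b) {
    return static_cast<fx>((static_cast<std::int64_t>(a) * b) >> 16);
}

// Linear interpolation a + (b - a) * t, everything in Q16.16.
constexpr fx fx_lerp(fx a, fx b, fx t) { return a + fx_mul(b - a, t); }

// One coordinate of a cubic Bézier at parameter t in [0, FX_ONE].
fx bezier3(fx p0, fx p1, fx p2, fx p3, fx t) {
    fx a = fx_lerp(p0, p1, t), b = fx_lerp(p1, p2, t), c = fx_lerp(p2, p3, t);
    fx d = fx_lerp(a, b, t),   e = fx_lerp(b, c, t);
    return fx_lerp(d, e, t);
}
```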

    A graphics software architecture for high-end interactive TV terminals

    This thesis proposes a graphics architecture for next-generation digital television receivers. The starting assumption is that in the future, a number of multimedia terminals will have access, through a number of networks, to a variety of content and services. One example of such a device is a media station capable of integrating different kinds of multimedia objects, such as 2D/3D graphics and video, reacting to user interaction, and supporting the temporal dimension of applications. Services intended for these devices include, for example, games and enhanced information over broadcast video. First, this thesis provides an overview of the digital television environment, focusing on the limitations of current receivers, and hints at future directions. It also compares solutions from regional standardisation bodies such as DVB, CableLabs, and ARIB, and proposes the adoption of the most relevant initiative, DVB's GEM. Unfortunately, the GEM middleware considers only the Java language as an authoring format, meaning that the declarative environment and advanced functionalities (e.g., 3D graphics support) remain to be standardised. Because different user groups will have different demands with regard to television, this thesis identifies two major extensions to the GEM standard. First, it proposes a declarative environment for GEM that takes W3C standardisation efforts into account. This environment is divided into two configurations: one capable of rendering limited interactive applications such as information services, and another intended for more demanding applications, for example a distance-learning portal that synchronises videos of lecturers and slides. Second, this thesis proposes to extend the procedural environment of GEM with 3D graphics support. The potential services of this new profile, High-End Interactive, include games and commercials. Then, based on the requirements the proposed profiles should meet, this thesis defines a graphics architecture model composed of five layers. The hardware abstraction layer is in charge of rendering the final graphics output. The graphical context is a cross-platform abstraction of the rendering region and provides graphics primitives (e.g., rectangles and images). The graphical environment provides the means to control different graphical contexts. The GUI toolkit is a set of ready-made user interface widgets and layout schemes. Finally, high-level languages are easy-to-use tools for developing simple services. The thesis concludes with a report of my experience implementing a digital television receiver based on the proposals described. In addition to testing the application of the proposed graphics architecture to the design and implementation of a next-generation digital television receiver, the implementation permits the analysis of the requirements of such receivers and of the services they can provide.
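
    A minimal sketch of how the middle layers of this five-layer model might be expressed as interfaces; all names are illustrative, not taken from the thesis.

```cpp
// Hypothetical interfaces for two of the five layers described above.
struct Rect { int x, y, w, h; };

// Layer 2: cross-platform abstraction of a rendering region that
// exposes graphics primitives; concrete backends sit on the hardware
// abstraction layer (layer 1).
class GraphicalContext {
public:
    virtual ~GraphicalContext() = default;
    virtual void fillRect(const Rect& r, unsigned rgba) = 0;
    virtual void drawImage(const Rect& dst, const void* pixels) = 0;
};

// Layer 3: controls the lifetime of different graphical contexts.
class GraphicalEnvironment {
public:
    virtual ~GraphicalEnvironment() = default;
    virtual GraphicalContext* createContext(const Rect& region) = 0;
};
// A GUI toolkit (layer 4) would build widgets on GraphicalContext,
// and high-level languages (layer 5) would target the toolkit.
```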

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages; raycasting is widely regarded as producing the highest quality of these methods. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow sets of scans capable of capturing anatomical movements like a beating heart. The capture of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time; while fMRI can capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or dataset, so any advances tend to be problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding range of computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research investigates the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices harnesses the power of the increasing number of mobile computational devices used by medical professionals, and support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology through the additional depth information provided by stereoscopic 3D. The results of this research help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms poses many challenges: each platform relies on its own coding languages, libraries, and hardware support, and there are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient at application run time, but they require a different coding implementation for each platform.
The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting poses unique challenges independent of the platform: specifically, fMRI data loading, volume animation, and multiple-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes; visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary, as was a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications, one each for desktop, mobile, and virtual reality, were built to test the feasibility of 4D volume raycasting. Although the backend implementations differed between the three platforms, the raycasting functionality and features were identical; the same fMRI dataset therefore produced the same 3D visualization independent of the platform. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to "scrub" through the different time steps of the data. The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal, defined as 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-node graphics computer cluster built from NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7" iPad Pro running iOS 9.3.4, with a 64-bit Apple A9X dual-core processor and 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets.
Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement for mobile 3D volume raycasting, which was previously only able to achieve under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
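
    As an illustration of the multi-volume compositing step described above, the sketch below blends pre-classified anatomical and functional samples along one ray and accumulates them front to back with early ray termination. The blend rule and all names are assumptions, not the dissertation's actual implementation.

```cpp
// Sketch: front-to-back compositing of two pre-classified volumes.
#include <cstddef>
#include <vector>

struct RGBA { float r, g, b, a; };

// 'anat' and 'func' hold, per ray step, the anatomical and functional
// samples after their respective transfer functions were applied.
RGBA compositeRay(const std::vector<RGBA>& anat,
                  const std::vector<RGBA>& func) {
    RGBA out{0.0f, 0.0f, 0.0f, 0.0f};
    for (std::size_t s = 0; s < anat.size() && s < func.size(); ++s) {
        const RGBA& a = anat[s];
        const RGBA& f = func[s];
        // One plausible blend rule: functional activity overrides
        // anatomy in proportion to its opacity.
        RGBA src{f.r * f.a + a.r * (1.0f - f.a),
                 f.g * f.a + a.g * (1.0f - f.a),
                 f.b * f.a + a.b * (1.0f - f.a),
                 f.a + a.a * (1.0f - f.a)};
        // Standard front-to-back "over" accumulation.
        float w = (1.0f - out.a) * src.a;
        out.r += w * src.r;
        out.g += w * src.g;
        out.b += w * src.b;
        out.a += w;
        if (out.a > 0.99f) break;  // early ray termination
    }
    return out;
}
```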

    Implementation and evaluation of a mobile Android application for auditory stimulation of chronic tinnitus patients

    Tinnitus is a common symptom in which the affected person perceives a sound without an external source. To support the development of new therapies, a tinnitus tracking platform, including mobile applications, was developed at Ulm University in cooperation with the Tinnitus Research Initiative. In the future, these mobile applications should be extended to include a simple game that requires the user to concentrate on an auditory stimulus, distracting them from their tinnitus. This is accomplished by using localization of an audio source as a game mechanic. The measured offset between the position the user guessed for an audio source and its actual location can also serve as an additional data point. In this thesis, an application for the Android operating system is designed that implements such a game and serves as a proof of concept. Since the Android API does not include the capability for positional audio, a separate audio API based on OpenAL was created as part of this thesis. This API, as well as the framework developed to implement the game, is designed to be reusable for future, similar projects. The game concept was also evaluated in a study using the demonstration application.
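
    A minimal sketch of the game mechanic using the standard OpenAL 1.1 C API: place a source at a hidden position, play it, and score the player's guess by its distance from the true position. Device/context setup and buffer decoding are omitted, and the function and parameter names are hypothetical, not the thesis' actual API.

```cpp
// Sketch: positional audio source plus localisation-offset scoring.
// Assumes an OpenAL device/context is already current and 'buffer'
// already holds decoded audio data.
#include <AL/al.h>
#include <cmath>

float playAndScore(ALuint buffer,
                   float sx, float sy, float sz,   // hidden true position
                   float gx, float gy, float gz) { // player's guess
    ALuint src;
    alGenSources(1, &src);
    alSourcei(src, AL_BUFFER, buffer);
    alSource3f(src, AL_POSITION, sx, sy, sz);
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);   // player at the origin
    alSourcePlay(src);
    // ... the user listens, then taps a guessed position (gx, gy, gz) ...
    alDeleteSources(1, &src);
    // The localisation offset, recorded as an additional data point.
    float dx = gx - sx, dy = gy - sy, dz = gz - sz;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```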

    Comparison of Mobile and Native Technologies for Mobile MES Applications

    An MES (manufacturing execution system) is mostly used from desktop-based terminals in a factory. These terminals are distant from the machines and materials used on the factory floor. To access the information available through the MES from anywhere on the factory floor, the use of mobile terminals instead of desktop computers has been proposed. To evaluate two alternative implementation technologies, Web and native, we have developed and compared two prototypes of the MES application. In addition, we have studied the advantages of the native and Web approaches through the literature and a survey. Mobile devices are categorized by their different platforms and screen sizes; Android, iOS, and Windows Phone are the most common among them. Mobile applications are platform dependent, and an application made for one platform does not work on the others, whereas Web applications are platform independent and work on all devices. HTML5 has introduced APIs through which a Web app can behave like a native app and compete with it. In this thesis we therefore compare Web and native apps and try to determine which is better for MES applications. A general answer to this question is native, because of its better performance, and we analyze some of the factors responsible for the performance difference between a Web app and a native app. In addition, we conducted an online survey to find out what developers think about the development, testing, maintenance, and deployment of Web and native technologies. Based on all the data, i.e. the literature review, some experiments, feedback from participants, and the online survey, we conclude that a native app is the best solution for a mobile MES, because a native app is more responsive and more secure. However, native apps require more time, effort, cost, and skill to develop and maintain.