
    An Interactive Remote Visualization System for Mobile Application Access

    This paper introduces a remote visualization approach that enables the visualization of data sets on mobile devices or in web environments. With this approach, the necessary computing power can be outsourced to a server environment. The developed system allows the rendering of 2D and 3D graphics on mobile phones or in web browsers with high quality, independent of the size of the original data set. Compared to known terminal-server or other proprietary remote systems, our approach offers a very simple way to integrate with a large variety of applications, which makes it useful for real-life application scenarios in business processes.

    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since the speed of conventional CPU processors is no longer rising, designers are turning to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for utilizing GPUs, there is a great need to develop operating systems that use the GPU to its full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the discrete cosine transform.
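    The block transform at the heart of this work suits GPUs because every 8 x 8 block can be processed independently. As a point of reference only (the thesis targets a GPU implementation, which is not reproduced here), a minimal CPU sketch of the blockwise 2D DCT-II in Python/NumPy could look like this; the function names are illustrative, not taken from the thesis:

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix C, so that dct2(B) = C @ B @ C.T."""
    k = np.arange(n).reshape(-1, 1)      # frequency index
    x = np.arange(n).reshape(1, -1)      # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)           # DC row scaling for orthonormality
    return C

def blockwise_dct(image: np.ndarray, n: int = 8) -> np.ndarray:
    """Apply the 2D DCT independently to each n x n block of a grayscale image."""
    C = dct_matrix(n)
    h, w = image.shape
    out = np.empty_like(image, dtype=np.float64)
    for i in range(0, h, n):
        for j in range(0, w, n):
            block = image[i:i + n, j:j + n].astype(np.float64) - 128.0  # level shift
            out[i:i + n, j:j + n] = C @ block @ C.T
    return out

# Example: transform a synthetic 64 x 64 image
img = np.random.randint(0, 256, (64, 64))
coeffs = blockwise_dct(img)
print(coeffs[:8, :8].round(1))
```

    On a GPU, each block (or even each coefficient) would typically be mapped to its own thread, which is the kind of data parallelism such an implementation exploits.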

    Survey of Transportation of Adaptive Multimedia Streaming service in Internet

    The World Wide Web is one of the greatest boons of the technological advancement of the modern era. Using the Internet globally, anywhere and anytime, users can access live and on-demand video services. Streaming media systems such as YouTube, Netflix, and Apple Music dominate the multimedia world and enjoy widespread popularity among users. A key concern for video streaming applications over the Internet is the Quality of Experience (QoE) that users perceive. Because changing network conditions, bit rate, and initial delay can cause the multimedia stream to freeze or deliver poor video quality to end users, researchers across industry and academia have explored HTTP Adaptive Streaming (HAS), which splits the video content into multiple segments and offers them to clients at varying qualities. The video player at the client side plays a vital role in buffer management and in choosing the appropriate bit rate for each such segment of video to be transmitted. A video transmitted at too high a bit rate pauses in between, whereas a lower bit rate video lacks quality, requiring a trade-off between them. The need of the hour is to adaptively vary the bit rate and video quality to match the transmission media conditions. The main aim of this paper is to give an overview of state-of-the-art HAS techniques across the multimedia and networking domains. A detailed survey was conducted to analyze challenges and solutions in adaptive streaming algorithms, QoE, network protocols, buffering, and so on. It also focuses on various QoE influence factors under fluctuating network conditions, which are often ignored in present HAS methodologies. Furthermore, this survey will give network and multimedia researchers a fair understanding of the latest developments in adaptive streaming and the improvements that can be incorporated in future work. Abdullah, M. T. A.; Lloret, J.; Canovas Solbes, A.; García-García, L. (2017). Survey of Transportation of Adaptive Multimedia Streaming service in Internet. Network Protocols and Algorithms, 9(1-2), 85-125. doi:10.5296/npa.v9i1-2.12412
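    As background to the survey, the per-segment rate-adaptation step it describes can be reduced to a simple rule: estimate throughput, keep a safety margin, and drop to the lowest rendition when the playback buffer runs low. The sketch below is a generic throughput-based heuristic with hypothetical names and parameters, not an algorithm from any of the surveyed systems:

```python
from typing import List

def choose_bitrate(available_kbps: List[int],
                   measured_throughput_kbps: float,
                   buffer_seconds: float,
                   safety_factor: float = 0.8,
                   low_buffer_threshold: float = 5.0) -> int:
    """Pick the highest representation that fits the estimated throughput.

    If the playback buffer is nearly empty, fall back to the lowest bitrate
    to avoid a stall (the freeze/rebuffering events the survey discusses).
    """
    if buffer_seconds < low_buffer_threshold:
        return min(available_kbps)
    budget = measured_throughput_kbps * safety_factor
    feasible = [r for r in sorted(available_kbps) if r <= budget]
    return feasible[-1] if feasible else min(available_kbps)

# Example: 4 Mbps measured, healthy 12 s buffer -> picks the 3000 kbps rendition
print(choose_bitrate([500, 1200, 3000, 6000], 4000.0, 12.0))
```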

    Mobile learning: benefits of augmented reality in geometry teaching

    As a consequence of the technological advances and the widespread use of mobile devices to access information and communication in the last decades, mobile learning has become a spontaneous learning model, providing a more flexible and collaborative technology-based learning experience. Thus, mobile technologies can create new opportunities for enhancing pupils' learning experiences. This paper presents the development of a game to assist teaching and learning, aiming to help students acquire knowledge in the field of geometry. The game was intended to develop the following competences in primary school learners (8-10 years): better visualization of geometric objects on a plane and in space; understanding of the properties of geometric solids; and familiarization with the vocabulary of geometry. Findings show that, by using the game, students improved their rate of correct responses when classifying and differentiating between edge, vertex and face in 3D solids by around 35%. This research was supported by the Arts and Humanities Research Council Design Star CDT (AH/L503770/1), the Portuguese Foundation for Science and Technology (FCT) projects LARSyS (UID/EEA/50009/2013) and CIAC-Research Centre for Arts and Communication.

    Image Compression and its Effect on Data

    This thesis is intended to define and study different image compression techniques, software programs, image formats (from early ones such as GIF to recent ones such as JPEG 2000), the effect of compression on the compressed data (compressed images), and its effectiveness and usefulness in reducing file size and, as a result, transmission time. In many GeoBioPhysical applications, some information inside an image may be the key to solving different kinds of problems and classifying features. This kind of data and information has to be handled with care; i.e., it must not be lost during the compression process. On the other hand, dealing with images is more flexible in ordinary applications, such as pictures attached to e-mails for simple purposes. An uncompressed aerial image (DOQQ) of Huntington, WV (in TIFF format) was taken as the original file to be compressed using different techniques and software programs. The results were studied and attached to each image. The resulting file size of each image was used to perform comparisons between the different software programs, trying to determine the effectiveness of each technique and program from the quality-to-file-size-ratio point of view. Previous work and research from different references was also studied and discussed to show the differences and similarities between this work and previous ones. One of the goals of this study is to find the software program(s) and compression types that give the best quality-to-file-size ratio, and the ones that work best for GeoBioPhysical studies. The results show that dealing with different types of imagery is sensitive and depends strongly on the application; users have to know what they are doing, use the proper input imagery, and compress it to the proper limits to get the best results. The results of this study show that JPEG 2000 software programs (such as LuraWave) are very good and effective choices. JPEG 2000 and ECW are likely to be extensively used in the near future for imagery and Internet usage.
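    For the file-size comparisons described above, the basic bookkeeping is the compression ratio and bits per pixel of each output file. A small sketch of that calculation is given below; the file names and image dimensions are placeholders, not the actual Huntington DOQQ data:

```python
import os

def compression_stats(original_path: str, compressed_path: str,
                      width: int, height: int) -> dict:
    """Report compression ratio, bits per pixel, and space saving for one output."""
    original_bytes = os.path.getsize(original_path)
    compressed_bytes = os.path.getsize(compressed_path)
    return {
        "compression_ratio": original_bytes / compressed_bytes,
        "bits_per_pixel": 8.0 * compressed_bytes / (width * height),
        "space_saving_percent": 100.0 * (1 - compressed_bytes / original_bytes),
    }

# Example (placeholder paths and dimensions):
# print(compression_stats("huntington.tif", "huntington.jp2", 7600, 7000))
```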

    Web-based Stereoscopic Collaboration for Medical Visualization

    Medical volume visualization is a valuable tool for examining volume data in medical practice and education. Interactive, stereoscopic, collaborative real-time rendering is necessary to understand the data completely and in detail. Because of high hardware requirements, such visualization of high-resolution data is, however, almost only possible on special visualization systems. Remote visualization is used to make such visualization available at peripheral locations, but it almost always requires complex software deployments, which hinders universal ad-hoc usability. This leads to the following hypothesis: a high-performance remote visualization system specialized for stereoscopy and simple usability can be used for interactive, stereoscopic and collaborative medical volume visualization. The latest literature on remote visualization describes applications that require only a plain web browser. However, these place no particular emphasis on performant usability for every participant, nor do they provide the functionality needed to drive multiple stereoscopic presentation systems. Given the familiarity of web browsers, their ease of use and their wide availability, the following specific question arose: can we develop a system that supports all of these aspects but requires only a plain web browser, without additional software, as the client? A proof of concept was carried out to verify the hypothesis. This included developing a prototype, applying it in practice, and measuring and comparing its performance. The resulting prototype (CoWebViz) is one of the first web browser based systems that provides fluid, interactive remote visualization in real time without additional software. Tests and comparisons show that the approach performs better than other, similar systems that were tested. The simultaneous use of different stereoscopic presentation systems with such a simple remote visualization system is currently unique. Its use for the normally very resource-intensive stereoscopic and collaborative anatomy teaching, together with intercontinental participants, demonstrates the feasibility and the simplifying character of the approach. The feasibility of the approach was also shown by its successful use in other application scenarios, such as grid computing and surgery.

    Portal-s: High-resolution real-time 3D video telepresence

    The goal of telepresence is to allow a person to feel as if they are present in a location other than their true location; a common application of telepresence is video conferencing, in which live video of a user is transmitted to a remote location for viewing. In conventional two-dimensional (2D) video conferencing, loss of correct eye gaze commonly occurs due to a disparity between the capture and display optical axes. Newer systems are being developed which allow for three-dimensional (3D) video conferencing, circumventing issues with this disparity, but new challenges are arising in the capture, delivery, and redisplay of 3D contents across existing infrastructure. To address these challenges, a novel system is proposed which allows for 3D video conferencing across existing networks while delivering full-resolution 3D video and establishing correct eye gaze. During the development of Portal-s, many innovations to the field of 3D scanning and its applications were made; specifically, this dissertation research has achieved the following innovations: a technique to realize 3D video processing entirely on a graphics processing unit (GPU), methods to compress 3D videos on a GPU, and the combination of the aforementioned innovations with a special holographic display hardware system to enable the novel 3D telepresence system entitled Portal-s. The first challenge this dissertation addresses is the cost of real-time 3D scanning technology, both from a monetary and a computing power perspective. Advancements in 3D scanning and computation technology continue to accumulate, simplifying the acquisition and display of 3D data and allowing users new methods of interaction with and analysis of the 3D world around them. Although the acquisition of static 3D geometry is becoming easy, the same cannot be said of dynamic geometry, since all aspects of the 3D processing pipeline (capture, processing, and display) must be realized in real time simultaneously. Conventional approaches to these problems utilize workstation computers with powerful central processing units (CPUs) and GPUs to supply the large amount of processing power required for a single 3D frame. A challenge arises when trying to realize real-time 3D scanning on commodity hardware such as a laptop computer. To address the cost of a real-time 3D scanning system, an entirely parallel 3D data processing pipeline that makes use of a multi-frequency phase-shifting technique is presented. This novel processing pipeline can achieve simultaneous 3D data capturing, processing, and display at 30 frames per second (fps) on a laptop computer. By implementing the pipeline within the OpenGL Shading Language (GLSL), nearly any modern computer with a dedicated graphics device can run it. By making use of multiple threads sharing GPU resources and direct memory access transfers, high frame rates on low compute power devices can be achieved. Although these advancements allow a low compute power device such as a laptop to achieve real-time 3D scanning, the technique is not without challenges, the main one being the selection of frequencies that allow for high-quality phase yet do not include phase jumps in equivalent frequencies. To address this issue, a new modified multi-frequency phase-shifting technique was developed that allows phase jumps to be introduced in equivalent frequencies yet unwrapped in parallel, increasing phase quality and reducing reconstruction error.
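For context, the wrapped-phase computation that a phase-shifting pipeline parallelizes is a per-pixel operation, which is why it maps naturally onto GPU shaders. The sketch below shows the standard N-step phase-shifting formula in Python/NumPy as a reference only; it is not the dissertation's modified multi-frequency algorithm, and the unwrapping step is omitted:

```python
import numpy as np

def wrapped_phase(frames: np.ndarray) -> np.ndarray:
    """Recover the wrapped phase from N phase-shifted fringe images.

    frames has shape (N, H, W); frame k is assumed to follow the standard
    N-step model I_k = A + B * cos(phi - 2*pi*k/N). Every pixel is
    independent, so this step can run entirely in a GPU fragment shader.
    """
    n = frames.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), frames, axes=(0, 0))
    den = np.tensordot(np.cos(shifts), frames, axes=(0, 0))
    return np.arctan2(num, den)   # wrapped to (-pi, pi]; unwrapping comes next

# Example: three synthetic 480 x 640 fringe images reproduce the ground-truth phase
rng = np.random.default_rng(0)
phi_true = rng.uniform(-np.pi, np.pi, (480, 640))
frames = np.stack([100 + 50 * np.cos(phi_true - 2 * np.pi * k / 3) for k in range(3)])
print(np.allclose(wrapped_phase(frames), phi_true, atol=1e-6))
```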
Utilizing these techniques, a real-time 3D scanner was developed that captures 3D geometry at 30 fps with a root mean square error (RMSE) of 0.00081 mm for a measurement area of 100 mm × 75 mm at a resolution of 800 × 600 on a laptop computer. With the above-mentioned pipeline the CPU is nearly idle, freeing it to perform additional tasks such as image processing and analysis. The second challenge this dissertation addresses is associated with delivering huge amounts of 3D video data in real time across existing network infrastructure. As the speed of 3D scanning continues to increase, and real-time scanning is achieved on low compute power devices, a way of compressing the massive amounts of 3D data being generated is needed. At a scan resolution of 800 × 600, streaming a 3D point cloud at 30 frames per second (fps) would require a throughput of over 1.3 Gbps. This amount of throughput is large for a PCIe bus and too much for most commodity network cards. Conventional approaches involve serializing the data into a compressible state such as a polygon file format (PLY) or Wavefront object (OBJ) file. While this technique works well for structured 3D geometry, such as that created with computer aided drafting (CAD) or 3D modeling software, it does not hold true for 3D scanned data, which is inherently unstructured. A challenge arises when trying to compress this unstructured 3D information in such a way that it can be easily utilized with existing infrastructure. To address the need for real-time 3D video compression, new techniques entitled Holoimage and Holovideo are presented, which have the ability to compress, respectively, 3D geometry and 3D video into 2D counterparts and apply both lossless and lossy encoding. Similar to the aforementioned 3D scanning pipeline, these techniques make use of a completely parallel pipeline for encoding and decoding; this affords high-speed processing on the GPU, as well as compression before streaming the data over the PCIe bus. Once in the compressed 2D state, the information can be streamed and saved until the 3D information is needed, at which point the 3D geometry can be reconstructed while maintaining a low amount of reconstruction error. Further enhancements of the technique have allowed additional information, such as texture, to be encoded by reducing the bit rate of the data through image dithering. This allows both the 3D video and the associated 2D texture information to be interlaced and compressed into a single 2D video, synchronizing the streams automatically. The third challenge this dissertation addresses is achieving correct eye gaze in video conferencing. In 2D video conferencing, loss of correct eye gaze commonly occurs due to a disparity between the capture and display optical axes. Conventional approaches to mitigate this issue involve either reducing the angle of disparity between the axes by increasing the distance of the user from the system, or merging the axes through the use of beam splitters. Newer approaches make use of 3D capture and display technology, as the angle of disparity can be corrected through transforms of the 3D data. Challenges arise when trying to create such novel systems, as all aspects of the pipeline (capture, transmission, and redisplay) must be simultaneously achieved in real time with massive amounts of 3D data.
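The quoted 1.3 Gbps figure follows directly from the frame geometry if one assumes each point is stored as three 32-bit floats; that per-point layout is an assumption for illustration, since the abstract does not spell it out:

```python
# Raw point-cloud bandwidth for an 800 x 600 scan streamed at 30 fps,
# assuming x, y, z stored as 32-bit floats (12 bytes per point).
points_per_frame = 800 * 600
bytes_per_point = 3 * 4
fps = 30
gbps = points_per_frame * bytes_per_point * 8 * fps / 1e9
print(gbps)  # ~1.38, i.e. "over 1.3 Gbps"
```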
Finally, the Portal-s system is presented, which is an integration of all the aforementioned technologies into a holistic software and hardware system that enables real-time 3D video conferencing with correct mutual eye gaze. To overcome the loss of eye contact in conventional video conferencing, Portal-s makes use of dual structured-light scanners that capture through the same optical axis as the display. The real-time 3D video frames generated on the GPU are then compressed using the Holovideo technique. This allows the 3D video to be streamed across a conventional network or the Internet, and redisplayed at a remote node for another user on the holographic display glass. Utilizing two connected Portal-s nodes, users of the system can engage in 3D video conferencing with natural eye gaze established. In conclusion, this dissertation research substantially advances the field of real-time 3D scanning and its applications. Contributions of this research span both academic and industrial practice, where the use of this information has allowed users new methods of interaction with and analysis of the 3D world around them.

    High-quality, real-time 3D video visualization in head mounted displays

    The main goal of this thesis research was to develop the ability to visualize high-quality, three-dimensional (3D) data within a virtual reality head mounted display (HMD). High-quality 3D data collection has become easier in recent years due to the development of 3D scanning technologies such as structured light methods. Structured light scanning and modern 3D data compression techniques have improved to the point at which 3D data can be captured, processed, compressed, streamed across a network, decompressed, reconstructed, and visualized, all in near real time. Now the question becomes: what can be done with this live 3D information? A web application allows for real-time visualization of and interaction with this 3D video on the web. Streaming this data to the web allows for greater ease of access by a larger population. In the past, only two-dimensional (2D) video streaming has been available to the public via the web or installed desktop software. Commonly, 2D video streaming technologies, such as Skype, FaceTime or Google Hangouts, are used to connect people around the world for both business and recreational purposes. As society increasingly conducts itself in online environments, improvements to these telecommunication and telecollaboration technologies must be made, as current systems have reached their limitations. These improvements are to ensure that interactions are as natural and as user-friendly as possible. One resolution to the limitations imposed by 2D video streaming is to stream 3D video via the aforementioned technologies to a user in a virtual reality HMD. With 3D data, improvements such as eye-gaze correction, a natural angle of viewing, and more can be accomplished. One common advantage of using 3D data in lieu of 2D data is what can be done with it during redisplay. For example, when a viewer moves about their environment in physical space while on Skype, the 2D image on their computer monitor does not change; however, with an HMD, the user can naturally view and move about their partner in 3D space almost as if they were sitting directly across from them. With these improvements, increased user perception and level of immersion in the digital world have been achieved. This allows users to perform at an increased level of efficiency in telecollaboration and telecommunication environments due to the increased ability to visualize and communicate more naturally with another human being. This thesis presents preliminary results which support the notion that users better perceive their environments and also have a greater sense of interpersonal communication when immersed in a 3D video scenario as opposed to a 2D video scenario. This novel technology utilizes high-quality, real-time 3D scanning and 3D compression techniques, which in turn allow the user to experience a realistic reconstruction within a virtual reality HMD.

    Image database system for glaucoma diagnosis support

    This master's thesis gives an overview of standard and advanced eye examination methods used for glaucoma diagnosis in its early stage. Based on this theoretical knowledge, a web-based information system for ophthalmologists with three main aims is implemented. The first aim is the possibility to share medical data of a specific patient without sending the patient's personal data through the Internet. The second aim is to create a patient account based on a complete eye examination procedure. The last aim is to improve the HRT diagnostic method with an image registration algorithm for the fundus and intensity images and to create a web-based 3D visualization of the optic nerve head. This master's thesis is part of a project based on DAAD cooperation between the Department of Biomedical Engineering, Brno University of Technology, the Eye Clinic in Erlangen, and the Department of Computer Science, Friedrich-Alexander University, Erlangen-Nurnberg.
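    The registration step mentioned above aligns a color fundus photograph with an HRT intensity image. One common building block for such rigid alignment is phase correlation; the sketch below recovers a pure translation between two images and is only a generic illustration, not the algorithm implemented in the thesis:

```python
import numpy as np

def phase_correlation_shift(fixed: np.ndarray, moving: np.ndarray):
    """Estimate the integer (row, col) translation of `moving` relative to `fixed`.

    Phase correlation handles only translation; a full fundus/intensity
    registration would typically also estimate rotation and scale.
    """
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross_power = np.conj(F) * M
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase difference
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices back into the signed range [-size/2, size/2)
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Example: a synthetic image shifted by (5, -3) pixels is recovered
rng = np.random.default_rng(1)
img = rng.random((128, 128))
moved = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(img, moved))  # -> (5, -3)
```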