
    Qualitative Evaluation of Data Compression in Real-time Ultrasound Imaging

    The purpose of this project was to qualitatively evaluate real-time ultrasound imaging, using objective and subjective techniques, to determine the minimum bandwidth required for clinical diagnosis of various anatomical and pathological states. In the experimental setup, live ultrasound video samples representing the most common clinical examinations were compressed at 128, 256, 384, 768, 1152 and 1536 kbps using a compressor-decompressor (CODEC) adhering to International Telecommunication Union (ITU-T) recommendation H.261. A protocol for qualitative evaluation was developed, and subjective and objective testing were performed based on this protocol. Subjective methods comprised inter-rater reliability tests using kappa statistics and three-way Analysis of Variance (ANOVA) using General Linear Models (GLM). Objective testing was performed using histogram analysis and estimation of peak signal-to-noise ratios. The kappa scores for all bandwidths greater than 256 kbps indicated good inter-rater reliability and minimal variation in confidence levels. Using the results from GLM and ANOVA, we could not establish a trend of degradation in observer confidence with increasing compression ratios. The histogram analysis showed a linear increase in standard deviation values, indicating a linear scatter in pixel intensity with increasing compression ratios. Although higher compression levels were evaluated, only video clips with bandwidths greater than 256 kbps displayed temporal and spatial resolution good enough to make clinical diagnoses of various anatomical and pathological states. The evaluations also indicate that compressed real-time ultrasound imagery using H.261 can be transmitted over T1 or ADSL networks.
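    The objective testing above relies on peak signal-to-noise ratio; as a minimal sketch of how PSNR is conventionally computed between an original and a compressed frame (standard practice expressed with numpy, not code from the study):

        import numpy as np

        def psnr(original, compressed, peak=255.0):
            """Peak signal-to-noise ratio in dB between two same-sized 8-bit frames."""
            diff = original.astype(np.float64) - compressed.astype(np.float64)
            mse = np.mean(diff ** 2)     # mean squared error
            if mse == 0:
                return float("inf")      # identical frames
            return 10.0 * np.log10(peak ** 2 / mse)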

    VLSI architecture design approaches for real-time video processing

    This paper discusses programmable and dedicated approaches for real-time video processing applications. Various VLSI architectures, including design examples of both approaches, are reviewed. Finally, several practical designs for real-time video processing applications are discussed to provide VLSI designers with guidelines for further real-time video processing design work.

    Compare multimedia frameworks in mobile platforms

    Multimedia is currently one of the most important features of mobile devices. Many modern mobile platforms use a centralized software stack, called a multimedia framework, to handle multimedia requirements. A multimedia framework belongs to the middleware layer of a mobile operating system. It can be considered a bridge that connects the mobile operating system kernel and hardware drivers with UI applications. It supplies high-level APIs that offer simple and easy solutions for complicated multimedia tasks to UI application developers. A multimedia framework also manages and utilizes low-level system software and hardware in an efficient manner, offering a centralized solution between high-level demands and low-level system resources. In this M.Sc. thesis project, we have studied, analyzed and compared the open-source GStreamer, Android Stagefright and Microsoft Silverlight Media Framework from several perspectives. Among the comparison perspectives are architecture, supported use cases, extensibility, implementation language and programming language support (bindings), developer support, and legal status. One of the main contributions of this thesis is a detailed clarification of the strengths and weaknesses of each framework. Furthermore, the thesis should serve as decision-making guidance when one needs to select a multimedia framework for a project. Moreover, to give a concrete impression of the three multimedia frameworks, a basic media player implementation is demonstrated with source code in the thesis.
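    To give a flavour of the high-level APIs such frameworks expose, here is a minimal playback sketch using GStreamer's Python bindings (one of the three frameworks compared; the media URI is a placeholder, and the thesis's own demonstration player may differ):

        import gi
        gi.require_version("Gst", "1.0")
        from gi.repository import Gst

        Gst.init(None)

        # playbin is GStreamer's high-level playback element: it assembles the
        # demuxing, decoding and rendering pipeline internally.
        player = Gst.ElementFactory.make("playbin", "player")
        player.set_property("uri", "file:///path/to/clip.mp4")  # placeholder URI
        player.set_state(Gst.State.PLAYING)

        # Block until end-of-stream or error, then release resources.
        bus = player.get_bus()
        bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                               Gst.MessageType.EOS | Gst.MessageType.ERROR)
        player.set_state(Gst.State.NULL)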

    Energy efficient hardware acceleration of multimedia processing tools

    The world of mobile devices is experiencing an ongoing trend of feature enhancement and general-purpose multimedia platform convergence. This trend poses many grand challenges, the most pressing being limited battery life as a consequence of delivering computationally demanding features. The envisaged mobile application features can be considered to be accelerated by a set of underpinning hardware blocks. Based on the survey that this thesis presents on modern video compression standards and their associated enabling technologies, it is concluded that tight energy and throughput constraints can still be effectively tackled at the algorithmic level in order to design reusable optimised hardware acceleration cores. To prove these conclusions, the work in this thesis focuses on two of the basic enabling technologies that support mobile video applications, namely the Shape Adaptive Discrete Cosine Transform (SA-DCT) and its inverse, the SA-IDCT. The hardware architectures presented in this work have been designed with energy efficiency in mind. This goal is achieved by employing high-level techniques such as redundant computation elimination, parallelism and low-switching computation structures. Both architectures compare favourably against the relevant prior art in the literature. The SA-DCT/IDCT technologies are instances of a more general computation: both are Constant Matrix Multiplication (CMM) operations. Thus, this thesis also proposes an algorithm for the efficient hardware design of any general CMM-based enabling technology. The proposed algorithm leverages the effective solution-search capability of genetic programming. A bonus feature of the proposed modelling approach is that it is further amenable to hardware acceleration. Another bonus feature is an early-exit mechanism that achieves large search-space reductions. Results show an improvement on state-of-the-art algorithms, with future potential for even greater savings.
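    Since the SA-DCT is presented as an instance of constant matrix multiplication, a small numpy sketch of a 1-D DCT written explicitly as multiplication by a constant matrix may help fix the idea (illustrative only; the thesis targets hardware, not numpy):

        import numpy as np

        def dct_matrix(n=8):
            """Constant matrix C such that y = C @ x is the orthonormal DCT-II of x."""
            k = np.arange(n).reshape(-1, 1)   # output (frequency) index
            i = np.arange(n).reshape(1, -1)   # input (sample) index
            c = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
            c[0, :] /= np.sqrt(2.0)           # scale the DC row for orthonormality
            return c

        x = np.arange(8, dtype=float)         # an arbitrary input block
        y = dct_matrix() @ x                  # the constant-matrix multiplication

    Because the matrix entries are fixed constants, a hardware implementation can replace general multipliers with shared shift-and-add structures; choosing a good sharing is the kind of search problem the genetic-programming algorithm above is designed to explore.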

    Quality of Service Controlled Multimedia Transport Protocol

    This research looks at the design of an open transport protocol that supports a range of services, including multimedia, over low data-rate networks. Low data-rate multimedia applications require a system that provides quality of service (QoS) assurance and flexibility. One promising field is the area of content-based coding. Content-based systems use an array of protocols to select the optimum set of coding algorithms. A content-based transport protocol integrates a content-based application with a transmission network. General transport protocols form a bottleneck in low data-rate multimedia communications by limiting throughput or by not maintaining timing requirements. This work presents an original model of a transport protocol that eliminates the bottleneck by introducing a flexible yet efficient algorithm that uses an open approach to flexibility and a holistic architecture to promote QoS. The flexibility and transparency come in the form of a fixed syntax that provides a set of transport protocol semantics. The media QoS is maintained by defining a generic descriptor. Overall, the structure of the protocol is based on a single adaptable algorithm that supports application independence, network independence and quality of service. The transport protocol was evaluated through a set of assessments: off-line; off-line for a specific application; and on-line for a specific application. Application contexts used MPEG-4 test material, where the on-line assessment used a modified MPEG-4 player. The performance of the QoS-controlled transport protocol is often better than other schemes when appropriate QoS-controlled management algorithms are selected. This is shown first for an off-line assessment where performance is compared between the QoS-controlled multiplexer, an emulated MPEG-4 FlexMux multiplexer scheme, and the target requirements. The performance is also shown to be better in a real environment when the QoS-controlled multiplexer is compared with the real MPEG-4 FlexMux scheme.
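    The "generic descriptor" that maintains media QoS is not detailed in this abstract; purely as a hypothetical illustration of what such a descriptor might carry, a sketch in Python (all field names are assumptions, not the thesis's syntax):

        from dataclasses import dataclass

        @dataclass
        class MediaQoSDescriptor:
            """Hypothetical generic QoS descriptor for one media stream.

            All field names are illustrative assumptions, not the thesis's syntax.
            """
            stream_id: int
            min_throughput_kbps: float  # sustained rate the stream requires
            max_delay_ms: float         # end-to-end delay bound
            max_jitter_ms: float        # tolerated delay variation
            loss_tolerance: float       # acceptable packet-loss fraction (0.0-1.0)
            priority: int               # relative weight for the multiplexer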

    Enhancing a Neurosurgical Imaging System with a PC-based Video Processing Solution

    This work presents a PC-based prototype video processing application developed to be used with a specific neurosurgical imaging device, the OPMI® Pentero™ operating microscope, in the Department of Neurosurgery of Helsinki University Central Hospital at Töölö, Helsinki. The motivation for implementing the software was the lack of some clinically important features in the imaging system provided by the microscope. The imaging system is used as an online diagnostic aid during surgery. The microscope has two internal video cameras: one for regular white-light imaging and one for near-infrared fluorescence imaging, used for indocyanine green videoangiography. The footage of the microscope's current imaging mode is accessed via the composite auxiliary output of the device. The microscope also has an external high-resolution white-light video camera, accessed via a composite output of a separate video hub. The PC was chosen as the video processing platform for its unparalleled combination of prototyping and high-throughput video processing capabilities. A thorough analysis of the platform and of efficient video processing methods was conducted in the thesis, and the results were used in the design of the imaging station. The features found feasible during the project were incorporated into a video processing application running on the GNU/Linux distribution Ubuntu. The clinical usefulness of the implemented features was ensured beforehand by consulting the neurosurgeons using the original system. The most significant shortcomings of the original imaging system were mended in this work. The key features of the developed application include live streaming, simultaneous streaming and recording, and playback of up to two video streams. The playback mode provides full media player controls, with frame-by-frame precision rewinding, in an intuitive and responsive interface. A single view and a side-by-side comparison mode are provided for the streams. The former gives more detail, while the latter can be used, for example, for before-after and anatomic-angiographic comparisons.
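    Simultaneous streaming and recording of a live source, one of the key features listed, is commonly built as a tee'd pipeline; a minimal sketch using GStreamer on Linux (the thesis runs on Ubuntu but does not state its exact toolkit, and the device path and filename are placeholders):

        import gi
        gi.require_version("Gst", "1.0")
        from gi.repository import Gst

        Gst.init(None)

        # One capture device split by a tee: one branch is shown live, the
        # other is encoded and written to disk at the same time.
        pipeline = Gst.parse_launch(
            "v4l2src device=/dev/video0 ! tee name=t "
            "t. ! queue ! videoconvert ! autovideosink "
            "t. ! queue ! videoconvert ! x264enc tune=zerolatency "
            "! mp4mux ! filesink location=recording.mp4"
        )
        pipeline.set_state(Gst.State.PLAYING)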

    QoS adaptation in multimedia multicast conference applications for e-learning services

    The evolution of the World Wide Web (WWW) service has incorporated new distributed multimedia conference applications, powering a new generation of e-learning development and allowing improved interactivity and pro-human relations. Groupware applications are increasingly representative in the Internet home applications market; however, the Quality of Service (QoS) provided by the network is still a limitation impairing their performance. Such applications have found in multicast technology an ally contributing to their efficient implementation and scalability. Additionally, considering QoS as a design goal at the application level becomes crucial for groupware development, enabling QoS proactivity in applications. The applications' ability to adapt themselves dynamically according to resource availability can be considered a quality factor. Tolerant real-time applications, such as videoconferences, are in the front line to benefit from QoS adaptation. However, not all include adaptive technology able to provide both end-system and network quality awareness. Adaptation, in these cases, can be achieved by introducing a multiplatform middleware layer responsible for supervising the applications' resources (enabling allocation or limitation) based on the available processing and networking capabilities. Congregating these technological contributions, an adaptive platform has been developed integrating public-domain multicast tools, applied to a web-based distance learning system. The system is user-centered (e-student), aiming at good pedagogical practices and proactive usability for multimedia and network resources. The services provided, including QoS-adapted interactive multimedia multicast conferences (MMC), are fully integrated and transparent to end-users. QoS adaptation, when treated systematically in tolerant real-time applications, offers advantages in group scalability and QoS sustainability in heterogeneous and unpredictable environments such as the Internet.
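    As a rough illustration of the middleware-driven adaptation described above (not code from the paper; the probe and encoder interfaces are hypothetical), a periodic control loop that limits a sender's bitrate to the measured available capacity might look like this:

        import time

        class Encoder:
            """Hypothetical encoder handle; stands in for a real codec API."""
            def set_bitrate_kbps(self, kbps):
                print(f"encoder bitrate set to {kbps:.0f} kbps")

        def measure_available_kbps():
            """Placeholder probe; real middleware would query the network/OS."""
            return 512.0

        def adapt(encoder, floor=64.0, ceiling=2048.0, headroom=0.8, period_s=1.0):
            # Periodically clamp the sending rate to a fraction of what the
            # network currently offers, within the codec's usable range.
            for _ in range(10):
                available = measure_available_kbps()
                target = max(floor, min(ceiling, headroom * available))
                encoder.set_bitrate_kbps(target)
                time.sleep(period_s)

        adapt(Encoder())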

    Image compression techniques using vector quantization


    Using machine learning to select and optimise multiple objectives in media compression

    The growing complexity of emerging image and video compression standards means additional demands on computational time and energy resources in a variety of environments. Additionally, the steady increase in sensor resolution, display resolution, and the demand for increasingly high-quality media in consumer and professional applications also mean that there is an increasing quantity of media being compressed. This work focuses on a methodology for improving and understanding the quality of media compression algorithms using an empirical approach. Consequently, the outcomes of this research can be deployed on existing standard compression algorithms, but are also likely to be applicable to future standards without substantial redevelopment, increasing productivity and decreasing time-to-market. Using machine learning techniques, this thesis proposes a means of using past information about how images and videos are compressed in terms of content, and leveraging this information to guide and improve industry-standard media compressors in order to achieve the desired outcome in a time- and energy-efficient way. The methodology is implemented and evaluated on JPEG, WebP and x265 codecs, allowing the system to automatically target multiple performance characteristics like file size, image quality, compression time and efficiency, based on user preferences. Compared to previous work, this system is able to achieve a prediction error three times smaller for quality and size for JPEG, and a four-times speed-up of compression for WebP, targeting the same objectives. For x265 video compression, the system allows multiple objectives to be considered simultaneously, allowing speedier encoding at similar levels of quality.
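    One way to realize the kind of empirical, learning-guided compression described above (a sketch under assumed features and toy data, not the thesis's actual pipeline) is to train a regressor that predicts output file size from image features and an encoder setting, then choose the best setting that fits a size budget:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Toy corpus: (mean luma, luma variance, JPEG quality) -> file size in KB.
        # In practice the rows would come from compressing many real images.
        X = np.array([[120, 800, 50], [120, 800, 90],
                      [60, 200, 50], [60, 200, 90]], dtype=float)
        y = np.array([38.0, 95.0, 21.0, 60.0])

        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

        def pick_quality(features, size_budget_kb, qualities=range(10, 96, 5)):
            """Highest quality setting whose predicted size fits the budget."""
            fits = [q for q in qualities
                    if model.predict([list(features) + [q]])[0] <= size_budget_kb]
            return max(fits) if fits else min(qualities)

        print(pick_quality((100, 500), size_budget_kb=50.0))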

    Implications of Implementing HDTV Over Digital Subscriber Line Networks

    Get PDF
    This thesis addresses the different challenges a telecommunications company would face when trying to implement an HDTV video service over a Digital Subscriber Line (DSL) connection. Each challenge is discussed in detail, and a technology, protocol, or method is suggested to overcome that particular challenge. One of the biggest challenges is creating a network architecture that can provide enough bandwidth to support video over a network that was originally designed for voice traffic. The majority of the network connections to customer premises in a telephony network consist of a copper pair. This type of connection is not optimal for high-bandwidth services. This limitation can be overcome using Gigabit Ethernet (GE) over fiber in the core part of the network and VDSL2 in the access part of the network. For the purposes of this document, the core portion of the network is considered to be an area equal to several counties, or approximately 50 miles in radius. The core network starts at the primary central office (CO) and spreads out to central offices in suburbs and small towns. The primary central office is a central point in the telecom operator's network. Large trunks are propagated from the primary central office to smaller central offices, making up the core network. The access portion of the network is considered to be the area within a suburb or small town from the central office to a subscriber's home. Appendix A, located on page 60, contains a network diagram illustrating the scope of each of the different portions of the network. Consideration must also be given to the internal network of the residence, such as category 5 (Cat5) cable or higher grade and network equipment that can provide up to 30 Megabits per second (Mbps) connections or throughput. The equipment in the telecommunications network also plays a part in meeting the challenge of 30 Mbps bandwidth. GE switches should be used with single-mode fiber optic cable in the core part of the network. Digital Subscriber Line Access Multiplexers (DSLAMs) with the capability to filter Internet Group Management Protocol (IGMP) messages should be used in the access part of the network to make efficient use of bandwidth. Placement of this equipment, and how the data is aggregated, is another issue to consider when implementing HDTV service. Another major challenge facing the implementation of HDTV over DSL networks is controlling quality of service (QoS) throughout the network. Class of Service (CoS) and Differentiated Services (DiffServ) are QoS methods that would enable video packets to have higher priority and less delay than other data packets. The consumer could have data, video, and voice traffic all over the same DSL connection. Data, video and voice packets would need to have different priorities in order to maintain appropriate QoS levels for each service. The use of advanced video encoding technology will be essential to the success of the video service. MPEG-2, MPEG-4, and Windows Media 9 are just a few of the video encoding technologies that could be used to reduce the bandwidth necessary for HDTV. The advancement of this technology is essential to allow telecommunications providers to offer HDTV. Another challenge for the telecom operator concerns the security of the network and service after implementation. Theft of service is another area the telecom operator will be forced to address; cable operators currently face this issue and lose millions of dollars in revenue. Authentication, IP filtering and MAC address blocking are a few possible solutions to this problem.
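    The DiffServ prioritization suggested above is typically realized by marking each packet's DSCP field; a minimal sketch using Python sockets (the AF41 video class is a conventional choice, not one mandated by the thesis, and the address is a placeholder):

        import socket

        # DSCP AF41 (decimal 34) is a conventional class for interactive video;
        # the IP TOS byte carries the DSCP value in its upper six bits.
        DSCP_AF41 = 34
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_AF41 << 2)

        # DiffServ-aware routers can now queue this traffic ahead of
        # best-effort data packets sharing the same DSL line.
        sock.sendto(b"video payload", ("192.0.2.10", 5004))  # placeholder address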