Video Compression using Neural Weight Step and Huffman Coding Techniques
Background:
This paper proposes a Hierarchical Video Compression Scheme (HVCS) with three hierarchical quality layers and a Recurrent Quality Enhancement (RQEN) network. Image compression techniques are used to compress frames in the first layer, where frames have the highest quality. Using a high-quality frame as a reference, a Bi-Directional Deep Compression (BDC) network is proposed for frame compression in the second layer at considerable quality. In the third layer, low quality is used for frame compression with the adopted Single Motion Compression (SMC) network, which proposes a single motion map for motion estimation across multiple frames. As a result, SMC provides motion information using fewer bits. At the decoding stage, a weighted Recurrent Quality Enhancement (RQEN) network is developed that takes both the bit stream and the compressed frames as inputs. In the RQEN cell, the update signal and memory are weighted using quality features so that multi-frame information positively influences enhancement. In this paper, HVCS adopts hierarchical quality to benefit the efficiency of frame coding, whereby high-quality information improves frame compression and enhances the low-quality frames at the encoding and decoding stages, respectively. Experimental results validate that the proposed HVCS approach outperforms state-of-the-art compression methods.
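The quality-weighted recurrent update can be sketched as follows. This is a toy illustration of the idea only, not the paper's RQEN architecture: the cell, its dimensions, and the quality values are assumptions, and a real model would learn the weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class QualityWeightedCell:
    """Toy GRU-style cell whose update gate is scaled by a per-frame
    quality feature, loosely mirroring the RQEN idea that higher-quality
    frames should contribute more to the recurrent state."""

    def __init__(self, dim, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        # small random weights; a real model would learn these
        self.Wz = rng.standard_normal((dim, 2 * dim)) * 0.1
        self.Wh = rng.standard_normal((dim, 2 * dim)) * 0.1

    def step(self, h, x, quality):
        """h: hidden state, x: frame features, quality: scalar in [0, 1]."""
        hx = np.concatenate([h, x])
        z = sigmoid(self.Wz @ hx) * quality   # quality-weighted update gate
        h_cand = np.tanh(self.Wh @ hx)        # candidate memory
        return (1.0 - z) * h + z * h_cand     # low quality => keep old state

cell = QualityWeightedCell(dim=8)
h = np.zeros(8)
for q in [1.0, 0.6, 0.3]:                     # e.g. layer-1, -2, -3 qualities
    h = cell.step(h, np.ones(8), q)
```

With this gating, a frame with quality 0 leaves the hidden state untouched, while a layer-1 frame (quality 1) contributes fully.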
Materials and Methods:
Tables 1 and 2 report the rate-distortion values obtained on both video datasets. As mentioned above, PSNR and MS-SSIM are used for quality evaluation, and bit-rates are measured in bits per pixel (bpp). Table 1 shows the PSNR results, where the proposed compression model outperforms other methods such as Chao et al. [7] and the optimized methods of [1]. In addition, it outperforms H.265 on the standard JCT-VC dataset. On the other hand, the proposed compression scheme yielded better bit-rate performance than H.265 on UVG. As shown in Table 2, the MS-SSIM evaluation favours the proposed scheme over all other learned approaches, and it also surpasses H.264 and H.265. Regarding bit-rate performance on UVG, Lee et al. [11] is comparable, while Guo et al. [10] performs below H.265. On JCT-VC, DVC [10] is only comparable with H.265. In contrast, the rate-distortion performance of HVCS is clearly better than that of H.265.
Furthermore, the Bjøntegaard Delta Bit-Rate (BDBR) [47] is also computed with H.265 as the anchor. The BDBR measure computes the average bit-rate difference relative to the H.265 anchor, where lower BDBR values indicate better performance [48]. Table 3 reports BDBR performance based on both PSNR and MS-SSIM, in which negative numbers indicate a bit-rate reduction relative to the anchor. These results outperform H.265, and bold numbers mark the best results achieved by learned methods. Table 3 also provides a fair comparison with the (MS-SSIM and PSNR) optimized DVC techniques [10] against the H.265 anchor.
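The BDBR computation can be sketched with the standard Bjøntegaard procedure: fit a cubic polynomial to log bit-rate as a function of quality for both codecs, integrate the difference over the overlapping quality range, and convert back to a percentage. This is the commonly used definition, not necessarily the exact implementation behind Table 3; the sample rate-distortion points are illustrative.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjoentegaard delta bit-rate (%): average bit-rate difference of the
    test codec relative to the anchor at equal quality. Negative = savings."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    # cubic fit of log-rate as a function of PSNR
    pa = np.polyfit(psnr_anchor, lr_a, 3)
    pt = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    # integrate both fits over the overlapping quality range
    ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    avg_diff = (it - ia) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0

# illustrative RD points: identical curves give 0%; halving the bit-rate
# at every quality level gives roughly -50%
r = np.array([0.1, 0.2, 0.4, 0.8])
p = np.array([32.0, 34.0, 36.0, 38.0])
```

The same routine works with MS-SSIM in place of PSNR, matching the two BDBR columns of Table 3.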
Conclusion:
This work proposes a learned video compression scheme utilizing hierarchical frame quality with recurrent enhancement. Specifically, frames are divided into hierarchical levels 1, 2 and 3 of decreasing quality. For the first layer, image compression methods are used, while BDC and SMC are proposed for layers 2 and 3, respectively. The RQEN network is developed with frame quality, compressed frames and bit-rate information as inputs for multi-frame enhancement. Experimental results validated the efficiency of the proposed HVCS compression scheme.
As with other compression techniques, the frame structure is set manually in this scheme. A promising direction for future work is to develop DNNs that automatically learn the prediction and the hierarchy.
Automatic 3DS Conversion of Historical Aerial Photographs
In this paper we present a method for the generation of 3D stereo (3DS) pairs from sequences of historical aerial photographs. The goal of our work is to provide a stereoscopic display when the existing exposures are in a monocular sequence. Each input image is processed using its neighbours and a synthetic image is rendered, which, together with the original one, forms a stereo pair. Promising results on real images taken from a historical photo archive are shown, corroborating the viability of generating 3DS data from monocular footage.
Highly efficient low-level feature extraction for video representation and retrieval.
Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on the content and the rich semantics involved. Current Content Based Video Indexing and Retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics.
This work focuses on the issues of efficiency and scalability in video indexing and retrieval to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed. It is based on the prediction information extracted directly from the compressed-domain features and on robust scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed.
Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining the high precision and recall of the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking the video clips with a limited lexicon of related keywords.
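The compressed-domain temporal analysis above relies on prediction information that is already present in the bitstream. A minimal sketch of that idea, assuming an MPEG-style stream where a P-frame dominated by intra-coded macroblocks (i.e. poorly predicted from its predecessor) signals a content change; the ratios and the 0.5 threshold are illustrative assumptions, not the thesis's actual algorithm:

```python
# Sketch of compressed-domain shot-boundary detection: a P-frame whose
# macroblocks are mostly intra-coded could not be predicted from the
# previous frame, suggesting a shot change and a key-frame candidate.

def shot_boundaries(intra_ratios, threshold=0.5):
    """intra_ratios[i]: fraction of intra-coded macroblocks in P-frame i."""
    return [i for i, r in enumerate(intra_ratios) if r > threshold]

# illustrative per-frame intra ratios for a short sequence
ratios = [0.05, 0.08, 0.92, 0.10, 0.07, 0.88, 0.04]
print(shot_boundaries(ratios))  # frames 2 and 5 start new shots
```

Because the ratios come straight from the compressed stream, no full decoding is needed, which is what makes this class of methods fast enough for real time.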
Image Based View Synthesis
This dissertation deals with the image-based approach to synthesizing a virtual scene using sparse images or a video sequence, without the use of 3D models. In our scenario, a real dynamic or static scene is captured by a set of un-calibrated images from different viewpoints. After automatically recovering the geometric transformations between these images, a series of photo-realistic virtual views can be rendered and a virtual environment covered by these several static cameras can be synthesized. This image-based approach has applications in object recognition, object transfer, video synthesis and video compression. In this dissertation, I have contributed to several sub-problems related to image-based view synthesis. Before image-based view synthesis can be performed, images need to be segmented into individual objects. Assuming that a scene can be approximately described by multiple planar regions, I have developed a robust and novel approach to automatically extract a set of affine or projective transformations induced by these regions, correctly detect the occlusion pixels over multiple consecutive frames, and accurately segment the scene into several motion layers. First, a number of seed regions are determined using correspondences in two frames, and the seed regions are expanded and outliers rejected by employing the graph cuts method integrated with a level-set representation. Next, these initial regions are merged into several initial layers according to motion similarity. Third, the occlusion order constraints on multiple frames are explored, which guarantee that the occlusion area increases with the temporal order over a short period and effectively maintain segmentation consistency over multiple consecutive frames. Then the correct layer segmentation is obtained using a graph cuts algorithm, and the occlusions between the overlapping layers are explicitly determined.
Several experimental results are demonstrated to show that our approach is effective and robust. Recovering the geometric transformations among images of a scene is a prerequisite step for image-based view synthesis. I have developed a wide-baseline matching algorithm to identify the correspondences between two un-calibrated images, and to further determine the geometric relationship between images, such as epipolar geometry or a projective transformation. In our approach, a set of salient features, edge-corners, are detected to provide robust and consistent matching primitives. Then, based on the Singular Value Decomposition (SVD) of an affine matrix, we effectively quantize the search space into two independent subspaces for rotation angle and scaling factor, and we use a two-stage affine matching algorithm to obtain robust matches between the two frames. The experimental results on a number of wide-baseline images strongly demonstrate that our matching method outperforms state-of-the-art algorithms even under significant camera motion, illumination variation, occlusion, and self-similarity. Given the wide-baseline matches among images, I have developed a novel method for dynamic view morphing. Dynamic view morphing deals with scenes containing moving objects in the presence of camera motion. The objects can be rigid or non-rigid, and each can move in any orientation or direction. The proposed method can generate a series of continuous and physically accurate intermediate views from only two reference images without any knowledge of 3D structure. The procedure consists of three steps: segmentation, morphing and post-warping. Given a boundary connection constraint, the source and target scenes are segmented into several layers for morphing. Based on the decomposition of the affine transformation between corresponding points, we uniquely determine a physically correct path for post-warping by the least-distortion method.
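The SVD-based separation of an affine matrix into rotation and scaling can be sketched as follows. The decomposition itself is generic linear algebra; the example matrix and function name are illustrative, not taken from the dissertation.

```python
import numpy as np

# SVD of the linear part of an affine transform, A = U S V^T, separates it
# into a rotation and anisotropic scaling, so a matching search can be
# quantized independently over rotation angle and scale factor.

def rotation_and_scales(A):
    U, S, Vt = np.linalg.svd(A)
    R = U @ Vt                           # nearest rotation (when det(A) > 0)
    angle = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees(angle), S          # rotation angle and two scale factors

# illustrative affine: rotate by 30 degrees and scale uniformly by 2
theta = np.radians(30.0)
A = 2.0 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
angle, scales = rotation_and_scales(A)
```

Searching over (angle, scale) separately, instead of over all four entries of the 2x2 matrix jointly, is what makes the two-stage matching tractable.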
I have successfully generalized the dynamic scene synthesis problem from a simple scene with only rotation to a dynamic scene containing non-rigid objects. My method can handle dynamic rigid or non-rigid objects, including complicated objects such as humans. Finally, I have also developed a novel algorithm for tri-view morphing. This is an efficient image-based method to navigate a scene based on only three wide-baseline un-calibrated images, without the explicit use of a 3D model. After automatically recovering corresponding points between each pair of images using our wide-baseline matching method, an accurate trifocal plane is extracted from the trifocal tensor implied in these three images. Next, employing a trinocular-stereo algorithm and a barycentric blending technique, we generate an arbitrary novel view to navigate the scene in a 2D space. Furthermore, after self-calibration of the cameras, a 3D model can also be correctly augmented into the virtual environment synthesized by the tri-view morphing algorithm. We have applied our view morphing framework to several interesting applications: 4D video synthesis, automatic target recognition, and multi-view morphing.
Performance analysis of an ATM network with multimedia traffic: a simulation study
Traffic and congestion control are important in enabling ATM networks to maintain the Quality of Service (QoS) required by end users. A Call Admission Control (CAC) strategy ensures that the network has sufficient resources available at the start of each call, but this does not prevent a traffic source from violating the negotiated contract. A policing strategy (User Parameter Control (UPC)) is also required to enforce the negotiated rates for a particular connection and to protect conforming users from network overload.
The aim of this work is to investigate traffic policing and bandwidth management at the User to Network Interface (UNI). A policing function based on the leaky bucket (LB) is proposed which offers improved performance for both real-time (RT) traffic, such as speech and video, and non-real-time (non-RT) traffic, mainly data, by taking the QoS requirements into account. A video cell in violation of the negotiated bit rate causes the remainder of the slice to be discarded; this 'tail clipping' protects the decoder from damaged video slices. Speech cells are coded using a frequency-domain coder, which places the most significant bits of a double speech sample into a high-priority cell and the least significant bits into a low-priority cell. In the case of congestion, the low-priority cell can be discarded with little impact on the intelligibility of the received speech. However, data cells require loss-free delivery and are buffered rather than being discarded or tagged for subsequent deletion. This triple strategy is termed the super leaky bucket (SLB).
Separate queues for RT and non-RT traffic are also proposed at the multiplexer, with non-pre-emptive priority service for RT traffic if the queue exceeds a predetermined threshold. If the RT queue continues to grow beyond a second threshold, then all low-priority cells (mainly speech) are discarded. This scheme protects non-RT traffic from being tagged and subsequently discarded, by queueing the cells and also by throttling back non-RT sources during periods of congestion. It also prevents the RT cells from being delayed excessively in the multiplexer queue.
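The leaky-bucket policing at the heart of the SLB can be sketched as follows. This is a generic minimal policer, assuming illustrative rate and depth values; the per-traffic-class actions (slice clipping, tagging, buffering) described above would hang off the non-conforming branch.

```python
# Minimal leaky-bucket policer sketch: cells conforming to the negotiated
# rate pass; cells that would overflow the bucket are flagged as
# violations (to be discarded, tagged, or to trigger slice clipping).

class LeakyBucket:
    def __init__(self, rate, depth):
        self.rate = rate    # bucket drain rate (cells per time unit)
        self.depth = depth  # maximum bucket fill (burst tolerance)
        self.fill = 0.0
        self.last = 0.0

    def conforms(self, t):
        """Offer one cell at time t; True if it conforms to the contract."""
        self.fill = max(0.0, self.fill - (t - self.last) * self.rate)
        self.last = t
        if self.fill + 1.0 <= self.depth:
            self.fill += 1.0
            return True
        return False        # violating cell

bucket = LeakyBucket(rate=1.0, depth=3)
# a burst of 5 back-to-back cells: the first 3 conform, then the bucket is full
verdicts = [bucket.conforms(t=0.0) for _ in range(5)]
```

The depth parameter is what tolerates short bursts from variable-bit-rate video sources while still enforcing the long-term negotiated rate.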
A simulation model has been designed and implemented to test the proposal. Realistic sources have been incorporated into the model to simulate the types of traffic which could be expected on an ATM network.
The results show that the SLB outperforms the standard LB for video cells. The number of cells discarded and the resulting number of damaged video slices are significantly reduced. Dual queues with cyclic service at the multiplexer also reduce the delays experienced by RT cells. The QoS for all categories of traffic is preserved.
Efficient compression of motion compensated residuals
Implementation and Algorithm Development of 3D ARFI and SWEI Imaging for in vivo Detection of Prostate Cancer
Prostate cancer (PCa) is the most common non-cutaneous cancer in men, with almost 30,000 deaths estimated to have occurred in the United States in 2014. Currently, the most widely utilized methods for screening men for prostate cancer are the digital rectal exam and prostate specific antigen analysis; however, these methods lack either high sensitivity or specificity, requiring needle biopsy to confirm the presence of cancer. The biopsies are conventionally performed with only B-mode ultrasound visualization of the organ and no targeting of specific regions of the prostate, although recently multi-parametric magnetic resonance imaging has shown promise for targeting biopsies. Earlier work has demonstrated the feasibility of acoustic radiation force impulse (ARFI) imaging and shear wave elasticity imaging (SWEI) to visualize cancer in the prostate; however, multiple challenges with both methods have been identified.
The aim of this thesis is to contribute to both the technical development and clinical applications of ARFI and SWEI imaging using the latest advancements in ultrasound imaging technology.
The introduction of the Siemens Acuson SC2000 provided multiple technological improvements over previous generations of ultrasound scanners, including an improved power supply, an arbitrary waveform generator, and additional parallel receive beamforming. In this thesis, these capabilities were utilized to improve both ARFI and SWEI imaging and to reduce acoustic exposure and acquisition duration. However, the SC2000 did not originally have radiation force imaging capabilities; therefore, a new tool set for prototyping these sequences was developed along with rapid data processing and display code. These tools leveraged the increasing availability of general-purpose computing on graphics processing units (GPUs) to significantly reduce the data processing time, facilitating real-time display for ultrasonic research systems.
These technical developments for both acquisition and processing were applied to investigate new methods for ARFI and SWEI imaging. Specifically, the power supply of the SC2000 allowed a new type of multi-focal-zone ARFI image to be acquired, which is shown to provide improved image quality over an extended depth of field. Additionally, a new algorithm for SWEI image processing was developed using an adaptive filter based on a maximum a posteriori estimator, demonstrating increases in the contrast-to-noise ratio of lesion targets upwards of 50%.
Finally, the optimized ARFI imaging methods were integrated with a transrectal ultrasound transducer to acquire volumetric in vivo data in patients undergoing robotic radical prostatectomy procedures in an ongoing study. When the study was initiated, it was recognized that the technological improvements of the Siemens Acuson SC2000 allowed the off-axis response to the radiation force excitation to be recorded concurrently without impacting ARFI image quality. This volumetric SWEI data was reconstructed retrospectively using the approaches developed in this thesis, but the images were of low quality. A further investigation identified multiple challenges with the SWEI sequence, which should be addressed in future studies. The ARFI image volumes were of very high quality and are currently being analyzed to assess the accuracy of ARFI for visualizing prostate anatomy and clinically significant prostate cancer tumors. After a blinded evaluation of the ARFI image volumes for suspicion of prostate cancer, three readers correctly identified 63% of all clinically significant tumors and 74% of clinically significant tumors in the posterior region, showing great promise for using ARFI in the context of prostate cancer visualization for targeting biopsies, focal therapy, and watchful waiting.
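The contrast-to-noise ratio the filtering work improves can be sketched numerically. One common definition is assumed here (the thesis may use a variant), and the lesion and background samples are synthetic illustrations:

```python
import numpy as np

# Contrast-to-noise ratio (CNR) of a lesion region against background,
# here taken as CNR = |mean_lesion - mean_bg| / sqrt(var_lesion + var_bg).

def cnr(lesion, background):
    return abs(lesion.mean() - background.mean()) / np.sqrt(
        lesion.var() + background.var())

rng = np.random.default_rng(1)
lesion = rng.normal(2.0, 0.5, 1000)      # illustrative stiffer-lesion pixels
background = rng.normal(1.0, 0.5, 1000)  # illustrative surrounding tissue
value = cnr(lesion, background)
```

An adaptive filter that suppresses noise (reducing the variances) without blurring the means directly raises this ratio, which is how a ~50% CNR gain for lesion targets can be quantified.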