30 research outputs found
Motion estimation algorithm and its hardware architecture for HEVC
Doctorate in Electrical Engineering. Video coding is used in applications such as video surveillance, video conferencing, video streaming, video broadcasting, and video storage. In a typical video coding standard, many algorithms are combined to compress a video; among them, motion estimation is the most complex task. It is therefore necessary to implement this task in real time using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its real-time implementation. The results show that the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum frequency of 241.6 MHz and can process 1080p@60Hz video with all variable block sizes specified in the HEVC standard and a motion vector search range of up to ±64 pixels.
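The block-matching kernel that such VLSI architectures accelerate can be sketched as follows. The exhaustive full search and all names below are illustrative, not the thesis's fast algorithm, but the SAD cost and the ±search-range interface are the standard formulation:

```python
def sad(block, ref_block):
    # Sum of absolute differences: the usual matching cost in integer motion estimation
    return sum(abs(a - b) for row_a, row_b in zip(block, ref_block)
               for a, b in zip(row_a, row_b))

def get_block(frame, x, y, size):
    # Extract a size x size block with top-left corner (x, y)
    return [row[x:x + size] for row in frame[y:y + size]]

def full_search(cur, ref, bx, by, size, search_range):
    """Exhaustive integer-pixel motion search around (bx, by).

    Returns the motion vector (dx, dy) minimizing SAD. Real-time hardware
    uses fast search patterns and parallel SAD trees instead of this
    brute-force scan, but the cost function is the same.
    """
    h, w = len(cur), len(cur[0])
    block = get_block(cur, bx, by, size)
    best_mv, best_cost = (0, 0), sad(block, get_block(ref, bx, by, size))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + size > w or y + size > h:
                continue  # candidate block would fall outside the reference frame
            cost = sad(block, get_block(ref, x, y, size))
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```

A ±64-pixel range, as supported by the proposed architecture, makes this scan (2·64+1)² = 16,641 candidate positions per block, which is why dedicated hardware is needed for real-time 1080p@60Hz.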
QOE-AWARE CONTENT DISTRIBUTION SYSTEMS FOR ADAPTIVE BITRATE VIDEO STREAMING
A prodigious increase in video streaming content, along with a simultaneous rise in end-system capabilities, has led to the proliferation of adaptive bitrate video streaming users on the Internet. Today, video streaming services range from Video-on-Demand services like traditional IPTV to more recent technologies such as immersive 3D experiences for live sports events. In order to meet the demands of these services, the multimedia and networking research community continues to strive toward efficiently delivering high-quality content across the Internet while also trying to minimize content storage and delivery costs.
The introduction of flexible and adaptable technologies such as compute and storage clouds, Network Function Virtualization, and Software Defined Networking continues to fuel content provider revenue. Today, content providers such as Google and Facebook build their own Software-Defined WANs to efficiently serve millions of users worldwide, while Netflix partners with ISPs such as AT&T (via Open Connect) to serve its content and with cloud providers such as Amazon (EC2) to manage the delivery of several petabytes of high-quality video content for millions of subscribers at a global scale. In recent years, the unprecedented growth of video traffic on the Internet has spurred several innovative systems, such as Software Defined Networks and Information Centric Networks, as well as inventive protocols such as QUIC, in an effort to keep up with this remarkable growth. While most existing systems continue to satisfy user requirements sub-optimally, future video streaming systems will require optimal management of storage and bandwidth resources several orders of magnitude larger than what is deployed today. Moreover, Quality-of-Experience metrics are becoming increasingly fine-grained in order to accurately quantify diverse content and consumer needs.
In this dissertation, we design and investigate innovative adaptive bitrate video streaming systems and analyze the implications of recent technologies on traditional streaming approaches using real-world experimentation methods. We provide useful insights for current and future content distribution network administrators to tackle Quality-of-Experience dilemmas and serve high-quality video content to users at a global scale. In order to show how Quality-of-Experience can benefit from core network architectural modifications, we design and evaluate prototypes for video streaming in Information Centric Networks and Software-Defined Networks. We also present a real-world, in-depth analysis of adaptive bitrate video streaming over protocols such as QUIC and MPQUIC to show how end-to-end protocol innovation can contribute substantial Quality-of-Experience benefits for adaptive bitrate video streaming systems. We investigate a cross-layer approach based on QUIC and observe that application-layer information can be successfully used to determine transport-layer parameters for ABR streaming applications.
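As background to the ABR systems discussed above, the core of any adaptive bitrate player is a rule that maps an estimated throughput (and often a buffer level) to one rendition of a bitrate ladder. A minimal throughput-based sketch follows; the ladder, safety factor, and buffer threshold are illustrative assumptions, not any particular player's algorithm:

```python
BITRATE_LADDER_KBPS = [350, 750, 1500, 3000, 6000]  # illustrative rendition ladder

def select_bitrate(throughput_kbps, buffer_s, safety=0.8,
                   low_buffer_s=10.0, ladder=BITRATE_LADDER_KBPS):
    """Pick the highest rendition sustainable at the estimated throughput.

    A throughput-based rule with a simple buffer guard: when the playback
    buffer is low, back off one level to reduce rebuffering risk. Production
    players combine throughput and buffer signals in more elaborate ways.
    """
    budget = throughput_kbps * safety          # discount the estimate for safety
    candidates = [r for r in ladder if r <= budget] or [ladder[0]]
    choice = candidates[-1]                     # highest affordable rendition
    if buffer_s < low_buffer_s and choice != ladder[0]:
        choice = ladder[ladder.index(choice) - 1]  # low buffer: drop one level
    return choice
```

A cross-layer design in the spirit described above would feed such application-layer state (ladder position, buffer level) down to the transport to tune its parameters.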
Over-the-top multimedia content delivery: a case study of Catch-up TV
Doctorate in Electrical Engineering. Over-The-Top (OTT) multimedia delivery is a very appealing approach for providing ubiquitous, flexible, and globally accessible services capable of low-cost and unrestrained device targeting. In spite of its appeal, the underlying delivery architecture must be carefully planned and optimized to maintain a high Quality-of-Experience (QoE) and rational resource usage, especially when migrating from
services running on managed networks with established quality guarantees. To address
the lack of holistic research works on OTT multimedia delivery systems, this
Thesis focuses on an end-to-end optimization challenge, considering a migration
use-case of a popular Catch-up TV service from managed IP Television (IPTV)
networks to OTT. A global study is conducted on the importance of Catch-up
TV and its impact in today's society, demonstrating the growing popularity of
this time-shift service, its relevance in the multimedia landscape, and fitness as
an OTT migration use-case. Catch-up TV consumption logs are obtained from
a Pay-TV operator's live production IPTV service containing over 1 million subscribers
to characterize demand and extract insights from service utilization at a
scale and scope not yet addressed in the literature. This characterization is used
to build demand forecasting models relying on machine learning techniques to enable
static and dynamic optimization of OTT multimedia delivery solutions, which
are able to produce accurate bandwidth and storage requirements' forecasts, and
may be used to achieve considerable power and cost savings whilst maintaining a
high QoE. A novel caching algorithm, Most Popularly Used (MPU), is proposed,
implemented, and shown to outperform established caching algorithms in both
simulation and experimental scenarios. The need for accurate QoE measurements
in OTT scenarios supporting HTTP Adaptive Streaming (HAS) motivates the creation
of a new QoE model capable of taking into account the impact of key HAS
aspects. By addressing the complete content delivery pipeline in the envisioned
content-aware OTT Content Delivery Network (CDN), this Thesis demonstrates
that significant improvements are possible in next-generation multimedia delivery solutions.
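The abstract names the MPU (Most Popularly Used) caching algorithm without defining its internals. The following is a generic popularity-driven cache sketch in that spirit (admit and evict by request counts), with all names illustrative; it is not the published MPU design:

```python
from collections import defaultdict

class PopularityCache:
    """Popularity-based cache sketch: keep the most-requested items.

    Tracks a global request count per item and evicts the least popular
    cached item when a newcomer has become more popular than it.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.requests = defaultdict(int)   # request count per item id
        self.store = set()                 # ids currently cached

    def request(self, item):
        """Record a request; return True on a cache hit, False on a miss."""
        self.requests[item] += 1
        if item in self.store:
            return True
        if len(self.store) < self.capacity:
            self.store.add(item)           # free slot: cache unconditionally
        else:
            # Evict the least popular cached item only if the requested
            # item has overtaken it in popularity.
            victim = min(self.store, key=lambda i: self.requests[i])
            if self.requests[item] > self.requests[victim]:
                self.store.remove(victim)
                self.store.add(item)
        return False
```

For Catch-up TV workloads, where a small set of programs dominates demand shortly after broadcast, popularity counting of this kind is a natural baseline against which algorithms such as MPU, LRU, or LFU can be compared.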
MediaSync: Handbook on Multimedia Synchronization
This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute these experiences.
Interoperability of wireless communication technologies in hybrid networks: Evaluation of end-to-end interoperability issues and quality of service requirements
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Hybrid Networks employing wireless communication technologies have brought closer the vision of communication "anywhere, any time, with anyone". Such communication technologies consist of various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. All these technologies naturally share some common characteristics, but there are also many important differences. New advances in these technologies are emerging very rapidly, with the advent of new models, characteristics, protocols, and architectures. This rapid evolution imposes many challenges and issues to be addressed, of particular importance being the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi) IEEE 802.11, Worldwide Interoperability for Microwave Access (WiMAX) IEEE 802.16, Single Channel Per Carrier (SCPC), Digital Video Broadcasting over Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting Return Channel via Satellite (DVB-RCS). Because of these differences, the technologies do not generally interoperate easily with each other, owing to various interoperability and Quality of Service (QoS) issues.
The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delays, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services and availability, on hybrid wireless communication networks (employing both satellite broadband and terrestrial wireless technologies).
The thesis provides an introduction to wireless communication technologies followed by a review of previous research studies on Hybrid Networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC). Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, physical and MAC layer design and development issues. Although some previous studies provide valuable contributions to this area of research, they are limited to link layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of the studies cover all aspects of end-to-end interoperability issues and QoS requirements; such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, unicast and multicast performance, at end-to-end level, on Hybrid wireless networks.
Interoperability issues are discussed in detail and a comparison of the different technologies and protocols was done using appropriate testing tools, assessing various performance measures including: bandwidth, delay, jitter, latency, packet loss, throughput and availability testing. The standards, protocol suite/ models and architectures for Wi-Fi, WiMAX, DVB-RCS, SCPC, alongside with different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, the testing was conducted using various realistic test scenarios on real networks, comprising variable numbers and types of nodes. The data, traces, packets, and files were captured from various live scenarios and sites. The test results were analysed in order to measure and compare the characteristics of wireless technologies, devices, protocols and applications.
The motivation of this research is to study all the end-to-end interoperability issues and Quality of Service requirements for rapidly growing Hybrid Networks in a comprehensive and systematic way.
The significance of this research is that it is based on a comprehensive and systematic investigation of issues and facts, instead of hypothetical ideas/scenarios or simulations, which informed the design of a test methodology for empirical data gathering by real network testing, suitable for the measurement of hybrid network single-link or end-to-end issues using proven test tools.
This systematic investigation of the issues encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, performance of audio and video session, multicast and unicast performance, and stress testing. This testing covers most common test scenarios in hybrid networks and gives recommendations in achieving good end-to-end interoperability and QoS in hybrid networks.
Contributions of study include the identification of gaps in the research, a description of interoperability issues, a comparison of most common test tools, the development of a generic test plan, a new testing process and methodology, analysis and network design recommendations for end-to-end interoperability issues and QoS requirements. This covers the complete cycle of this research.
It is found that UDP is more suitable for hybrid wireless network as compared to TCP, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic which requires strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is the delay of approximately 600 to 680 ms due to the long distance factor (and the finite speed of light) when communicating over geostationary satellites.
The delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request techniques, flow scheduling, and bandwidth allocation.
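The quoted 600 to 680 ms figure can be sanity-checked from geometry: a one-way ground-satellite-ground trip covers at least twice the 35,786 km geostationary altitude, and pure propagation already accounts for most of the budget before queuing, coding, and MAC-layer scheduling are added. A small sketch, using the best-case sub-satellite slant range (real stations sit off the sub-satellite point, so actual paths are somewhat longer):

```python
C_KM_PER_S = 299_792.458      # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786      # geostationary orbit altitude above the equator

def geo_one_way_delay_ms(ground_to_sat_km=GEO_ALTITUDE_KM):
    """One-way ground -> satellite -> ground propagation delay in ms.

    Best case: both stations directly under the satellite, so the path is
    exactly up plus down. Real slant ranges add a few more milliseconds.
    """
    return 2 * ground_to_sat_km / C_KM_PER_S * 1000.0

one_way = geo_one_way_delay_ms()   # about 239 ms
round_trip = 2 * one_way           # about 477 ms before queuing and processing
```

The gap between the ~477 ms propagation floor and the measured 600 to 680 ms is then attributable to terminal processing, queuing, and satellite access scheduling.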
A Survey of Driving Research Simulators Around the World.
The literature review is part of the EPSRC-funded project "Driver performance in the EPSRC driving simulator: a validation study". The aim of the project is to validate this simulator, located at the Department of Psychology, University of Leeds, and thereby to indicate the strengths and weaknesses of the existing configuration. It will provide guidance on how the simulator can be modified to overcome any deficiencies that are detected, and also provide "benchmarks" against which other simulators can be compared. The literature review describes the technical characteristics of the most well-known driving simulators around the world, their special features, and their application areas to date. The simulators are described and compared according to their cost (low, medium, and high), and contact addresses and photographs of the simulators are provided at the end of the paper. In the process of gathering this information, it became apparent that there are mainly two types of papers published, either in journals or in conference proceedings: those describing only the technical characteristics of a specific simulator, and those referring only to the applications of a specific simulator. For the first type, the level of detail, format, and content varies significantly, whereas for the second it has proven extremely difficult to find any information about the technical characteristics of the simulator on which the study was carried out. A number of details provided in this paper come from personal communication, from personal visits to the driving simulator centres concerned, or from the World Wide Web. It should also be noted that most of the researchers contacted offered very detailed technical characteristics and application areas of their driving simulators, and the author is grateful to them.
Hardware-based High-Accuracy Integer Motion Estimation and Merge Mode Estimation
Ph.D. dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University Graduate School, August 2017. Advisor: 이혁재. HEVC offers twice the compression efficiency of H.264/AVC, but the many coding tools it employs greatly increase computational complexity on the encoder side. Many studies have sought to reduce HEVC's high computational complexity, but most merely extend complexity-reduction methods designed for H.264/AVC, so they either deliver unsatisfactory complexity reduction or incur such large compression-efficiency losses that HEVC's full compression performance cannot be realized. Previously studied hardware-based encoders in particular prioritize achieving a real-time encoder and sacrifice a great deal of compression efficiency. This work therefore proposes a hardware architecture that accelerates hardware-based inter prediction while minimizing the loss of HEVC's compression performance and enabling real-time coding. The proposed bottom-up MV prediction method predicts MVs not from spatially and temporally neighboring PUs, as in conventional methods, but from PUs that are hierarchical neighbors in HEVC's block hierarchy, greatly improving MV-prediction accuracy. As a result, the computational complexity of integer motion estimation (IME) is reduced by 67% with no change in compression efficiency. This work also proposes a hardware-based IME capable of real-time operation that applies the proposed bottom-up IME algorithm. Earlier hardware-based IME designs suffered compression-efficiency losses of several percent or more, because the stage-by-stage dependencies of fast IME algorithms cause idle cycles and complicate reference-data access, forcing designers either to avoid fast IME algorithms or to modify them to fit the hardware. In contrast, this work adopts the fast TZS algorithm and proposes a hardware-based IME that preserves TZS's complexity-reduction performance. Three techniques make the fast IME algorithm usable in hardware. First, the idle-cycle problem endemic to fast IME algorithms is resolved by context switching among IME tasks for different reference pictures and different depths. Second, a multi-bank SRAM structure exploiting the locality of reference data is proposed for fast and flexible reference-data access. Third, to avoid the large switching multiplexers that excessively flexible reference-data access would require, reference-data access with limited freedom around the search center is proposed. The resulting IME hardware supports all HEVC block sizes and processes 4K UHD video at 60 fps using four reference pictures, with a nearly negligible compression-efficiency loss of 0.11%, at a hardware cost of 1.27M gates.
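The bottom-up idea can be illustrated with a toy sketch, hedged since the abstract gives no formulas: after searching the four child blocks of a quad-tree node, their best MVs seed the parent block's search. The component-wise median merging and all names below are assumptions for illustration, not the thesis's exact predictor:

```python
def bottom_up_mvp(sub_mvs):
    """Candidate MV predictors for a parent block from its child blocks.

    Illustrative only: takes the children's best MVs (found first in a
    bottom-up pass over the CU quad-tree) and returns deduplicated
    candidates plus their component-wise median as a merged predictor.
    """
    xs = sorted(mv[0] for mv in sub_mvs)
    ys = sorted(mv[1] for mv in sub_mvs)
    n = len(sub_mvs)
    median = (xs[n // 2], ys[n // 2])   # simple upper-median per component
    candidates = []
    for mv in sub_mvs + [median]:        # keep first occurrence, drop repeats
        if mv not in candidates:
            candidates.append(mv)
    return candidates
```

When the children's MVs agree, the parent's search collapses to refining a single seed, which is the intuition behind the reported 67% complexity reduction without efficiency loss.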
Merge mode estimation (MME), newly adopted in HEVC, is highly effective at improving compression efficiency, but its computational load varies widely from PU to PU, so hardware implementations waste considerable resources. This work therefore proposes an efficient hardware-based MME method together with its hardware architecture. In conventional MME, whether the interpolation filter is applied is determined by neighboring PUs, so the interpolation filter's utilization stays below 50%; hardware has nevertheless been designed for the filtered case, yielding low resource utilization. This work proposes an MME hardware architecture with two datapaths in which the vertical interpolation filter, the largest consumer of hardware resources, is reduced to half size, together with a merge-candidate allocation algorithm that minimizes compression-efficiency loss while keeping hardware utilization high. The result is a hardware-based MME that uses 24% fewer hardware resources than previous designs while achieving a 7.4% faster execution time. The proposed MME hardware uses 460.8K gates and processes 4K UHD video at 30 fps.
Chapter 1 Introduction
1.1 Background
1.2 Research Contents
1.3 Common Experimental Environment
1.4 Organization of the Thesis
Chapter 2 Related Work
2.1 The HEVC Standard
2.1.1 Quad-tree-based Hierarchical Block Structure
2.1.2 Inter Prediction in HEVC
2.2 Previous Work on Speeding up Inter Prediction
2.2.1 Fast Integer Motion Estimation Algorithms
2.2.2 Fast Merge Mode Estimation Algorithms
2.3 Previous Work on Inter Prediction Hardware Architectures
2.3.1 Hardware-based Integer Motion Estimation
2.3.2 Hardware-based Merge Mode Estimation
Chapter 3 Bottom-up Integer Motion Estimation
3.1 Observations on Motion Vector Relations Between Hierarchy Levels
3.1.1 Analysis of Motion Vector Relations Between Hierarchy Levels
3.1.2 Analysis of Motion Vector Relations in the Top-down and Bottom-up Directions
3.2 Bottom-up Motion Vector Prediction
3.3 Bottom-up Integer Motion Estimation
3.3.1 Bottom-up Integer Motion Estimation - Single MVP
3.3.2 Bottom-up Integer Motion Estimation - Multiple MVP
3.4 Experimental Results
Chapter 4 Hardware-based Integer Motion Estimation
4.1 Applying Bottom-up Integer Motion Estimation in Hardware
4.2 Test Zone Search Modified for Hardware
4.2.1 Parallel Processing of the PUs in a CU Using a SAD Tree
4.2.2 Grid-based Sampled Raster Search
4.2.3 Eliminating Redundant Computation Between PUs
4.3 Five-stage Pipeline Schedule with Reduced Idle Cycles
4.3.1 Operation of Each Pipeline Stage
4.3.2 Idle Cycles Introduced by Test Zone Search Dependencies
4.3.3 Idle-cycle Reduction via Context Switching
4.4 Reference-data Supply for High-speed Operation
4.4.1 Reference-data Access Patterns and the Problems of Access Latency
4.4.2 Reference-data Access Exploiting the Locality of Search Points
4.4.3 Multi-bank Memory Structure for Single-cycle Reference-data Access
4.4.4 Reducing Switching Complexity by Limiting the Freedom of Reference-data Access
4.5 Hardware Architecture
4.5.1 Overall Hardware Architecture
4.5.2 Detailed Hardware Schedule
4.6 Hardware Implementation and Experimental Results
4.6.1 Hardware Implementation Results
4.6.2 Execution Time and Compression Efficiency
4.6.3 Performance Changes per Applied Technique
4.6.4 Comparison with Previous Work
Chapter 5 Hardware-based Merge Mode Estimation
5.1 A Hardware Perspective on Conventional Merge Mode Estimation
5.1.1 Conventional Merge Mode Estimation
5.1.2 Conventional Merge Mode Estimation Hardware Architectures and Their Analysis
5.1.3 The Low Hardware Utilization of Conventional Merge Mode Estimation
5.2 A New Merge Mode Estimation with Reduced Variation in Computational Load
5.3 Hardware Implementation of the New Merge Mode Estimation
5.3.1 Hardware Architecture with an Independent Path per Candidate Type
5.3.2 Adaptive Candidate Allocation to Raise Hardware Utilization
5.3.3 Hardware Schedule with Adaptive Candidate Allocation
5.4 Experimental and Hardware Implementation Results
5.4.1 Changes in Execution Time and Compression Efficiency
5.4.2 Hardware Implementation Results
Chapter 6 Overall Inter Prediction
6.1 CTU-level Three-stage Pipelined Inter Prediction
6.2 Two-way Encoding Order
6.2.1 Top-down and Bottom-up Encoding Orders
6.2.2 A Two-way Encoding Order Compatible with Existing Fast Algorithms
6.2.3 Experimental Results of Combination and Comparison with Existing Fast Algorithms
Chapter 7 Extension to Next Generation Video Coding
7.1 Extending Bottom-up Motion Vector Prediction
7.2 Extending Bottom-up Integer Motion Estimation
Chapter 8 Conclusion
Docto