Computationally Efficient MCTF for MC-EZBC Scalable Video Coding Framework

Kotegar, Karunakar A and Pai, Manohara M.M. (2007) Computationally Efficient MCTF for MC-EZBC Scalable Video Coding Framework. In: Proceedings of the 2nd International Conference on Pattern Recognition and Machine Intelligence.

The discrete wavelet transform (DWT) applied temporally under motion compensation, i.e. Motion Compensated Temporal Filtering (MCTF), has recently become a very powerful tool in scalable video compression, especially when implemented through lifting. The major bottleneck for encoder speed is the computational complexity of the bidirectional motion estimation in MCTF. This paper proposes a novel predictive technique to reduce the computational complexity of MCTF. In the proposed technique, temporal filtering is first performed without motion compensation. The resultant high-frequency frames are used to predict the blocks under motion, and motion estimation is carried out only for those predicted blocks. This significantly reduces the number of blocks that undergo motion estimation, and hence the computational complexity of MCTF is reduced by 44% to 92% over a variety of standard test sequences without compromising the quality of the decoded video. The proposed algorithm is implemented in MC-EZBC, a 3D-subband scalable video coding system.
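The idea in the abstract can be sketched as follows: an unmotion-compensated Haar high-pass frame is cheap to compute, and blocks with large high-pass energy are flagged as "in motion", so that full-search motion estimation runs only on those blocks. This is a minimal illustrative sketch, not the paper's actual implementation; the function names, the block-energy threshold, and the plain SAD full search are all my assumptions.

```python
import numpy as np

def flag_motion_blocks(frame_a, frame_b, block=8, thresh=10.0):
    """Haar temporal filtering WITHOUT motion compensation:
    h = (B - A) / sqrt(2).  Blocks whose mean absolute high-pass
    energy exceeds `thresh` (an assumed criterion) are predicted to
    be in motion; only these undergo motion estimation."""
    h = (frame_b.astype(np.float64) - frame_a.astype(np.float64)) / np.sqrt(2.0)
    rows, cols = frame_a.shape
    flags = []
    for y in range(0, rows, block):
        for x in range(0, cols, block):
            if np.mean(np.abs(h[y:y + block, x:x + block])) > thresh:
                flags.append((y, x))
    return flags

def full_search_sad(ref, cur, y, x, block=8, search=4):
    """Plain full-search block matching (SAD) for one block of `cur`
    against `ref` -- invoked only for the flagged blocks."""
    best, best_mv = np.inf, (0, 0)
    tgt = cur[y:y + block, x:x + block].astype(np.int64)
    rows, cols = ref.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + block > rows or rx + block > cols:
                continue  # candidate window falls outside the reference frame
            cand = ref[ry:ry + block, rx:rx + block].astype(np.int64)
            sad = int(np.abs(tgt - cand).sum())
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

For a mostly static scene, only the few blocks covering moving content are flagged, so the fraction of blocks sent through the expensive search drops sharply, which is the source of the 44%-92% complexity reduction the paper reports.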

Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Motion Estimation, Motion Compensated Temporal Filtering, Temporal Filtering, MC-EZBC
Subjects: Engineering > MIT Manipal > Information and Communication Technology
Depositing User: MIT Library
Date Deposited: 21 Jun 2011 06:14
Last Modified: 21 Jun 2011 06:14
