Design Space Exploration of Practical VVC Encoding for Emerging Media Applications

Research output: Contribution to journal › Article › Scientific › peer-review

9 Citations (Scopus)
180 Downloads (Pure)

Abstract

Versatile Video Coding (VVC/H.266) is the latest video coding standard, designed for a broad range of next-generation media applications. This paper explores the design space of practical VVC encoding by profiling the Fraunhofer Versatile Video Encoder (VVenC). All experiments were conducted on five 2160p video sequences and their downsampled versions under the random access (RA) condition. The exploration was performed by analyzing the rate-distortion-complexity (RDC) behavior of the VVC block structure and coding tools. First, VVenC was profiled to provide a breakdown of its coding block distribution and coding tool utilization. Then, the usefulness of each VVC coding tool was analyzed for its individual impact on overall RDC performance. Finally, our findings were distilled into practical implementation guidelines: the highest coding gains come from the multi-type tree (MTT) structure, the adaptive loop filter (ALF), the cross-component linear model (CCLM), and bi-directional optical flow (BDOF), whereas multiple transform selection (MTS) and affine motion estimation are the primary candidates for complexity reduction. To the best of our knowledge, this is the first work to provide a comprehensive RDC analysis for practical VVC encoding. It can serve as a basis for practical VVC encoder implementation and optimization on various computing platforms.
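The per-tool analysis summarized above compares the rate-distortion curves obtained with a coding tool enabled and disabled. A common way to quantify such a comparison is the Bjøntegaard-delta (BD) rate, i.e. the average bitrate change at equal quality; the abstract does not state which metric the paper uses, so the following is only a minimal Python sketch of the standard BD-rate computation (cubic fit of log-rate as a function of PSNR), not the paper's own method. The RD points in the usage example are illustrative and not measurements from the paper.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard-delta rate in percent: average bitrate difference of the
    test RD curve against the anchor at equal quality, using the standard
    cubic polynomial fit of log(rate) as a function of PSNR."""
    lr_a = np.log(np.asarray(rate_anchor, dtype=float))
    lr_t = np.log(np.asarray(rate_test, dtype=float))
    p_a = np.asarray(psnr_anchor, dtype=float)
    p_t = np.asarray(psnr_test, dtype=float)

    # Fit log(rate) over PSNR with a cubic polynomial for each curve.
    poly_a = np.polyfit(p_a, lr_a, 3)
    poly_t = np.polyfit(p_t, lr_t, 3)

    # Integrate both fits over the overlapping PSNR interval.
    lo = max(p_a.min(), p_t.min())
    hi = min(p_a.max(), p_t.max())
    int_a = np.polyval(np.polyint(poly_a), hi) - np.polyval(np.polyint(poly_a), lo)
    int_t = np.polyval(np.polyint(poly_t), hi) - np.polyval(np.polyint(poly_t), lo)

    # Average difference in log-rate, converted back to a percentage.
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0

# Illustrative RD points only (bitrate in kbps, PSNR in dB), e.g. the same
# sequence encoded with a tool on (anchor) and off (test) at four QPs.
anchor = ([1200, 2100, 3800, 7000], [34.1, 36.2, 38.3, 40.1])
test   = ([1150, 2000, 3600, 6700], [34.2, 36.3, 38.4, 40.2])
print(f"BD-rate: {bd_rate(*anchor, *test):+.2f} %")
```

A negative BD-rate means the test configuration needs less bitrate for the same quality; pairing this with the measured change in encoding time gives the rate-distortion-complexity trade-off per tool that the paper's guidelines are built on.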
Original language: English
Pages (from-to): 387-400
Journal: IEEE Transactions on Consumer Electronics
Volume: 68
Issue number: 4
Early online date: 28 Jul 2022
DOIs
Publication status: Published - 2022
Publication type: A1 Journal article-refereed

Publication forum classification

  • Publication forum level 1
