Technology Introduction


Video Compression

A compression system consists of a compressor (or coder), a transmission channel and a matching expander (or decoder). The combination of coder and decoder is known as a codec.
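
As a rough illustration of that coder/channel/decoder pairing, the sketch below uses a toy run-length scheme standing in for a real video codec; the function names and the scheme itself are invented purely for illustration.

    # Toy codec sketch: a run-length coder and its matching decoder.
    # This is an illustration of the coder/decoder structure, not any real video codec.

    def encode(samples):
        """Compressor/coder: collapse the input into (value, run length) pairs."""
        compressed = []
        for value in samples:
            if compressed and compressed[-1][0] == value:
                compressed[-1][1] += 1            # extend the current run
            else:
                compressed.append([value, 1])     # start a new run
        return compressed

    def decode(compressed):
        """Expander/decoder: exactly reverse the coder (lossless in this toy case)."""
        samples = []
        for value, run_length in compressed:
            samples.extend([value] * run_length)
        return samples

    raw = [0, 0, 0, 7, 7, 3, 3, 3, 3]             # 'raw' source samples
    bitstream = encode(raw)                        # coder output, sent over the channel
    assert decode(bitstream) == raw                # matching decoder reconstructs the source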

Video compression has made it possible to use digital video in transmission and storage environments that would not support uncompressed (‘raw’) video. Moreover, video compression enables more efficient use of transmission and storage resources.

For over 20 years, image and video compression has been a very active field of research and development, and many different technologies, systems and algorithms for video compression and decompression have been proposed and developed. To standardize methods of compression encoding and decoding, to allow products from different manufacturers to communicate effectively, and to encourage interworking, competition and increased choice, several key international standards for image and video compression have been developed, including JPEG, MPEG and the H.26x series of standards.

MPEG History

MPEG is an acronym for the Moving Picture Experts Group, which was formed by the ISO (International Organization for Standardization) to set standards for audio and video compression and transmission.

The ISO moving picture standardization process started in 1988 with a strong emphasis on real-time decoding of compressed data stored on digital storage media (DSM) such as CD-ROMs. That allowed for the possibility of a much more complex encoder, which did not need to run in real time. The technical work for that MPEG effort was nearly complete when a new project was started to target higher bitrates and better quality for applications such as broadcast TV. The two projects then became known as MPEG-1 and MPEG-2. An MPEG-3 project aimed at HDTV was anticipated, but MPEG-2 was shown to be capable of filling that need and MPEG-3 was dropped. For very low bitrates, a fourth project, MPEG-4, was started. However, MPEG-4 has since developed into a generic coding technique that is not limited to low bitrates.

MPEG-1 was of limited application, and the subsequent MPEG-2 standard was considerably broader in scope and of wider appeal. MPEG-2 has become the most common standard since its completion in 1994.
 

MPEG-2

MPEG-1 was targeted primarily at bitrates of around 1.5 Mbit/s and was particularly suitable for storage media applications such as CD-ROM retrieval. MPEG-2 is aimed at much higher bitrates and more diverse applications such as television broadcasting, digital storage media, digital high-definition TV (HDTV), and communications.
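
For a back-of-the-envelope sense of the compression ratios these target bitrates imply, the sketch below assumes 4:2:0 sampling at 12 bits per pixel and typical source resolutions; the figures are illustrative only and are not requirements of either standard.

    # Rough compression-ratio arithmetic (illustrative assumptions, not normative figures).

    def raw_bitrate(width, height, fps, bits_per_pixel=12):
        """Uncompressed bitrate in bit/s; 12 bits/pixel assumes 4:2:0 sampling."""
        return width * height * bits_per_pixel * fps

    mpeg1_raw = raw_bitrate(352, 288, 25)   # SIF source, about 30 Mbit/s
    mpeg2_raw = raw_bitrate(720, 576, 25)   # SD broadcast source, about 124 Mbit/s

    print("MPEG-1: %.0f Mbit/s raw -> ~1.5 Mbit/s (about %d:1)"
          % (mpeg1_raw / 1e6, round(mpeg1_raw / 1.5e6)))
    print("MPEG-2: %.0f Mbit/s raw -> ~4 Mbit/s (about %d:1)"
          % (mpeg2_raw / 1e6, round(mpeg2_raw / 4e6)))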

Some of the applications are:

  • broadcast satellite service (BSS) to the home
  • digital cable or broadcast TV (DTV and HDTV) and DVD
  • electronic cinema (EC)
  • home television theater (HTT)
  • interpersonal communications (IPC) such as video conferencing and videophone
  • remote video surveillance (RVS)
  • satellite news gathering (SNG)
  • professional video such as nonlinear editing and studio post-production
  • networked video such as video over ATM, Ethernet and LANs

MPEG-2 is the most widely used video coding standard today.  
 

MPEG-4

MPEG-4 (a multi-part standard covering audio coding, systems issues and related aspects of audio/visual communication) was first conceived in 1993, and Part 2, MPEG-4 Visual, was standardized in 1999. MPEG-4 Visual and H.264 have related but significantly different visions. Both are concerned with the compression of visual data, but MPEG-4 Visual emphasizes flexibility while H.264’s emphasis is on efficiency and reliability.

MPEG-4 Visual provides a highly flexible toolkit of coding techniques and resources, making it possible to deal with a wide range of types of visual data including rectangular frames, video objects, still images and hybrids of natural and synthetic visual information.

MPEG-4 Visual provides its functionality through a set of coding tools, organized into ‘profiles’, recommended groupings of tools suitable for certain applications. Classes of profiles include ‘simple profile’ (coding of rectangular video frames), object-based profiles (coding of arbitrary-shaped visual objects), still texture profiles (coding of still images or ‘texture’), scalable profiles (coding at multiple resolutions or quality levels) and studio profile (coding for high-quality studio applications).
 

H.264/AVC

The H.264 standardization effort was initiated by the Video Coding Experts Group (VCEG), a working group of the International Telecommunication Union (ITU-T). The final stages of developing the standard were carried out jointly by VCEG and MPEG as the Joint Video Team (JVT), and the final H.264 standard was published in 2003.

The new H.264/AVC standard is the most advanced standard of the series; it is designed to emphasize efficiency and reliability and to support a broad range of applications, including:

  • Interactive or serial storage on optical and magnetic storage devices, DVD, etc.
  • Broadcast over cable, satellite, cable modem, DSL, terrestrial
  • Video-on-demand or multimedia streaming services over cable modem, DSL, ISDN, LAN, wireless networks
  • Conversational services over ISDN, Ethernet, LAN, DSL, wireless and mobile networks, modems
  • Multimedia messaging services over DSL, ISDN

H.264/AVC supports a very broad range of bitrates and picture sizes, enabling video coding ranging from low-bitrate, low-frame-rate content for mobile and dial-up devices, through entertainment-quality standard-definition television services, up to HDTV. A flexible system interface for the coded video is specified to enable the adaptation of video content for use over this full variety of network and channel environments.

Key features of the standard include compression efficiency (providing significantly better compression than any previous standard), transmission efficiency (with a number of built-in features to support reliable, robust transmission over a range of channels and networks) and a focus on popular applications of video compression.

Only three profiles are currently supported, each targeted at a class of popular video communication applications. The Baseline profile may be particularly useful for “conversational” applications such as video conferencing, the Extended profile adds extra tools that are likely to be useful for video streaming across networks and the Main profile includes tools that may be suitable for consumer applications such as video broadcast and storage.
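
As a sketch of how an application might act on this profile grouping, the mapping below simply restates the paragraph above; the dictionary, function and application labels are hypothetical and are not part of any real encoder API.

    # Hypothetical sketch: choosing an H.264/AVC profile by application class.
    # The names here are invented for illustration only.

    H264_PROFILE_BY_APPLICATION = {
        "video_conferencing": "Baseline",   # conversational, low-delay use
        "videophone":         "Baseline",
        "streaming":          "Extended",   # extra tools useful for network streaming
        "broadcast":          "Main",       # consumer broadcast and storage
        "dvd_storage":        "Main",
    }

    def choose_profile(application):
        """Return the profile suggested above for a given application class."""
        if application not in H264_PROFILE_BY_APPLICATION:
            raise ValueError("no profile suggestion for %r" % application)
        return H264_PROFILE_BY_APPLICATION[application]

    for app in ("video_conferencing", "streaming", "broadcast"):
        print(app, "->", choose_profile(app), "profile")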

 

Resource Links

Official MPEG Home Page

http://www.chiariglione.org/mpeg/

SNHC

http://www.mpeg-snhc.org/

ISO

http://www.iso.org/

AVC Alliance

http://www.avc-alliance.org/

MPEG Industry Forum 

http://www.mpegif.org

Internet Streaming Media Alliance

http://www.isma.tv/

MPEG.org

http://www.mpeg.org

PC Magazine

http://www.pcmag.com/

 

 

 

 
