The MJPEG encoder provides high-quality still-image and video compression over a wide quality range, using an industry-standard algorithm.
To learn more about using the MJPEG encoder, see:
For introductory information, see Programming with Video Codec Objects.
The JPEG syntax includes all the information that a decoder needs to decompress a frame, independent of any file format that may contain the compressed frame.
However, JPEG does not specify all the information that might be needed to format and display a decompressed frame. For example, it does not include fields to specify the color space of the compressed data or the pixel dimensions of the decompressed frame; rather, the AVI file format supplies this extra information. The JPEG objects can be used by themselves to support raw JPEG, or they can be used with the UMSAVIReadWrite object to support JPEG-in-AVI.
Note: At the time Ultimedia Services was under development, the definition of JPEG and MJPEG in AVI was in draft form. The draft did not clearly define a standard location for the Huffman tables. This implementation is based on the information available at the time and may be modified in a future release to comply with a final standard. It includes the Huffman tables (the X'FF' DHT segment) in each frame of an AVI sequence rather than using the abbreviated JPEG format. Because this is suspected to differ from the MJPEG standard, the biCompression field of the BITMAPINFOHEADER is coded "mJPG" rather than "MJPG".
The JPEG standard includes a baseline algorithm, extended algorithms, and an independent lossless algorithm. (The lossless algorithm is unrelated to the other algorithms.) The extended algorithms build on the baseline algorithm to add extra capabilities, directed primarily at the transmission of still images over low-bandwidth channels. They allow for the refinement of a decompressed image over time. Finally, JPEG provides for the use of either Huffman or arithmetic coding as the last stage of the compression.
Note: The most popular algorithm, by far, is the baseline algorithm with Huffman encoding. This is the only algorithm that Ultimedia Services supports. In the remainder of this section, JPEG refers to this algorithm.
JPEG supports arbitrary image sizes. JPEG is also a symmetric algorithm: the times to encode and decode a given image are similar.
Although JPEG was designed for still images, it is often used to compress video as well. Codecs that exploit interframe correlation can achieve better compression, but JPEG offers fast compression and easier editing, because every frame can be decoded independently rather than depending on delta frames.
The JPEG standard treats the different components of a frame separately and includes no color conversion. It also allows the components to be subsampled relative to the full size of the frame.
Despite this flexibility, the most common color space for JPEG-compressed images is the YCbCr (YUV) color space. For this color space, the component subsampling is limited to the chrominance (Cb and Cr or U and V) components and the subsampling factors are small. The JPEG encoder and decoder objects support subsampling of the chrominance components by 2 in either or both directions.
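Subsampling a chrominance plane by 2 in both directions can be sketched as averaging each 2x2 block of samples, as in the simplified C routine below. This is an illustration of the concept only: real encoders may filter differently, and the plane dimensions are assumed to be even here.

```c
/* Subsample one chrominance plane by 2 horizontally and vertically by
 * averaging each 2x2 block (rounded to nearest). The destination plane
 * is (width/2) x (height/2); width and height are assumed even. */
void subsample_2x2(const unsigned char *src, int width, int height,
                   unsigned char *dst)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int sum = src[y * width + x] + src[y * width + x + 1]
                    + src[(y + 1) * width + x]
                    + src[(y + 1) * width + x + 1];
            dst[(y / 2) * (width / 2) + (x / 2)] =
                (unsigned char)((sum + 2) / 4);
        }
    }
}
```

Subsampling by 2 in one direction only (2x1 or 1x2) follows the same pattern, averaging pairs of samples instead of 2x2 blocks.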
The most common application of JPEG is to compress RGB images after they have been converted to the YCbCr color space. Therefore, the JPEG encoder and decoder objects integrate color conversion with the JPEG codec.
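The RGB-to-YCbCr conversion commonly paired with JPEG is the JFIF formulation of the ITU-R BT.601 equations, sketched per pixel below. This illustrates the math only; it is not the code path of the UMSJPEGEncoder object.

```c
/* Clamp a value to the 8-bit range, rounding to nearest. */
static unsigned char clamp255(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (unsigned char)(v + 0.5);
}

/* Convert one 8-bit RGB pixel to full-range YCbCr using the JFIF
 * (BT.601-derived) equations: Cb and Cr are centered on 128. */
void rgb_to_ycbcr(unsigned char r, unsigned char g, unsigned char b,
                  unsigned char *y, unsigned char *cb, unsigned char *cr)
{
    *y  = clamp255( 0.299    * r + 0.587    * g + 0.114    * b);
    *cb = clamp255(-0.168736 * r - 0.331264 * g + 0.5      * b + 128.0);
    *cr = clamp255( 0.5      * r - 0.418688 * g - 0.081312 * b + 128.0);
}
```

For example, a pure white pixel (255, 255, 255) maps to Y = 255 with both chrominance components at their neutral value of 128.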
To encode JPEG frames, do the following:
Repeat step 7 for more frames.
When the video is finished, free all buffers. When the objects are no longer needed, destroy them with _somFree.
The following tips provide important information for using the UMSJPEGEncoder:
| Attribute | Default |
| --- | --- |
| chrominance subsampling | none |
| frame dimensions | 0 x 0 |
| subimage dimensions | 0 x 0 |
| subimage coordinates | (0, 0) |