Motion Estimation and Compensation. Motion estimation by the encoder is very computationally intensive, since it generally requires repeated evaluation of the effectiveness of candidate motion vectors. However the candidate vectors are chosen, a fast evaluation function speeds up the algorithm. The Intel IPP functions ippiSAD16x16 and ippiSqrDiff16x16 compare a block from the current frame against a motion-compensated block in a reference frame. ippiSAD calculates the sum of absolute differences between the pixels, while ippiSqrDiff calculates the sum of squared differences. The Intel IPP sample uses the former.
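For illustration, here is a minimal full-search sketch built around ippiSAD16x16_8u32s. The plane pointers, steps, and search window are hypothetical, and the signature and the IPPVC_MC_APX_FF full-pel constant should be checked against the ippvc.h shipped with your IPP version.

Ipp32s sad, bestSAD = IPP_MAX_32S;
int dx, dy, bestDx = 0, bestDy = 0;

// Exhaustive +/-8-pel search; real encoders restrict or reorder this
// search, and must also clip candidates to the reference frame bounds.
for (dy = -8; dy <= 8; dy++) {
    for (dx = -8; dx <= 8; dx++) {
        const Ipp8u* pRef = refPlane + (y + dy) * refStep + (x + dx);
        if (ippiSAD16x16_8u32s(curPlane + y * curStep + x, curStep,
                               pRef, refStep, &sad,
                               IPPVC_MC_APX_FF) != ippStsNoErr)
            continue;            // skip candidates the library rejects
        if (sad < bestSAD) {     // keep the cheapest candidate vector
            bestSAD = sad;
            bestDx = dx;
            bestDy = dy;
        }
    }
}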
Color Conversion. The standard Intel IPP color conversion functions include conversions to and from YCbCr 4:2:2, 4:2:0, and 4:4:4. Because they belong to the general color conversion set, these functions use YUV rather than YCbCr in their names: RGBToYUV422 / YUV422ToRGB, RGBToYUV420 / YUV420ToRGB, and RGBToYUV / YUVToRGB. These functions support both interleaved and planar YCbCr data. Listing 5 shows a conversion of decoded MPEG-2 pixels into RGB for display.
// Source planes: Y at full resolution; the 4:2:0 chroma planes are
// subsampled by two in each direction, so their byte offsets are one
// quarter of the luma offset
src[0] = frame->Y_comp_data + pContext->Video[0].frame_buffer.video_memory_offset;
src[1] = frame->V_comp_data + pContext->Video[0].frame_buffer.video_memory_offset/4;
src[2] = frame->U_comp_data + pContext->Video[0].frame_buffer.video_memory_offset/4;
srcStep[0] = frame->Y_comp_pitch;
srcStep[1] = pitch_UV;
srcStep[2] = pitch_UV;
// Destination is interleaved RGBA (the AC4 layout), so the destination
// step is four bytes per pixel: roi.width << 2
ippiYUV420ToRGB_8u_P3AC4R(src, srcStep,
    video_memory + pContext->Video[0].frame_buffer.video_memory_offset/4,
    roi.width << 2, roi);
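The encoder-side conversion is symmetric. The sketch below is illustrative rather than taken from the sample: it produces the three 4:2:0 planes from an interleaved RGB frame. The buffer names are hypothetical, and the ippiRGBToYUV420_8u_C3P3R variant (interleaved C3 source to planar P3 destination) should be verified against your IPP headers.

// Illustrative sketch: interleaved RGB in, planar YCbCr 4:2:0 out.
// width and height are assumed even; all buffers are caller-allocated.
Ipp8u* dst[3];
int dstStep[3];
IppiSize roi = { width, height };

dst[0] = yPlane;  dstStep[0] = width;      // Y at full resolution
dst[1] = uPlane;  dstStep[1] = width / 2;  // U subsampled 2x2
dst[2] = vPlane;  dstStep[2] = width / 2;  // V subsampled 2x2

IppStatus st = ippiRGBToYUV420_8u_C3P3R(rgbFrame, width * 3,
                                        dst, dstStep, roi);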
Once the encoder has finished searching the space of possible motion vectors, it can use the many ippiGetDiff functions to compute the difference between the current frame and the motion-compensated reference frame.
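What these functions compute is simple to state in scalar code. The reference loop below, written for illustration rather than taken from Intel IPP, forms the residual that a 16x16 GetDiff call produces: a 16-bit difference block ready for the forward DCT. The names and the byte-step convention mirror the usual IPP style.

// Illustrative scalar equivalent of a 16x16 "get difference" call:
// each residual sample is the current pixel minus the prediction.
// Steps are in bytes, following the IPP convention.
void GetDiff16x16_C(const Ipp8u* pCur, int curStep,
                    const Ipp8u* pPred, int predStep,
                    Ipp16s* pDiff, int diffStep)
{
    int x, y;
    for (y = 0; y < 16; y++) {
        for (x = 0; x < 16; x++)
            pDiff[x] = (Ipp16s)pCur[x] - (Ipp16s)pPred[x];
        pCur  += curStep;
        pPred += predStep;
        pDiff += diffStep / (int)sizeof(Ipp16s);  // byte step to elements
    }
}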
Both the encoder and decoder need a motion compensation algorithm. Intel IPP-based algorithms can use ippiMC or ippiAdd to combine the reference frame with the decoded difference information. Listing 4 shows such an algorithm for a macroblock from a 4:2:0 B-frame.
// Determine whether shift is half or full pel
// in horizontal and vertical directions.
// Motion vectors are in half-pels in the bitstream.
// The bit code generated is:
// FF = 0000b; FH = 0100b; HF = 1000b; HH = 1100b
flag1 = pContext->macroblock.prediction_type |
        ((pContext->macroblock.vector[0] & 1) << 3) |
        ((pContext->macroblock.vector[1] & 1) << 2);
flag2 = pContext->macroblock.prediction_type |
        ((pContext->macroblock.vector[0] & 2) << 2) |
        ((pContext->macroblock.vector[1] & 2) << 1);
flag3 = pContext->macroblock.prediction_type |
        ((pContext->macroblock.vector[2] & 1) << 3) |
        ((pContext->macroblock.vector[3] & 1) << 2);
flag4 = pContext->macroblock.prediction_type |
        ((pContext->macroblock.vector[2] & 2) << 2) |
        ((pContext->macroblock.vector[3] & 2) << 1);

// Convert motion vectors from half-pels to full pels;
// also halve them again for the subsampled chroma planes.
// down, previous frame
vector_luma[1]   = pContext->macroblock.vector[1] >> 1;
vector_chroma[1] = pContext->macroblock.vector[1] >> 2;
// right, previous frame
vector_luma[0]   = pContext->macroblock.vector[0] >> 1;
vector_chroma[0] = pContext->macroblock.vector[0] >> 2;
// down, subsequent frame
vector_luma[3]   = pContext->macroblock.vector[3] >> 1;
vector_chroma[3] = pContext->macroblock.vector[3] >> 2;
// right, subsequent frame
vector_luma[2]   = pContext->macroblock.vector[2] >> 1;
vector_chroma[2] = pContext->macroblock.vector[2] >> 2;

// Offsets of the two luma reference blocks, shifted by the motion vectors
offs1 = (pContext->macroblock.motion_vertical_field_select[0] +
         vector_luma[1] + pContext->row_l) * pitch_y +
        vector_luma[0] + pContext->col_l;
offs2 = (pContext->macroblock.motion_vertical_field_select[1] +
         vector_luma[3] + pContext->row_l) * pitch_y +
        vector_luma[2] + pContext->col_l;

// Bi-directional motion compensation for the 16x16 luma block
i = ippiMC16x16B_8u_C1(
        ref_Y_data1 + offs1, ptc_y, flag1,
        ref_Y_data2 + offs2, ptc_y, flag3,
        pContext->block.idct, 32,
        frame->Y_comp_data + pContext->offset_l, ptc_y, 0);
assert(i == ippStsOk);

// Offsets of the two chroma reference blocks
offs1 = (pContext->macroblock.motion_vertical_field_select[0] +
         vector_chroma[1] + pContext->row_c) * pitch_uv +
        vector_chroma[0] + pContext->col_c;
offs2 = (pContext->macroblock.motion_vertical_field_select[1] +
         vector_chroma[3] + pContext->row_c) * pitch_uv +
        vector_chroma[2] + pContext->col_c;

// Bi-directional motion compensation for the two 8x8 chroma blocks
i = ippiMC8x8B_8u_C1(
        ref_U_data1 + offs1, ptc_uv, flag2,
        ref_U_data2 + offs2, ptc_uv, flag4,
        pContext->block.idct + 256, 16,
        frame->U_comp_data + pContext->offset_c, ptc_uv, 0);
assert(i == ippStsOk);

i = ippiMC8x8B_8u_C1(
        ref_V_data1 + offs1, ptc_uv, flag2,
        ref_V_data2 + offs2, ptc_uv, flag4,
        pContext->block.idct + 320, 16,
        frame->V_comp_data + pContext->offset_c, ptc_uv, 0);
assert(i == ippStsOk);
The first step is to convert the motion vectors from half-pel accuracy to full-pel accuracy, because the half-pel information is passed into the ippiMC functions as a flag. The code drops the least-significant bit of each motion vector and uses it to generate this flag. The starting point of each reference block is then offset vertically and horizontally by the amount of the motion vector.
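For example, a horizontal luma vector of +5 half-pels becomes vector_luma[0] = 5 >> 1 = 2 full pels, with the half-pel bit (5 & 1) = 1 folded into the flag; the corresponding chroma vector is 5 >> 2 = 1, because the 4:2:0 chroma planes are subsampled by two in each direction.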
Because this code handles bi-directional prediction, it repeats all these steps for two separate motion vectors and two separate reference frames. Motion compensation is the last decoding step, so the code places the result directly in the YCbCr output frame.