MediaCodec Encoding from a Surface

Encoding from Surface

Input Surfaces are available starting with API 18. Below we render the phone's camera preview to the encoder's input Surface, then encode from that Surface.
First, the Surface is created by MediaCodec:

mSurface = mMediaCodec.createInputSurface(); // API >= 18

Then manually initialize EGL for this Surface, render the GL frame into it, and encode the result.
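As a minimal setup sketch (the codec type, resolution, bitrate, and frame rate below are illustrative assumptions, not values from the original project):

```java
// Sketch: create an H.264 encoder and obtain its input Surface (API >= 18).
// Resolution and bitrate values here are illustrative assumptions.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec codec = MediaCodec.createEncoderByType("video/avc");
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
// createInputSurface() must be called after configure() and before start().
Surface inputSurface = codec.createInputSurface();
codec.start();
```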

Added 06/30/2016: SurfaceEncode example.

Encoding output
First, look at how the encoded output is produced. In a typical MediaCodec buffer-based encode, dequeueOutputBuffer returns results in the following order:
1: a format change (INFO_OUTPUT_FORMAT_CHANGED).
2: an SPS/PPS buffer, with the BufferInfo flags set to BUFFER_FLAG_CODEC_CONFIG. (For H.264, this should be written into the H.264 file.)
3: an I-frame (2 + 3 together form an IDR frame).
4: many P-frames …

But when using an input Surface, the second step is skipped: there is no BUFFER_FLAG_CODEC_CONFIG buffer, so you must fetch the SPS and PPS manually at this point:

ByteBuffer sps = newFormat.getByteBuffer("csd-0");
ByteBuffer pps = newFormat.getByteBuffer("csd-1");

For a raw H.264 stream, these two buffers must be written first, into the file header or to the outgoing stream.
See the drainAllEncoderMuxer function in SurfaceEncoder.md for details.
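A rough sketch of such a drain loop follows (the writeToStream helper is hypothetical; the original's drainAllEncoderMuxer handles more, such as the muxer and end-of-stream):

```java
// Sketch of an encoder drain loop; writeToStream(...) is a hypothetical sink.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
while (true) {
    int index = codec.dequeueOutputBuffer(info, 10_000 /* us */);
    if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
        break; // no output available yet
    } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        // With Surface input, fetch SPS/PPS from the new format here.
        MediaFormat newFormat = codec.getOutputFormat();
        ByteBuffer sps = newFormat.getByteBuffer("csd-0");
        ByteBuffer pps = newFormat.getByteBuffer("csd-1");
        writeToStream(sps); // must precede the first IDR frame in a raw stream
        writeToStream(pps);
    } else if (index >= 0) {
        ByteBuffer data = codec.getOutputBuffer(index); // API >= 21
        if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0 && info.size > 0) {
            data.position(info.offset);
            data.limit(info.offset + info.size);
            writeToStream(data); // one encoded frame
        }
        codec.releaseOutputBuffer(index, false);
    }
}
```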

Initialize EGL
This example records from the phone camera. A video (encoding) thread is added from the preview rendering thread:

mSurfaceEncoder = new SurfaceEncoder(thread.mVideoSource.mCols, thread.mVideoSource.mRows); // @wei TODO here (960x1280)
Surface sur = mSurfaceEncoder.getInputSurface();
mRendererHolder.setRecordingSurface(sur);

where mRendererHolder belongs to the preview main thread.
The setRecordingSurface function:

public void setRecordingSurface(final Surface surface) {
    RecordSurfaceRenderHandler rh = RecordSurfaceRenderHandler.createHandler();
    rh.setEglContext(mMasterEgl.getContext(), mTexId, surface, true);
    mClients.add(rh);
}

RecordSurfaceRenderHandler creates the recording thread, initializes EGL for the Surface, and, in each frame-update callback, switches the EGL context and draws to the Surface.
Source RecordSurfaceRenderHandler.md
Regarding EGL initialization, first understand what EGL is. I found two articles helpful here:
#1 ScienceNet: "EGL resource and data sharing, and its low-level driver implementation", Guo Yejun's blog
#2 "Learning OpenGL-ES: 2 - EGL analysis", kiffa, cnblogs
The first article explains the principle and essence of EGL; the second focuses on applying EGL.
From these articles we know that, since we want to share a single texture, there is exactly one EGL context. Any EGL Surface must be bound to this context before rendering, making it the render target. If your main rendering thread comes from a GLSurfaceView, you must first obtain its EGLContext. I have not tried this myself; this question may help: android - How can GLSurfaceView use my EGLDisplay, EGLContext and eglSurface? - Stack Overflow
The handleSetEglContext function initializes the EGL environment for the Surface. EGLBase comes from the grafika project. Read its source together with the #2 link above; everything it does can be seen as preparation for makeCurrent (switching the render target). The implementation of EGLBase#makeCurrent comes down to a single call:

EGL14.eglMakeCurrent(mEglDisplay, surface, surface, mEglContext)


This binds the shared context to our Surface.
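Condensed, what EGLBase sets up looks roughly like this (a sketch using EGL14/EGLExt; sharedContext stands for the preview thread's EGLContext and surface for the encoder's input Surface, both assumed names):

```java
// Sketch: create an EGL window surface for the encoder's input Surface,
// sharing textures with the preview thread's context.
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

int[] configAttribs = {
        EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGLExt.EGL_RECORDABLE_ANDROID, 1, // needed for MediaCodec input Surfaces
        EGL14.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

// Pass the preview thread's context so both contexts share the camera texture.
int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(display, configs[0], sharedContext, ctxAttribs, 0);

int[] surfAttribs = { EGL14.EGL_NONE };
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(display, configs[0], surface, surfAttribs, 0);

// Make this surface the render target for the recording thread.
EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);
```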

Draw

Send a message to the encoding thread in the frame callback of the main rendering thread:

handler.sendMessage(handler.obtainMessage(MSG_RENDER_DRAW2,
        (int) (timestamp >> 32), (int) timestamp, mTxtMat));

GLDrawer2D compiles the shaders and calls the GLES interface to render. RecordSurfaceRenderHandler#handleFrameAvailable completes the rendering and drains the encoder:

private void handleFrameAvailable(int tex_id, float[] transform, long timestampNanos) {
    Log.v(TAG, "handleFrameAvailable #0");
    SurfaceEncoder mVideoEncoder = SurfaceEncoder.getInstance();
    if (mVideoEncoder == null || !mVideoEncoder.isRecording()) return;
    Log.d(TAG, "handleDrain: #3");
    mVideoEncoder.drainAllEncoderMuxer(false);
    mDrawer.draw(tex_id, transform);
    mTargetSurface.setPresentationTime(timestampNanos);
    mTargetSurface.swap();
    Log.v(TAG, "handleFrameAvailable #1");
}

mVideoEncoder.drainAllEncoderMuxer performs the encoding output described earlier. It is similar to the buffer-input case, minus the buffer submission, while taking care to save the SPS and PPS.
The recording thread's frame callback only adds mTargetSurface.swap(), which calls EGL14.eglSwapBuffers(mEglDisplay, surface) to swap the buffer just drawn into the Surface's frame buffer, committing the recording thread's drawing.
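setPresentationTime and swap presumably reduce to the following two EGL calls (a sketch; display and eglSurface stand for the recording thread's EGL handles, assumed names):

```java
// Sketch: timestamp the frame, then post it to the encoder's input Surface.
// The timestamp becomes the encoded frame's presentation time.
EGLExt.eglPresentationTimeANDROID(display, eglSurface, timestampNanos);
EGL14.eglSwapBuffers(display, eglSurface);
```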

References:

saki4510t/UVCCamera: library and sample to access to UVC web camera on non-rooted Android device
google/grafika: Grafika test app

