http://blog.csdn.net/andyhuabing/article/details/7489776
GraphicBuffer is the buffer-management class the Surface system uses for sharing graphics (GDI) memory; it encapsulates the hardware-related details and thereby simplifies the processing logic of the application layer.
SurfaceFlinger is the server side, and every application that requests its service corresponds to a Client. Surface drawing is performed by the Client, and SurfaceFlinger composites everything the Clients have drawn and outputs it. So how do the two sides share the memory of this graphics buffer? In brief, mmap/munmap is used; so how is this organized in the Android system?
Class definition in frameworks\base\include\ui\GraphicBuffer.h:
class GraphicBuffer
    : public EGLNativeBase<
        android_native_buffer_t,
        GraphicBuffer,
        LightRefBase<GraphicBuffer> >,
      public Flattenable
EGLNativeBase is a template class:
template <typename NATIVE_TYPE, typename TYPE, typename REF>
class EGLNativeBase : public NATIVE_TYPE, public REF
The GraphicBuffer class inherits from LightRefBase<GraphicBuffer>, which provides lightweight reference counting (so it can be managed through sp<> smart pointers).
It also derives from Flattenable, so that its data can be serialized and handed to Binder for transmission.
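As a quick illustration (a hedged sketch, not code from the AOSP tree), these two base classes are what make the following client-side pattern work: sp<> manages the object's lifetime through LightRefBase, and Parcel serializes it through the Flattenable interface:

// Illustrative sketch only; reply is assumed to be a Parcel returned by a Binder call.
#include <binder/Parcel.h>
#include <ui/GraphicBuffer.h>

using namespace android;

sp<GraphicBuffer> readBufferSketch(const Parcel& reply) {
    sp<GraphicBuffer> buffer = new GraphicBuffer();  // refcounted via LightRefBase
    reply.read(*buffer);   // Parcel::read(Flattenable&) ends up in GraphicBuffer::unflatten()
    return buffer;         // when the last sp<> goes away, the object is deleted
}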
Let's look at the android_native_buffer.h file and this android_native_buffer_t structure:
typedef struct android_native_buffer_t
{
#ifdef __cplusplus
android_native_buffer_t() {
common.magic = ANDROID_NATIVE_BUFFER_MAGIC;
common.version = sizeof(android_native_buffer_t);
memset(common.reserved, 0, sizeof(common.reserved));
}
#endif
struct android_native_base_t common;
int width;
int height;
int stride;
int format;
int usage;
void* reserved[2];
buffer_handle_t handle;
void* reserved_proc[8];
} android_native_buffer_t;
Note the key member here: buffer_handle_t handle; this is the private data structure used for allocating and managing the display memory.
1. native_handle_t as a wrapper around private_handle_t
typedef struct
{
    int version;        /* sizeof(native_handle_t) */
    int numFds;         /* number of file-descriptors at &data[0] */
    int numInts;        /* number of ints at &data[numFds] */
    int data[0];        /* numFds + numInts ints */
} native_handle_t;
Here data[0] uses GCC's flexible (zero-length) array trick to carry a variable number of trailing values.
/* keep the old definition for backward source-compatibility */
typedef native_handle_t native_handle;
typedef const native_handle* buffer_handle_t;
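The flexible array member is what lets native_handle_create() allocate the header and the numFds + numInts payload in a single block. The helper is sketched below from memory of the AOSP native_handle.c of this era (treat it as illustrative):

native_handle_t* native_handle_create(int numFds, int numInts)
{
    /* one allocation: fixed header followed by numFds + numInts ints in data[] */
    native_handle_t* h = (native_handle_t*)malloc(
            sizeof(native_handle_t) + sizeof(int) * (numFds + numInts));
    if (h) {
        h->version = sizeof(native_handle_t);
        h->numFds  = numFds;
        h->numInts = numInts;
    }
    return h;
}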
native_handle_t is an upper-level abstract data structure used for inter-process transfer. For Gralloc, its content is as follows:
data[0] holds the content of the concrete object, where:
static const int sNumInts = 8;
static const int sNumFds = 1;
sNumFds = 1 means there is one file descriptor: fd.
sNumInts = 8 means it is followed by 8 int values: magic, flags, size, offset, base, lockState, writeOwner, pid.
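Those counts match the layout of the default gralloc module's private_handle_t (hardware/libhardware/modules/gralloc/gralloc_priv.h). The sketch below is reconstructed from memory of that header, so exact field order and extras may differ per platform:

struct private_handle_t : public native_handle {
    /* file descriptors (sNumFds = 1) */
    int fd;           // ashmem/pmem fd backing the buffer
    /* ints (sNumInts = 8) */
    int magic;        // sanity marker checked when validating the handle
    int flags;        // e.g. PRIV_FLAGS_FRAMEBUFFER
    int size;         // size of the mapping
    int offset;       // offset of this buffer within the mapping
    int base;         // virtual address once mapped in this process
    int lockState;
    int writeOwner;
    int pid;          // pid of the allocating process

    static const int sNumInts = 8;
    static const int sNumFds  = 1;
};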
Because the upper layers do not care about the specific content of the data in buffer_handle_t, passing a buffer_handle_t (native_handle_t) between processes really means passing the content of this handle to the client. On the client side, reading it from Binder via readNativeHandle @ Parcel.cpp generates a new native_handle:
native_handle* Parcel::readNativeHandle() const
{
    int numFds, numInts;
    err = readInt32(&numFds);
    err = readInt32(&numInts);
    native_handle* h = native_handle_create(numFds, numInts);
    for (int i=0 ; err==NO_ERROR && i<numFds ; i++) {
        h->data[i] = dup(readFileDescriptor());
        if (h->data[i] < 0) err = BAD_VALUE;
    }
    err = read(h->data + numFds, sizeof(int)*numInts);
    …
}
When the client's native_handle is constructed here, the fd is dup'ed (the two processes are different), while the other fields are simply read and copied.
magic, flags, size, offset, base, lockState, writeOwner, pid, etc. are thus copied to the client, which thereby obtains the information it needs to share the buffer.
When the fd is written, it carries the special Binder tag BINDER_TYPE_FD, telling the Binder driver that this is a file descriptor:
status_t Parcel::writeFileDescriptor(int fd)
{
    flat_binder_object obj;
    obj.type = BINDER_TYPE_FD;
    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj.handle = fd;
    obj.cookie = (void*)0;
    return writeObject(obj, true);
}
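For completeness, the sending side's counterpart is Parcel::writeNativeHandle: it writes the two counts, dups each fd into the Parcel (which ends up in writeFileDescriptor above), and then writes the trailing ints. Sketched from memory of the Parcel.cpp of the same era, so minor details may differ:

status_t Parcel::writeNativeHandle(const native_handle* handle)
{
    if (!handle || handle->version != sizeof(native_handle))
        return BAD_VALUE;

    status_t err;
    err = writeInt32(handle->numFds);
    if (err != NO_ERROR) return err;
    err = writeInt32(handle->numInts);
    if (err != NO_ERROR) return err;

    // each fd is dup'ed and tagged BINDER_TYPE_FD so the driver installs
    // a matching descriptor in the receiving process
    for (int i=0 ; err==NO_ERROR && i<handle->numFds ; i++)
        err = writeDupFileDescriptor(handle->data[i]);

    // the plain ints (magic, flags, size, offset, ...) are copied verbatim
    if (err == NO_ERROR)
        err = write(handle->data + handle->numFds, sizeof(int)*handle->numInts);
    return err;
}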
2. GraphicBuffer memory allocation
There are three ways to construct one:
GraphicBuffer();
// creates w * h buffer
GraphicBuffer(uint32_t w, uint32_t h, PixelFormat format, uint32_t usage);
// create a buffer from an existing handle
GraphicBuffer(uint32_t w, uint32_t h, PixelFormat format, uint32_t usage,
uint32_t stride, native_handle_t* handle, bool keepOwnership);
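As a hedged usage sketch (illustrative, not code from the Android tree), the second form allocates a buffer that a process can then draw into on the CPU through lock()/unlock():

#include <string.h>
#include <ui/GraphicBuffer.h>

using namespace android;

void drawSomethingSketch() {
    // create a 256x256 RGBA buffer; gralloc decides the actual stride
    sp<GraphicBuffer> gb = new GraphicBuffer(256, 256, PIXEL_FORMAT_RGBA_8888,
            GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN);

    void* vaddr = NULL;
    // lock() maps the buffer into this process (gralloc's lock/mmap path)
    // and returns its base address for CPU drawing
    if (gb->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr) == NO_ERROR) {
        memset(vaddr, 0xFF, gb->getStride() * gb->getHeight() * 4);  // fill white
        gb->unlock();   // end CPU access
    }
}   // the sp<> releases the buffer when it goes out of scope

(The first, empty constructor is the one used on the receiving side of a Binder call, as shown in section 3 below.)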
In fact, allocation goes through the function initSize:
status_t GraphicBuffer::initSize(uint32_t w, uint32_t h, PixelFormat format,
uint32_t reqUsage)
{
if (format == PIXEL_FORMAT_RGBX_8888)
    format = PIXEL_FORMAT_RGBA_8888;
GraphicBufferAllocator& allocator = GraphicBufferAllocator::get();
status_t err = allocator.alloc(w, h, format, reqUsage, &handle, &stride);
if (err == NO_ERROR) {
this->width = w;
this->height = h;
this->format = format;
this->usage = reqUsage;
…
}
return err;
}
Memory is allocated with the GraphicBufferAllocator class:
It first loads the libGralloc.hwXX.so dynamic library, which performs the actual allocation of display memory and shields the differences between hardware platforms.
GraphicBufferAllocator::GraphicBufferAllocator()
: mAllocDev(0)
{
hw_module_t const* module;
int err = hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module);
LOGE_IF(err, "FATAL: can't find the %s module", GRALLOC_HARDWARE_MODULE_ID);
if (err == 0) {
    gralloc_open(module, &mAllocDev);
}
}
There are two allocation paths:
status_t GraphicBufferAllocator::alloc(uint32_t w, uint32_t h, PixelFormat format,
int usage, buffer_handle_t* handle, int32_t* stride)
{
    if (usage & GRALLOC_USAGE_HW_MASK) {
        err = mAllocDev->alloc(mAllocDev, w, h, format, usage, handle, stride);
    } else {
        err = sw_gralloc_handle_t::alloc(w, h, format, usage, handle, stride);
}
…
}
The concrete memory allocation works roughly as follows:
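(The original post showed a diagram here.) As a stand-in, the following simplified sketch, reconstructed from memory of hardware/libhardware/modules/gralloc/gralloc.cpp and abridged, shows what the reference gralloc module does for a non-framebuffer buffer: create an ashmem region, wrap its fd in a private_handle_t, and hand the handle back:

// Simplified, illustrative sketch of the default gralloc's software path.
#include <errno.h>
#include <unistd.h>
#include <cutils/ashmem.h>
#include "gralloc_priv.h"   // private_handle_t

static int gralloc_alloc_buffer_sketch(size_t size, int flags,
        buffer_handle_t* pHandle)
{
    const size_t pagesize = getpagesize();
    size = (size + pagesize - 1) & ~(pagesize - 1);   // round up to a whole page

    // Back the buffer with anonymous shared memory: this fd is what later
    // travels through Binder (BINDER_TYPE_FD) to the other process.
    int fd = ashmem_create_region("gralloc-buffer", size);
    if (fd < 0)
        return -errno;

    // Wrap the fd plus the bookkeeping ints (magic, flags, size, offset, ...)
    // in a private_handle_t -- exactly the 1 fd + 8 ints counted by
    // sNumFds / sNumInts above.
    private_handle_t* hnd = new private_handle_t(fd, size, flags);
    *pHandle = hnd;

    // The owning process then maps it with gralloc_map() (see section 4);
    // a receiving process does the same after registerBuffer().
    return 0;
}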
3. Passing the shared handle
frameworks\base\libs\surfaceflinger_client\ISurface.cpp
Client-side request handling, in the BpSurface class:
virtual sp<GraphicBuffer> requestBuffer(int bufferIdx, int usage)
{
    Parcel data, reply;
    data.writeInterfaceToken(ISurface::getInterfaceDescriptor());
    data.writeInt32(bufferIdx);
    data.writeInt32(usage);
    remote()->transact(REQUEST_BUFFER, data, &reply);
    sp<GraphicBuffer> buffer = new GraphicBuffer();
    reply.read(*buffer);
    return buffer;
}
Here sp<GraphicBuffer> buffer = new GraphicBuffer() creates an empty GraphicBuffer on the client, and reply.read(*buffer) fills it with the data returned by the server (through unflatten, described below).
Server-side handling, in the BnSurface class:
status_t BnSurface::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case REQUEST_BUFFER: {
            CHECK_INTERFACE(ISurface, data, reply);
            int bufferIdx = data.readInt32();
            int usage = data.readInt32();
            sp<GraphicBuffer> buffer(requestBuffer(bufferIdx, usage));
            if (buffer == NULL)
                return BAD_VALUE;
            return reply->write(*buffer);
        }
        …
    }
}
The server-side call chain of requestBuffer is:
requestBuffer @ surfaceflinger\Layer.cpp
sp<GraphicBuffer> Layer::requestBuffer(int index, int usage)
{
buffer = new GraphicBuffer(w, h, mFormat, effectiveUsage);
…
return buffer;
}
In this way, the client uses a newly constructed GraphicBuffer() object to read the native_handle and its content from the Parcel, while on the server side the same requestBuffer call returns a real GraphicBuffer object. So how are these two serialized and transmitted?
flatten @ GraphicBuffer.cpp
status_t GraphicBuffer::flatten(void* buffer, size_t size,
int fds[], size_t count) const
{
…
if (handle) {
buf[6] = handle->numFds;
buf[7] = handle->numInts;
native_handle_t const* const h = handle;
memcpy(fds, h->data, h->numFds*sizeof(int));
memcpy(&buf[8], h->data + h->numFds, h->numInts*sizeof(int));
}
The job of flatten is to write GraphicBuffer's handle information into the Parcel; the receiving side reads it back with unflatten:
status_t GraphicBuffer::unflatten(void const* buffer, size_t size,
int fds[], size_t count)
{
native_handle* h = native_handle_create(numFds, numInts);
memcpy(h->data, fds, numFds*sizeof(int));
memcpy(h->data + numFds, &buf[8], numInts*sizeof(int));
handle = h;
}
After the above operations, an equivalent GraphicBuffer object has been constructed on the client. Next we look at how the two sides operate on the same block of memory.
4. Management of the shared memory – the Graphic Mapper
How do two processes share the memory, i.e. how does each obtain the shared memory? The Mapper takes care of this. Two pieces of information are needed: the shared buffer's device handle (the fd) and the offset assigned at allocation time. When a client wants to operate on a piece of shared memory, it first registers the buffer_handle_t with registerBuffer, and then uses lock to obtain the buffer's base address for drawing; in other words, lock and unlock map and unmap the memory for use.
lock (mmap) and unlock (munmap) are used to map a buffer.
The important code is as follows: mapper.cpp
static int gralloc_map(gralloc_module_t const* module,
buffer_handle_t handle,
void** vaddr){
private_handle_t* hnd = (private_handle_t*)handle;
size_t size = hnd->size;
void* mappedAddress = mmap(0, size,
PROT_READ|PROT_WRITE, MAP_SHARED, hnd->fd, 0);
if (mappedAddress == MAP_FAILED) {
    LOGE("Could not mmap %s", strerror(errno));
    return -errno;
}
hnd->base = intptr_t(mappedAddress) + hnd->offset;
*vaddr = (void*)hnd->base;
return 0;
}
static int gralloc_unmap(gralloc_module_t const* module,
buffer_handle_t handle){
private_handle_t* hnd = (private_handle_t*)handle;
void* base = (void*)hnd->base;
size_t size = hnd->size;
munmap(base, size);
hnd->base = 0;
return 0;
}
Using the buffer_handle_t / private_handle_t handle, the buffer data is shared between the processes:
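To make that concrete, here is a hedged sketch (illustrative, not code from the Android tree) of what a receiving process does with a buffer_handle_t it obtained over Binder; GraphicBuffer::unflatten and GraphicBuffer::lock perform essentially these steps for you:

#include <hardware/gralloc.h>
#include <ui/GraphicBufferMapper.h>
#include <ui/Rect.h>

using namespace android;

void mapRemoteHandleSketch(buffer_handle_t handle, int w, int h) {
    GraphicBufferMapper& mapper = GraphicBufferMapper::get();

    // 1. register: gralloc validates the handle and prepares it for use
    //    in this process.
    mapper.registerBuffer(handle);

    // 2. lock: gralloc_map() mmaps the handle's fd, so vaddr points at the
    //    same physical pages the allocating process is using.
    void* vaddr = NULL;
    mapper.lock(handle, GRALLOC_USAGE_SW_WRITE_OFTEN, Rect(w, h), &vaddr);

    // ... read or draw through vaddr ...

    // 3. unlock / unregister when done; unregistering ends the mapping.
    mapper.unlock(handle);
    mapper.unregisterBuffer(handle);
}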
Summary:
In this part Android uses shared memory to manage the display-related buffers. The design has two layers: the upper layer is the buffer-management proxy GraphicBuffer together with android_native_buffer_t; the lower layer is the concrete allocation of the buffer and the buffer itself. The upper-layer object can be passed through Binder as often as needed, but the buffer itself is never copied between processes; instead, mmap is used to obtain a mapped address that points to the same physical memory.