I. Hardware Acceleration Initialization
The Canvas API is used to draw an application's UI elements. In a hardware-accelerated rendering environment, these Canvas API calls are ultimately translated into OpenGL API calls (the translation is transparent to the application). It is therefore essential that the OpenGL environment (also called the OpenGL rendering context) is initialized when a new Activity starts.
The hwui flow is illustrated below:
In the OpenGL environment, an Activity corresponds to an ANativeWindow. The ANativeWindow obtains a GraphicBuffer from SurfaceFlinger via dequeueBuffer; after drawing into it with OpenGL, it hands the buffer back via queueBuffer for SurfaceFlinger to composite and display.
1) An OpenGL rendering context can only be associated with one thread, to avoid multi-threading conflicts (the same idea as only updating UI on the UI thread). So the first initialization task is to create a Render Thread;
2) An Android application may have multiple Activity components. When the Main Thread issues rendering commands to the Render Thread, the Render Thread must know which window is currently being rendered. So the second initialization task is to tell the Render Thread which window that is.
The hwui initialization process is described below from these two angles:
1. RenderThread initialization
1.1 Java-layer analysis
We start from ViewRootImpl's setView. Inside it, some cases are excluded from hwui: for example, Canvas API calls that cannot be translated into OpenGL calls, and windows that do not need hwui drawing (hwui adds memory overhead).
frameworks/base/core/java/android/view/ViewRootImpl.java
public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView) {
...
// If the application owns the surface, don't enable hardware acceleration
if (mSurfaceHolder == null) {
// While this is supposed to enable only, it can effectively disable
// the acceleration too.
enableHardwareAcceleration(attrs);
A SurfaceView manages its own rendering entirely under the application's control, so hardware acceleration is not enabled for it.
frameworks/base/core/java/android/view/ViewRootImpl.java
private void enableHardwareAcceleration(WindowManager.LayoutParams attrs) {
...
// Don't enable hardware acceleration when the application is in compatibility mode
if (mTranslator != null) return;
// Try to enable hardware acceleration if requested
final boolean hardwareAccelerated =
(attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;
if (hardwareAccelerated) {
if (!ThreadedRenderer.isAvailable()) {
return;
}
// Persistent processes (including the system) should not do
// accelerated rendering on low-end devices. In that case,
// sRendererDisabled will be set. In addition, the system process
// itself should never do accelerated rendering. In that case, both
// sRendererDisabled and sSystemRendererDisabled are set. When
// sSystemRendererDisabled is set, PRIVATE_FLAG_FORCE_HARDWARE_ACCELERATED
// can be used by code on the system process to escape that and enable
// HW accelerated drawing. (This is basically for the lock screen.)
final boolean fakeHwAccelerated = (attrs.privateFlags &
WindowManager.LayoutParams.PRIVATE_FLAG_FAKE_HARDWARE_ACCELERATED) != 0;
final boolean forceHwAccelerated = (attrs.privateFlags &
WindowManager.LayoutParams.PRIVATE_FLAG_FORCE_HARDWARE_ACCELERATED) != 0;
if (fakeHwAccelerated) {
// This is exclusively for the preview windows the window manager
// shows for launching applications, so they will look more like
// the app being launched.
mAttachInfo.mHardwareAccelerationRequested = true;
} else if (!ThreadedRenderer.sRendererDisabled
|| (ThreadedRenderer.sSystemRendererDisabled && forceHwAccelerated)) {
...
mAttachInfo.mThreadedRenderer = ThreadedRenderer.create(mContext, translucent,
attrs.getTitle().toString());
In compatibility mode, hwui is not used;
The hardware must support hwui, as reflected by isAvailable();
fakeHwAccelerated being true denotes the "Starting Window xxx" layer;
sRendererDisabled being true denotes a persistent process (system-level apps can set the persistent attribute in their Manifest), and sSystemRendererDisabled && forceHwAccelerated denotes the lock-screen scenario. In other words, hwui is used when the process is not persistent, or when it is the system process (which has many threads that display UI, though usually fairly simple UI) in the lock-screen scenario.
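The branch above can be condensed into a small decision function. This is a hedged sketch: the flag names mirror WindowManager.LayoutParams, but the bit values and the Decision type are illustrative, not the platform's constants, and the ThreadedRenderer.isAvailable() check is omitted.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative bit values; the real constants live in WindowManager.LayoutParams.
constexpr uint32_t FLAG_HARDWARE_ACCELERATED               = 1u << 0;
constexpr uint32_t PRIVATE_FLAG_FAKE_HARDWARE_ACCELERATED  = 1u << 1;
constexpr uint32_t PRIVATE_FLAG_FORCE_HARDWARE_ACCELERATED = 1u << 2;

enum class Decision { None, RequestOnly, CreateRenderer };

// Models the branch in enableHardwareAcceleration():
// - fake: only mark the request (starting-window preview case)
// - otherwise create a ThreadedRenderer, unless the persistent-process
//   disable is in effect and this is not the lock-screen escape hatch.
Decision decide(uint32_t flags, uint32_t privateFlags,
                bool rendererDisabled, bool systemRendererDisabled) {
    if ((flags & FLAG_HARDWARE_ACCELERATED) == 0) return Decision::None;
    if (privateFlags & PRIVATE_FLAG_FAKE_HARDWARE_ACCELERATED)
        return Decision::RequestOnly;
    const bool force = privateFlags & PRIVATE_FLAG_FORCE_HARDWARE_ACCELERATED;
    if (!rendererDisabled || (systemRendererDisabled && force))
        return Decision::CreateRenderer;
    return Decision::None;
}
```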
ThreadedRenderer(Context context, boolean translucent, String name) {
...
long rootNodePtr = nCreateRootRenderNode();
mRootNode = RenderNode.adopt(rootNodePtr);
mRootNode.setClipToBounds(false);
mNativeProxy = nCreateProxy(translucent, rootNodePtr);
nSetName(mNativeProxy, name);
ProcessInitializer.sInstance.init(context, mNativeProxy);
loadSystemProperties();
}
Initialization of the Java-layer ThreadedRenderer mainly creates the native renderthread and a RenderProxy; the latter is used to post messages to the renderthread. The flow is as follows:
1.2 Native-layer analysis
From the diagram above we can see that the renderthread is already running at this point. Before continuing, note the WorkQueue added in Android P: a WorkQueue mechanism now sits between RenderProxy, RenderThread, and CanvasContext. It works as follows:
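The WorkQueue idea can be sketched as a minimal closure queue. This is an assumption-laden model, not the real hwui WorkQueue (which also supports delayed work and postAndWait): RenderProxy's post corresponds to post(), and the loop's waitForWork/processQueue pair corresponds to waitForWork() and process().

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

// Hypothetical minimal work queue: producers post closures, one consumer
// thread drains them. Names are stand-ins for the real hwui classes.
class WorkQueue {
public:
    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mTasks.push(std::move(task));
        }
        mCond.notify_one();
    }
    // Block until at least one task is queued (cf. RenderThread::waitForWork).
    void waitForWork() {
        std::unique_lock<std::mutex> lock(mMutex);
        mCond.wait(lock, [this] { return !mTasks.empty(); });
    }
    // Called on the consumer thread: run everything queued so far.
    void process() {
        std::unique_lock<std::mutex> lock(mMutex);
        while (!mTasks.empty()) {
            auto task = std::move(mTasks.front());
            mTasks.pop();
            lock.unlock();
            task();  // run without holding the lock
            lock.lock();
        }
    }
private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::queue<std::function<void()>> mTasks;
};
```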
After the RenderThread starts, it first sets its thread priority and then initializes a number of objects. Let's look at what initThreadLocals sets up:
bool RenderThread::threadLoop() {
    setpriority(PRIO_PROCESS, 0, PRIORITY_DISPLAY);
    if (gOnStartHook) {
        gOnStartHook();
    }
    initThreadLocals();

    while (true) {
        waitForWork();
        processQueue();

        if (mPendingRegistrationFrameCallbacks.size() && !mFrameCallbackTaskPending) {
            drainDisplayEventQueue();
            mFrameCallbacks.insert(mPendingRegistrationFrameCallbacks.begin(),
                                   mPendingRegistrationFrameCallbacks.end());
            mPendingRegistrationFrameCallbacks.clear();
            requestVsync();
        }

        if (!mFrameCallbackTaskPending && !mVsyncRequested && mFrameCallbacks.size()) {
            // TODO: Clean this up. This is working around an issue where a combination
            // of bad timing and slow drawing can result in dropping a stale vsync
            // on the floor (correct!) but fails to schedule to listen for the
            // next vsync (oops), so none of the callbacks are run.
            requestVsync();
        }
    }

    return false;
}
initThreadLocals performs some animation-related setup and initializes the EglManager, RenderState, VulkanManager, and CacheManager. The main steps are:
1) A DisplayEventReceiver is created (std::make_unique<DisplayEventReceiver>) to request and receive vsync; it should serve the same purpose as the Java-layer DisplayEventReceiver mentioned in the Choreographer discussion;
2) The file descriptor associated with that DisplayEventReceiver is registered into the Render Thread's message loop via addFd;
Benefit: when surfaceflinger dispatches a vsync, the fd wakes the renderthread, which then invokes displayEventReceiverCallback;
3) RenderThread::drainDisplayEventQueue then processes the vsync: it fetches the most recent vsync timestamp through the DisplayEventReceiverWrapper; a value > 0 means a valid vsync, so mVsyncRequested is set to false, indicating the previously requested vsync has arrived. It then checks whether the DispatchFrameCallbacks task has already been queued (before 9.0 there were many tasks, such as drawFrameTask; in 9.0 they are gone, and you can roughly think of the WorkQueue as having replaced the TaskQueue). If it has, mFrameCallbackTaskPending is true and RenderThread::dispatchFrameCallbacks is not run again.
And what is dispatchFrameCallbacks for? Answer: displaying animations.
Now let's study dispatchFrameCallbacks.
mPendingRegistrationFrameCallbacks is defined as a set of IFrameCallback pointers. postFrameCallback inserts into it, and pushBackFrameCallback inserts into it as well; the only difference is that the latter first erases the callback from mFrameCallbacks. You can think of mPendingRegistrationFrameCallbacks as the "Back Buffer" and mFrameCallbacks as the "Front Buffer". removeFrameCallback erases the callback from both sets. The swap between the two is analyzed later.
std::set<IFrameCallback*> mPendingRegistrationFrameCallbacks;
frameworks/base/libs/hwui/renderthread/RenderThread.cpp
void RenderThread::postFrameCallback(IFrameCallback* callback) {
    mPendingRegistrationFrameCallbacks.insert(callback);
}

bool RenderThread::removeFrameCallback(IFrameCallback* callback) {
    size_t erased;
    erased = mFrameCallbacks.erase(callback);
    erased |= mPendingRegistrationFrameCallbacks.erase(callback);
    return erased;
}

void RenderThread::pushBackFrameCallback(IFrameCallback* callback) {
    if (mFrameCallbacks.erase(callback)) {
        mPendingRegistrationFrameCallbacks.insert(callback);
    }
}
When the RenderThread wakes with tasks to process, it handles the callbacks: everything in mPendingRegistrationFrameCallbacks is copied into mFrameCallbacks, and mPendingRegistrationFrameCallbacks is then cleared.
if (mPendingRegistrationFrameCallbacks.size() && !mFrameCallbackTaskPending) {
    drainDisplayEventQueue();
    mFrameCallbacks.insert(mPendingRegistrationFrameCallbacks.begin(),
                           mPendingRegistrationFrameCallbacks.end());
    mPendingRegistrationFrameCallbacks.clear();
    requestVsync();
}
So what is mFrameCallbacks for?
Its contents are swapped into the temporary variable callbacks; if anything is there, each entry's doFrame is called. So what does mFrameCallbacks hold?
A search shows that only CanvasContext inherits IFrameCallback, so let's go back and see when post and pushBack happen.
void RenderThread::dispatchFrameCallbacks() {
    ATRACE_CALL();
    mFrameCallbackTaskPending = false;

    std::set<IFrameCallback*> callbacks;
    mFrameCallbacks.swap(callbacks);

    if (callbacks.size()) {
        // Assume one of them will probably animate again so preemptively
        // request the next vsync in case it occurs mid-frame
        requestVsync();
        for (std::set<IFrameCallback*>::iterator it = callbacks.begin(); it != callbacks.end();
             it++) {
            (*it)->doFrame();
        }
    }
}
In short, everything posted goes through CanvasContext:
postFrameCallback is called during prepareTree: when the upper (Java) layer registers an animated RenderNode with the Render Thread, an IFrameCallback is registered through RenderThread::postFrameCallback;
pushBackFrameCallback is called from notifyFramePending, which the upper layer triggers in scheduleTraversals.
To summarize:
1) displayEventReceiverCallback mainly drives animations, synchronizing each animation frame to the vsync signal for display;
2) what the renderthread renders here is the next frame, i.e. one not yet displayed;
3) after receiving a local vsync it runs doFrame, then requests the next vsync.
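The interplay of postFrameCallback, the threadLoop promotion, and dispatchFrameCallbacks can be stitched together into one runnable model. Names and types are simplified stand-ins for the real RenderThread members; the point is that pending callbacks are promoted to the front set and swapped out before running, so a callback that re-posts lands in the next frame.

```cpp
#include <cassert>
#include <set>

// Stand-in for CanvasContext (the only IFrameCallback implementer).
struct FrameCallback {
    int frames = 0;
    void doFrame() { frames++; }
};

class FrameCallbackModel {
public:
    bool vsyncRequested = false;
    // postFrameCallback: register into the pending ("back buffer") set.
    void post(FrameCallback* cb) { mPending.insert(cb); }
    // threadLoop: promote pending callbacks and ask for a vsync.
    void drainPending() {
        mFrameCallbacks.insert(mPending.begin(), mPending.end());
        mPending.clear();
        vsyncRequested = true;
    }
    // dispatchFrameCallbacks: swap the front set out, then run doFrame().
    void dispatch() {
        std::set<FrameCallback*> callbacks;
        mFrameCallbacks.swap(callbacks);
        for (FrameCallback* cb : callbacks) cb->doFrame();
    }
private:
    std::set<FrameCallback*> mPending;        // "back buffer"
    std::set<FrameCallback*> mFrameCallbacks; // "front buffer"
};
```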
Moving on: what else does RenderProxy do?
After the renderthread is created, the CanvasContext, i.e. the window's canvas, is created; how it gets associated with a window is analyzed later. Its main job at this point is deciding the pipeline type.
Have you wondered why CanvasContext is created only after new RenderThread, even though the earlier analysis already referred to CanvasContext? How does that work?
2. Binding a window to the RenderThread
Once the Render Thread knows which window it is rendering, it can bind that window to the OpenGL rendering context, so that subsequent rendering operations all target the bound window.
2.1 Java-layer analysis
The analysis above was based on ViewRootImpl's setView; now we reach the actual drawing phase, ViewRootImpl's performTraversals, which performs measure, layout, and draw. Before drawing, a surface must be obtained; once obtained, it is bound to the corresponding renderThread.
frameworks/base/core/java/android/view/ViewRootImpl.java
public final Surface mSurface = new Surface();
...
private void performTraversals() {
    ...
    if (!hadSurface) {
        if (mSurface.isValid()) {
            ...
            newSurface = true;
            mFullRedrawNeeded = true;
            mPreviousTransparentRegion.setEmpty();

            // Only initialize up-front if transparent regions are not
            // requested, otherwise defer to see if the entire window
            // will be transparent
            if (mAttachInfo.mThreadedRenderer != null) {
                try {
                    hwInitialized = mAttachInfo.mThreadedRenderer.initialize(
                            mSurface);
    ...
    performMeasure(childWidthMeasureSpec, childHeightMeasureSpec);
    ...
    performLayout(lp, mWidth, mHeight);
    ...
    if (!cancelDraw && !newSurface) {
        ...
        performDraw();
If this Surface is newly created, it (mSurface) is bound to the Render Thread via initialize; only after the binding completes do measure, layout, and draw run (note from the snippet above that when newSurface is true, performDraw is skipped for this traversal).
2.2 Native-layer analysis
Next we analyze the window-binding process from the C++ layer:
frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static void android_view_ThreadedRenderer_initialize(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jobject jsurface) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    sp<Surface> surface = android_view_Surface_getSurface(env, jsurface);
    proxy->initialize(surface);
}
frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
void RenderProxy::initialize(const sp<Surface>& surface) {
    mRenderThread.queue().post(
            [ this, surf = surface ]() mutable { mContext->setSurface(std::move(surf)); });
}
The upper-layer surface is passed straight through the workQueue into the CanvasContext via setSurface; on success this returns true, meaning a new surface is now owned.
frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::setSurface(sp<Surface>&& surface) {
    ATRACE_CALL();

    mNativeSurface = std::move(surface);

    ColorMode colorMode = mWideColorGamut ? ColorMode::WideColorGamut : ColorMode::Srgb;
    bool hasSurface = mRenderPipeline->setSurface(mNativeSurface.get(), mSwapBehavior, colorMode);

    mFrameNumber = -1;

    if (hasSurface) {
        mHaveNewSurface = true;
        mSwapHistory.clear();
CanvasContext::setSurface has a systrace tag, so the process can be observed in systrace, as follows:
Next, setSurface goes down into the concrete pipeline; the default pipeline is described here:
It first checks whether an EglSurface already exists; if so, it is destroyed first, and then a new EglSurface is created. mEglSurface represents a drawing surface: once it exists, it is known which window subsequent OpenGL commands apply to.
frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
bool OpenGLPipeline::setSurface(Surface* surface, SwapBehavior swapBehavior, ColorMode colorMode) {
    if (mEglSurface != EGL_NO_SURFACE) {
        mEglManager.destroySurface(mEglSurface);
        mEglSurface = EGL_NO_SURFACE;
    }

    if (surface) {
        const bool wideColorGamut = colorMode == ColorMode::WideColorGamut;
        mEglSurface = mEglManager.createSurface(surface, wideColorGamut);
    }

    if (mEglSurface != EGL_NO_SURFACE) {
        const bool preserveBuffer = (swapBehavior != SwapBehavior::kSwap_discardBuffer);
        mBufferPreserved = mEglManager.setPreserveBuffer(mEglSurface, preserveBuffer);
        return true;
    }

    return false;
}
EglManager::createSurface first calls EglManager::initialize (important!), which performs the EGL setup (eglGetDisplay, eglInitialize, eglChooseConfig, eglCreateContext, eglCreatePbufferSurface, eglMakeCurrent, eglSwapInterval). makeCurrent first checks whether the surface is unchanged, in which case eglMakeCurrent is skipped; at this stage the surface made current is mPBufferSurface. Only afterwards is the real window surface created with eglCreateWindowSurface, followed by eglSurfaceAttrib.
In EglManager::setPreserveBuffer, anything other than SwapBehavior::Preserved returns false directly; most platforms today should be on SwapBehavior::BufferAge.
So the question is: when is the surface created above actually set into the rendering context?
Looking at the code:
DrawFrameTask::syncFrameState -> CanvasContext::makeCurrent -> OpenGLPipeline::makeCurrent
bool haveNewSurface = mEglManager.makeCurrent(mEglSurface, &error); // mEglSurface is the surface bound earlier in setSurface
This is how the previously created surface gets bound.
Looking further into EglManager::makeCurrent:
1) OpenGLPipeline::onStop and EglManager::destroySurface set the context surface to EGL_NO_SURFACE;
2) EglManager::initialize binds the PBufferSurface (described above);
3) EglManager::beginFrame binds the surface passed in, which is worth tracing further:
mEglManager.beginFrame(mEglSurface) // mEglSurface is the surface bound earlier in setSurface
CanvasContext::draw -> OpenGLPipeline::getFrame -> EglManager::beginFrame -> makeCurrent(surface)
So the current surface is set into the OpenGL rendering context twice in total: once during syncFrameState and once during CanvasContext::draw. The makeCurrent implementation checks if (isCurrent(surface)) return false;, so the second makeCurrent may return false immediately.
To summarize: during initialization, eglCreateWindowSurface creates the native surface (EglSurface) backing the upper-layer surface and the PBufferSurface is made current; then syncFrameState makes the previously created EglSurface current; finally, draw makes it current once more, but that call returns immediately without executing a real eglMakeCurrent.
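The double makeCurrent can be modeled with the isCurrent caching made explicit. This is a sketch with stand-in types (an int in place of EGLSurface), only illustrating why the second call within a frame is effectively free.

```cpp
#include <cassert>

using Surface = int;                // stand-in for EGLSurface
constexpr Surface NO_SURFACE = -1;  // stand-in for EGL_NO_SURFACE

// Hypothetical model of EglManager::makeCurrent's caching: eglMakeCurrent is
// only issued when the target surface actually changes.
class EglManagerModel {
public:
    // Returns true when the context was actually (re)bound, i.e. the caller
    // now has a "new" surface and must redraw everything.
    bool makeCurrent(Surface surface) {
        if (mCurrent == surface) return false;  // isCurrent(surface): skip
        // real code: eglMakeCurrent(display, surface, surface, context)
        mCurrent = surface;
        mEglMakeCurrentCalls++;
        return true;
    }
    int eglMakeCurrentCalls() const { return mEglMakeCurrentCalls; }
private:
    Surface mCurrent = NO_SURFACE;
    int mEglMakeCurrentCalls = 0;
};
```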
At this point the RenderThread is running, the OpenGL and EGL environments are ready, and the upper-layer Surface has been created and successfully bound to mEglSurface in the HWUI-layer pipeline.
II. Asset Atlas Service
The following describes the 7.0 platform; starting from Android O the Asset Atlas Service no longer exists. I'm not sure what it's called on Android O; if anyone knows, please leave a comment!
When Android boots, it preloads some resources so that applications can access them quickly later, and so that they can be shared. hwui optimizes this further: the preloaded resources are composed into a single texture uploaded to the GPU, which can be shared across all applications.
Resource preloading happens in the Zygote process, and Zygote forks the application processes, which guarantees sharing. In hwui, however, if every app used the preloaded resources directly, every app would have to upload them to the GPU as textures, wasting GPU memory. Whether this can be optimized is the focus of this section.
Zygote passes the preloaded resources as textures to the system process, which runs an Asset Atlas Service; that service is what composes the preloaded resources into one texture and uploads it to the GPU. An app's renderthread can then simply request the texture from the Asset Atlas Service rather than uploading its own copy.
1. Zygote
Load resources
Start system_server
Create application processes on request from AMS
Below we focus on the resource-loading step (preloadClasses, preloadResources, nativePreloadAppProcessHALs, preloadOpenGL, preloadSharedLibraries, preloadTextResources), mainly analyzing preloadResources.
preloadResources in turn calls preloadDrawables (corresponding to R.array.preloaded_drawables) and preloadColorStateLists (corresponding to R.array.preloaded_color_state_lists); we follow preloadDrawables.
mResources.getDrawable loads all the drawables.
The Drawables preloaded by Zygote are composed into an atlas by the Asset Atlas Service running in the System process and finally uploaded to the GPU as a texture, so next we analyze the implementation of the Asset Atlas Service.
2. system_server
Zygote starts the System process, which loads the system services, among them the Asset Atlas Service.
Before starting services, the system process sets some properties; the Asset Atlas Service is started in startOtherServices. It is not a core system service and is not enabled in factory mode.
Note: AssetAtlasService still exists on 7.0 and 7.1 but was removed starting with 8.0.
AssetAtlasService computes the minimum width and height needed to compose all preloaded Drawable resources into a single image, creates a GraphicBuffer of that size, renders the Drawables into the buffer, and finally uploads it to the GPU.
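As a rough illustration of the sizing step, here is a naive shelf packer that reports the atlas height needed for a fixed atlas width. The real AssetAtlasService uses a much more sophisticated packing algorithm to minimize the texture; shelfPackHeight and its greedy strategy are purely hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Size { int w, h; };

// Pack drawables onto fixed-width shelves, tallest first, and return the
// total atlas height needed. Items wider than the atlas are not handled.
int shelfPackHeight(std::vector<Size> items, int atlasWidth) {
    std::sort(items.begin(), items.end(),
              [](const Size& a, const Size& b) { return a.h > b.h; });
    int x = 0, shelfTop = 0, shelfHeight = 0;
    for (const Size& s : items) {
        if (x + s.w > atlasWidth) {  // current shelf is full: start a new one
            shelfTop += shelfHeight;
            x = 0;
            shelfHeight = 0;
        }
        x += s.w;
        shelfHeight = std::max(shelfHeight, s.h);
    }
    return shelfTop + shelfHeight;
}
```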
III. Android 4.4 hwui
This chapter is inserted to describe how hwui worked in an older version; it is comparatively simple. The main flow is:
3.1 Hardware draw
Below is the hardware draw process, borrowing a diagram from another author, with a brief look at the key steps:
1) beginFrame mainly sets up the EGLDisplay (for display) and an EGLSurface (the surface OpenGL draws onto); eglBeginFrame itself mainly validates the parameters;
2) buildDisplayList performs the recording pass, building the native-layer DisplayList;
3) prepareFrame builds the dirty region;
4) onPostDraw runs the OpenGLRenderer finish step;
5) swapBuffer submits the buffer to SurfaceFlinger; note that it is invoked from the Java layer.
IV. Building the DisplayList
During the initialization above, performTraversals is invoked and initialize completes the setup; performTraversals then calls performMeasure, performLayout, and performDraw. This section covers the performDraw pass:
frameworks/base/core/java/android/view/ViewRootImpl.java
private boolean draw(boolean fullRedrawNeeded) {
...
if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
if (mAttachInfo.mThreadedRenderer != null && mAttachInfo.mThreadedRenderer.isEnabled()) {
...
mAttachInfo.mThreadedRenderer.draw(mView, mAttachInfo, this, callback);
} else {
...
if (!drawSoftware(surface, mAttachInfo, xOffset, yOffset,
scalingRequired, dirty, surfaceInsets)) {
return false;
}
...
}
frameworks/base/core/java/android/view/ThreadedRenderer.java
void draw(View view, AttachInfo attachInfo, HardwareDrawCallbacks callbacks) {
......
updateRootDisplayList(view, callbacks);
......
if (attachInfo.mPendingAnimatingRenderNodes != null) {
final int count = attachInfo.mPendingAnimatingRenderNodes.size();
for (int i = 0; i < count; i++) {
registerAnimatingRenderNode(
attachInfo.mPendingAnimatingRenderNodes.get(i));
}
attachInfo.mPendingAnimatingRenderNodes.clear();
// We don't need this anymore as subsequent calls to
// ViewRootImpl#attachRenderNodeAnimator will go directly to us.
attachInfo.mPendingAnimatingRenderNodes = null;
}
...
int syncResult = nSyncAndDrawFrame(mNativeProxy, frameInfo, frameInfo.length);
if ((syncResult & SYNC_LOST_SURFACE_REWARD_IF_FOUND) != 0) {
...
attachInfo.mViewRootImpl.invalidate();
}
}
...
}
Let's focus on updateRootDisplayList:
This function carries the "Record View#draw()" trace tag. It first runs updateViewTreeDisplayList (analyzed shortly), then checks whether the rootNode needs an update or is invalid.
So when is isValid true? It is set to true when mRootNode.end finishes; a detailed analysis follows below.
frameworks/base/core/java/android/view/ThreadedRenderer.java
private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Record View#draw()");
updateViewTreeDisplayList(view);
if (mRootNodeNeedsUpdate || !mRootNode.isValid()) {
DisplayListCanvas canvas = mRootNode.start(mSurfaceWidth, mSurfaceHeight);
try {
final int saveCount = canvas.save();
canvas.translate(mInsetLeft, mInsetTop);
callbacks.onPreDraw(canvas);
canvas.insertReorderBarrier();
canvas.drawRenderNode(view.updateDisplayListIfDirty());
canvas.insertInorderBarrier();
callbacks.onPostDraw(canvas);
canvas.restoreToCount(saveCount);
mRootNodeNeedsUpdate = false;
} finally {
mRootNode.end(canvas);
}
}
Trace.traceEnd(Trace.TRACE_TAG_VIEW);
}
The value of mRootNode.isValid comes from the native layer: the first time the display list is set, the RenderNode's displayList is non-null, so isValid is true; when the renderNode is destroyed, the value becomes false.
frameworks/base/core/java/android/view/RenderNode.java
public void end(DisplayListCanvas canvas) {
long displayList = canvas.finishRecording();
nSetDisplayList(mNativeRenderNode, displayList);
canvas.recycle();
}
frameworks/base/core/jni/android_view_RenderNode.cpp
static void android_view_RenderNode_setDisplayList(JNIEnv* env,
jobject clazz, jlong renderNodePtr, jlong displayListPtr) {
RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
DisplayList* newData = reinterpret_cast<DisplayList*>(displayListPtr);
renderNode->setStagingDisplayList(newData);
}
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::destroyHardwareResources(TreeInfo* info) {
...
setStagingDisplayList(nullptr);
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::setStagingDisplayList(DisplayList* displayList) {
mValid = (displayList != nullptr);
mNeedsDisplayListSync = true;
delete mStagingDisplayList;
mStagingDisplayList = displayList;
}
Now let's go through the important steps in building the DisplayList:
1. updateViewTreeDisplayList
view.updateDisplayListIfDirty();
2. start
start creates the RecordingCanvas and DisplayList in the native layer; since 9.0 defaults to skiagl, the native classes are SkiaRecordingCanvas and SkiaDisplayList.
frameworks/base/core/java/android/view/RenderNode.java
public DisplayListCanvas start(int width, int height) {
return DisplayListCanvas.obtain(this, width, height);
}
Next, the canvas performs a save: the RecordingCanvas path creates a corresponding Snapshot, while skiagl goes through SkiaCanvas::save and ultimately into SkCanvas.
The translate call that follows works much like save.
3. drawRenderNode
insertBarrier is called before and after drawRenderNode to start a new chunk, and a RenderNodeOp is appended to the displayList; in skiagl this goes through SkiaRecordingCanvas::drawRenderNode.
4. end
end obtains the address of the native DisplayList object and finally sets that displayList into mStagingDisplayList.
frameworks/base/core/java/android/view/RenderNode.java
public void end(DisplayListCanvas canvas) {
long displayList = canvas.finishRecording();
nSetDisplayList(mNativeRenderNode, displayList);
canvas.recycle();
}
5. addOp
addOp deserves separate discussion, because both drawRenderNode above and the drawBitmap calls inside updateViewTreeDisplayList write a corresponding Op into the DisplayList.
When the upper layer triggers drawColor, drawRect, and so on, the native addOp is called (see the earlier post: HWUI绘制系列——从java到C++) with the corresponding op. Let's analyze addOp in detail, since it lays the groundwork for rendering later.
For each op, addOp first checks whether its clip rect is empty, then takes the index of the end of the current DisplayList's ops and appends the op to ops.
Next it examines mDeferredBarrierType. Initially this is DeferredBarrierType::None; resetRecording resets it to DeferredBarrierType::InOrder, and insertReorderBarrier reassigns it. What is the significance?
1) When the renderthread is first initialized, a RecordingCanvas is created, which calls resetRecording, meaning a new Chunk is about to start; obtaining the canvas again also forces a resetRecording;
2) After updateViewTreeDisplayList, insertReorderBarrier(true) and insertInorderBarrier(false) are called around drawRenderNode to reset mDeferredBarrierType, which means drawRenderNode starts a new chunk.
3) Containment hierarchy: ops > chunk > children > op.
frameworks/base/libs/hwui/RecordingCanvas.cpp
int RecordingCanvas::addOp(RecordedOp* op) {
// skip op with empty clip
if (op->localClip && op->localClip->rect.isEmpty()) {
// NOTE: this rejection happens after op construction/content ref-ing, so content ref'd
// and held by renderthread isn't affected by clip rejection.
// Could rewind alloc here if desired, but callers would have to not touch op afterwards.
return -1;
}
int insertIndex = mDisplayList->ops.size();
mDisplayList->ops.push_back(op);
if (mDeferredBarrierType != DeferredBarrierType::None) {
// op is first in new chunk
mDisplayList->chunks.emplace_back();
DisplayList::Chunk& newChunk = mDisplayList->chunks.back();
newChunk.beginOpIndex = insertIndex;
newChunk.endOpIndex = insertIndex + 1;
newChunk.reorderChildren = (mDeferredBarrierType == DeferredBarrierType::OutOfOrder);
newChunk.reorderClip = mDeferredBarrierClip;
int nextChildIndex = mDisplayList->children.size();
newChunk.beginChildIndex = newChunk.endChildIndex = nextChildIndex;
mDeferredBarrierType = DeferredBarrierType::None;
} else {
// standard case - append to existing chunk
mDisplayList->chunks.back().endOpIndex = insertIndex + 1;
}
return insertIndex;
}
void RecordingCanvas::insertReorderBarrier(bool enableReorder) {
if (enableReorder) {
mDeferredBarrierType = DeferredBarrierType::OutOfOrder;
mDeferredBarrierClip = getRecordedClip();
} else {
mDeferredBarrierType = DeferredBarrierType::InOrder;
mDeferredBarrierClip = nullptr;
}
}
frameworks/base/core/java/android/view/DisplayListCanvas.java
static DisplayListCanvas obtain(@NonNull RenderNode node, int width, int height) {
if (node == null) throw new IllegalArgumentException("node cannot be null");
DisplayListCanvas canvas = sPool.acquire();
if (canvas == null) {
canvas = new DisplayListCanvas(node, width, height);
} else {
nResetDisplayListCanvas(canvas.mNativeCanvasWrapper, node.mNativeRenderNode,
width, height);
}
V. The Drawing Process
1. syncFrameState
The app's main thread posts a message to the RT thread's workQueue and waits; the UI thread is woken once the message has been processed. When RT executes the message it invokes run(), but the UI thread is not woken immediately.
There are only two places in DrawFrameTask that take the lock. Below we analyze when unblockUiThread is called; once it is, the UI thread wakes up and continues.
Spoiler: the UI thread is woken once syncFrameState completes, or else after this draw finishes (but that case is undesirable and should not normally occur).
frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
void DrawFrameTask::postAndWait() {
AutoMutex _lock(mLock);
mRenderThread->queue().post([this]() { run(); });
mSignal.wait(mLock);
}
void DrawFrameTask::unblockUiThread() {
AutoMutex _lock(mLock);
mSignal.signal();
}
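The blocking hand-off above can be modeled with a condition variable, mirroring postAndWait/unblockUiThread. DrawFrameModel is a simplified stand-in: it spawns a throwaway thread per frame instead of reusing a persistent render thread, and the predicate flag guards against a lost wakeup.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical model: the UI thread posts the frame and sleeps on mSignal;
// the "render thread" unblocks it as soon as the sync step finishes, so
// recording of the next frame can start while this frame is still drawing.
class DrawFrameModel {
public:
    template <typename RenderFn>
    void postAndWait(RenderFn renderThreadWork) {
        std::unique_lock<std::mutex> lock(mLock);
        mDone = false;
        std::thread rt([&] {
            renderThreadWork();   // cf. syncFrameState
            unblockUiThread();
            // the real code continues with draw() here, off the UI thread
        });
        mSignal.wait(lock, [&] { return mDone; });  // UI thread blocks here
        rt.join();
    }
private:
    void unblockUiThread() {
        std::lock_guard<std::mutex> lock(mLock);
        mDone = true;
        mSignal.notify_one();
    }
    std::mutex mLock;
    std::condition_variable mSignal;
    bool mDone = false;
};
```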
Now the syncFrameState process:
1) First, the current vsync is synced into TimeLord's mFrameTimeNanos, i.e. the last vsync time is updated;
2) Then makeCurrent: when the VRI issues setStop, makeCurrent stops, and so does rendering; otherwise it goes down into EglManager::makeCurrent. That call checks whether the current surface has already been made current, in which case eglMakeCurrent is skipped; and if there is no surface, the pbSurface is made current instead. So where does makeCurrent happen?
a. EglManager::initialize does makeCurrent(mPBufferSurface);
b. EglManager::beginFrame does makeCurrent(surface), and beginFrame is called from CanvasContext::draw;
c. EglManager::destroySurface does makeCurrent(EGL_NO_SURFACE).
So in the normal case, OpenGLPipeline::setSurface records the surface to be rendered when RT is initialized, and syncFrameState then sets that surface into the OpenGL context.
3) unpinImages is mainly about texture-cache management: each object is cached, and the cache is told to unpin via caches.textureCache.resetMarkInUse(this); the previous step presumably involves the same process;
frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
std::vector<sp<DeferredLayerUpdater> > mLayers;
Rect mContentDrawBounds;
bool DrawFrameTask::syncFrameState(TreeInfo& info) {
ATRACE_CALL();
int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
mRenderThread->timeLord().vsyncReceived(vsync);
bool canDraw = mContext->makeCurrent();
mContext->unpinImages();
for (size_t i = 0; i < mLayers.size(); i++) {
mLayers[i]->apply();
}
mLayers.clear();
mContext->setContentDrawBounds(mContentDrawBounds);
mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);
...
if (info.out.hasAnimations) {
if (info.out.requiresUiRedraw) {
mSyncResult |= SyncResult::UIRedrawRequired;
}
}
if (!info.out.canDrawThisFrame) {
mSyncResult |= SyncResult::FrameDropped;
}
// If prepareTextures is false, we ran out of texture cache space
    return info.prepareTextures; // set to true when the TreeInfo is constructed
}
4) Layer handling: in TextureLayer, the layer is passed through ThreadedRenderer down to the native DrawFrameTask and saved in mLayers.
Wondering whether DeferredLayerUpdater stores the layer's name? Unfortunately no; it only has getWidth() and getHeight().
frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
void DrawFrameTask::pushLayerUpdate(DeferredLayerUpdater* layer) {
LOG_ALWAYS_FATAL_IF(!mContext,
"Lifecycle violation, there's no context to pushLayerUpdate with!");
for (size_t i = 0; i < mLayers.size(); i++) {
if (mLayers[i].get() == layer) {
return;
}
}
mLayers.push_back(layer);
}
void DrawFrameTask::removeLayerUpdate(DeferredLayerUpdater* layer) {
for (size_t i = 0; i < mLayers.size(); i++) {
if (mLayers[i].get() == layer) {
mLayers.erase(mLayers.begin() + i);
return;
}
}
}
frameworks/base/core/java/android/view/ThreadedRenderer.java
void pushLayerUpdate(TextureLayer layer) {
nPushLayerUpdate(mNativeProxy, layer.getDeferredLayerUpdater());
}
5) Next, the layer's apply process:
a. mCreateLayerFn first creates a layer; it is a function pointer supplied by the pipeline, so OpenGLPipeline's createLayer runs, generating a texture from the given parameters (glActiveTexture, glGenTextures).
b. setRenderTarget calls glBindTexture(target, texture) along with glTexParameteri.
With that, the layer's texture binding is done.
frameworks/base/libs/hwui/DeferredLayerUpdater.cpp
Layer* mLayer;
CreateLayerFn mCreateLayerFn;
void DeferredLayerUpdater::apply() {
if (!mLayer) {
mLayer = mCreateLayerFn(mRenderState, mWidth, mHeight, mColorFilter, mAlpha, mMode, mBlend);
}
mLayer->setColorFilter(mColorFilter);
mLayer->setAlpha(mAlpha, mMode);
if (mSurfaceTexture.get()) {
if (mLayer->getApi() == Layer::Api::Vulkan) {
if (mUpdateTexImage) {
mUpdateTexImage = false;
doUpdateVkTexImage();
}
} else {
LOG_ALWAYS_FATAL_IF(mLayer->getApi() != Layer::Api::OpenGL,
"apply surfaceTexture with non GL backend %x, GL %x, VK %x",
mLayer->getApi(), Layer::Api::OpenGL, Layer::Api::Vulkan);
if (!mGLContextAttached) {
mGLContextAttached = true;
mUpdateTexImage = true;
mSurfaceTexture->attachToContext(static_cast<GlLayer*>(mLayer)->getTextureId());
}
if (mUpdateTexImage) {
mUpdateTexImage = false;
doUpdateTexImage();
}
GLenum renderTarget = mSurfaceTexture->getCurrentTextureTarget();
static_cast<GlLayer*>(mLayer)->setRenderTarget(renderTarget);
}
if (mTransform) {
mLayer->getTransform().load(*mTransform);
setTransform(nullptr);
}
}
}
frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
DeferredLayerUpdater* OpenGLPipeline::createTextureLayer() {
mEglManager.initialize();
return new DeferredLayerUpdater(mRenderThread.renderState(), createLayer, Layer::Api::OpenGL);
}
static Layer* createLayer(RenderState& renderState, uint32_t layerWidth, uint32_t layerHeight,
sk_sp<SkColorFilter> colorFilter, int alpha, SkBlendMode mode,
bool blend) {
GlLayer* layer =
new GlLayer(renderState, layerWidth, layerHeight, colorFilter, alpha, mode, blend);
Caches::getInstance().textureState().activateTexture(0);
layer->generateTexture();
return layer;
}
frameworks/base/libs/hwui/renderstate/TextureState.cpp
void TextureState::activateTexture(GLuint textureUnit) {
LOG_ALWAYS_FATAL_IF(textureUnit >= kTextureUnitsCount,
"Tried to use texture unit index %d, only %d exist", textureUnit,
kTextureUnitsCount);
if (mTextureUnit != textureUnit) {
glActiveTexture(kTextureUnits[textureUnit]);
mTextureUnit = textureUnit;
}
}
frameworks/base/libs/hwui/GlLayer.cpp
void GlLayer::generateTexture() {
if (!texture.mId) {
glGenTextures(1, &texture.mId);
}
}
6) Next, setContentDrawBounds sets the size of the drawing area. Initially mContentDrawBounds is (0, 0, 0, 0); the bounds are set when the VRI calls updateContentDrawBounds.
frameworks/base/libs/hwui/renderthread/CanvasContext.h
void setContentDrawBounds(const Rect& bounds) { mContentDrawBounds = bounds; }
frameworks/base/libs/hwui/renderthread/DrawFrameTask.h
void setContentDrawBounds(int left, int top, int right, int bottom) {
mContentDrawBounds.set(left, top, right, bottom);
}
frameworks/base/core/java/android/view/ThreadedRenderer.java
public void setContentDrawBounds(int left, int top, int right, int bottom) {
nSetContentDrawBounds(mNativeProxy, left, top, right, bottom);
}
7) The last step is prepareTree, which mainly runs each renderNode's prepareTree:
frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
RenderNode* target) {
mRenderThread.removeFrameCallback(this);
for (const sp<RenderNode>& node : mRenderNodes) {
// Only the primary target node will be drawn full - all other nodes would get drawn in
// real time mode. In case of a window, the primary node is the window content and the other
// node(s) are non client / filler nodes.
info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
node->prepareTree(info);
GL_CHECKPOINT(MODERATE);
}
...
freePrefetchedLayers();
...
} else {
info.out.canDrawThisFrame = true;
}
...
}
a. For prepareTree, first see how mRenderNodes is built up: the rootRenderNode is added when the CanvasContext is initialized; afterwards nodes are added via addRenderNode and removed via removeRenderNode.
// adding the rootRenderNode
frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
CanvasContext::CanvasContext(...RenderNode* rootRenderNode,...){
...
mRenderNodes.emplace_back(rootRenderNode);
...
}
frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createRootRenderNode(JNIEnv* env, jobject clazz) {
RootRenderNode* node = new RootRenderNode(env);
node->incStrong(0);
node->setName("RootRenderNode");
return reinterpret_cast<jlong>(node);
}
void CanvasContext::addRenderNode(RenderNode* node, bool placeFront) {
int pos = placeFront ? 0 : static_cast<int>(mRenderNodes.size());
node->makeRoot();
mRenderNodes.emplace(mRenderNodes.begin() + pos, node);
}
void CanvasContext::removeRenderNode(RenderNode* node) {
node->clearRoot();
mRenderNodes.erase(std::remove(mRenderNodes.begin(), mRenderNodes.end(), node),
mRenderNodes.end());
}
pushStagingDisplayListChanges calls syncDisplayList, which takes over mStagingDisplayList (mDisplayList = mStagingDisplayList;); that display list was assigned earlier in setDisplayList.
This is how the displayList is obtained.
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::prepareTreeImpl(TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer) {
...
if (info.mode == TreeInfo::MODE_FULL) {
pushStagingPropertiesChanges(info);
}
...
if (info.mode == TreeInfo::MODE_FULL) {
pushStagingDisplayListChanges(observer, info);
}
...
}
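The staging hand-off just described (setStagingDisplayList on the UI thread, syncDisplayList during a MODE_FULL sync) can be modeled as follows; the types are illustrative stand-ins, not the real RenderNode.

```cpp
#include <cassert>

struct DisplayList { int opCount = 0; };  // stand-in

// Hypothetical model of the staging/active DisplayList hand-off.
class RenderNodeModel {
public:
    ~RenderNodeModel() { delete mStaging; delete mDisplayList; }
    // UI thread: end of recording (RenderNode.end -> nSetDisplayList)
    void setStagingDisplayList(DisplayList* dl) {
        mValid = (dl != nullptr);
        mNeedsSync = true;
        delete mStaging;
        mStaging = dl;
    }
    // Render thread: pushStagingDisplayListChanges -> syncDisplayList
    void syncDisplayList() {
        if (!mNeedsSync) return;
        delete mDisplayList;
        mDisplayList = mStaging;   // take over the staged list
        mStaging = nullptr;
        mNeedsSync = false;
    }
    bool isValid() const { return mValid; }
    const DisplayList* displayList() const { return mDisplayList; }
private:
    bool mValid = false;
    bool mNeedsSync = false;
    DisplayList* mStaging = nullptr;
    DisplayList* mDisplayList = nullptr;
};
```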
b. The RenderNodes saved in mPrefetchedLayers are cleared. When are they inserted? Answer: in CanvasContext::buildLayer(RenderNode* node), triggered from the Java layer.
frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
std::set<RenderNode*> mPrefetchedLayers;
void CanvasContext::freePrefetchedLayers() {
if (mPrefetchedLayers.size()) {
for (auto& node : mPrefetchedLayers) {
ALOGW("Incorrectly called buildLayer on View: %s, destroying layer...",
node->getName());
node->destroyLayers();
node->decStrong(nullptr);
}
mPrefetchedLayers.clear();
}
}
2. Now look at deferLayers, which runs before draw:
frameworks/base/libs/hwui/FrameBuilder.cpp
void FrameBuilder::deferLayers(const LayerUpdateQueue& layers) {
    // Render all layers to be updated, in order. Defer in reverse order, so that they'll be
    // updated in the order they're passed in (mLayerBuilders are issued to Renderer in reverse)
    for (int i = layers.entries().size() - 1; i >= 0; i--) {
        RenderNode* layerNode = layers.entries()[i].renderNode.get();
        // only schedule repaint if node still on layer - possible it may have been
        // removed during a dropped frame, but layers may still remain scheduled so
        // as not to lose info on what portion is damaged
        OffscreenBuffer* layer = layerNode->getLayer();
        if (CC_LIKELY(layer)) {
            ATRACE_FORMAT("Optimize HW Layer DisplayList %s %ux%u", layerNode->getName(),
                    layerNode->getWidth(), layerNode->getHeight());
            Rect layerDamage = layers.entries()[i].damage;
            // TODO: ensure layer damage can't be larger than layer
            layerDamage.doIntersect(0, 0, layer->viewportWidth, layer->viewportHeight);
            layerNode->computeOrdering();
            // map current light center into RenderNode's coordinate space
            Vector3 lightCenter = mCanvasState.currentSnapshot()->getRelativeLightCenter();
            layer->inverseTransformInWindow.mapPoint3d(lightCenter);
            saveForLayer(layerNode->getWidth(), layerNode->getHeight(), 0, 0, layerDamage,
                    lightCenter, nullptr, layerNode);
            if (layerNode->getDisplayList()) {
                deferNodeOps(*layerNode);
            }
            restoreForLayer();
        }
    }
}
1) Curious where the const LayerUpdateQueue& layers parameter comes from? Let's find out.
First look at the LayerUpdateQueue class: its member mEntries stores the information for every layer (the RenderNode and its damage).
As for how the parameter gets filled in: when RenderNode::pushLayerUpdate runs (starting from prepareTree), the RenderNode object and the dirty region to be updated are passed in and appended to mEntries in the LayerUpdateQueue.
// Definition of the LayerUpdateQueue class:
class LayerUpdateQueue {
public:
    struct Entry {
        Entry(RenderNode* renderNode, const Rect& damage)
                : renderNode(renderNode), damage(damage) {}
        sp<RenderNode> renderNode;
        Rect damage;
    };
    LayerUpdateQueue() {}
    void enqueueLayerWithDamage(RenderNode* renderNode, Rect dirty);
    void clear();
    const std::vector<Entry>& entries() const { return mEntries; }
private:
    std::vector<Entry> mEntries;
};
// Where the parameter comes from:
void RenderNode::pushLayerUpdate(TreeInfo& info) {
    ...
    // There are many RenderNodes, but only one info.layerUpdateQueue
    info.layerUpdateQueue->enqueueLayerWithDamage(this, dirty);
    ...
}
frameworks/base/libs/hwui/LayerUpdateQueue.cpp
void LayerUpdateQueue::enqueueLayerWithDamage(RenderNode* renderNode, Rect damage) {
    ...
    if (!damage.isEmpty()) {
        for (Entry& entry : mEntries) {
            if (CC_UNLIKELY(entry.renderNode == renderNode)) {
                entry.damage.unionWith(damage);
                return;
            }
        }
        mEntries.emplace_back(renderNode, damage);
    }
}
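The merging behavior above — enqueueing the same node a second time unions the damage rather than adding a duplicate entry, and empty damage is ignored — can be demonstrated with a cut-down queue (a simple Rect and int ids stand in for hwui's types):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal stand-in for hwui's Rect (illustration only).
struct Rect {
    int left, top, right, bottom;
    bool isEmpty() const { return left >= right || top >= bottom; }
    void unionWith(const Rect& o) {
        left = std::min(left, o.left);
        top = std::min(top, o.top);
        right = std::max(right, o.right);
        bottom = std::max(bottom, o.bottom);
    }
};

struct Entry {
    int node;  // stands in for RenderNode*
    Rect damage;
};

class LayerUpdateQueue {
public:
    // Mirrors enqueueLayerWithDamage: merge into an existing entry if present.
    void enqueue(int node, Rect damage) {
        if (damage.isEmpty()) return;
        for (Entry& e : mEntries) {
            if (e.node == node) {
                e.damage.unionWith(damage);
                return;
            }
        }
        mEntries.push_back({node, damage});
    }
    const std::vector<Entry>& entries() const { return mEntries; }
private:
    std::vector<Entry> mEntries;
};
```

The union keeps the queue bounded by the number of layered nodes rather than the number of invalidations per frame.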
2) Next, where does the RenderNode's OffscreenBuffer object come from? Also in RenderNode::pushLayerUpdate (starting from prepareTree), and before step 1): the OffscreenBuffer is created in OpenGLPipeline and attached to the RenderNode via setLayer, so deferLayers can later fetch the member mLayer through getLayer. Here are some relevant members of RenderNode:
class RenderNode : public VirtualLightRefBase {
    String8 mName;
    DisplayList* mDisplayList;
    DisplayList* mStagingDisplayList;
    OffscreenBuffer* mLayer = nullptr;
    RenderProperties mProperties;
    RenderProperties mStagingProperties;
};
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::pushLayerUpdate(TreeInfo& info) {
    // A software layer is not handled here: return directly
    LayerType layerType = properties().effectiveLayerType();
    ...
    if (info.canvasContext.createOrUpdateLayer(this, *info.damageAccumulator, info.errorHandler)) {
        damageSelf(info);
    }
    if (!hasLayer()) {
        return;
    }
    SkRect dirty;
    info.damageAccumulator->peekAtDirty(&dirty);
    // Fills in LayerUpdateQueue's member mEntries
    info.layerUpdateQueue->enqueueLayerWithDamage(this, dirty);
    // There might be prefetched layers that need to be accounted for.
    // That might be us, so tell CanvasContext that this layer is in the
    // tree and should not be destroyed.
    info.canvasContext.markLayerInUse(this);
}
frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
bool OpenGLPipeline::createOrUpdateLayer(RenderNode* node,
        const DamageAccumulator& damageAccumulator,
        bool wideColorGamut,
        ErrorHandler* errorHandler) {
    RenderState& renderState = mRenderThread.renderState();
    OffscreenBufferPool& layerPool = renderState.layerPool();
    bool transformUpdateNeeded = false;
    if (node->getLayer() == nullptr) {
        node->setLayer(
                layerPool.get(renderState, node->getWidth(), node->getHeight(), wideColorGamut));
        transformUpdateNeeded = true;
    }
    ...
}
3) Continuing on: layerDamage is intersected with the layer bounds, guaranteeing the damage region stays within the layer. Then each RenderNode's ordering is computed, and the current light center is mapped into the RenderNode's coordinate space.
Next comes saveForLayer, which is worth a closer look.
mCanvasState.save constructs a Snapshot, which writableSnapshot then retrieves; each RenderNode has its own snapshot, and the parameters are written into it.
The current count of mLayerBuilders is pushed onto mLayerStack (which only ever holds two kinds of values: 0 and the current size). So how were the mLayerBuilders built in the first place?
You've come to the right place: look at FrameBuilder's constructor, which creates a LayerBuilder for fbo0.
Continuing on: saveForLayer builds a new LayerBuilder and appends it to mLayerBuilders, so mLayerBuilders ends up holding one LayerBuilder for fbo0 plus N LayerBuilders, one per RenderNode, while mLayerStack holds their indices. Neat, right?
frameworks/base/libs/hwui/FrameBuilder.cpp
void FrameBuilder::saveForLayer(uint32_t layerWidth, uint32_t layerHeight, float contentTranslateX,
        float contentTranslateY, const Rect& repaintRect,
        const Vector3& lightCenter, const BeginLayerOp* beginLayerOp,
        RenderNode* renderNode) {
    mCanvasState.save(SaveFlags::MatrixClip);
    mCanvasState.writableSnapshot()->initializeViewport(layerWidth, layerHeight);
    mCanvasState.writableSnapshot()->roundRectClipState = nullptr;
    mCanvasState.writableSnapshot()->setRelativeLightCenter(lightCenter);
    mCanvasState.writableSnapshot()->transform->loadTranslate(contentTranslateX, contentTranslateY,
            0);
    mCanvasState.writableSnapshot()->setClip(repaintRect.left, repaintRect.top, repaintRect.right,
            repaintRect.bottom);
    // create a new layer repaint, and push its index on the stack
    mLayerStack.push_back(mLayerBuilders.size());
    auto newFbo = mAllocator.create<LayerBuilder>(layerWidth, layerHeight, repaintRect,
            beginLayerOp, renderNode);
    mLayerBuilders.push_back(newFbo);
}
// 1. The save is essentially the construction of a Snapshot:
frameworks/base/libs/hwui/CanvasState.cpp
int CanvasState::save(int flags) {
    return saveSnapshot(flags);
}

int CanvasState::saveSnapshot(int flags) {
    mSnapshot = allocSnapshot(mSnapshot, flags);
    return mSaveCount++;
}

Snapshot* CanvasState::allocSnapshot(Snapshot* previous, int savecount) {
    void* memory;
    if (mSnapshotPool) {
        memory = mSnapshotPool;
        mSnapshotPool = mSnapshotPool->previous;
        mSnapshotPoolCount--;
    } else {
        memory = malloc(sizeof(Snapshot));
    }
    return new (memory) Snapshot(previous, savecount);
}
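allocSnapshot above is a classic freelist: freed snapshots are chained through their previous pointer and recycled via placement new before malloc is ever touched. The idea in isolation, assuming a hypothetical Snap type (not hwui's Snapshot):

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Hypothetical object with an intrusive 'previous' link, like hwui's Snapshot.
struct Snap {
    Snap* previous = nullptr;
    int saveCount = 0;
};

class SnapPool {
public:
    // Reuse pooled memory when available; fall back to malloc otherwise.
    Snap* alloc(Snap* previous, int saveCount) {
        void* memory;
        if (mPool) {
            memory = mPool;
            mPool = mPool->previous;  // pop the freelist
            --mPoolCount;
        } else {
            memory = malloc(sizeof(Snap));
            ++mMallocs;
        }
        Snap* s = new (memory) Snap();  // placement new into (possibly recycled) memory
        s->previous = previous;
        s->saveCount = saveCount;
        return s;
    }

    // Push the object back onto the freelist instead of freeing it.
    void free(Snap* s) {
        s->previous = mPool;
        mPool = s;
        ++mPoolCount;
    }

    int mallocs() const { return mMallocs; }
    int pooled() const { return mPoolCount; }

private:
    Snap* mPool = nullptr;  // head of the intrusive freelist
    int mPoolCount = 0;
    int mMallocs = 0;
};
```

Reusing the previous field as the freelist link is the trick that makes the pool cost zero extra memory per object.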
frameworks/base/libs/hwui/CanvasState.h
inline Snapshot* writableSnapshot() { return mSnapshot; }
frameworks/base/libs/hwui/FrameBuilder.h
LinearStdAllocator<void*> mStdAllocator;
LinearAllocator mAllocator;
LsaVector<size_t> mLayerStack;
LsaVector<LayerBuilder*> mLayerBuilders;
FrameBuilder::FrameBuilder(const SkRect& clip, uint32_t viewportWidth, uint32_t viewportHeight,
        const LightGeometry& lightGeometry, Caches& caches)
        : mStdAllocator(mAllocator)
        , mLayerBuilders(mStdAllocator)
        , mLayerStack(mStdAllocator)
        , mCanvasState(*this)
        , mCaches(caches)
        , mLightRadius(lightGeometry.radius)
        , mDrawFbo0(true) {
    // Prepare to defer Fbo0
    auto fbo0 = mAllocator.create<LayerBuilder>(viewportWidth, viewportHeight, Rect(clip));
    mLayerBuilders.push_back(fbo0);
    mLayerStack.push_back(0);
    mCanvasState.initializeSaveStack(viewportWidth, viewportHeight, clip.fLeft, clip.fTop,
            clip.fRight, clip.fBottom, lightGeometry.center);
}
// If no viewportWidth, viewportHeight, or clip is supplied to FrameBuilder, 1 is used instead:
auto fbo0 = mAllocator.create<LayerBuilder>(1, 1, Rect(1, 1));
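The pairing of mLayerBuilders and mLayerStack is just an index stack over a growing list: the constructor seeds fbo0 at index 0, each saveForLayer pushes the next index and appends a builder, and restoreForLayer pops. A toy version of that mechanism (strings stand in for LayerBuilder objects):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy FrameBuilder: strings stand in for LayerBuilder objects.
class Frame {
public:
    Frame() {
        // The constructor seeds the fbo0 builder, as FrameBuilder does.
        mLayerBuilders.push_back("fbo0");
        mLayerStack.push_back(0);
    }
    // saveForLayer: push the new builder's index, then append the builder.
    void saveForLayer(const std::string& name) {
        mLayerStack.push_back(mLayerBuilders.size());
        mLayerBuilders.push_back(name);
    }
    // restoreForLayer: pop back to the enclosing layer.
    void restoreForLayer() { mLayerStack.pop_back(); }

    // The builder currently receiving deferred ops.
    const std::string& current() const { return mLayerBuilders[mLayerStack.back()]; }
    size_t builderCount() const { return mLayerBuilders.size(); }

private:
    std::vector<std::string> mLayerBuilders;  // all builders, fbo0 first
    std::vector<size_t> mLayerStack;          // indices into mLayerBuilders
};
```

Note that restoreForLayer only pops the stack; the builders themselves stay in mLayerBuilders so they can all be replayed to the renderer afterwards.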
4) Finally, look at getDisplayList. We know that at end() time the previously created DisplayList object is stored into mStagingDisplayList (see the earlier post in this series, HWUI绘制系列——从java到C++); the display list obtained here is that object.
frameworks/base/libs/hwui/RenderNode.h
DisplayList* mDisplayList;
DisplayList* mStagingDisplayList;
const DisplayList* getDisplayList() const { return mDisplayList; }
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::prepareTreeImpl(TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer) {
    ...
    pushStagingDisplayListChanges(observer, info);
    ...
    pushLayerUpdate(info);
    ...
}

void RenderNode::pushStagingDisplayListChanges(TreeObserver& observer, TreeInfo& info) {
    if (mNeedsDisplayListSync) {
        mNeedsDisplayListSync = false;
        // Damage with the old display list first then the new one to catch any
        // changes in isRenderable or, in the future, bounds
        damageSelf(info);
        syncDisplayList(observer, &info);
        damageSelf(info);
    }
}

void RenderNode::syncDisplayList(TreeObserver& observer, TreeInfo* info) {
    // Make sure we inc first so that we don't fluctuate between 0 and 1,
    // which would thrash the layer cache
    if (mStagingDisplayList) {
        mStagingDisplayList->updateChildren([](RenderNode* child) { child->incParentRefCount(); });
    }
    deleteDisplayList(observer, info);
    mDisplayList = mStagingDisplayList;
    mStagingDisplayList = nullptr;
    if (mDisplayList) {
        mDisplayList->syncContents();
    }
}

void RenderNode::setStagingDisplayList(DisplayList* displayList) {
    mValid = (displayList != nullptr);
    mNeedsDisplayListSync = true;
    delete mStagingDisplayList;
    mStagingDisplayList = displayList;
}
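setStagingDisplayList and syncDisplayList together implement a two-slot hand-off: the UI thread records into the staging slot, and the sync phase (run while the UI thread is blocked) promotes it to the active slot that drawing reads from. A reduced sketch of the pattern, using int payloads instead of DisplayList:

```cpp
#include <cassert>
#include <memory>

// Reduced RenderNode: two slots for a recorded frame, ints instead of DisplayList.
class Node {
public:
    // UI thread: stash the freshly recorded list and mark a sync as pending.
    void setStagingDisplayList(std::unique_ptr<int> list) {
        mValid = (list != nullptr);
        mNeedsSync = true;
        mStaging = std::move(list);
    }
    // Sync phase: promote staging to active; the old active list is dropped here.
    void pushStagingDisplayListChanges() {
        if (mNeedsSync) {
            mNeedsSync = false;
            mActive = std::move(mStaging);
        }
    }
    const int* getDisplayList() const { return mActive.get(); }
    bool valid() const { return mValid; }

private:
    std::unique_ptr<int> mStaging;  // written by the UI thread
    std::unique_ptr<int> mActive;   // read by the render thread's draw pass
    bool mNeedsSync = false;
    bool mValid = false;
};
```

Because the swap only happens during the sync phase, the render thread never observes a half-recorded display list, which is the whole point of the staging slot.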
Some excellent blog posts on this topic:
http://blog.csdn.net/guoqifa29/article/details/45131099
http://blog.csdn.net/wind_hzx?viewmode=contents
http://www.tuicool.com/articles/bEjYbqN (Android 5.0) (Jianshu mirror: //www.greatytc.com/p/bc1c1d2fadd1)
http://blog.csdn.net/jinzhuojun/article/details/54234354 (Android 7.0)