Preface
Starting with Android 4.0, the Android source tree began incorporating BlurFilter (blur filtering) algorithms, providing blur ("frosted glass") support so that applications can give windows, views, and images a stronger sense of depth. Image processing offers many blur filtering algorithms, including the common mean (box) blur and Gaussian blur; the blur filter used in Android 12 is the Kawase blur algorithm.
Convolution
Any discussion of image filtering starts with convolution, the basic operation of image processing. The convolution kernel is slid over the source image matrix, and at each position the overlapping elements are multiplied and summed to produce one element of the result matrix; that result matrix is the output of a single filtering pass.
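As an illustration (this is not Android source code, just a minimal CPU-side sketch using a grayscale image and a square kernel, evaluated only where the kernel fully overlaps the image):

#include <cstddef>
#include <vector>

// Convolve a grayscale image (H x W) with a K x K kernel (K odd).
// The result covers the valid region only: (H-K+1) x (W-K+1).
std::vector<std::vector<float>> convolve(const std::vector<std::vector<float>>& image,
                                         const std::vector<std::vector<float>>& kernel) {
    const size_t H = image.size(), W = image[0].size(), K = kernel.size();
    std::vector<std::vector<float>> out(H - K + 1, std::vector<float>(W - K + 1, 0.0f));
    for (size_t y = 0; y + K <= H; y++) {
        for (size_t x = 0; x + K <= W; x++) {
            float sum = 0.0f;
            // Weighted sum of the K x K window under the kernel.
            for (size_t ky = 0; ky < K; ky++)
                for (size_t kx = 0; kx < K; kx++)
                    sum += image[y + ky][x + kx] * kernel[ky][kx];
            out[y][x] = sum;
        }
    }
    return out;
}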
Mean / Gaussian / Kawase filters
Different filters use different convolution kernels. Take the following three as examples (a small sketch of these kernels and sampling patterns follows the list):
- Mean filter: the kernel is usually an m*m matrix in which every element is the equal weight 1/(m*m).
- Gaussian filter: the kernel elements follow a Gaussian distribution, with the value at the center of the kernel larger than at the edges, so during convolution the center pixel gets the largest weight and the weights fall off toward the edges.
- Kawase filter: instead of a dense kernel, four points offset diagonally from the center pixel are sampled, and the sampling distance expands outward as the number of iterations grows.
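As a rough illustration (again not Android source code), the sketch below builds a mean kernel and a Gaussian kernel, and lists the four diagonal Kawase sample offsets for a given pass index; the kernel size m and sigma are chosen by the caller:

#include <array>
#include <cmath>
#include <utility>
#include <vector>

// m x m mean kernel: every element is the equal weight 1 / (m*m).
std::vector<std::vector<float>> meanKernel(int m) {
    return std::vector<std::vector<float>>(m, std::vector<float>(m, 1.0f / (m * m)));
}

// m x m Gaussian kernel, normalized so the weights sum to 1;
// the center weight is the largest and the weights fall off toward the edges.
std::vector<std::vector<float>> gaussianKernel(int m, float sigma) {
    std::vector<std::vector<float>> k(m, std::vector<float>(m));
    const int c = m / 2;
    float sum = 0.0f;
    for (int y = 0; y < m; y++) {
        for (int x = 0; x < m; x++) {
            const float dx = float(x - c), dy = float(y - c);
            k[y][x] = std::exp(-(dx * dx + dy * dy) / (2.0f * sigma * sigma));
            sum += k[y][x];
        }
    }
    for (auto& row : k)
        for (auto& v : row) v /= sum;
    return k;
}

// Kawase blur does not use a dense kernel: each pass averages four diagonal
// samples, and the offset grows with the pass index i.
std::array<std::pair<float, float>, 4> kawaseOffsets(float step, int i) {
    const float o = step * i;
    return {{{o, o}, {o, -o}, {-o, o}, {-o, -o}}};
}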
Blur algorithms are commonly compared along three criteria (see, for example, published comparisons of ten blur algorithms):
- Blur quality: the quality of the blurred result is the main indicator of how good a blur algorithm is.
- Blur stability: stability determines whether the blur stays steady as the picture changes, without jumps or flicker.
- Performance: performance determines whether a blur algorithm can be adopted widely.
RenderEngine
In the display system, layers composited in Client mode have to be drawn by the GPU into a scratch buffer through a dedicated render engine, so the render engine is created when the device boots.
Which render engine is used is determined by renderEngineType. Android 12 defaults to SKIA_GL_THREADED, while Android 11 defaults to GLES. The former runs Skia's GL backend asynchronously on its own render thread, with Skia itself implemented as a wrapper on top of OpenGL; the latter calls the OpenGL ES API directly and writes its shaders in GLSL. The defaults are captured in the creation arguments shown below (a short usage sketch follows the struct fields):
private:
// 1 means RGBA_8888
int pixelFormat = 1;
uint32_t imageCacheSize = 0;
bool useColorManagement = true;
bool enableProtectedContext = false;
bool precacheToneMapperShaderOnly = false;
bool supportsBackgroundBlur = false;
RenderEngine::ContextPriority contextPriority = RenderEngine::ContextPriority::MEDIUM;
RenderEngine::RenderEngineType renderEngineType =
RenderEngine::RenderEngineType::SKIA_GL_THREADED;
};
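For orientation, here is a hedged sketch, not verbatim AOSP code, of how a client such as SurfaceFlinger might populate these arguments through the Builder setters that appear later in RenderEngine::create, enabling background blur and then creating the engine (the cache size is a placeholder value):

#include <renderengine/RenderEngine.h>

// Sketch only: assumes the Builder setters shown in RenderEngine::create below
// (note that setUseColorManagerment really is spelled this way in AOSP).
auto args = renderengine::RenderEngineCreationArgs::Builder()
                    .setPixelFormat(1 /* RGBA_8888, matching the default above */)
                    .setImageCacheSize(2 /* placeholder cache size */)
                    .setUseColorManagerment(true)
                    .setEnableProtectedContext(false)
                    .setPrecacheToneMapperShaderOnly(false)
                    .setSupportsBackgroundBlur(true) // required for BlurFilter creation
                    .setContextPriority(renderengine::RenderEngine::ContextPriority::MEDIUM)
                    .setRenderEngineType(
                            renderengine::RenderEngine::RenderEngineType::SKIA_GL_THREADED)
                    .build();
auto renderEngine = renderengine::RenderEngine::create(args);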
Developers can also override the choice at runtime by setting the DEBUG_RENDERENGINE_BACKEND property to "gles", "threaded", "skiagl", or "skiaglthreaded":
[\frameworks\native\libs\renderengine\RenderEngine.cpp]
RenderEngineType renderEngineType = args.renderEngineType;
// Keep the ability to override by PROPERTIES:
char prop[PROPERTY_VALUE_MAX];
property_get(PROPERTY_DEBUG_RENDERENGINE_BACKEND, prop, "");
if (strcmp(prop, "gles") == 0) {
renderEngineType = RenderEngineType::GLES;
}
if (strcmp(prop, "threaded") == 0) {
renderEngineType = RenderEngineType::THREADED;
}
if (strcmp(prop, "skiagl") == 0) {
renderEngineType = RenderEngineType::SKIA_GL;
}
if (strcmp(prop, "skiaglthreaded") == 0) {
renderEngineType = RenderEngineType::SKIA_GL_THREADED;
}
switch (renderEngineType) {
case RenderEngineType::THREADED:
ALOGD("Threaded RenderEngine with GLES Backend");
return renderengine::threaded::RenderEngineThreaded::create(
[args]() { return android::renderengine::gl::GLESRenderEngine::create(args); },
renderEngineType);
case RenderEngineType::SKIA_GL:
ALOGD("RenderEngine with SkiaGL Backend");
return renderengine::skia::SkiaGLRenderEngine::create(args);
case RenderEngineType::SKIA_GL_THREADED: {
// These need to be recreated, since they are a constant reference, and we need to
// let SkiaRE know that it's running as threaded, and all GL operation will happen on
// the same thread.
RenderEngineCreationArgs skiaArgs =
RenderEngineCreationArgs::Builder()
.setPixelFormat(args.pixelFormat)
.setImageCacheSize(args.imageCacheSize)
.setUseColorManagerment(args.useColorManagement)
.setEnableProtectedContext(args.enableProtectedContext)
.setPrecacheToneMapperShaderOnly(args.precacheToneMapperShaderOnly)
.setSupportsBackgroundBlur(args.supportsBackgroundBlur)
.setContextPriority(args.contextPriority)
.setRenderEngineType(renderEngineType)
.build();
ALOGD("Threaded RenderEngine with SkiaGL Backend");
return renderengine::threaded::RenderEngineThreaded::create(
[skiaArgs]() {
return android::renderengine::skia::SkiaGLRenderEngine::create(skiaArgs);
},
renderEngineType);
}
case RenderEngineType::GLES:
default:
ALOGD("RenderEngine with GLES Backend");
return renderengine::gl::GLESRenderEngine::create(args);
}
GLESRenderEngine
Taking Android 11 as the example: because renderEngineType defaults to GLES there, GLESRenderEngine is selected when the render engine is initialized. During construction, if the device supports background blur, a BlurFilter object is created as well.
[\frameworks\native\libs\renderengine\gl\GLESRenderEngine.cpp]
GLESRenderEngine::GLESRenderEngine(const RenderEngineCreationArgs& args, EGLDisplay display,
EGLConfig config, EGLContext ctxt, EGLSurface stub,
EGLContext protectedContext, EGLSurface protectedStub)
: RenderEngine(args.renderEngineType),
mEGLDisplay(display),
mEGLConfig(config),
mEGLContext(ctxt),
mStubSurface(stub),
mProtectedEGLContext(protectedContext),
mProtectedStubSurface(protectedStub),
mVpWidth(0),
mVpHeight(0),
mFramebufferImageCacheSize(args.imageCacheSize),
mUseColorManagement(args.useColorManagement),
mPrecacheToneMapperShaderOnly(args.precacheToneMapperShaderOnly) {
...
if (args.supportsBackgroundBlur) {
mBlurFilter = new BlurFilter(*this);
checkErrors("BlurFilter creation");
}
...
}
When BlurFilter is constructed, compiling and linking the shader programs and looking up the attribute and uniform locations takes a noticeable amount of time. Doing this once during initialization avoids relinking shaders over and over at render time due to code-flow issues, which would degrade rendering performance.
[\frameworks\native\libs\renderengine\gl\filters\BlurFilter.cpp]
BlurFilter::BlurFilter(GLESRenderEngine& engine)
: mEngine(engine),
mCompositionFbo(engine),
mPingFbo(engine),
mPongFbo(engine),
mMixProgram(engine),
mBlurProgram(engine) {
mMixProgram.compile(getVertexShader(), getMixFragShader());
mMPosLoc = mMixProgram.getAttributeLocation("aPosition");
mMUvLoc = mMixProgram.getAttributeLocation("aUV");
mMTextureLoc = mMixProgram.getUniformLocation("uTexture");
mMCompositionTextureLoc = mMixProgram.getUniformLocation("uCompositionTexture");
mMMixLoc = mMixProgram.getUniformLocation("uMix");
mBlurProgram.compile(getVertexShader(), getFragmentShader());
mBPosLoc = mBlurProgram.getAttributeLocation("aPosition");
mBUvLoc = mBlurProgram.getAttributeLocation("aUV");
mBTextureLoc = mBlurProgram.getUniformLocation("uTexture");
mBOffsetLoc = mBlurProgram.getUniformLocation("uOffset");
static constexpr auto size = 2.0f;
static constexpr auto translation = 1.0f;
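// The three vertices below form one oversized triangle that covers the whole
// viewport once clipped (a full-screen triangle), so a single draw call
// processes the entire frame.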
const GLfloat vboData[] = {
// Vertex data
translation - size, -translation - size,
translation - size, -translation + size,
translation + size, -translation + size,
// UV data
0.0f, 0.0f - translation,
0.0f, size - translation,
size, size - translation
};
mMeshBuffer.allocateBuffers(vboData, 12 /* size */);
}
After GLESRenderEngine has been created, if SurfaceFlinger composites in Client mode, the content of every layer is rendered by the GPU into the scratch buffer, and that buffer is finally sent to the display. Concretely, GLESRenderEngine::drawLayers draws all of the layers into the buffer.
[\frameworks\native\libs\renderengine\gl\GLESRenderEngine.cpp]
status_t GLESRenderEngine::drawLayers(const DisplaySettings& display,
const std::vector<const LayerSettings*>& layers,
const std::shared_ptr<ExternalTexture>& buffer,
const bool useFramebufferCache, base::unique_fd&& bufferFence,
base::unique_fd* drawFence) {
ATRACE_CALL();
if (layers.empty()) {
ALOGV("Drawing empty layer stack");
return NO_ERROR;
}
if (bufferFence.get() >= 0) {
// Duplicate the fence for passing to waitFence.
base::unique_fd bufferFenceDup(dup(bufferFence.get()));
if (bufferFenceDup < 0 || !waitFence(std::move(bufferFenceDup))) {
ATRACE_NAME("Waiting before draw");
sync_wait(bufferFence.get(), -1);
}
}
...
While each layer is rendered, if the layer has background blur enabled, BlurFilter::prepare performs the Kawase blur. The approach is to first downscale the frame and blur it at low resolution, which is cheap, and then have BlurFilter::render scale the blurred result back up and interpolate it with the larger composition texture to produce the final frame, hiding the artifacts introduced by downscaling.
[\frameworks\native\libs\renderengine\gl\GLESRenderEngine.cpp]
status_t GLESRenderEngine::drawLayers(const DisplaySettings& display,
const std::vector<const LayerSettings*>& layers,
const std::shared_ptr<ExternalTexture>& buffer,
const bool useFramebufferCache, base::unique_fd&& bufferFence,
base::unique_fd* drawFence) {
...
for (auto const layer : layers) {
if (blurLayers.size() > 0 && blurLayers.front() == layer) {
blurLayers.pop_front();
auto status = mBlurFilter->prepare();
...
if (blurLayers.size() == 0) {
// Done blurring, time to bind the native FBO and render our blur onto it.
fbo = std::make_unique<BindNativeBufferAsFramebuffer>(*this,
buffer.get()
->getBuffer()
->getNativeBuffer(),
useFramebufferCache);
status = fbo->getStatus();
setViewportAndProjection(display.physicalDisplay, display.clip);
} else {
// There's still something else to blur, so let's keep rendering to our FBO
// instead of to the display.
status = mBlurFilter->setAsDrawTarget(display,
blurLayers.front()->backgroundBlurRadius);
}
...
status = mBlurFilter->render(blurLayersSize > 1);
...
}
...
}
BlurFilter (GL)
Turning to the blur code itself: in BlurFilter::prepare, the blur radius mRadius that was passed in is used to compute the sampling step sizes stepX/stepY and the number of Kawase passes. The pass count is capped at kMaxPasses so that the operation does not become too expensive (a small worked example follows the code below).
[\frameworks\native\libs\renderengine\gl\filters\BlurFilter.cpp]
status_t BlurFilter::prepare() {
ATRACE_NAME("BlurFilter::prepare");
const auto radius = mRadius / 6.0f;
// Calculate how many passes we'll do, based on the radius.
// Too many passes will make the operation expensive.
const auto passes = min(kMaxPasses, (uint32_t)ceil(radius));
const float radiusByPasses = radius / (float)passes;
const float stepX = radiusByPasses / (float)mCompositionFbo.getBufferWidth();
const float stepY = radiusByPasses / (float)mCompositionFbo.getBufferHeight();
...
}
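To make the arithmetic concrete, here is a small worked example; the blur radius, buffer size, and the value of kMaxPasses are assumptions for illustration, not values taken from the source:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

int main() {
    const uint32_t kMaxPasses = 4;     // assumed cap, see kMaxPasses in BlurFilter.h
    const float mRadius = 90.0f;       // hypothetical background blur radius
    const float bufferWidth = 320.0f;  // hypothetical downscaled FBO width
    const float bufferHeight = 180.0f; // hypothetical downscaled FBO height

    const float radius = mRadius / 6.0f;                                       // 15.0
    const uint32_t passes = std::min(kMaxPasses, (uint32_t)std::ceil(radius)); // 4
    const float radiusByPasses = radius / (float)passes;                       // 3.75
    const float stepX = radiusByPasses / bufferWidth;                          // ~0.0117 in UV units
    const float stepY = radiusByPasses / bufferHeight;                         // ~0.0208 in UV units

    std::printf("passes=%u stepX=%f stepY=%f\n", passes, stepX, stepY);
    return 0;
}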
The composition texture (mCompositionFbo.getTextureName()) and the step sizes stepX and stepY are then handed to the fragment shader for the first downsample-and-blur pass.
status_t BlurFilter::prepare() {
...
// Let's start by downsampling and blurring the composited frame simultaneously.
mBlurProgram.useProgram();
glActiveTexture(GL_TEXTURE0);
glUniform1i(mBTextureLoc, 0);
glBindTexture(GL_TEXTURE_2D, mCompositionFbo.getTextureName());
glUniform2f(mBOffsetLoc, stepX, stepY);
glViewport(0, 0, mPingFbo.getBufferWidth(), mPingFbo.getBufferHeight());
mPingFbo.bind();
drawMesh(mBUvLoc, mBPosLoc);
...
}
A single blur pass is issued through the OpenGL API:
void BlurFilter::drawMesh(GLuint uv, GLuint position) {
glEnableVertexAttribArray(uv);
glEnableVertexAttribArray(position);
mMeshBuffer.bind();
glVertexAttribPointer(position, 2 /* size */, GL_FLOAT, GL_FALSE,
2 * sizeof(GLfloat) /* stride */, 0 /* offset */);
glVertexAttribPointer(uv, 2 /* size */, GL_FLOAT, GL_FALSE, 0 /* stride */,
(GLvoid*)(6 * sizeof(GLfloat)) /* offset */);
mMeshBuffer.unbind();
// draw mesh
glDrawArrays(GL_TRIANGLES, 0 /* first */, 3 /* count */);
}
The blur is then iterated, up to the cap of kMaxPasses. Two framebuffers serve as off-screen read and draw targets: each pass renders into a texture, and that texture becomes the input texture of the next pass, ping-ponging back and forth (a standalone CPU-side sketch of this structure follows the code below).
status_t BlurFilter::prepare() {
...
// And now we'll ping pong between our textures, to accumulate the result of various offsets.
GLFramebuffer* read = &mPingFbo;
GLFramebuffer* draw = &mPongFbo;
glViewport(0, 0, draw->getBufferWidth(), draw->getBufferHeight());
for (auto i = 1; i < passes; i++) {
ATRACE_NAME("BlurFilter::renderPass");
draw->bind();
glBindTexture(GL_TEXTURE_2D, read->getTextureName());
glUniform2f(mBOffsetLoc, stepX * i, stepY * i);
drawMesh(mBUvLoc, mBPosLoc);
// Swap buffers for next iteration
auto tmp = draw;
draw = read;
read = tmp;
}
mLastDrawTarget = read;
return NO_ERROR;
}
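The ping-pong structure is easier to see in isolation. Below is a minimal CPU-side sketch of a Kawase-style blur on a grayscale image (illustrative only: clamping at the border stands in for GL_CLAMP_TO_EDGE, there is no downscaling step, and the classic four-tap form is used, whereas the AOSP shader also adds the center sample):

#include <algorithm>
#include <vector>

using Image = std::vector<std::vector<float>>; // row-major grayscale image

// Sample with the coordinates clamped to the image bounds (like GL_CLAMP_TO_EDGE).
static float sampleClamped(const Image& img, int x, int y) {
    y = std::clamp(y, 0, (int)img.size() - 1);
    x = std::clamp(x, 0, (int)img[0].size() - 1);
    return img[y][x];
}

// One Kawase pass: average the four diagonal samples at the given offset.
static Image kawasePass(const Image& src, int offset) {
    Image dst(src.size(), std::vector<float>(src[0].size()));
    for (int y = 0; y < (int)src.size(); y++) {
        for (int x = 0; x < (int)src[0].size(); x++) {
            dst[y][x] = 0.25f * (sampleClamped(src, x + offset, y + offset) +
                                 sampleClamped(src, x + offset, y - offset) +
                                 sampleClamped(src, x - offset, y + offset) +
                                 sampleClamped(src, x - offset, y - offset));
        }
    }
    return dst;
}

// Ping-pong between two images, growing the offset each pass,
// mirroring the read/draw swap in BlurFilter::prepare.
Image kawaseBlur(Image read, int passes, int step) {
    for (int i = 1; i <= passes; i++) {
        Image draw = kawasePass(read, step * i);
        std::swap(read, draw); // the blurred result becomes the next pass's input
    }
    return read;
}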
The shader used by the stock GL render engine implements the Kawase algorithm, and this is where the shader can be modified or customized. Note that swapping in a different filtering algorithm here may, depending on the algorithm, lower rendering efficiency and drop the displayed frame rate.
string BlurFilter::getFragmentShader() const {
return R"SHADER(#version 300 es
precision mediump float;
uniform sampler2D uTexture;
uniform vec2 uOffset;
in highp vec2 vUV;
out vec4 fragColor;
void main() {
fragColor = texture(uTexture, vUV, 0.0);
fragColor += texture(uTexture, vUV + vec2( uOffset.x, uOffset.y), 0.0);
fragColor += texture(uTexture, vUV + vec2( uOffset.x, -uOffset.y), 0.0);
fragColor += texture(uTexture, vUV + vec2(-uOffset.x, uOffset.y), 0.0);
fragColor += texture(uTexture, vUV + vec2(-uOffset.x, -uOffset.y), 0.0);
fragColor = vec4(fragColor.rgb * 0.2, 1.0);
}
)SHADER";
}
BlurFilter (Skia)
On Android 12 the blur filtering machinery matches Android 11, except that the GL implementation has been replaced by a Skia-based one; the blur algorithm and execution flow are nearly identical.
The Skia BlurFilter source is briefly listed below; BlurFilter::generate is the counterpart of BlurFilter::prepare in GL.
[\frameworks\native\libs\renderengine\skia\filters\BlurFilter.cpp]
sk_sp<SkImage> BlurFilter::generate(GrRecordingContext* context, const uint32_t blurRadius,
const sk_sp<SkImage> input, const SkRect& blurRect) const {
...
// And now we'll build our chain of scaled blur stages
for (auto i = 1; i < numberOfPasses; i++) {
const float stepScale = (float)i * kInputScale;
blurBuilder.child("input") =
tmpBlur->makeShader(SkTileMode::kClamp, SkTileMode::kClamp, linear);
blurBuilder.uniform("in_blurOffset") = SkV2{stepX * stepScale, stepY * stepScale};
blurBuilder.uniform("in_maxSizeXY") =
SkV2{blurRect.width() * kInputScale, blurRect.height() * kInputScale};
tmpBlur = blurBuilder.makeImage(context, nullptr, scaledInfo, false);
}
return tmpBlur;
}
BlurFilter::drawBlurRegion in Skia is the counterpart of BlurFilter::render in GL.
void BlurFilter::drawBlurRegion(SkCanvas* canvas, const SkRRect& effectRegion,
const uint32_t blurRadius, const float blurAlpha,
const SkRect& blurRect, sk_sp<SkImage> blurredImage,
sk_sp<SkImage> input) {
ATRACE_CALL();
SkPaint paint;
paint.setAlphaf(blurAlpha);
const auto blurMatrix = getShaderTransform(canvas, blurRect, kInverseInputScale);
SkSamplingOptions linearSampling(SkFilterMode::kLinear, SkMipmapMode::kNone);
const auto blurShader = blurredImage->makeShader(SkTileMode::kClamp, SkTileMode::kClamp,
linearSampling, &blurMatrix);
if (blurRadius < kMaxCrossFadeRadius) {
// For sampling Skia's API expects the inverse of what logically seems appropriate. In this
// case you might expect the matrix to simply be the canvas matrix.
SkMatrix inputMatrix;
if (!canvas->getTotalMatrix().invert(&inputMatrix)) {
ALOGE("matrix was unable to be inverted");
}
SkRuntimeShaderBuilder blurBuilder(mMixEffect);
blurBuilder.child("blurredInput") = blurShader;
blurBuilder.child("originalInput") =
input->makeShader(SkTileMode::kClamp, SkTileMode::kClamp, linearSampling,
inputMatrix);
blurBuilder.uniform("mixFactor") = blurRadius / kMaxCrossFadeRadius;
paint.setShader(blurBuilder.makeShader(nullptr, true));
} else {
paint.setShader(blurShader);
}
...
}
Notes
Note that GL passes normalized texture coordinates in the range (0, 1) to the fragment shader, whereas Skia passes screen-space vertex coordinates in the range (0, screenW) x (0, screenH).
The BlurFilter flow described in this article applies only to layers that SurfaceFlinger composites in Client mode: they are rendered through the RenderEngine and the result is handed to the HWC layer list. Layers composited in Device mode do not go through the RenderEngine at all; the buffer handle of each such layer's GraphicBuffer is placed directly into the HWC layer list.