Author: @鱿鱼先生
This is original content. Please credit the source when reposting: //www.greatytc.com/u/5a6744f8f69e
Countless articles have been written about Android camera development. Today I want to share a few little secrets of Android camera development, along with some basics along the way 😄. If you have no camera development experience yet, I suggest opening Google's documentation, Camera and Camera Guide, studying those first, and then combining them with this article; you will get twice the result for half the effort.
Here is the clone address for the reference code: ps: 😊 this thoughtful blogger mirrors the code on Gitee (码云) so readers in mainland China can clone it at full speed.
This article covers Camera1; for Camera2, stay tuned for my next update :)😊
<span id = "opencamera">1. Starting the Camera</span>
The typical startup code you will find in the API docs and all over the web:

```java
/** A safe way to get an instance of the Camera object. */
public static Camera getCameraInstance() {
    Camera c = null;
    try {
        c = Camera.open(); // attempt to get a Camera instance
    } catch (Exception e) {
        // Camera is not available (in use or does not exist)
    }
    return c; // returns null if camera is unavailable
}
```
When fetching the camera instance, this function is usually called straight from the main thread:

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    // ...
    Camera camera = getCameraInstance();
}
```
Let's look at how the Android source implements this, in Camera.java:
```java
/**
 * Creates a new Camera object to access the first back-facing camera on the
 * device. If the device does not have a back-facing camera, this returns
 * null.
 * @see #open(int)
 */
public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}

Camera(int cameraId) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mUsingPreviewAllocation = false;
    mZoomListener = null;

    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }

    String packageName = ActivityThread.currentPackageName();
    native_setup(new WeakReference<Camera>(this), cameraId, packageName);
}
```
Note mEventHandler: if the thread that opens the camera has no Looper, mEventHandler falls back to the UI thread's default Looper. From the source we can see that EventHandler is responsible for dispatching the callbacks coming up from the native layer. Normally we want all callbacks on the UI thread, which makes it convenient to drive view logic directly; but certain scenarios call for something different. Keep this detail in mind, we will use it later.
2. Setting the Camera 📷 Preview Mode
2.1 Previewing with SurfaceHolder
Following the official Guide, we use a SurfaceView directly as the preview target.
```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    // ...
    SurfaceView surfaceView = findViewById(R.id.camera_surface_view);
    surfaceView.getHolder().addCallback(this);
}

@Override
public void surfaceCreated(SurfaceHolder holder) {
    // TODO: Connect Camera.
    if (null != mCamera) {
        try {
            mCamera.setPreviewDisplay(holder);
            mCamera.startPreview();
            mHolder = holder;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```
Re-run the app and you should see the preview. The orientation may be off, but at least the camera image is on screen.
2.2 Previewing with SurfaceTexture
This mode is mainly for rendering the camera through OpenGL ES on the GPU. The target view also changes to a GLSurfaceView. Pay attention to ⚠️ three small details:
- Basic GLSurfaceView setup

```java
GLSurfaceView surfaceView = findViewById(R.id.gl_surfaceview);
surfaceView.setEGLContextClientVersion(2); // enable OpenGL ES 2.0 support
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); // enable on-demand rendering
surfaceView.setRenderer(this);
```
On-demand rendering is explained in detail in the third point below.
- Creating the SurfaceTexture backed by a texture

```java
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // Init Camera
    int[] textureIds = new int[1];
    GLES20.glGenTextures(1, textureIds, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureIds[0]);
    // Clamp texture coordinates outside the [0, 1] range to the edge
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    // Filtering (mapping texels to screen pixels) for minification and magnification
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    mSurfaceTexture = new SurfaceTexture(textureIds[0]);
    mCameraTexture = textureIds[0];
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
    try {
        // Use the SurfaceTexture we just created as the preview target
        mCamera.setPreviewTexture(mSurfaceTexture);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```
The texture created here uses a special OpenGL ES extension target, GLES11Ext.GL_TEXTURE_EXTERNAL_OES. Only with this texture type can you process the camera feed in real time with your own GPU code.
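As a quick sketch (not taken from the demo project), a fragment shader that samples such an external texture must declare the OES extension and use samplerExternalOES rather than sampler2D; the varying and uniform names below are my own:

```glsl
#extension GL_OES_EGL_image_external : require
precision mediump float;

varying vec2 v_texCoord;
uniform samplerExternalOES u_cameraTexture;

void main() {
    // Sample the camera frame; any per-pixel GPU processing would go here.
    gl_FragColor = texture2D(u_cameraTexture, v_texCoord);
}
```

Without the extension declaration and the samplerExternalOES type, compiling the shader against a GL_TEXTURE_EXTERNAL_OES texture fails on most drivers.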
- Data-driven rendering

Change the GLSurfaceView from its default continuous rendering to rendering only when new data arrives.

```java
GLSurfaceView surfaceView = findViewById(R.id.gl_surfaceview);
surfaceView.setEGLContextClientVersion(2);
surfaceView.setRenderer(this);
// Add the following line to switch to on-demand GL rendering.
// Change SurfaceView render mode to RENDERMODE_WHEN_DIRTY.
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
```

When new data arrives, we trigger a render pass like this:

```java
mSurfaceTexture.setOnFrameAvailableListener(surfaceTexture -> {
    // A frame is ready to display; wake up the GL thread to draw it.
    mSurfaceView.requestRender();
});
```
Everything else stays the same. The benefit is that the render frame rate now follows the camera frame rate, instead of redrawing continuously and burning GPU power for nothing.
2.3 Previewing with YUV (NV21) data
This section focuses on implementing the camera preview from the raw YUV data. The main real-world use cases for this approach are face detection and other real-time CV algorithms that process camera frames.
2.3.1 Registering the YUV callback buffers
This step uses the classic Camera.setPreviewCallbackWithBuffer API. One extra step is mandatory with this function: you must hand the camera the buffers it will fill with callback data.
```java
// Pick a target preview size; 1280x720 is available on practically every camera today.
parameters.setPreviewSize(previewSize.first, previewSize.second);
// Make the camera deliver NV21 frames into user-supplied buffers.
mCamera.setPreviewCallbackWithBuffer(this);
mCamera.setParameters(parameters);
// Add four byte[] buffers for the camera to cycle through.
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
```
Note ⚠️: if you register the callback with Camera.setPreviewCallback instead, the data array passed to onPreviewFrame(byte[] data, Camera camera) is allocated internally by the camera.
```java
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // TODO: pre-process the camera frame here.
    if (!bytesToByteBuffer.containsKey(data)) {
        Log.d(TAG, "Skipping frame. Could not find ByteBuffer associated with the image "
                + "data from the camera.");
    } else {
        // Since we use setPreviewCallbackWithBuffer, the buffer must be handed back.
        mCamera.addCallbackBuffer(data);
    }
}
```
If you never call mCamera.addCallbackBuffer(byte[]), onPreviewFrame stops firing after exactly four callbacks, which is precisely the number of buffers added during camera initialization.
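The createPreviewBuffer helper used above comes from the demo; here is a hedged sketch of how it can be implemented. The sizing assumes NV21's 12 bits per pixel (on device you would query ImageFormat.getBitsPerPixel(ImageFormat.NV21)), and the one spare byte is a defensive padding trick seen in Google's vision samples:

```java
// Sketch of a preview-buffer factory for NV21 frames (assumptions noted above).
public class PreviewBuffers {
    // Value of ImageFormat.getBitsPerPixel(ImageFormat.NV21) on Android.
    static final int NV21_BITS_PER_PIXEL = 12;

    /** Allocates one byte[] large enough to hold a single NV21 frame. */
    static byte[] createPreviewBuffer(int width, int height) {
        long sizeInBits = (long) width * height * NV21_BITS_PER_PIXEL;
        // Round up to whole bytes and add one spare byte.
        int bufferSize = (int) Math.ceil(sizeInBits / 8.0d) + 1;
        return new byte[bufferSize];
    }
}
```

For 1280x720 this yields width*height luma bytes plus width*height/2 interleaved chroma bytes, plus the spare byte: 1,382,401 bytes per buffer.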
2.3.2 Starting the preview
Our goal is to render the frames returned by onPreviewFrame, so the mCamera.setPreviewTexture logic needs to go; we do not want the camera to keep pushing preview frames into the SurfaceTexture we set earlier, which would just waste system resources.
😂 So we comment out the mCamera.setPreviewTexture(mSurfaceTexture); line:
```java
try {
    // mCamera.setPreviewTexture(mSurfaceTexture);
    mCamera.startPreview();
} catch (Exception e) {
    e.printStackTrace();
}
```
Testing then shows that onPreviewFrame no longer fires at all. A quick look at the documentation explains why:
```java
/**
 * Starts capturing and drawing preview frames to the screen.
 * Preview will not actually start until a surface is supplied
 * with {@link #setPreviewDisplay(SurfaceHolder)} or
 * {@link #setPreviewTexture(SurfaceTexture)}.
 *
 * <p>If {@link #setPreviewCallback(Camera.PreviewCallback)},
 * {@link #setOneShotPreviewCallback(Camera.PreviewCallback)}, or
 * {@link #setPreviewCallbackWithBuffer(Camera.PreviewCallback)} were
 * called, {@link Camera.PreviewCallback#onPreviewFrame(byte[], Camera)}
 * will be called when preview data becomes available.
 *
 * @throws RuntimeException if starting preview fails; usually this would be
 *    because of a hardware or other low-level error, or because release()
 *    has been called on this Camera instance.
 */
public native final void startPreview();
```
The camera only starts previewing correctly once it has been given a Surface resource to draw into.
Now for the magic trick:
```java
/**
 * The dummy surface texture must be assigned a chosen name. Since we never use an OpenGL context,
 * we can choose any ID we want here. The dummy surface texture is not a crazy hack - it is
 * actually how the camera team recommends using the camera without a preview.
 */
private static final int DUMMY_TEXTURE_NAME = 100;

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // ... codes
    SurfaceTexture dummySurfaceTexture = new SurfaceTexture(DUMMY_TEXTURE_NAME);
    try {
        mCamera.setPreviewTexture(dummySurfaceTexture);
    } catch (IOException e) {
        e.printStackTrace();
    }
    // ... codes
}
```
With this in place, onPreviewFrame starts firing again. The dummy SurfaceTexture is enough to get the camera running, and if we attach a listener:

```java
dummySurfaceTexture.setOnFrameAvailableListener(surfaceTexture -> {
    Log.d(TAG, "dummySurfaceTexture working.");
});
```

we find that the system can tell on its own that this SurfaceTexture has no real consumer: onFrameAvailable never fires.
2.3.3 Rendering the YUV data to the SurfaceView
Android's default YUV preview format is NV21, so we need a Shader to convert the format, since OpenGL can only draw RGB colors. The conversion algorithm is in nv21_to_rgba_fs.glsl:
```glsl
#ifdef GL_ES
precision highp float;
#endif

varying vec2 v_texCoord;
uniform sampler2D y_texture;
uniform sampler2D uv_texture;

void main (void) {
    float r, g, b, y, u, v;

    //We had put the Y values of each pixel to the R,G,B components by
    //GL_LUMINANCE, that's why we're pulling it from the R component,
    //we could also use G or B
    y = texture2D(y_texture, v_texCoord).r;

    //We had put the U and V values of each pixel to the A and R,G,B
    //components of the texture respectively using GL_LUMINANCE_ALPHA.
    //Since U,V bytes are interspread in the texture, this is probably
    //the fastest way to use them in the shader
    u = texture2D(uv_texture, v_texCoord).a - 0.5;
    v = texture2D(uv_texture, v_texCoord).r - 0.5;

    //The numbers are just YUV to RGB conversion constants
    r = y + 1.13983*v;
    g = y - 0.39465*u - 0.58060*v;
    b = y + 2.03211*u;

    //We finally set the RGB color of our pixel
    gl_FragColor = vec4(r, g, b, 1.0);
}
```
The main idea is to split the NV21 buffer into two textures and let the fragment shader do the color-space math, converting back to RGBA.
```java
mYTexture = new Texture();
created = mYTexture.create(mYuvBufferWidth, mYuvBufferHeight, GLES10.GL_LUMINANCE);
if (!created) {
    throw new RuntimeException("Create Y texture fail.");
}

mUVTexture = new Texture();
// UV carries two channels per sample, so GL_LUMINANCE_ALPHA is used as the data format.
created = mUVTexture.create(mYuvBufferWidth / 2, mYuvBufferHeight / 2, GLES10.GL_LUMINANCE_ALPHA);
if (!created) {
    throw new RuntimeException("Create UV texture fail.");
}

// ... some logic omitted

//Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
yBuffer.put(data.array(), 0, mPreviewSize.first * mPreviewSize.second);
yBuffer.position(0);

//Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interspread
uvBuffer.put(data.array(), mPreviewSize.first * mPreviewSize.second, (mPreviewSize.first * mPreviewSize.second) / 2);
uvBuffer.position(0);

mYTexture.load(yBuffer);
mUVTexture.load(uvBuffer);
```
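As a CPU-side sanity check, the same split-and-convert logic can be written in plain Java. This is a hypothetical helper of mine, not part of the demo; it mirrors the shader's conversion constants and the plane offsets used in the buffer copies above:

```java
// CPU reference for the NV21 pipeline: plane sizes plus the shader's YUV->RGB math.
public class Nv21Util {
    /** Returns {yLength, uvLength} for an NV21 frame of the given size. */
    static int[] planeSizes(int width, int height) {
        int ySize = width * height; // first width*height bytes: Y plane
        int uvSize = ySize / 2;     // next width*height/2 bytes: interleaved V/U
        return new int[] { ySize, uvSize };
    }

    /** Converts one normalized YUV sample (each in [0,1]) to 8-bit RGB. */
    static int[] yuvToRgb(float y, float u, float v) {
        float uc = u - 0.5f;
        float vc = v - 0.5f;
        // Same constants as in nv21_to_rgba_fs.glsl.
        float r = y + 1.13983f * vc;
        float g = y - 0.39465f * uc - 0.58060f * vc;
        float b = y + 2.03211f * uc;
        return new int[] { clamp255(r), clamp255(g), clamp255(b) };
    }

    private static int clamp255(float c) {
        return Math.max(0, Math.min(255, Math.round(c * 255f)));
    }
}
```

A neutral sample (u = v = 0.5) must come out gray, which is a quick way to verify the constants were transcribed correctly.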
2.3.4 Performance tuning
The rate at which the camera delivers YUV frames and the rate at which OpenGL ES renders the preview are not necessarily matched, so there is room for optimization. Since this is a live preview, the frame we render must always be the latest one. We can use a shared pendingFrameData slot to synchronize the render thread and the camera callback thread, guaranteeing the picture stays fresh.
```java
synchronized (lock) {
    if (pendingFrameData != null) {
        // Frame data that was never processed; just hand it back to the camera.
        camera.addCallbackBuffer(pendingFrameData.array());
        pendingFrameData = null;
    }
    pendingFrameData = bytesToByteBuffer.get(data);
    // Notify the processor thread if it is waiting on the next frame (see below).
    // In the demo this wakes the GLThread render loop if it is in a waiting state.
    lock.notifyAll();
}
// Tell the GLSurfaceView it can redraw now.
mSurfaceView.requestRender();
```
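The consumer side of this handoff is not shown above. Here is a self-contained sketch of the same latest-frame-wins pattern; the class and method names are mine, not the demo's:

```java
// Minimal latest-frame-wins exchange between a producer (camera callback thread)
// and a consumer (render thread). Only the newest frame is ever kept.
public class FrameExchange {
    private final Object lock = new Object();
    private byte[] pendingFrame;

    /** Called from the camera callback thread. Returns the stale frame it displaced
     *  (the caller would hand that buffer back via addCallbackBuffer), or null. */
    public byte[] push(byte[] frame) {
        synchronized (lock) {
            byte[] dropped = pendingFrame;
            pendingFrame = frame;
            lock.notifyAll(); // wake the render thread if it is waiting
            return dropped;
        }
    }

    /** Called from the render thread; blocks until a frame is available. */
    public byte[] take() {
        synchronized (lock) {
            while (pendingFrame == null) {
                try {
                    lock.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return null; // interrupted before a frame arrived
                }
            }
            byte[] frame = pendingFrame;
            pendingFrame = null;
            return frame;
        }
    }
}
```

If the producer pushes twice before the consumer wakes up, the first frame is simply dropped and recycled, which is exactly the freshness guarantee the preview needs.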
One last optimization trick ㊙️ ties back to the Handler discussion in Starting the Camera. Whether we call Camera.open() on the Android main thread or on a worker thread without a Looper, the outcome is the same: all camera callbacks are dispatched through the Looper from Looper.getMainLooper(). Imagine the UI thread is busy with heavy work; the preview frame rate will inevitably suffer. The best approach is therefore to open the camera on a dedicated thread with its own Looper.
```java
final ConditionVariable startDone = new ConditionVariable();
new Thread() {
    @Override
    public void run() {
        Log.v(TAG, "start loopRun");
        // Set up a looper to be used by camera.
        Looper.prepare();
        // Save the looper so that we can terminate this thread
        // after we are done with it.
        mLooper = Looper.myLooper();
        mCamera = Camera.open(cameraId);
        Log.v(TAG, "camera is opened");
        startDone.open();
        Looper.loop(); // Blocks forever until Looper.quit() is called.
        if (LOGV) Log.v(TAG, "initializeMessageLooper: quit.");
    }
}.start();

Log.v(TAG, "start waiting for looper");
if (!startDone.block(WAIT_FOR_COMMAND_TO_COMPLETE)) {
    Log.v(TAG, "initializeMessageLooper: start timeout");
    fail("initializeMessageLooper: start timeout");
}
```
3. Camera orientation
The orientation of the preview data depends on how the camera sensor is physically mounted. That topic deserves an article of its own, so here I will go straight to the code.
```java
private void setRotation(Camera camera, Camera.Parameters parameters, int cameraId) {
    WindowManager windowManager = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
    int degrees = 0;
    int rotation = windowManager.getDefaultDisplay().getRotation();
    switch (rotation) {
        case Surface.ROTATION_0:
            degrees = 0;
            break;
        case Surface.ROTATION_90:
            degrees = 90;
            break;
        case Surface.ROTATION_180:
            degrees = 180;
            break;
        case Surface.ROTATION_270:
            degrees = 270;
            break;
        default:
            Log.e(TAG, "Bad rotation value: " + rotation);
    }

    Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, cameraInfo);

    int angle;
    int displayAngle;
    if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        angle = (cameraInfo.orientation + degrees) % 360;
        displayAngle = (360 - angle) % 360; // compensate for it being mirrored
    } else { // back-facing
        angle = (cameraInfo.orientation - degrees + 360) % 360;
        displayAngle = angle;
    }

    // This corresponds to the rotation constants.
    mRotation = angle;

    camera.setDisplayOrientation(displayAngle);
    parameters.setRotation(angle);
}
```
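The angle arithmetic in setRotation can be isolated into a pure function, which makes it easy to check on its own. This is a hypothetical helper of mine for illustration (cameraInfo.orientation is the sensor mounting angle):

```java
// Pure-function version of the angle math from setRotation above.
public class RotationMath {
    /**
     * Computes {rotation, displayAngle} from the sensor mounting angle and
     * the current display rotation in degrees.
     */
    static int[] computeAngles(int sensorOrientation, int displayDegrees, boolean frontFacing) {
        int angle;
        int displayAngle;
        if (frontFacing) {
            angle = (sensorOrientation + displayDegrees) % 360;
            displayAngle = (360 - angle) % 360; // compensate for the mirrored preview
        } else { // back-facing
            angle = (sensorOrientation - displayDegrees + 360) % 360;
            displayAngle = angle;
        }
        return new int[] { angle, displayAngle };
    }
}
```

For a typical portrait phone with the back sensor mounted at 90° and display rotation 0°, both values come out as 90; a front sensor mounted at 270° yields a rotation of 270 but a display angle of 90 because of the mirror compensation.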
During testing, however, you will find this has no effect in the YUV preview mode: the angle parameters do not change the data returned by PreviewCallback#onPreviewFrame. Reading the source comments confirms this.
```java
/**
 * Set the clockwise rotation of preview display in degrees. This affects
 * the preview frames and the picture displayed after snapshot. This method
 * is useful for portrait mode applications. Note that preview display of
 * front-facing cameras is flipped horizontally before the rotation, that
 * is, the image is reflected along the central vertical axis of the camera
 * sensor. So the users can see themselves as looking into a mirror.
 *
 * <p>This does not affect the order of byte array passed in {@link
 * PreviewCallback#onPreviewFrame}, JPEG pictures, or recorded videos. This
 * method is not allowed to be called during preview.
 *
 * <p>If you want to make the camera image show in the same orientation as
 * the display, you can use the following code.
 * <pre>
 * public static void setCameraDisplayOrientation(Activity activity,
 *         int cameraId, android.hardware.Camera camera) {
 *     android.hardware.Camera.CameraInfo info =
 *             new android.hardware.Camera.CameraInfo();
 *     android.hardware.Camera.getCameraInfo(cameraId, info);
 *     int rotation = activity.getWindowManager().getDefaultDisplay()
 *             .getRotation();
 *     int degrees = 0;
 *     switch (rotation) {
 *         case Surface.ROTATION_0: degrees = 0; break;
 *         case Surface.ROTATION_90: degrees = 90; break;
 *         case Surface.ROTATION_180: degrees = 180; break;
 *         case Surface.ROTATION_270: degrees = 270; break;
 *     }
 *
 *     int result;
 *     if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
 *         result = (info.orientation + degrees) % 360;
 *         result = (360 - result) % 360;  // compensate the mirror
 *     } else {  // back-facing
 *         result = (info.orientation - degrees + 360) % 360;
 *     }
 *     camera.setDisplayOrientation(result);
 * }
 * </pre>
 *
 * <p>Starting from API level 14, this method can be called when preview is
 * active.
 *
 * <p><b>Note: </b>Before API level 24, the default value for orientation is 0. Starting in
 * API level 24, the default orientation will be such that applications in forced-landscape mode
 * will have correct preview orientation, which may be either a default of 0 or
 * 180. Applications that operate in portrait mode or allow for changing orientation must still
 * call this method after each orientation change to ensure correct preview display in all
 * cases.</p>
 *
 * @param degrees the angle that the picture will be rotated clockwise.
 *          Valid values are 0, 90, 180, and 270.
 * @throws RuntimeException if setting orientation fails; usually this would
 *    be because of a hardware or other low-level error, or because
 *    release() has been called on this Camera instance.
 * @see #setPreviewDisplay(SurfaceHolder)
 */
public native final void setDisplayOrientation(int degrees);
```
To get the correct orientation, we instead adjust the coordinates used when rendering the YUV data.
Here I used a rather blunt instrument: directly rewriting the texture coordinates.
```java
private static final float FULL_RECTANGLE_COORDS[] = {
    -1.0f, -1.0f, // 0 bottom left
     1.0f, -1.0f, // 1 bottom right
    -1.0f,  1.0f, // 2 top left
     1.0f,  1.0f, // 3 top right
};

// FIXME: to draw at the correct angle, the texture coordinates are rotated by
// 90 degrees; this also includes one mirror flip of the texture data.
private static final float FULL_RECTANGLE_TEX_COORDS[] = {
    1.0f, 1.0f, // 0 bottom left
    1.0f, 0.0f, // 1 bottom right
    0.0f, 1.0f, // 2 top left
    0.0f, 0.0f  // 3 top right
};
```
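If you would rather compute such coordinates than hard-code them, a small helper can rotate a (u, v) pair in 90° steps and mirror it. This is my own utility, not from the demo; one quarter turn maps (u, v) to (v, 1 - u):

```java
// Texture-coordinate transforms for orienting the camera quad.
public class TexCoordUtil {
    /** Rotates a texture coordinate by quarterTurns * 90 degrees within [0,1]^2. */
    static float[] rotate(float u, float v, int quarterTurns) {
        float ru = u;
        float rv = v;
        int turns = ((quarterTurns % 4) + 4) % 4;
        for (int i = 0; i < turns; i++) {
            float tmp = ru;
            ru = rv;          // one quarter turn: (u, v) -> (v, 1 - u)
            rv = 1.0f - tmp;
        }
        return new float[] { ru, rv };
    }

    /** Mirrors a coordinate horizontally, as the front-camera preview requires. */
    static float[] mirrorU(float u, float v) {
        return new float[] { 1.0f - u, v };
    }
}
```

Applying the rotation four times returns the original coordinate, which is a handy invariant for checking the mapping.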
Restart the app. Perfect, done.
Summary
Android camera development, in short, is a journey through pitfalls. If you are learning this, I strongly suggest working through the material in my references together with the camera source code; you will get a great deal out of it.
I also hope these notes from my own experience help you on your way. 🍻🍻🍻