A recent requirement: open the camera, grab preview frames in real time, extract the A4-paper rectangle from them, and once a rectangle meeting the criteria is found, take the photo and crop out the A4 region.
Android does not seem to offer an official API for this, so you have to roll your own; on iOS, according to a colleague, the system ships this capability ready to use. Below are the steps I took in my project to extract rectangles with OpenCV.
1. Download the SDK version you need from the OpenCV website; each release has a corresponding SDK.
Older SDK versions support the armeabi ABI, while later ones have dropped it. I will still describe how to build for armeabi at the end of this article, because armeabi really is dated: it emulates floating-point math in software, so image-processing performance suffers. Since my project processes camera frames in real time, I recommend skipping armeabi if your project allows; it is somewhat slower, though not unbearably so.
2. Building OpenCV
After downloading, open the samples folder in Android Studio; it looks like the figure below.
By the way, building OpenCV requires the NDK; if your day-to-day development doesn't involve the NDK you may need to download it, along with the CMake tool. Setting up the NDK environment is beyond the scope of this article; if you are unsure, search online.
2.1 Building the .so libraries
If the project opens successfully, you will see several modules. (Note: each one is a standalone app project that builds on its own; they are not interdependent modules.) Select "tutorial-2" and build it.
Whether you build an APK or an app bundle, the build produces the .so libraries, as shown below.
Keep in mind that production needs the release .so files (build the release variant to get them). Here we build debug, which is fine while debugging.
2.2 Running the APK
Connect a phone and run the app to see the effect. Switching through the menu shows different output images; select "Canny" to see the edges. That output is still miles away from the contour we need, so the code has to be modified. The project uses two official OpenCV classes, CameraBridgeViewBase and JavaCameraView, and these are the two classes we will modify to extract the contour.
3. Running OpenCV in your own project
3.1
Create a new project in Android Studio and add a module named, say, myopencv; make app depend on myopencv. First copy the freshly built .so files into myopencv, then add a reference to them in myopencv's Gradle script.
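The Gradle wiring for the .so files is not reproduced here; a minimal sketch (the folder layout and ABI names are assumptions, adjust them to wherever you copied the libraries) might look like:

```groovy
// myopencv/build.gradle -- hypothetical snippet, assuming the .so files
// were copied into myopencv/src/main/jniLibs/<abi>/
android {
    defaultConfig {
        ndk {
            // Package only the ABIs you actually built
            abiFilters 'armeabi-v7a', 'arm64-v8a'
        }
    }
    sourceSets {
        main {
            // jniLibs is the default lookup folder; declare it explicitly
            // only if you placed the libraries somewhere else
            jniLibs.srcDirs = ['src/main/jniLibs']
        }
    }
}
```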
3.2
Once that is configured, copy the OpenCV Java code from the sample project into myopencv. (This is OpenCV's official Java code; the reason not to edit the SDK copy in place is that if you make a mistake you can re-copy and replace it, and you keep an unmodified copy to compare against while making changes.)
4. Modifying the code
4.1 Modifying the CameraBridgeViewBase class
Find the method protected void deliverAndDrawFrame(CvCameraViewFrame frame);
This is the main method of the class that we will modify.
Note:
The camera frame is drawn to the screen via getHolder().unlockCanvasAndPost(canvas); so before that call we must detect the rectangle and shade everything outside it with a semi-transparent color, which makes it easy to judge how well the detected rectangle overlaps the A4 sheet. (Adapt this to your needs: if you don't need that visual cue, skip the outside shading and simply draw the detected rectangle directly.)
Utils.matToBitmap(modified, mCacheBitmap); converts the Mat into a Bitmap, so all our processing happens on the Mat before the bitmap is drawn to the canvas, yielding the rectangle as an org.opencv.core.Rect. The goals are therefore clear: one, process the Mat; two, obtain the Rect.
Material on rectangle extraction online is varied and each approach has its merits, but the several I tried gave poor results, so I fell back on OpenCV's official rectangle-detection example to tune the code. Create a contour helper class:
/**
* Contour helper class
*/
public class CountersAuxiliary {
public static void setFilter(Mat image) {
//Apply gaussian blur to remove noise
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
//Threshold
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 7, 1);
//Invert the image
Core.bitwise_not(image, image);
//Dilate (getStructuringElement expects a shape constant; MORPH_CROSS keeps the
//original behavior, since MORPH_DILATE happens to have the same int value)
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3, 3), new Point(1, 1));
Imgproc.dilate(image, image, kernel);
}
public static void findRectangle(Mat originalImage, Mat image) {
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
List<Rect> rects = new ArrayList<Rect>();
Imgproc.cvtColor(originalImage, image, Imgproc.COLOR_BGR2GRAY);
setFilter(image);
//Find Contours
Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
//For conversion later on
MatOfPoint2f approxCurve = new MatOfPoint2f();
//For each contour found
for (int i = 0; i < contours.size(); i++) {
//Convert contours from MatOfPoint to MatOfPoint2f
MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
//Processing on mMOP2f1 which is in type MatOfPoint2f
double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
if (approxDistance > 1) {
//Find Polygons
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
//Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
//Rectangle Checks - Points, area, convexity
if (points.total() == 4 && Math.abs(Imgproc.contourArea(points)) > 1000 && Imgproc.isContourConvex(points)) {
double cos = 0;
double mcos = 0;
for (int sc = 2; sc < 5; sc++) {
// TO-DO Figure a way to check angle
cos = Math.abs(angle(points.toList().get(sc % 4), points.toList().get(sc - 2), points.toList().get(sc - 1)));
if (cos > mcos) {
mcos = cos;
}
}
if (mcos < 0.3) {
// Get bounding rect of contour
Rect rect = Imgproc.boundingRect(points);
rects.add(rect);
// (optional debug) Imgproc.rectangle(originalImage, rect.tl(), rect.br(), new Scalar(255, 0, 0), -1, 4, 0);
}
}
}
}
}
// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
public static double angle(Point pt1, Point pt2, Point pt0) {
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1 * dx2 + dy1 * dy2) / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
}
}
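The angle() helper and the mcos < 0.3 threshold above can be sanity-checked in isolation. Here is a standalone, OpenCV-free sketch (the Point arguments are replaced by plain double[] pairs, and the square's coordinates are made up):

```java
// Standalone check of the cosine-based corner test used in findRectangle.
public class AngleCheck {

    // Same math as CountersAuxiliary.angle(): cosine of the angle between
    // vectors pt0->pt1 and pt0->pt2, with a small epsilon against division by zero.
    public static double angle(double[] pt1, double[] pt2, double[] pt0) {
        double dx1 = pt1[0] - pt0[0], dy1 = pt1[1] - pt0[1];
        double dx2 = pt2[0] - pt0[0], dy2 = pt2[1] - pt0[1];
        return (dx1 * dx2 + dy1 * dy2)
                / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
    }

    public static void main(String[] args) {
        // Corners of an axis-aligned square, in contour order.
        double[][] sq = { {0, 0}, {100, 0}, {100, 100}, {0, 100} };
        double mcos = 0;
        // Same loop shape as findRectangle: examine three corners of the quad.
        for (int sc = 2; sc < 5; sc++) {
            double cos = Math.abs(angle(sq[sc % 4], sq[sc - 2], sq[sc - 1]));
            if (cos > mcos) mcos = cos;
        }
        // Right angles have cosine 0, so a true rectangle passes the threshold.
        System.out.println(mcos < 0.3); // prints "true"
    }
}
```

A heavily skewed quadrilateral pushes mcos toward 1 and gets rejected, which is what filters out non-rectangular four-point contours.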
Reference it in the method below:
private static int N = 16;
/**
* This method shall be called by the subclasses when they have valid
* object and want it to be delivered to external client (via callback) and
* then displayed on the screen.
*
* @param frame - the current frame to be delivered
*/
protected void deliverAndDrawFrame(CvCameraViewFrame frame) {//this method still contains some leftover debug code; feel free to remove it
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch (Exception e) {
e.printStackTrace();
bmpValid = false;
}
} else {
isCompleted = true;
return;
}
/*****************************************************/
Mat src = new Mat();
Mat src1 = new Mat();
Mat src2 = new Mat();
Mat src3 = new Mat();
Mat des = new Mat();
// Imgproc.resize(modified, src, new Size(modified.width() / N, modified.height() / N));
Imgproc.pyrDown(modified, src);
Imgproc.pyrDown(src, src1);
Imgproc.pyrDown(src1, src2);
Imgproc.pyrDown(src2, src3);
//Each pyrDown call halves the Mat's width and height, so four calls yield a
//thumbnail at 1/16 of the original size. On armeabi this speeds processing up
//by a few hundred times; without the downscale, armeabi cannot keep up in
//real time at all (5+ seconds per frame).
long startTime = System.currentTimeMillis();
modified = src3.clone();
CountersAuxiliary.findRectangle(modified, des);
Log.i("GAFR", "CCC333_old=" + (System.currentTimeMillis() - startTime));
/*****************************************************/
Mat mat = new Mat();//mSource.clone();
Imgproc.Canny(modified, mat, 75, 200);
// Mat tmp = mSource.clone();
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
// Find contours
Mat hierarchy = new Mat();
Imgproc.findContours(mat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
Bitmap bitmap = Bitmap.createBitmap(mat.cols(), mat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mat, bitmap);
int index = 0;
double maxArea = 0;
// Find the contour with the largest area
for (int i = 0; i < contours.size(); i++) {
double area = Imgproc.contourArea(contours.get(i));
if (area > maxArea) {
maxArea = area;
index = i;
}
}
List<org.opencv.core.Rect> rects = new ArrayList<org.opencv.core.Rect>();
MatOfPoint2f approxCurve = new MatOfPoint2f();
for (int i = 0; i < contours.size(); i++) {
//Convert contours from MatOfPoint to MatOfPoint2f
MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
//Processing on mMOP2f1 which is in type MatOfPoint2f
double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
if (approxDistance > 1) {
//Find Polygons
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
//Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
//Rectangle Checks - Points, area, convexity
if (points.total() == 4 && Math.abs(Imgproc.contourArea(points)) > 1000 && Imgproc.isContourConvex(points)) {
double cos = 0;
double mcos = 0;
for (int sc = 2; sc < 5; sc++) {
// TO-DO Figure a way to check angle
cos = Math.abs(CountersAuxiliary.angle(points.toList().get(sc % 4), points.toList().get(sc - 2), points.toList().get(sc - 1)));
if (cos > mcos) {
mcos = cos;
}
}
if (mcos < 0.3) {
// Get bounding rect of contour
org.opencv.core.Rect rect = Imgproc.boundingRect(points);
if (Math.abs(rect.height - rect.width) < 1000) {
System.out.println(i + "| x: " + rect.x + " + width(" + rect.width + "), y: " + rect.y + " + height(" + rect.height + ")");
rects.add(rect);
Imgproc.rectangle(mat, rect.tl(), rect.br(), new Scalar(255, 0, 0), -1, 4, 0);
}
}
}
}
}
Utils.matToBitmap(mat, bitmap);
String str = "";
for (int m = 0; m < rects.size(); m++) {
str += ",rects=" + rects.get(m).toString();
}
String rectStr = "";
if (contours.size() != 0) {//We only photograph A4 paper, so assume the largest-area contour is the A4 region. The polygon fitting below has a smaller error than the max-area approach, so rect gets overwritten by it when available (pick one approach and delete the other).
rect = Imgproc.boundingRect(contours.get(index));
// Imgproc.rectangle(tmp, rect, new Scalar(0.0, 0.0, 255.0), 4, Imgproc.LINE_8);
// mRect = new Rect(rect.x, rect.y, rect.x + rect.width, rect.y + rect.height);
rect.x = rect.x * N;
rect.y = rect.y * N;
rect.width = rect.width * N;
rect.height = rect.height * N;
}
if (rects.size() > 0) {
rect.x = rects.get(0).x * N;
rect.y = rects.get(0).y * N;
rect.width = rects.get(0).width * N;
rect.height = rects.get(0).height * N;
}
/*****************************************************/
mat.release();
hierarchy.release();
src.release();
src1.release();
src2.release();
src3.release();
des.release();
// tmp.release();
if (mCacheBitmap == null || isExist) {
isCompleted = true;
return;
}
if (bmpValid) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
// From here on the drawing is up to you; the rect has already been obtained at this point.
......
}
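The rect obtained above lives in 1/16-scale coordinates, which is why every field is multiplied by N before use. The mapping can be sketched standalone (the detected values here are made up):

```java
// Maps a rect found on the downscaled pyramid image back to full-frame
// coordinates, mirroring the rect.x * N lines above. Pure Java, no OpenCV.
public class RectUpscale {
    static final int PYR_DOWN_CALLS = 4;
    // Each pyrDown call halves width and height, so 4 calls -> factor 16.
    static final int N = 1 << PYR_DOWN_CALLS;

    // Scales a rect detected on the small frame back to the original frame.
    static int[] upscale(int x, int y, int w, int h) {
        return new int[]{ x * N, y * N, w * N, h * N };
    }

    public static void main(String[] args) {
        int[] full = upscale(12, 20, 37, 52); // hypothetical detection at 1/16 scale
        System.out.printf("full-res rect: x=%d y=%d w=%d h=%d%n",
                full[0], full[1], full[2], full[3]);
    }
}
```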
Next, change OpenCV's default landscape orientation to portrait.
In the CameraBridgeViewBase class:
protected void AllocateCache() {
// mCacheBitmap = Bitmap.createBitmap(mFrameWidth, mFrameHeight, Bitmap.Config.ARGB_8888);
/*********************************landscape-to-portrait change**********************************************/
//For correct orientation, mCacheBitmap stores the camera frame rotated 90 degrees.
//After the rotation, mFrameWidth and mFrameHeight are swapped.
int portraitWidth = mFrameHeight;
int portraitHeight = mFrameWidth;
mCacheBitmap = Bitmap.createBitmap(portraitWidth, portraitHeight, Bitmap.Config.ARGB_8888);
/*********************************landscape-to-portrait change**********************************************/
}
protected Size calculateCameraFrameSize(List<?> supportedSizes, ListItemAccessor accessor, int surfaceWidth, int surfaceHeight) {
int calcWidth = 0;
int calcHeight = 0;
// int maxAllowedWidth = (mMaxWidth != MAX_UNSPECIFIED && mMaxWidth < surfaceWidth) ? mMaxWidth : surfaceWidth;
// int maxAllowedHeight = (mMaxHeight != MAX_UNSPECIFIED && mMaxHeight < surfaceHeight) ? mMaxHeight : surfaceHeight;
/*********************************landscape-to-portrait change**********************************************/
//Maximum allowed width and height:
//the camera frame's mMaxWidth must be compared against the surface's surfaceHeight,
//and its mMaxHeight against the surface's surfaceWidth.
int maxAllowedWidth = (mMaxWidth != MAX_UNSPECIFIED && mMaxWidth < surfaceHeight) ? mMaxWidth : surfaceHeight;
int maxAllowedHeight = (mMaxHeight != MAX_UNSPECIFIED && mMaxHeight < surfaceWidth) ? mMaxHeight : surfaceWidth;
/*********************************landscape-to-portrait change**********************************************/
Collections.sort((List<android.hardware.Camera.Size>) supportedSizes, new Comparator<android.hardware.Camera.Size>() {
@Override
public int compare(Camera.Size o1, Camera.Size o2) {
return o2.height - o1.height;
}
});
for (Object size : supportedSizes) {
int width = accessor.getWidth(size);
int height = accessor.getHeight(size);
Log.d(TAG, "trying size: " + width + "x" + height);
if (width <= maxAllowedWidth && height <= maxAllowedHeight) {
if (width >= calcWidth && height >= calcHeight) {
calcWidth = (int) width;
calcHeight = (int) height;
break;
}
}
}
if ((calcWidth == 0 || calcHeight == 0) && supportedSizes.size() > 0) {
Log.i(TAG, "fallback to the first frame size");
Object size = supportedSizes.get(0);
calcWidth = accessor.getWidth(size);
calcHeight = accessor.getHeight(size);
}
return new Size(calcWidth, calcHeight);
}
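The effect of the swapped comparisons can be illustrated with a simplified standalone sketch (the sizes are hypothetical, and the accessor/Comparator machinery is reduced to a largest-first array):

```java
// Simplified model of the portrait-adjusted size selection above: the camera
// width is checked against the surface HEIGHT (and vice versa) because the
// frame is rotated 90 degrees before display.
public class FrameSizePick {

    // Picks the first (i.e. largest) supported camera size fitting the rotated limits.
    static int[] pick(int[][] supportedLargestFirst, int surfaceWidth, int surfaceHeight) {
        int maxAllowedWidth = surfaceHeight;   // camera width vs surface height
        int maxAllowedHeight = surfaceWidth;   // camera height vs surface width
        for (int[] s : supportedLargestFirst) {
            if (s[0] <= maxAllowedWidth && s[1] <= maxAllowedHeight) {
                return s;
            }
        }
        return supportedLargestFirst[0]; // fallback to the first size
    }

    public static void main(String[] args) {
        int[][] supported = { {1920, 1080}, {1280, 720}, {640, 480} };
        int[] best = pick(supported, 1080, 1920); // portrait 1080x1920 surface
        System.out.println(best[0] + "x" + best[1]); // prints "1920x1080"
    }
}
```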
Then modify the JavaCameraView class:
protected boolean initializeCamera(int width, int height) {
......
List<String> FocusModes = params.getSupportedFocusModes();
if (FocusModes != null && FocusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
}
mCamera.setParameters(params);
params = mCamera.getParameters();
mFrameWidth = params.getPreviewSize().width;
mFrameHeight = params.getPreviewSize().height;
if ((getLayoutParams().width == LayoutParams.MATCH_PARENT) && (getLayoutParams().height == LayoutParams.MATCH_PARENT))
// mScale = Math.min(((float)height)/mFrameHeight, ((float)width)/mFrameWidth);
/*********************************landscape-to-portrait change**********************************************/
/* This scale is applied when drawing to the canvas in deliverAndDrawFrame.
It assumes the <JavaCameraView> uses
android:layout_width="match_parent"
android:layout_height="match_parent"
If you want a specific scaled size instead, place the <JavaCameraView> inside a
sized LinearLayout; in portrait orientation the ratios are
surface width / camera frame mFrameHeight
surface height / camera frame mFrameWidth
If you don't need to constrain the <JavaCameraView>, simply removing the if statement should also work. */
mScale = Math.min(((float) width) / mFrameHeight, ((float) height) / mFrameWidth);
/*********************************landscape-to-portrait change**********************************************/
else
mScale = 0;
if (mFpsMeter != null) {
mFpsMeter.setResolution(mFrameWidth, mFrameHeight);
}
int size = mFrameWidth * mFrameHeight;
size = size * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
mBuffer = new byte[size];
// modify the parts marked with ****** above
}
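The portrait mScale line above divides the surface width by the frame height (and vice versa); a standalone sketch with made-up sizes:

```java
// The surface width is divided by the frame HEIGHT (and vice versa) because
// the rotated frame's width equals the camera frame's height.
public class PortraitScale {

    static float scaleFor(int surfaceW, int surfaceH, int frameW, int frameH) {
        return Math.min((float) surfaceW / frameH, (float) surfaceH / frameW);
    }

    public static void main(String[] args) {
        // 1080x1920 portrait surface, 1920x1080 landscape camera frame.
        System.out.println(scaleFor(1080, 1920, 1920, 1080)); // prints "1.0"
    }
}
```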
Then the inner class:
private class JavaCameraFrame implements CvCameraViewFrame {
@Override
public Mat gray() {
// return mYuvFrameData.submat(0, mHeight, 0, mWidth);
/*********************************landscape-to-portrait change**********************************************/
Core.rotate(mYuvFrameData.submat(0, mHeight, 0, mWidth), portrait_gray, Core.ROTATE_90_CLOCKWISE);
return portrait_gray;
/*********************************landscape-to-portrait change**********************************************/
}
@Override
public Mat rgba() {
if (mPreviewFormat == ImageFormat.NV21)
Imgproc.cvtColor(mYuvFrameData, mRgba, Imgproc.COLOR_YUV2RGBA_NV21, 4);
else if (mPreviewFormat == ImageFormat.YV12)
Imgproc.cvtColor(mYuvFrameData, mRgba, Imgproc.COLOR_YUV2RGB_I420, 4); // COLOR_YUV2RGBA_YV12 produces inverted colors
else
throw new IllegalArgumentException("Preview Format can be NV21 or YV12");
// return mRgba;
/*********************************landscape-to-portrait change**********************************************/
Core.rotate(mRgba, portrait_rgba, Core.ROTATE_90_CLOCKWISE);
return portrait_rgba;
/*********************************landscape-to-portrait change**********************************************/
}
public JavaCameraFrame(Mat Yuv420sp, int width, int height) {
super();
mWidth = width;
mHeight = height;
mYuvFrameData = Yuv420sp;
mRgba = new Mat();
/*********************************landscape-to-portrait change**********************************************/
portrait_mHeight = mWidth;
portrait_mWidth = mHeight;
portrait_gray = new Mat(portrait_mHeight, portrait_mWidth, CvType.CV_8UC1);
portrait_rgba = new Mat(portrait_mHeight, portrait_mWidth, CvType.CV_8UC4);
/*********************************landscape-to-portrait change**********************************************/
}
public void release() {
mRgba.release();
portrait_gray.release();
portrait_rgba.release();
}
private Mat mYuvFrameData;
private Mat mRgba;
private int mWidth;
private int mHeight;
/*********************************landscape-to-portrait change**********************************************/
private int portrait_mHeight;
private int portrait_mWidth;
private Mat portrait_gray;
private Mat portrait_rgba;
/*********************************landscape-to-portrait change**********************************************/
}
5. That's about it. Finally, the steps for building armeabi.
Download version 3.4.8 from the OpenCV website; that SDK still ships the armeabi build files,
as shown below.
As before, build "tutorial-2". Before building, configure the .so ABI settings in Gradle, as shown.
Important note:
Older versions compile the .so against the "java3" bindings while newer ones use "java4", and the Java code in the corresponding OpenCV SDKs cannot be mixed. For example, if you build the .so from 3.4.8, the OpenCV Java code you copy must also come from 3.4.8: the C/C++ side of the .so and the Java side correspond one-to-one, different versions differ, and mixing them crashes at runtime.
The remaining steps are the same as above.
Finally, here are the flashlight control methods.
1. Add the flashlight permission in the Manifest.
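The permission entries themselves aren't listed above; a typical declaration (a sketch, so verify the exact set against your target API level) would be:

```xml
<!-- AndroidManifest.xml: camera plus flashlight access -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.FLASHLIGHT" />
<!-- Mark the flash as optional so devices without one can still install the app -->
<uses-feature android:name="android.hardware.camera.flash" android:required="false" />
```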
// Turn on the flashlight
public void turnLightOn() {
if (mCamera == null) {
return;
}
Camera.Parameters parameters = mCamera.getParameters();
if (parameters == null) {
return;
}
List<String> flashModes = parameters.getSupportedFlashModes();
// Check if camera flash exists
if (flashModes == null) {
// Use the screen as a flashlight (next best thing)
return;
}
String flashMode = parameters.getFlashMode();
Log.i(TAG, "Flash mode: " + flashMode);
Log.i(TAG, "Flash modes: " + flashModes);
if (!Camera.Parameters.FLASH_MODE_TORCH.equals(flashMode)) {
// Turn on the flash
if (flashModes.contains(Camera.Parameters.FLASH_MODE_TORCH)) {
parameters.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
mCamera.setParameters(parameters);
} else {
Log.e(TAG, "FLASH_MODE_TORCH not supported");
}
}
}
// Turn off the flashlight
public void turnLightOff() {
if (mCamera == null) {
return;
}
Camera.Parameters parameters = mCamera.getParameters();
if (parameters == null) {
return;
}
List<String> flashModes = parameters.getSupportedFlashModes();
String flashMode = parameters.getFlashMode();
// Check if camera flash exists
if (flashModes == null) {
return;
}
Log.i(TAG, "Flash mode: " + flashMode);
Log.i(TAG, "Flash modes: " + flashModes);
if (!Camera.Parameters.FLASH_MODE_OFF.equals(flashMode)) {
// Turn off the flash
if (flashModes.contains(Camera.Parameters.FLASH_MODE_OFF)) {
parameters.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
mCamera.setParameters(parameters);
} else {
Log.e(TAG, "FLASH_MODE_OFF not supported");
}
}
}