I recently ran into a requirement: record a short video the way WeChat does, play it back on the layer where it was recorded, and post it to Moments, where it plays silently without hurting scrolling performance. When I first got this requirement I was excited; it was a good excuse to dig into AVFoundation. The constant twists along the way wore me out, but that is how a programmer grows. Enough digressing. Today we will look at how to record video with AVCaptureSession + AVCaptureMovieFileOutput, and then compress it and convert it to MP4 with AVAssetExportSession.
First, we should understand which AVFoundation classes are involved in video capture and what each one does.
AVCaptureSession
AVCaptureSession: the media (audio/video) capture session, responsible for routing captured audio and video data to the output objects. One AVCaptureSession can have multiple inputs and outputs.

AVCaptureDevice: an input device such as the microphone or a camera. Through this object you can configure the physical device's properties (camera focus, white balance, and so on).

AVCaptureDeviceInput: the device-input management object. You create an AVCaptureDeviceInput from an AVCaptureDevice and add it to the AVCaptureSession, which manages it.

AVCaptureVideoPreviewLayer: the camera preview layer, a CALayer subclass that shows the live photo or recording feed. It is created for a specific AVCaptureSession.

AVCaptureOutput: the output-data management object that receives the captured data. You normally use one of its subclasses: AVCaptureAudioDataOutput, AVCaptureStillImageOutput, AVCaptureVideoDataOutput, or AVCaptureFileOutput; the object is added to the AVCaptureSession, which manages it. Note: the first few outputs deliver NSData, while AVCaptureFileOutput writes the data to a file. AVCaptureFileOutput is likewise not used directly; you use its subclasses AVCaptureAudioFileOutput and AVCaptureMovieFileOutput. Once an input or output has been added to the AVCaptureSession, the session establishes connections (AVCaptureConnection) between all compatible inputs and outputs.
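Before walking through the steps in detail, here is a minimal sketch of how these classes fit together. Variable names are illustrative and error handling is omitted; it is a sketch, not the full setup used later in this post:

```objectivec
// Minimal capture pipeline: session + input + output + preview layer.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Input: wrap a physical device (here the default camera) in a device input.
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
if ([session canAddInput:input]) {
    [session addInput:input];
}

// Output: a movie file output collects the captured media into a file.
AVCaptureMovieFileOutput *output = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:output]) {
    [session addOutput:output];
}

// Preview: a layer that renders what the camera sees.
AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:session];

// Once both sides are attached, the session creates AVCaptureConnection
// objects between the compatible inputs and outputs.
[session startRunning];
```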
The steps to set up video capture are as follows:
1. Create the AVCaptureSession object.
// Create the capture session (AVCaptureSession) object.
_captureSession = [[AVCaptureSession alloc] init];
if ([_captureSession canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    // The sessionPreset property determines the video resolution.
    [_captureSession setSessionPreset:AVCaptureSessionPreset640x480];
}
2. Use the static methods on AVCaptureDevice to obtain the devices you need: a camera for photos and video, a microphone for audio.
// Get the camera input device so we can create an AVCaptureDeviceInput.
// There are front and back cameras, so we use a helper method that
// fetches a camera by its position.
AVCaptureDevice *videoCaptureDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
if (!videoCaptureDevice) {
    NSLog(@"---- Failed to get the back camera ----");
    return;
}
// Get an audio input device; we can simply take the first one in the array.
AVCaptureDevice *audioCaptureDevice = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio] firstObject];
3. Initialize AVCaptureDeviceInput objects from the input devices.
NSError *error = nil;
// Video input object: created from the input device, it feeds the captured data into the session.
_videoCaptureDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:videoCaptureDevice error:&error];
if (error) {
    NSLog(@"---- Failed to create the video input object: %@ ----", error);
    return;
}
// Audio input object, created the same way.
_audioCaptureDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioCaptureDevice error:&error];
if (error) {
    NSLog(@"---- Failed to create the audio input object: %@ ----", error);
    return;
}
4. Initialize the output object: an AVCaptureStillImageOutput for photos, or an AVCaptureMovieFileOutput for video recording.
// Movie file output object, used to collect the captured data.
_caputureMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
5. Add the AVCaptureDeviceInput inputs and the AVCaptureOutput output to the AVCaptureSession.
// Add the video input object to the session.
if ([_captureSession canAddInput:_videoCaptureDeviceInput]) {
    [_captureSession addInput:_videoCaptureDeviceInput];
}
// Add the audio input object to the session.
if ([_captureSession canAddInput:_audioCaptureDeviceInput]) {
    [_captureSession addInput:_audioCaptureDeviceInput];
}
// Add the movie file output object to the session.
if ([_captureSession canAddOutput:_caputureMovieFileOutput]) {
    [_captureSession addOutput:_caputureMovieFileOutput];
}
AVCaptureConnection *captureConnection = [_caputureMovieFileOutput connectionWithMediaType:AVMediaTypeVideo];
// Enable video stabilization if the connection supports it; here we use the automatic mode.
if ([captureConnection isVideoStabilizationSupported]) {
    captureConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
}
6. Create the AVCaptureVideoPreviewLayer for the session, add the layer to a container view, and call the session's startRunning method to begin capturing.
// Create the preview layer from the session.
_captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
// The layer of the view the preview is displayed on.
CALayer *layer = self.viewContainer.layer;
layer.masksToBounds = YES;
_captureVideoPreviewLayer.frame = layer.bounds;
_captureVideoPreviewLayer.masksToBounds = YES;
_captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill mode
[layer addSublayer:_captureVideoPreviewLayer];
// Start the session so the connected inputs and outputs begin flowing
// and the preview layer starts rendering.
[_captureSession startRunning];
7. Write the captured audio and video data to a file.
Create a record button; tapping it starts recording and writes the recorded movie into the temp directory.

- (IBAction)takeMovie:(id)sender {
    [(UIButton *)sender setSelected:![(UIButton *)sender isSelected]];
    if ([(UIButton *)sender isSelected]) {
        AVCaptureConnection *captureConnection = [self.caputureMovieFileOutput connectionWithMediaType:AVMediaTypeVideo];
        // Turn on cinematic video stabilization if the active format supports it.
        AVCaptureVideoStabilizationMode stabilizationMode = AVCaptureVideoStabilizationModeCinematic;
        if ([self.captureDeviceInput.device.activeFormat isVideoStabilizationModeSupported:stabilizationMode]) {
            [captureConnection setPreferredVideoStabilizationMode:stabilizationMode];
        }
        // If multitasking is supported, begin a background task.
        if ([[UIDevice currentDevice] isMultitaskingSupported]) {
            self.backgroundTaskIdentifier = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:nil];
        }
        // Keep the recording orientation in sync with the preview layer.
        // This property matters: without it the recorded video can come out
        // rotated onto its side.
        captureConnection.videoOrientation = [self.captureVideoPreviewLayer connection].videoOrientation;
        // Set the output file path; here we use the temp directory.
        NSString *outputFilePath = [NSTemporaryDirectory() stringByAppendingString:MOVIEPATH];
        // Convert the path with fileURLWithPath:; a URL built with NSBundle
        // methods may fail to resolve the path.
        NSURL *fileUrl = [NSURL fileURLWithPath:outputFilePath];
        // Start writing the capture buffers to the file URL as they arrive.
        [self.caputureMovieFileOutput startRecordingToOutputFileURL:fileUrl recordingDelegate:self];
    } else {
        // Stop recording.
        [self.caputureMovieFileOutput stopRecording];
        [self.captureSession stopRunning];
        [self completeHandle];
    }
}
Of course, both the start and the end of recording can be observed: the AVCaptureFileOutputRecordingDelegate protocol has the callbacks we need.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didStartRecordingToOutputFileAtURL:(NSURL *)fileURL fromConnections:(NSArray *)connections {
    NSLog(@"---- Recording started ----");
}

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error {
    NSLog(@"---- Recording finished ----");
}
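One loose end worth noting: the record action begins a background task but never ends it. A minimal sketch of closing it in the finish callback, assuming the backgroundTaskIdentifier property shown earlier (this is my addition, not the author's original code):

```objectivec
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error {
    NSLog(@"---- Recording finished ----");
    // End the background task begun when recording started, so the app
    // does not keep it alive after the file has been written.
    UIBackgroundTaskIdentifier taskIdentifier = self.backgroundTaskIdentifier;
    if (taskIdentifier != UIBackgroundTaskInvalid) {
        self.backgroundTaskIdentifier = UIBackgroundTaskInvalid;
        [[UIApplication sharedApplication] endBackgroundTask:taskIdentifier];
    }
}
```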
That completes the recording. But now that the video is recorded, can we immediately upload it to the server and share it with our friends?
We can measure the size (in MB) of the recorded video with the following method:
- (CGFloat)getfileSize:(NSString *)path {
    NSDictionary *outputFileAttributes = [[NSFileManager defaultManager] attributesOfItemAtPath:path error:nil];
    NSLog(@"file size: %f", (unsigned long long)[outputFileAttributes fileSize] / 1024.00 / 1024.00);
    return (CGFloat)[outputFileAttributes fileSize] / 1024.00 / 1024.00;
}
In my own test, a 10-second clip came out at roughly 4.1 MB, and that was with the 640x480 preset. Frustrating, isn't it?
If the recording has to be uploaded to a server afterwards, we certainly cannot upload the raw file as recorded; we need to compress it first. For compression we use the AVAssetExportSession class.
// Here we create a button; tapping it compresses the video and then
// recomputes the size, giving a clear comparison with the uncompressed file.
// Compress the video.
- (IBAction)compressVideo:(id)sender {
    // The recording was written to the temp directory, so read it back from there.
    NSString *savePath = [NSTemporaryDirectory() stringByAppendingString:MOVIEPATH];
    NSURL *saveUrl = [NSURL fileURLWithPath:savePath];
    // Wrap the file URL in an asset so we can work with its contents.
    AVURLAsset *avAsset = [[AVURLAsset alloc] initWithURL:saveUrl options:nil];
    // Ask AVAssetExportSession which export presets are compatible with this asset.
    NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:avAsset];
    // Compress the video.
    if ([compatiblePresets containsObject:AVAssetExportPresetLowQuality]) { // is the low-quality preset available?
        // Create an export session for the asset using the low-quality preset;
        // it re-encodes the asset with the preset's settings.
        AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:avAsset presetName:AVAssetExportPresetLowQuality];
        // Build a timestamped output path in the Documents directory.
        NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
        [formatter setDateFormat:@"yyyy-MM-dd-HH:mm:ss"];
        NSDate *date = [[NSDate alloc] init];
        NSString *outPutPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:[NSString stringWithFormat:@"output-%@.mp4", [formatter stringFromDate:date]]];
        exportSession.outputURL = [NSURL fileURLWithPath:outPutPath];
        // Optimize the file for network use.
        exportSession.shouldOptimizeForNetworkUse = YES;
        // Export as MP4.
        exportSession.outputFileType = AVFileTypeMPEG4;
        // Start the export; the completion block runs when it finishes.
        [exportSession exportAsynchronouslyWithCompletionHandler:^{
            // If the export completed successfully...
            if ([exportSession status] == AVAssetExportSessionStatusCompleted) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    // ...update the label showing the file size.
                    self.videoSize.text = [NSString stringWithFormat:@"%f MB", [self getfileSize:outPutPath]];
                });
            }
        }];
    }
}
After compression, the 10-second, roughly 4 MB clip comes out at under 1 MB.
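The completion handler above only reacts to AVAssetExportSessionStatusCompleted. An export can also fail or be cancelled, so a more defensive completion block, reusing the exportSession from the snippet above, might look like this (a sketch, not the author's original code):

```objectivec
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    switch ([exportSession status]) {
        case AVAssetExportSessionStatusCompleted:
            NSLog(@"Export finished");
            break;
        case AVAssetExportSessionStatusFailed:
            // The error property explains why the export failed
            // (unsupported source, disk full, output file exists, ...).
            NSLog(@"Export failed: %@", exportSession.error);
            break;
        case AVAssetExportSessionStatusCancelled:
            NSLog(@"Export cancelled");
            break;
        default:
            break;
    }
}];
```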
Below are some extensions.
Set the flash to automatic
- (IBAction)flashAutoClick:(UIButton *)sender {
    [self setFlashMode:AVCaptureFlashModeAuto];
    [self setFlashModeButtonStatus];
}
Turn the flash on
- (IBAction)flashOnClick:(UIButton *)sender {
    [self setFlashMode:AVCaptureFlashModeOn];
    [self setFlashModeButtonStatus];
}
Turn the flash off
- (IBAction)flashOffClick:(UIButton *)sender {
    [self setFlashMode:AVCaptureFlashModeOff];
    [self setFlashModeButtonStatus];
}
Notifications
/**
 * Add notifications for an input device.
 */
- (void)addNotificationToCaptureDevice:(AVCaptureDevice *)captureDevice {
    // Note: before observing subject-area-change notifications, the device
    // must first be configured to allow monitoring them.
    [self changeDeviceProperty:^(AVCaptureDevice *captureDevice) {
        captureDevice.subjectAreaChangeMonitoringEnabled = YES;
    }];
    NSNotificationCenter *notificationCenter = [NSNotificationCenter defaultCenter];
    // The capture subject area changed.
    [notificationCenter addObserver:self selector:@selector(areaChange:) name:AVCaptureDeviceSubjectAreaDidChangeNotification object:captureDevice];
}

- (void)removeNotificationFromCaptureDevice:(AVCaptureDevice *)captureDevice {
    NSNotificationCenter *notificationCenter = [NSNotificationCenter defaultCenter];
    [notificationCenter removeObserver:self name:AVCaptureDeviceSubjectAreaDidChangeNotification object:captureDevice];
}

/**
 * Remove all notifications.
 */
- (void)removeNotification {
    NSNotificationCenter *notificationCenter = [NSNotificationCenter defaultCenter];
    [notificationCenter removeObserver:self];
}

- (void)addNotificationToCaptureSession:(AVCaptureSession *)captureSession {
    NSNotificationCenter *notificationCenter = [NSNotificationCenter defaultCenter];
    // The session hit a runtime error.
    [notificationCenter addObserver:self selector:@selector(sessionRuntimeError:) name:AVCaptureSessionRuntimeErrorNotification object:captureSession];
}

/**
 * A device was connected.
 *
 * @param notification The notification object.
 */
- (void)deviceConnected:(NSNotification *)notification {
    NSLog(@"Device connected...");
}

/**
 * A device was disconnected.
 *
 * @param notification The notification object.
 */
- (void)deviceDisconnected:(NSNotification *)notification {
    NSLog(@"Device disconnected.");
}

/**
 * The capture subject area changed.
 *
 * @param notification The notification object.
 */
- (void)areaChange:(NSNotification *)notification {
    NSLog(@"Capture area changed...");
}

/**
 * The session hit a runtime error.
 *
 * @param notification The notification object.
 */
- (void)sessionRuntimeError:(NSNotification *)notification {
    NSLog(@"The session hit a runtime error.");
}
Private methods
/**
 * Get the camera at the given position.
 *
 * @param position The camera position.
 *
 * @return The camera device, or nil if none matches.
 */
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in cameras) {
        if ([camera position] == position) {
            return camera;
        }
    }
    return nil;
}

/**
 * Common helper for changing device properties.
 *
 * @param propertyChange The property-change block.
 */
- (void)changeDeviceProperty:(PropertyChangeBlock)propertyChange {
    AVCaptureDevice *captureDevice = [self.captureDeviceInput device];
    NSError *error;
    // Note: always call lockForConfiguration: before changing device
    // properties, and unlockForConfiguration afterwards.
    if ([captureDevice lockForConfiguration:&error]) {
        propertyChange(captureDevice);
        [captureDevice unlockForConfiguration];
    } else {
        NSLog(@"Error while setting device properties: %@", error.localizedDescription);
    }
}

/**
 * Set the flash mode.
 *
 * @param flashMode The flash mode.
 */
- (void)setFlashMode:(AVCaptureFlashMode)flashMode {
    [self changeDeviceProperty:^(AVCaptureDevice *captureDevice) {
        if ([captureDevice isFlashModeSupported:flashMode]) {
            [captureDevice setFlashMode:flashMode];
        }
    }];
}

/**
 * Set the focus mode.
 *
 * @param focusMode The focus mode.
 */
- (void)setFocusMode:(AVCaptureFocusMode)focusMode {
    [self changeDeviceProperty:^(AVCaptureDevice *captureDevice) {
        if ([captureDevice isFocusModeSupported:focusMode]) {
            [captureDevice setFocusMode:focusMode];
        }
    }];
}

/**
 * Set the exposure mode.
 *
 * @param exposureMode The exposure mode.
 */
- (void)setExposureMode:(AVCaptureExposureMode)exposureMode {
    [self changeDeviceProperty:^(AVCaptureDevice *captureDevice) {
        if ([captureDevice isExposureModeSupported:exposureMode]) {
            [captureDevice setExposureMode:exposureMode];
        }
    }];
}

/**
 * Set focus and exposure at a point.
 *
 * @param point The point of interest.
 */
- (void)focusWithMode:(AVCaptureFocusMode)focusMode exposureMode:(AVCaptureExposureMode)exposureMode atPoint:(CGPoint)point {
    [self changeDeviceProperty:^(AVCaptureDevice *captureDevice) {
        if ([captureDevice isFocusModeSupported:focusMode]) {
            [captureDevice setFocusMode:focusMode];
        }
        if ([captureDevice isFocusPointOfInterestSupported]) {
            [captureDevice setFocusPointOfInterest:point];
        }
        if ([captureDevice isExposureModeSupported:exposureMode]) {
            [captureDevice setExposureMode:exposureMode];
        }
        if ([captureDevice isExposurePointOfInterestSupported]) {
            [captureDevice setExposurePointOfInterest:point];
        }
    }];
}

/**
 * Add a tap gesture; tapping focuses at the tapped point.
 */
- (void)addGenstureRecognizer {
    UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapScreen:)];
    [self.viewContainer addGestureRecognizer:tapGesture];
}

- (void)tapScreen:(UITapGestureRecognizer *)tapGesture {
    CGPoint point = [tapGesture locationInView:self.viewContainer];
    // Convert the UI coordinate into a camera coordinate.
    CGPoint cameraPoint = [self.captureVideoPreviewLayer captureDevicePointOfInterestForPoint:point];
    [self setFocusCursorWithPoint:point];
    [self focusWithMode:AVCaptureFocusModeAutoFocus exposureMode:AVCaptureExposureModeAutoExpose atPoint:cameraPoint];
}

/**
 * Update the flash-mode buttons' state.
 */
- (void)setFlashModeButtonStatus {
    AVCaptureDevice *captureDevice = [self.captureDeviceInput device];
    AVCaptureFlashMode flashMode = captureDevice.flashMode;
    if ([captureDevice isFlashAvailable]) {
        self.flashAutoButton.hidden = NO;
        self.flashOnButton.hidden = NO;
        self.flashOffButton.hidden = NO;
        self.flashAutoButton.enabled = YES;
        self.flashOnButton.enabled = YES;
        self.flashOffButton.enabled = YES;
        switch (flashMode) {
            case AVCaptureFlashModeAuto:
                self.flashAutoButton.enabled = NO;
                break;
            case AVCaptureFlashModeOn:
                self.flashOnButton.enabled = NO;
                break;
            case AVCaptureFlashModeOff:
                self.flashOffButton.enabled = NO;
                break;
            default:
                break;
        }
    } else {
        self.flashAutoButton.hidden = YES;
        self.flashOnButton.hidden = YES;
        self.flashOffButton.hidden = YES;
    }
}

/**
 * Position the focus cursor.
 *
 * @param point The cursor position.
 */
- (void)setFocusCursorWithPoint:(CGPoint)point {
    self.focusCursor.center = point;
    self.focusCursor.transform = CGAffineTransformMakeScale(1.5, 1.5);
    self.focusCursor.alpha = 1.0;
    [UIView animateWithDuration:1.0 animations:^{
        self.focusCursor.transform = CGAffineTransformIdentity;
    } completion:^(BOOL finished) {
        self.focusCursor.alpha = 0;
    }];
}
@end
By 止于浮水 (a Jianshu author)
Original link: //www.greatytc.com/p/7c57c58c253d/comments/1184468
Copyright belongs to the author. Please contact the author for permission before reprinting, and credit them as a "Jianshu author".