Preface
The previous post covered UIImagePickerController and AVCaptureSession + AVCaptureMovieFileOutput (Audio/Video Capture Notes, Part 1). This post covers the AVCaptureSession + AVCaptureVideoDataOutput + AVAssetWriter approach; the result is shown in the demo below (capture demo project).
AVCaptureSession + AVCaptureVideoDataOutput + AVAssetWriter
Both the audio and video inputs and outputs are managed by an AVCaptureSession. AVCaptureVideoDataOutput delivers every captured frame in real time, so each frame can be processed (apply filters, rotate the view to match the device orientation, and so on); finally, AVAssetWriter encodes the frames and writes them to disk. (A sketch of the overall call order follows the flow list below.)
Flow
1. Create the capture session
2. Set up the video input
3. Set up the audio input
4. Set up the video output
5. Set up the audio output
6. Add the video preview layer
7. Add a filter
8. Write the audio/video data
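Before walking through each step, here is a minimal sketch (my own assumption of the wiring, not shown in the original) of how the methods below might be called from a view controller; the writer setup from the last step is typically deferred until recording actually starts:
override func viewDidLoad() {
    super.viewDidLoad()
    // Inputs and outputs first, then the preview layer
    if #available(iOS 10.2, *) {
        setUpVideo(position: .back)
    }
    setUpAudio()
    setUpDataOutPut()
    setUpAudioOutPut()
    setUpLayerView()
    // startRunning() blocks, so call it off the main thread
    DispatchQueue.global().async {
        self.session.startRunning()
    }
}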
- Create the capture session
fileprivate lazy var session: AVCaptureSession = {
    let session = AVCaptureSession()
    session.sessionPreset = .vga640x480
    return session
}()
- Video input
@available(iOS 10.2, *)
func setUpVideo(position: AVCaptureDevice.Position) {
    currentDevicePosition = position
    // Video capture device
    let videoCaptureDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                     for: .video,
                                                     position: position)
    // Video input
    do {
        videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice!)
    } catch {
        print("Failed to create video input: \(error)")
    }
    if self.session.canAddInput(videoInput) {
        self.session.addInput(videoInput)
    }
    if let dataOutPut = dataOutPut {
        let connection = dataOutPut.connection(with: .video)
        // Mirror the front camera
        connection?.isVideoMirrored = currentDevicePosition == .front
    }
}
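Because setUpVideo(position:) takes the camera position, switching cameras can reuse it; a minimal sketch (switchCamera is an assumption, and it assumes videoInput still references the current input):
@available(iOS 10.2, *)
func switchCamera() {
    session.beginConfiguration()
    // Drop the current camera input, then reuse setUpVideo for the opposite position
    session.removeInput(videoInput)
    setUpVideo(position: currentDevicePosition == .front ? .back : .front)
    session.commitConfiguration()
}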
- Audio input
func setUpAudio() {
    // Audio capture device (microphone)
    let audioCaptureDevice = AVCaptureDevice.default(for: .audio)
    do {
        audioInput = try AVCaptureDeviceInput(device: audioCaptureDevice!)
    } catch {
        print("Failed to create audio input: \(error)")
    }
    if session.canAddInput(audioInput) {
        self.session.addInput(audioInput)
    }
}
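Capture also requires user permission for both the camera and the microphone (plus the NSCameraUsageDescription and NSMicrophoneUsageDescription keys in Info.plist). A hedged sketch using the standard authorization API:
AVCaptureDevice.requestAccess(for: .video) { granted in
    guard granted else { return }
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        // Only configure and start the session once both are granted
    }
}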
- Video output
func setUpDataOutPut() {
    dataOutPut = AVCaptureVideoDataOutput()
    // BGRA works well with Core Image and the preview layer
    dataOutPut.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32BGRA)]
    // Drop frames that arrive while the delegate is still busy
    dataOutPut.alwaysDiscardsLateVideoFrames = true
    if session.canAddOutput(dataOutPut) {
        session.addOutput(dataOutPut)
    }
    dataOutPut.setSampleBufferDelegate(self, queue: DispatchQueue(label: "VideoQueue"))
}
- Audio output
func setUpAudioOutPut() {
    let audioOutPut = AVCaptureAudioDataOutput()
    if session.canAddOutput(audioOutPut) {
        session.addOutput(audioOutPut)
    }
    audioOutPut.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audioQueue"))
}
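Note that AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate share the same captureOutput(_:didOutput:from:) callback, so one delegate method receives both streams. The media type check used in the writing section later comes from the sample buffer's format description:
let mediaType = CMFormatDescriptionGetMediaType(CMSampleBufferGetFormatDescription(sampleBuffer)!)
if mediaType == kCMMediaType_Audio {
    // audio sample
} else {
    // video sample
}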
- Video preview layer
func setUpLayerView() {
    previewLayer = CALayer()
    previewLayer.anchorPoint = CGPoint.zero
    // The session's native orientation is landscape (home button on the right),
    // so the layer's width maps to the preset's height, and likewise for the height
    previewLayer.frame = CGRect(x: 0, y: 75, width: 480, height: 640)
    self.view.contentMode = .scaleAspectFit
    self.view.layer.insertSublayer(previewLayer, at: 0)
}
- Add a filter (UIImage, CGImageRef, CIImage)
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // The captured frame arrives as an unencoded CMSampleBuffer;
    // CMSampleBufferGetImageBuffer extracts the underlying CVPixelBuffer
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    // Wrap it in a CIImage
    var outImage = CIImage(cvPixelBuffer: imageBuffer!)
    // Apply the CIFilter, configuring it via KVC
    if filter != nil {
        filter.setValue(outImage, forKey: kCIInputImageKey)
        outImage = filter.outputImage!
    }
    // Rotate the image to match the device orientation
    var t: CGAffineTransform
    if orientation == UIDeviceOrientation.portrait {
        let rotationAngle = currentDevicePosition == .front ? CGFloat.pi / 2.0 : -CGFloat.pi / 2.0
        t = CGAffineTransform(rotationAngle: rotationAngle)
    } else if orientation == UIDeviceOrientation.portraitUpsideDown {
        t = CGAffineTransform(rotationAngle: CGFloat.pi / 2.0)
    } else if orientation == UIDeviceOrientation.landscapeRight {
        t = CGAffineTransform(rotationAngle: CGFloat.pi)
    } else {
        t = CGAffineTransform(rotationAngle: 0)
    }
    outImage = outImage.transformed(by: t)
    // Render to a CGImage via the CIContext (a CIImage is not a bitmap:
    // it cannot be saved to the photo library or converted to JPEG/PNG NSData directly)
    let cgImage = context.createCGImage(outImage, from: outImage.extent)
    // Hand the CGImage to previewLayer for display
    DispatchQueue.main.async {
        self.previewLayer.contents = cgImage
    }
}
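For reference, the filter property can be any Core Image filter; a sepia filter, for example, might be set up like this (the specific filter is just an illustration, not from the original):
filter = CIFilter(name: "CISepiaTone")
filter?.setValue(0.8, forKey: kCIInputIntensityKey)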
Creating the CIContext
lazy var context: CIContext = {
    let eaglContext = EAGLContext(api: EAGLRenderingAPI.openGLES2)
    let options = [CIContextOption.workingColorSpace : NSNull()]
    return CIContext(eaglContext: eaglContext!, options: options)
}()
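One caveat: EAGLContext and the OpenGL ES pipeline have since been deprecated by Apple. On newer systems a Metal-backed CIContext is the usual substitute; a sketch:
lazy var context: CIContext = {
    let options = [CIContextOption.workingColorSpace : NSNull()]
    if let device = MTLCreateSystemDefaultDevice() {
        return CIContext(mtlDevice: device, options: options)
    }
    // Fall back to the default context if Metal is unavailable
    return CIContext(options: options)
}()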
Handling the front-camera mirroring problem (the image appears reversed):
By default the system's front camera is not mirrored, i.e. what you see is the reverse of a mirror image. Since most users are used to the mirrored behaviour, we set isVideoMirrored to true on the video connection during input setup, which mirrors the front camera. This creates a follow-on problem: with the device held upright (home button at the bottom), the front camera's image comes out rotated to the right while the back camera's comes out rotated to the left, so the orientation-based rotation has to turn the view clockwise or counter-clockwise depending on which camera is active.
About rotating the view:
The approach above rotates the displayed image according to the device orientation. It works, but it has a drawback: you cannot rotate the device mid-recording, or part of the footage comes out inverted, and the implementation is more involved than necessary. While studying GPUImage2 for the next post, I found that setting the AVCaptureConnection's videoOrientation property to .portrait makes the output rotate with the device orientation automatically (AVCaptureConnection):
if let dataOutPut = dataOutPut {
    let connection = dataOutPut.connection(with: .video)
    connection?.videoOrientation = .portrait
}
- Writing the audio/video data
Flow
1. Create the AssetWriter
2. Create the AssetWriterVideoInput
3. Create the AssetWriterPixelBufferInput
4. Create the AssetWriterAudioInput
5. Add the audio and video inputs to the AssetWriter and start writing
6. Implement AVCaptureVideoDataOutputSampleBufferDelegate to append the audio/video data
(1) Create the AssetWriter
// Remove any previously stored file
if fileManager.fileExists(atPath: self.videoStoragePath()) {
    do {
        try fileManager.removeItem(atPath: self.videoStoragePath())
    } catch {
        print("Failed to remove old file: \(error)")
    }
}
let path = self.videoStoragePath()
videoUrl = URL(fileURLWithPath: path)
do {
    assetWriter = try AVAssetWriter(outputURL: videoUrl as URL, fileType: .mov)
} catch {
    print("Failed to create AVAssetWriter: \(error)")
}
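videoStoragePath() itself is not shown in the original; a plausible stand-in (hypothetical, any writable path works) could be:
func videoStoragePath() -> String {
    // Hypothetical helper: store the recording in the temporary directory
    return NSTemporaryDirectory() + "capture.mov"
}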
(2) Create the AssetWriterVideoInput (see the audio/video development concepts post)
let numPixels = SCREEN_WIDTH * SCREEN_HEIGHT
let bitsPerPixel: CGFloat = 6.0
let bitsPerSecond = numPixels * bitsPerPixel
let compressionProperties = [AVVideoAverageBitRateKey : bitsPerSecond, // average bit rate
                             AVVideoExpectedSourceFrameRateKey : 30, // frame rate
                             AVVideoMaxKeyFrameIntervalKey : 30, // max keyframe interval
                             AVVideoProfileLevelKey : AVVideoProfileLevelH264BaselineAutoLevel
                             ] as [String : Any]
// Video output settings
let videoCompressionSettings = [AVVideoCodecKey : AVVideoCodecType.h264, // codec; H.264 is the usual choice (hardware encoded)
                                AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill, // scaling mode
                                AVVideoWidthKey : 1920, // video width, in landscape with the home button on the right
                                AVVideoHeightKey : 1080, // video height, same orientation
                                AVVideoCompressionPropertiesKey : compressionProperties] as [String : Any]
assetWriterVideoInput = AVAssetWriterInput(mediaType: .video,
                                           outputSettings: videoCompressionSettings)
// expectsMediaDataInRealTime must be true when feeding data live from a capture session
assetWriterVideoInput.expectsMediaDataInRealTime = true
let rotationAngle = currentDevicePosition == .front ? CGFloat.pi / 2.0 : -CGFloat.pi / 2.0
// Rotate the written video to the correct orientation
assetWriterVideoInput.transform = CGAffineTransform(rotationAngle: rotationAngle)
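As a sanity check on these numbers: at 6 bits per pixel and a 1920 × 1080 frame, the average bit rate works out to 1920 × 1080 × 6 ≈ 12.4 Mbps. Note that the snippet uses SCREEN_WIDTH * SCREEN_HEIGHT, i.e. the screen size in points, so the computed rate varies by device and will generally be lower than a value derived from the 1920 × 1080 output size.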
(3) Create the AssetWriterPixelBufferInput
let sourcePixelBufferAttributesDictionary = [
    kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32BGRA), // pixel format
    kCVPixelBufferWidthKey as String : 1920,
    kCVPixelBufferHeightKey as String : 1080,
    // Allows decoded images to be drawn directly in an OpenGL ES context instead of being
    // copied between the bus and the CPU; this is sometimes called a zero-copy path
    // because no decoded image is copied during drawing
    kCVPixelBufferOpenGLESCompatibilityKey as String : true
] as [String : Any]
assetWriterPixelBufferInput = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: assetWriterVideoInput,
                                                                   sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
(4) Create the AssetWriterAudioInput
let audioCompressionSettings = [AVEncoderBitRatePerChannelKey : 28000, // bit rate per channel
                                AVFormatIDKey : kAudioFormatMPEG4AAC, // audio format
                                AVNumberOfChannelsKey : 1, // number of channels: 1 = mono, 2 = stereo
                                AVSampleRateKey : 22050 // sample rate: how many samples per second are taken from the analog input signal; a major factor in audio quality and file size (lower rate = smaller file, lower quality; 44.1 kHz is typical)
                               ] as [String : Any]
assetWriterAudioInput = AVAssetWriterInput(mediaType: .audio,
                                           outputSettings: audioCompressionSettings)
// Real-time capture requires this on the audio input as well
assetWriterAudioInput.expectsMediaDataInRealTime = true
(5) Add the audio and video inputs to the AssetWriter
if assetWriter.canAdd(assetWriterVideoInput) {
    assetWriter.add(assetWriterVideoInput)
}
if assetWriter.canAdd(assetWriterAudioInput) {
    assetWriter.add(assetWriterAudioInput)
}
assetWriter.startWriting()
// Start the writer's session at the current sample's timestamp so audio and video stay in sync
self.assetWriter.startSession(atSourceTime: self.currentSampleTime)
self.isStart = true
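Both startSession(atSourceTime:) above and the append in step (6) rely on currentSampleTime. The original does not show where it is set; presumably it is refreshed from each incoming buffer in the capture callback with the standard call:
// In captureOutput(_:didOutput:from:), before any append:
self.currentSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)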
(6) Append the data in the AVCaptureVideoDataOutputSampleBufferDelegate, branching on the media type
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Determine whether this sample is audio or video
    let mediaType = CMFormatDescriptionGetMediaType(CMSampleBufferGetFormatDescription(sampleBuffer)!)
    // Append audio
    if mediaType == kCMMediaType_Audio {
        guard isStart else {
            return
        }
        self.assetWriterAudioInput.append(sampleBuffer)
        return
    }
    // Append video
    if isStart {
        // Only append once the input has finished processing previous data
        if (self.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData)! {
            var newPixelBuffer: CVPixelBuffer? = nil
            CVPixelBufferPoolCreatePixelBuffer(nil, self.assetWriterPixelBufferInput!.pixelBufferPool!, &newPixelBuffer)
            // Render the filtered CIImage (outImage, from the filter step above) into the fresh buffer
            self.context.render(outImage, to: newPixelBuffer!, bounds: outImage.extent, colorSpace: nil)
            let success = self.assetWriterPixelBufferInput?.append(newPixelBuffer!, withPresentationTime: self.currentSampleTime!)
            if success == false {
                print("Failed to append pixel buffer")
            }
        }
    }
}
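The snippets above start and append but never finish the file. A minimal sketch of a stop method (stopRecording and the completion handling are assumptions, not from the original):
func stopRecording() {
    isStart = false
    assetWriterVideoInput.markAsFinished()
    assetWriterAudioInput.markAsFinished()
    assetWriter.finishWriting {
        // videoUrl now points to a complete .mov that can be played or exported
        print("asset writer finished, status: \(self.assetWriter.status.rawValue)")
    }
}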
Further reading:
AssetWriterPixelBufferInput (usage and notes)
H.264 hardware encoding and decoding explained (CMSampleBuffer structure and codec usage)
Special thanks to the authors of the articles shared above