Android Multimedia Development (9)----Video Buffer Transfer and the Audio Playback Flow

       In the previous article we briefly walked through the playback flow, covering audio playback, video playback, and A/V synchronization. However, we did not analyze how video buffers are read, nor what happens after audio start. This article examines those two points.

Video Buffer Transfer Flow

video buffer

       In the play flow from the previous article, OMXCodec starts by passing undecoded data to the decoder through its read function and asks the decoder to send the decoded data back.
       Let's look at OMXCodec's read method, in frameworks/av/media/libstagefright/OMXCodec.cpp:


status_t OMXCodec::read(
MediaBuffer **buffer, const ReadOptions *options) {
status_t err = OK;
*buffer = NULL;

Mutex::Autolock autoLock(mLock);
//mState has already been set to EXECUTING by this point
if (mState != EXECUTING && mState != RECONFIGURING) {
return UNKNOWN_ERROR;
}

bool seeking = false;
int64_t seekTimeUs;
ReadOptions::SeekMode seekMode;
//if there is a seek, set the seek-related parameters
if (options && options->getSeekTo(&seekTimeUs, &seekMode)) {
seeking = true;
}
//mInitialBufferSubmit defaults to true, so the first call enters this branch
if (mInitialBufferSubmit) {
mInitialBufferSubmit = false;

if (seeking) {
CHECK(seekTimeUs >= 0);
mSeekTimeUs = seekTimeUs;
mSeekMode = seekMode;

// There's no reason to trigger the code below, there's
// nothing to flush yet.
seeking = false;
mPaused = false;
}
//important: read the data that needs decoding and send it to OMX for decoding
drainInputBuffers();

if (mState == EXECUTING) {
// Otherwise mState == RECONFIGURING and this code will trigger
// after the output port is reenabled.
//also important: collect the decoded data from the output port
fillOutputBuffers();
}
}

if (seeking) {

...seek handling omitted, not important here...
}
//when the output queue mFilledBuffers is empty, wait for the decoder to decode and fill data; if data is available, take it directly
while (mState != ERROR && !mNoMoreOutputData && mFilledBuffers.empty()) {
if ((err = waitForBufferFilled_l()) != OK) {
return err;
}
}
//state error
if (mState == ERROR) {
return UNKNOWN_ERROR;
}
//if the output queue is empty, check whether we reached end of stream
if (mFilledBuffers.empty()) {
return mSignalledEOS ? mFinalStatus : ERROR_END_OF_STREAM;
}
//check whether the output port settings have changed
if (mOutputPortSettingsHaveChanged) {
mOutputPortSettingsHaveChanged = false;

return INFO_FORMAT_CHANGED;
}
/*here we take a BufferInfo out of the output queue and assign its MediaBuffer to the caller's buffer parameter*/
/*when the decoder produces data it puts the buffer holding it into mFilledBuffers, so each time AudioPlayer reads from OMXCodec the data comes from mFilledBuffers*/
size_t index = *mFilledBuffers.begin();
mFilledBuffers.erase(mFilledBuffers.begin());

BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
//this info is now owned by the client; the client will return it after releasing it
info->mStatus = OWNED_BY_CLIENT;

//info->mMediaBuffer->add_ref() adds a reference, presumably consumed on release
info->mMediaBuffer->add_ref();
if (mSkipCutBuffer != NULL) {
mSkipCutBuffer->submit(info->mMediaBuffer);
}
*buffer = info->mMediaBuffer;

return OK;
}

       This flow is fairly long, so let's go through it step by step:
       1) Set up the relevant parameters. With mState set earlier, a few callbacks move the state to EXECUTING, so we don't return here; we also fetch the seek position and timestamp and update the seek-related variables.

       2) Demux the data, send it to the OMX decoding component, and get the decoded data back. (This is the core of the whole flow, covered below.)

//after trimming, the relevant code is:
if (mInitialBufferSubmit) {
mInitialBufferSubmit = false;
drainInputBuffers();
fillOutputBuffers();
}

       Note that mInitialBufferSubmit defaults to true. drainInputBuffers can be thought of as reading one packet of data from the extractor; fillOutputBuffers decodes a packet and places it in an output buffer.

       3) Take a BufferInfo out of the output queue and assign its MediaBuffer to the caller's buffer parameter.

//after trimming, the relevant code is:
size_t index = *mFilledBuffers.begin();
mFilledBuffers.erase(mFilledBuffers.begin());
BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
info->mStatus = OWNED_BY_CLIENT;
info->mMediaBuffer->add_ref();
if (mSkipCutBuffer != NULL) {
mSkipCutBuffer->submit(info->mMediaBuffer);
}
*buffer = info->mMediaBuffer;
return OK;

       Here we take a BufferInfo out of the output queue and assign its MediaBuffer to the caller's buffer parameter. When the decoder produces data it puts the buffer holding it into mFilledBuffers, so every read from OMXCodec pulls from mFilledBuffers. The difference is that when mFilledBuffers is empty we wait for the decoder to decode and fill data; when data is already there, it is taken immediately.

       Before this read step, info->mStatus has been set to OWNED_BY_CLIENT, meaning the info is owned by the client, who will return it after releasing it.
       Setting mStatus lets different modules take control of this piece of memory; the possible roles are:

enum BufferStatus {
OWNED_BY_US,
OWNED_BY_COMPONENT,
OWNED_BY_NATIVE_WINDOW,
OWNED_BY_CLIENT,
};

       Clearly, component means the decoder, and client means the outside caller.

       info->mMediaBuffer->add_ref() adds a reference, presumably consumed on release.

       Next we focus on how data is read from the extractor and how it is decoded.

The drainInputBuffers implementation

       First, locate the method:


void OMXCodec::drainInputBuffers() {
CHECK(mState == EXECUTING || mState == RECONFIGURING);
//DRM related, ignore
if (mFlags & kUseSecureInputBuffers) {
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
for (size_t i = 0; i < buffers->size(); ++i) {
if (!drainAnyInputBuffer()
|| (mFlags & kOnlySubmitOneInputBufferAtOneTime)) {
break;
}
}
} else {
//kPortIndexInput is 0 and kPortIndexOutput is 1: one input port and one output port
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
//we may have allocated multiple input buffers, hence the loop
for (size_t i = 0; i < buffers->size(); ++i) {
BufferInfo *info = &buffers->editItemAt(i);
//first check that we have the right to use it, i.e. it is OWNED_BY_US
if (info->mStatus != OWNED_BY_US) {
continue;
}

if (!drainInputBuffer(info)) {
break;
}
//kOnlySubmitOneInputBufferAtOneTime means only one packet may be read per call; otherwise the loop fills all the buffers
if (mFlags & kOnlySubmitOneInputBufferAtOneTime) {
break;
}
}
}
}

       To explain: we may have allocated several input buffers, hence the loop; for each one we first check whether we have the right to use it, i.e. whether it is OWNED_BY_US, and the same check runs again after a buffer has been filled.
       kOnlySubmitOneInputBufferAtOneTime means only one packet may be read per call; otherwise the loop fills all the buffers.

       Let's continue with the drainInputBuffer implementation. The code is long, so we analyze it in parts:

Part.1:

bool OMXCodec::drainInputBuffer(BufferInfo *info) {
if (info != NULL) {
CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
}

if (mSignalledEOS) {
return false;
}
//if there is unprocessed mCodecSpecificData, handle that configuration data first via mOMX->emptyBuffer(info->mBuffer, OMX_BUFFERFLAG_CODECCONFIG)
if (mCodecSpecificDataIndex < mCodecSpecificData.size()) {
CHECK(!(mFlags & kUseSecureInputBuffers));

const CodecSpecificData *specific =
mCodecSpecificData[mCodecSpecificDataIndex];

size_t size = specific->mSize;
//for AVC/H.264 or HEVC/H.265, the NAL header must be handled
if ((!strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mMIME) ||
!strcasecmp(MEDIA_MIMETYPE_VIDEO_HEVC, mMIME))
&& !(mQuirks & kWantsNALFragments)) {
static const uint8_t kNALStartCode[4] =
{ 0x00, 0x00, 0x00, 0x01 };

CHECK(info->mSize >= specific->mSize + 4);

size += 4;

memcpy(info->mData, kNALStartCode, 4);
memcpy((uint8_t *)info->mData + 4,
specific->mData, specific->mSize);
} else {
CHECK(info->mSize >= specific->mSize);
memcpy(info->mData, specific->mData, specific->mSize);
}

mNoMoreOutputData = false;

CODEC_LOGV("calling emptyBuffer with codec specific data");
//process the mCodecSpecificData; mOMX->emptyBuffer is covered later
status_t err = mOMX->emptyBuffer(
mNode, info->mBuffer, 0, size,
OMX_BUFFERFLAG_ENDOFFRAME | OMX_BUFFERFLAG_CODECCONFIG,
0);
CHECK_EQ(err, (status_t)OK);

info->mStatus = OWNED_BY_COMPONENT;

++mCodecSpecificDataIndex;
return true;
}

if (mPaused) {
return false;
}

...to be continued...
}

       If there is unprocessed mCodecSpecificData, it is handled first via mOMX->emptyBuffer(info->mBuffer, OMX_BUFFERFLAG_CODECCONFIG). (We cover mOMX->emptyBuffer later.)
       For AVC/H.264 or HEVC/H.265, the NAL header must be handled.

       In the H.264/AVC video coding standard, the framework is split into two layers: the Video Coding Layer (VCL) and the Network Abstraction Layer (NAL). The former represents the video content efficiently, while the latter formats the data and supplies header information so it can be carried over various channels and storage media. Each frame of data is therefore one NAL unit (SPS and PPS excepted). In an actual H.264 stream, frames are usually preceded by the separator 00 00 00 01 or 00 00 01; typically the encoder's first output is the SPS and PPS, followed by an I-frame, and so on.

Part.2:

bool OMXCodec::drainInputBuffer(BufferInfo *info) {

...Part.1...

status_t err;

bool signalEOS = false;
int64_t timestampUs = 0;

size_t offset = 0;
int32_t n = 0;


for (;;) {
MediaBuffer *srcBuffer;
//if there is a seek, handle the seek-related state
if (mSeekTimeUs >= 0) {
if (mLeftOverBuffer) {
mLeftOverBuffer->release();
mLeftOverBuffer = NULL;
}

MediaSource::ReadOptions options;
options.setSeekTo(mSeekTimeUs, mSeekMode);

mSeekTimeUs = -1;
mSeekMode = ReadOptions::SEEK_CLOSEST_SYNC;
mBufferFilled.signal();

//mSource here is the mVideoTrack set up in AwesomePlayer; it reads from the extractor, demuxing the various data sources
err = mSource->read(&srcBuffer, &options);

if (err == OK) {
int64_t targetTimeUs;
if (srcBuffer->meta_data()->findInt64(
kKeyTargetTime, &targetTimeUs)
&& targetTimeUs >= 0) {
CODEC_LOGV("targetTimeUs = %lld us", targetTimeUs);
mTargetTimeUs = targetTimeUs;
} else {
mTargetTimeUs = -1;
}
}
} else if (mLeftOverBuffer) {//a previous read left over data; the logic below handles it
srcBuffer = mLeftOverBuffer;
mLeftOverBuffer = NULL;

err = OK;
} else {
//same as above, only without a seek position
err = mSource->read(&srcBuffer);
}

if (err != OK) {
signalEOS = true;
mFinalStatus = err;
mSignalledEOS = true;
mBufferFilled.signal();
break;
}
//DRM related, ignore
if (mFlags & kUseSecureInputBuffers) {
info = findInputBufferByDataPointer(srcBuffer->data());
CHECK(info != NULL);
}
/*below we check whether the data read from the extractor exceeds the available size*/
//compute the remaining capacity of the input buffer
size_t remainingBytes = info->mSize - offset;
//if the remaining capacity is smaller than the demuxed data
if (srcBuffer->range_length() > remainingBytes) {
//if this is the start of the current read
if (offset == 0) {
CODEC_LOGE(
"Codec's input buffers are too small to accomodate "
"buffer read from source (info->mSize = %d, srcLength = %d)",
info->mSize, srcBuffer->range_length());
//the input buffer allocated for decoding is too small to hold the demuxed data
//release the data we read
srcBuffer->release();
srcBuffer = NULL;
//set the error state
setState(ERROR);
return false;
}

mLeftOverBuffer = srcBuffer;
break;
}

bool releaseBuffer = true;
if (mFlags & kStoreMetaDataInVideoBuffers) {
releaseBuffer = false;
info->mMediaBuffer = srcBuffer;
}

if (mFlags & kUseSecureInputBuffers) {//DRM, ignore
// Data in "info" is already provided at this time.

releaseBuffer = false;

CHECK(info->mMediaBuffer == NULL);
info->mMediaBuffer = srcBuffer;
} else {//copy the data we read into the allocated input buffer
CHECK(srcBuffer->data() != NULL) ;
memcpy((uint8_t *)info->mData + offset,
(const uint8_t *)srcBuffer->data()
+ srcBuffer->range_offset(),
srcBuffer->range_length());
}

......
//after reading and copying, advance the offset by the amount read
offset += srcBuffer->range_length();
//if the audio is in Ogg Vorbis format
if (!strcasecmp(MEDIA_MIMETYPE_AUDIO_VORBIS, mMIME)) {
CHECK(!(mQuirks & kSupportsMultipleFramesPerInputBuffer));
CHECK_GE(info->mSize, offset + sizeof(int32_t));

int32_t numPageSamples;
if (!srcBuffer->meta_data()->findInt32(
kKeyValidSamples, &numPageSamples)) {
numPageSamples = -1;
}

memcpy((uint8_t *)info->mData + offset,
&numPageSamples,
sizeof(numPageSamples));

offset += sizeof(numPageSamples);
}

if (releaseBuffer) {
srcBuffer->release();
srcBuffer = NULL;
}
//number of reads, i.e. how many frames so far
++n;
//if reading multiple frames per input buffer is not supported, break after one
if (!(mQuirks & kSupportsMultipleFramesPerInputBuffer)) {
break;
}
//compute the duration gathered in this read
int64_t coalescedDurationUs = lastBufferTimeUs - timestampUs;
//if it exceeds 250 ms, stop coalescing
if (coalescedDurationUs > 250000ll) {
// Don't coalesce more than 250ms worth of encoded data at once.
break;
}
}

...to be continued...
}

       Part.2 contains a lot of code, but only a few core points.
       1) Read data from the extractor, which demuxes the various data sources. mSource here is the mVideoTrack set up in AwesomePlayer; you can go back to where the data source was set, or review Android Multimedia Development (4)----AwesomePlayer data source handling if you don't remember it. Suppose the container format is MEDIA_MIMETYPE_CONTAINER_MPEG2TS and we want its video track; then we look at the getTrack function of the MPEG2TSExtractor class, in frameworks/av/media/libstagefright/mpeg2ts/MPEG2TSExtractor.cpp:

sp<MediaSource> MPEG2TSExtractor::getTrack(size_t index) {
if (index >= mSourceImpls.size()) {
return NULL;
}

bool seekable = true;
if (mSourceImpls.size() > 1) {
CHECK_EQ(mSourceImpls.size(), 2u);

sp<MetaData> meta = mSourceImpls.editItemAt(index)->getFormat();
const char *mime;
CHECK(meta->findCString(kKeyMIMEType, &mime));

if (!strncasecmp("audio/", mime, 6)) {
seekable = false;
}
}

return new MPEG2TSSource(this, mSourceImpls.editItemAt(index), seekable);
}

       The actual demuxed read is MPEG2TSSource's read method:

status_t MPEG2TSSource::read(
MediaBuffer **out, const ReadOptions *options) {
*out = NULL;

int64_t seekTimeUs;
ReadOptions::SeekMode seekMode;
if (mSeekable && options && options->getSeekTo(&seekTimeUs, &seekMode)) {
return ERROR_UNSUPPORTED;
}

status_t finalResult;
while (!mImpl->hasBufferAvailable(&finalResult)) {
if (finalResult != OK) {
return ERROR_END_OF_STREAM;
}

status_t err = mExtractor->feedMore();
if (err != OK) {
mImpl->signalEOS(err);
}
}

return mImpl->read(out, options);
}

       Unfortunately this gets into the MPEG-TS file format, and the read method above is implemented mostly through mImpl. Since the format is quite involved, we stop here; interested readers can dig in on their own.

       2) Check whether the data read from the extractor exceeds the available size. First compute the remaining capacity of the allocated buffer, then compare it with the amount read in the previous step to decide whether the read overflows.

       3) Copy the read data into the allocated input buffer. Once the read completes, the buffer's status is set to OWNED_BY_COMPONENT so the decoder can decode it. Note that reading the data involves one copy rather than sharing a single buffer (some details omitted).

Part.3

bool OMXCodec::drainInputBuffer(BufferInfo *info) {

...Part.1...
...Part.2...

if (n > 1) {
ALOGV("coalesced %d frames into one input buffer", n);
}

OMX_U32 flags = OMX_BUFFERFLAG_ENDOFFRAME;

if (signalEOS) {
flags |= OMX_BUFFERFLAG_EOS;
} else {
mNoMoreOutputData = false;
}

......
//send the demuxed data to OMX for decoding
err = mOMX->emptyBuffer(
mNode, info->mBuffer, 0, offset,
flags, timestampUs);

if (err != OK) {
setState(ERROR);
return false;
}

info->mStatus = OWNED_BY_COMPONENT;

return true;
}

       After the read completes, the buffer's status is set to OWNED_BY_COMPONENT, and the decoder can decode it.
       Now let's see what mOMX->emptyBuffer does once reading is done. It is in frameworks/av/media/libstagefright/omx/OMX.cpp:

status_t OMX::emptyBuffer(
node_id node,
buffer_id buffer,
OMX_U32 range_offset, OMX_U32 range_length,
OMX_U32 flags, OMX_TICKS timestamp) {
return findInstance(node)->emptyBuffer(
buffer, range_offset, range_length, flags, timestamp);
}

       It then looks up its OMXNodeInstance by node id and calls that instance's emptyBuffer method, in frameworks/av/media/libstagefright/omx/OMXNodeInstance.cpp:

status_t OMXNodeInstance::emptyBuffer(
OMX::buffer_id buffer,
OMX_U32 rangeOffset, OMX_U32 rangeLength,
OMX_U32 flags, OMX_TICKS timestamp) {
Mutex::Autolock autoLock(mLock);

OMX_BUFFERHEADERTYPE *header = findBufferHeader(buffer);
header->nFilledLen = rangeLength;
header->nOffset = rangeOffset;
header->nFlags = flags;
header->nTimeStamp = timestamp;

BufferMeta *buffer_meta =
static_cast<BufferMeta *>(header->pAppPrivate);
buffer_meta->CopyToOMX(header);
//mHandle here corresponds to the matching decoding component
OMX_ERRORTYPE err = OMX_EmptyThisBuffer(mHandle, header);

return StatusFromOMXError(err);
}

       At this point we are calling into OMX proper. As covered in Android Multimedia Development (7)----The OpenMax implementation in Android, the OMX adaptation layer looks up the matching decoding component; refer back to that article if needed.

       So far we have analyzed hardware decoding, e.g. on Qualcomm or TI platforms, but to make the flow clearer we pick a software component here, i.e. OMX.google.XX.XX.Decoder.
       Assume the video encoding is HEVC/H.265, so the matching software component is SoftHEVC, in frameworks/av/media/libstagefright/codecs/hevcdec/SoftHEVC.cpp. However, there is no emptyThisBuffer method there, so we must look in its grandparent class SimpleSoftOMXComponent, in frameworks/av/media/libstagefright/omx/SimpleSoftOMXComponent.cpp:

OMX_ERRORTYPE SimpleSoftOMXComponent::emptyThisBuffer(
OMX_BUFFERHEADERTYPE *buffer) {
sp<AMessage> msg = new AMessage(kWhatEmptyThisBuffer, mHandler->id());
msg->setPointer("header", buffer);
msg->post();

return OMX_ErrorNone;
}

       As you can see, it simply posts a kWhatEmptyThisBuffer message; via handler->id, the sender is also the receiver. The handler is:

void SimpleSoftOMXComponent::onMessageReceived(const sp<AMessage> &msg) {
Mutex::Autolock autoLock(mLock);
uint32_t msgType = msg->what();
ALOGV("msgType = %d", msgType);
switch (msgType) {

......

case kWhatEmptyThisBuffer:
case kWhatFillThisBuffer:
{
OMX_BUFFERHEADERTYPE *header;
CHECK(msg->findPointer("header", (void **)&header));

CHECK(mState == OMX_StateExecuting && mTargetState == mState);

bool found = false;
size_t portIndex = (kWhatEmptyThisBuffer == msgType)?
header->nInputPortIndex: header->nOutputPortIndex;
PortInfo *port = &mPorts.editItemAt(portIndex);

for (size_t j = 0; j < port->mBuffers.size(); ++j) {
BufferInfo *buffer = &port->mBuffers.editItemAt(j);

if (buffer->mHeader == header) {
CHECK(!buffer->mOwnedByUs);

buffer->mOwnedByUs = true;

CHECK((msgType == kWhatEmptyThisBuffer
&& port->mDef.eDir == OMX_DirInput)
|| (port->mDef.eDir == OMX_DirOutput));

port->mQueue.push_back(buffer);
onQueueFilled(portIndex);

found = true;
break;
}
}

CHECK(found);
break;
}

default:
TRESPASS();
break;
}
}

       From the code, both cases share the same path and are ultimately handled through onQueueFilled, which brings us to the real processing function. For that we look in the subclass, SoftHEVC.cpp:

void SoftHEVC::onQueueFilled(OMX_U32 portIndex) {

UNUSED(portIndex);

if (mOutputPortSettingsChange != NONE) {
return;
}
//get the input and output queues
List<BufferInfo *> &inQueue = getPortQueue(kInputPortIndex);
List<BufferInfo *> &outQueue = getPortQueue(kOutputPortIndex);

......

while (!outQueue.empty()) {
//input buffer
BufferInfo *inInfo;
OMX_BUFFERHEADERTYPE *inHeader;
//output buffer
BufferInfo *outInfo;
OMX_BUFFERHEADERTYPE *outHeader;
size_t timeStampIx;

inInfo = NULL;
inHeader = NULL;
//take the first input buffer, if any
if (!inQueue.empty()) {
inInfo = *inQueue.begin();
inHeader = inInfo->mHeader;
}

//take the first buffer from the output queue as well
outInfo = *outQueue.begin();
outHeader = outInfo->mHeader;
outHeader->nFlags = 0;
outHeader->nTimeStamp = 0;
outHeader->nOffset = 0;
//check whether the buffer has no data; if the first one has none, there is none
if (inHeader != NULL && (inHeader->nFlags & OMX_BUFFERFLAG_EOS)) {
ALOGD("EOS seen on input");
mReceivedEOS = true;
if (inHeader->nFilledLen == 0) {
inQueue.erase(inQueue.begin());
inInfo->mOwnedByUs = false;
//if the input buffer has no data left, call notifyEmptyBufferDone
notifyEmptyBufferDone(inHeader);
inHeader = NULL;
setFlushMode();
}
}

......

/*************************** decoding details omitted ******************************/

if (s_dec_op.u4_output_present) {
outHeader->nFilledLen = (mWidth * mHeight * 3) / 2;

outHeader->nTimeStamp = mTimeStamps[s_dec_op.u4_ts];
mTimeStampsValid[s_dec_op.u4_ts] = false;

outInfo->mOwnedByUs = false;
outQueue.erase(outQueue.begin());
outInfo = NULL;
//hand the decoded data to the outside by calling notifyFillBufferDone
notifyFillBufferDone(outHeader);
outHeader = NULL;
} else {
/* If in flush mode and no output is returned by the codec,
* then come out of flush mode */

mIsInFlush = false;

/* If EOS was recieved on input port and there is no output
* from the codec, then signal EOS on output port */

if (mReceivedEOS) {
outHeader->nFilledLen = 0;
outHeader->nFlags |= OMX_BUFFERFLAG_EOS;

outInfo->mOwnedByUs = false;
outQueue.erase(outQueue.begin());
outInfo = NULL;
//hand the decoded data to the outside by calling notifyFillBufferDone
notifyFillBufferDone(outHeader);
outHeader = NULL;
resetPlugin();
}
}
}

// TODO: Handle more than one picture data
if (inHeader != NULL) {
inInfo->mOwnedByUs = false;
inQueue.erase(inQueue.begin());
inInfo = NULL;
// once all data in the input buffer has been decoded, call notifyEmptyBufferDone
notifyEmptyBufferDone(inHeader);
inHeader = NULL;
}
}
}

       The above covers decoding the input-buffer data and passing it out through the output buffers. The flow is roughly:
       1) Read data from the input buffer; if it is empty, input has finished, so call notifyEmptyBufferDone and clear the input side;
       2) If data was read, send it to the decoder (the details depend on the encoding format, so we omit them);
       3) Hand the decoded data to the outside by calling notifyFillBufferDone;
       4) Repeat the above.

       The key to the above flow, then, is notifyEmptyBufferDone and notifyFillBufferDone: how the input buffer is released and how the data in the output buffer is passed out. We analyze these two processes next.

Releasing the input side: notifyEmptyBufferDone

       notifyEmptyBufferDone lives three levels up the class hierarchy, in frameworks/av/media/libstagefright/omx/SoftOMXComponent.cpp:

void SoftOMXComponent::notifyEmptyBufferDone(OMX_BUFFERHEADERTYPE *header) {
(*mCallbacks->EmptyBufferDone)(
mComponent, mComponent->pApplicationPrivate, header);
}

       As covered in Android Multimedia Development (7)----The OpenMax implementation in Android, this eventually comes back to OMXNodeInstance to notify the outside that emptyThisBuffer has completed. The callback invoked is OMXNodeInstance's OnEmptyBufferDone, so let's look at its implementation:

// static
OMX_ERRORTYPE OMXNodeInstance::OnEmptyBufferDone(
OMX_IN OMX_HANDLETYPE /* hComponent */,
OMX_IN OMX_PTR pAppData,
OMX_IN OMX_BUFFERHEADERTYPE* pBuffer) {
OMXNodeInstance *instance = static_cast<OMXNodeInstance *>(pAppData);
if (instance->mDying) {
return OMX_ErrorNone;
}
return instance->owner()->OnEmptyBufferDone(instance->nodeID(),
instance->findBufferID(pBuffer), pBuffer);
}

       The owner of OMXNodeInstance is OMX, so the code is in OMX.cpp:

OMX_ERRORTYPE OMX::OnEmptyBufferDone(
node_id node, buffer_id buffer, OMX_IN OMX_BUFFERHEADERTYPE *pBuffer) {
ALOGV("OnEmptyBufferDone buffer=%p", pBuffer);

omx_message msg;
msg.type = omx_message::EMPTY_BUFFER_DONE;
msg.node = node;
msg.u.buffer_data.buffer = buffer;

findDispatcher(node)->post(msg);

return OMX_ErrorNone;
}

       findDispatcher is defined as follows:

sp<OMX::CallbackDispatcher> OMX::findDispatcher(node_id node) {
Mutex::Autolock autoLock(mLock);

ssize_t index = mDispatchers.indexOfKey(node);

return index < 0 ? NULL : mDispatchers.valueAt(index);
}

       mDispatchers was populated earlier in allocateNode via mDispatchers.add(*node, new CallbackDispatcher(instance));. Looking at the actual implementation, CallbackDispatcher's post method eventually calls dispatch:

void OMX::CallbackDispatcher::dispatch(const omx_message &msg) {
if (mOwner == NULL) {
ALOGV("Would have dispatched a message to a node that's already gone.");
return;
}
mOwner->onMessage(msg);
}

       And since the owner is OMXNodeInstance, the message loops back around to OMXNodeInstance's onMessage method:

void OMXNodeInstance::onMessage(const omx_message &msg) {
const sp<GraphicBufferSource>& bufferSource(getGraphicBufferSource());

if (msg.type == omx_message::FILL_BUFFER_DONE) {
......
} else if (msg.type == omx_message::EMPTY_BUFFER_DONE) {
if (bufferSource != NULL) {
// This is one of the buffers used exclusively by
// GraphicBufferSource.
// Don't dispatch a message back to ACodec, since it doesn't
// know that anyone asked to have the buffer emptied and will
// be very confused.

OMX_BUFFERHEADERTYPE *buffer =
findBufferHeader(msg.u.buffer_data.buffer);

bufferSource->codecBufferEmptied(buffer);
return;
}
}

mObserver->onMessage(msg);
}

       onMessage in turn forwards the message to mObserver, i.e. the OMXCodecObserver object constructed in OMXCodec::Create, whose onMessage is implemented as follows:

// from IOMXObserver
virtual void onMessage(const omx_message &msg) {
sp<OMXCodec> codec = mTarget.promote();

if (codec.get() != NULL) {
Mutex::Autolock autoLock(codec->mLock);
codec->on_message(msg);
codec.clear();
}
}

       So it ends up back in OMXCodec; concretely:

void OMXCodec::on_message(const omx_message &msg) {
if (mState == ERROR) {
/*
* only drop EVENT messages, EBD and FBD are still
* processed for bookkeeping purposes
*/

if (msg.type == omx_message::EVENT) {
ALOGW("Dropping OMX EVENT message - we're in ERROR state.");
return;
}
}

switch (msg.type) {
......
case omx_message::EMPTY_BUFFER_DONE:
{
IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;

CODEC_LOGV("EMPTY_BUFFER_DONE(buffer: %u)", buffer);

Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
size_t i = 0;
while (i < buffers->size() && (*buffers)[i].mBuffer != buffer) {
++i;
}

CHECK(i < buffers->size());
if ((*buffers)[i].mStatus != OWNED_BY_COMPONENT) {
ALOGW("We already own input buffer %u, yet received "
"an EMPTY_BUFFER_DONE.", buffer);
}

BufferInfo* info = &buffers->editItemAt(i);
info->mStatus = OWNED_BY_US;

// Buffer could not be released until empty buffer done is called.
if (info->mMediaBuffer != NULL) {
//release() is called on info->mMediaBuffer here, but since its reference count stays above zero it is not actually released
info->mMediaBuffer->release();
info->mMediaBuffer = NULL;
}

if (mPortStatus[kPortIndexInput] == DISABLING) {
CODEC_LOGV("Port is disabled, freeing buffer %u", buffer);

status_t err = freeBuffer(kPortIndexInput, i);
CHECK_EQ(err, (status_t)OK);
} else if (mState != ERROR
&& mPortStatus[kPortIndexInput] != SHUTTING_DOWN) {
CHECK_EQ((int)mPortStatus[kPortIndexInput], (int)ENABLED);

if (mFlags & kUseSecureInputBuffers) {
drainAnyInputBuffer();
} else {
//the key part: drainInputBuffer(&buffers->editItemAt(i)) is called to refill data
//in other words, once decode playback starts, data is read and decoded in a loop here; the output side comes later in fillOutputBuffer
drainInputBuffer(&buffers->editItemAt(i));
}
}
break;
}
......
}
}

       Although info->mMediaBuffer->release() is called here, the reference count stays above zero, so nothing is actually released.
       Second, after the release, drainInputBuffer(&buffers->editItemAt(i)) is called to refill data. In other words, once decode playback starts, this is where data is read and decoded in a loop; the output side is handled later by fillOutputBuffer.

       That completes the analysis of releasing the input side via notifyEmptyBufferDone. This part is quite convoluted, but it all falls into place once traced; please read it carefully. Next we analyze the output side, notifyFillBufferDone(outHeader).

Delivering the output data: notifyFillBufferDone(outHeader)

       notifyFillBufferDone is likewise in SoftOMXComponent.cpp:

void SoftOMXComponent::notifyFillBufferDone(OMX_BUFFERHEADERTYPE *header) {
(*mCallbacks->FillBufferDone)(
mComponent, mComponent->pApplicationPrivate, header);
}

       The analysis follows the same path as the previous step, ending in OMX's OnFillBufferDone method:


OMX_ERRORTYPE OMX::OnFillBufferDone(
node_id node, buffer_id buffer, OMX_IN OMX_BUFFERHEADERTYPE *pBuffer) {
ALOGV("OnFillBufferDone buffer=%p", pBuffer);

omx_message msg;
msg.type = omx_message::FILL_BUFFER_DONE;
msg.node = node;
msg.u.extended_buffer_data.buffer = buffer;
msg.u.extended_buffer_data.range_offset = pBuffer->nOffset;
msg.u.extended_buffer_data.range_length = pBuffer->nFilledLen;
msg.u.extended_buffer_data.flags = pBuffer->nFlags;
msg.u.extended_buffer_data.timestamp = pBuffer->nTimeStamp;

findDispatcher(node)->post(msg);

return OMX_ErrorNone;
}

       The final handling is in OMXCodec.cpp:

void OMXCodec::on_message(const omx_message &msg) {
if (mState == ERROR) {
/*
* only drop EVENT messages, EBD and FBD are still
* processed for bookkeeping purposes
*/

if (msg.type == omx_message::EVENT) {
ALOGW("Dropping OMX EVENT message - we're in ERROR state.");
return;
}
}

switch (msg.type) {
......
case omx_message::FILL_BUFFER_DONE:
{
IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
OMX_U32 flags = msg.u.extended_buffer_data.flags;

......

Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
size_t i = 0;
while (i < buffers->size() && (*buffers)[i].mBuffer != buffer) {
++i;
}

CHECK(i < buffers->size());
BufferInfo *info = &buffers->editItemAt(i);

if (info->mStatus != OWNED_BY_COMPONENT) {
ALOGW("We already own output buffer %u, yet received "
"a FILL_BUFFER_DONE.", buffer);
}
//first set mStatus to OWNED_BY_US, so the component can no longer operate on it
info->mStatus = OWNED_BY_US;

if (mPortStatus[kPortIndexOutput] == DISABLING) {
......
} else if (mPortStatus[kPortIndexOutput] != SHUTTING_DOWN) {
CHECK_EQ((int)mPortStatus[kPortIndexOutput], (int)ENABLED);

MediaBuffer *buffer = info->mMediaBuffer;
bool isGraphicBuffer = buffer->graphicBuffer() != NULL;

......
/*buffer parameter setup; can be skimmed*/
buffer->set_range(
msg.u.extended_buffer_data.range_offset,
msg.u.extended_buffer_data.range_length);

buffer->meta_data()->clear();

buffer->meta_data()->setInt64(
kKeyTime, msg.u.extended_buffer_data.timestamp);

if (msg.u.extended_buffer_data.flags & OMX_BUFFERFLAG_SYNCFRAME) {
buffer->meta_data()->setInt32(kKeyIsSyncFrame, true);
}
bool isCodecSpecific = false;
if (msg.u.extended_buffer_data.flags & OMX_BUFFERFLAG_CODECCONFIG) {
buffer->meta_data()->setInt32(kKeyIsCodecConfig, true);
isCodecSpecific = true;
}

if (isGraphicBuffer || mQuirks & kOutputBuffersAreUnreadable) {
buffer->meta_data()->setInt32(kKeyIsUnreadable, true);
}

buffer->meta_data()->setInt32(
kKeyBufferID,
msg.u.extended_buffer_data.buffer);

if (msg.u.extended_buffer_data.flags & OMX_BUFFERFLAG_EOS) {
CODEC_LOGV("No more output data.");
mNoMoreOutputData = true;
}

if (mIsEncoder && mIsVideo) {
int64_t decodingTimeUs = isCodecSpecific? 0: getDecodingTimeUs();
buffer->meta_data()->setInt64(kKeyDecodingTime, decodingTimeUs);
}

......
//the core is the next few lines: push this buffer into mFilledBuffers
mFilledBuffers.push_back(i);
mBufferFilled.signal();
if (mIsEncoder) {
sched_yield();
}
}
break;
}
......
}
}

       The body of the code above is small:
       1) First set mStatus to OWNED_BY_US, so the component can no longer touch the buffer;
       2) Set up the metadata on the decoded buffer;
       3) Finally, push the buffer into mFilledBuffers.

       That completes the analysis of delivering the output data via notifyFillBufferDone. mFilledBuffers now holds data, ready for the next step of the Video Buffer transfer flow: fillOutputBuffers.

The fillOutputBuffers implementation

       Back to the beginning: the first step of OMXCodec's read, drainInputBuffers, demuxed the data and sent it to OMX for decoding, with the results returned through mFilledBuffers.
       This step, fillOutputBuffers, collects that returned decoded data. First the fillOutputBuffers function:


void OMXCodec::fillOutputBuffers() {
CHECK_EQ((int)mState, (int)EXECUTING);

......

Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
for (size_t i = 0; i < buffers->size(); ++i) {
BufferInfo *info = &buffers->editItemAt(i);
if (info->mStatus == OWNED_BY_US) {
//found an output BufferInfo we own; kick off output
fillOutputBuffer(&buffers->editItemAt(i));
}
}
}

       For each output BufferInfo we own, it kicks off output:

 void OMXCodec::fillOutputBuffer(BufferInfo *info) {
CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);

......
CODEC_LOGV("Calling fillBuffer on buffer %p", info->mBuffer);
//the same chain as in the previous step applies; this ends up inside the decoder component
status_t err = mOMX->fillBuffer(mNode, info->mBuffer);

if (err != OK) {
CODEC_LOGE("fillBuffer failed w/ error 0x%08x", err);

setState(ERROR);
return;
}

info->mStatus = OWNED_BY_COMPONENT;
}

       This step follows the same path analyzed above and ends up inside the decoder component, again taking HEVC/H.265 as the example. The chain is:
       OMXNodeInstance::fillBuffer ---> SimpleSoftOMXComponent::fillThisBuffer ---> SimpleSoftOMXComponent::onMessageReceived ---> SoftHEVC::onQueueFilled. Playback then advances through the two functions notifyEmptyBufferDone(inHeader) and notifyFillBufferDone(outHeader). This time, however, we end up back in OMXCodec's on_message callback, in the FILL_BUFFER_DONE case, whose flow we already analyzed at the end of the previous step.

       That completes the fillOutputBuffers part, whose job is to transfer the data in the output buffers.

Summary of the Video Buffer Transfer Flow

       Summing the flow up, the details are as follows:
1 Call OMXCodec::read()
1.1 First check ReadOptions to see whether this is a seek
1.2 If not a seek, the current mState must be EXECUTING or RECONFIGURING
1.3 If this is the first submitted frame, call drainInputBuffers() first
1.3.1 drainInputBuffers() may only run while mState is EXECUTING, RECONFIGURING, or FLUSHING
1.3.2 Loop over every BufferInfo in mPortBuffers[input] and call drainInputBuffer(info) for each whose info->mStatus is OWNED_BY_US
1.3.2.1 drainInputBuffer(BufferInfo) requires the buffer to be OWNED_BY_US
1.3.2.2 If there is unprocessed mCodecSpecificData, handle that configuration data first via mOMX->emptyBuffer(info->mBuffer, OMX_BUFFERFLAG_CODECCONFIG)
1.3.2.3 If mSignalledEOS or mPaused is true, stop draining and return false
1.3.2.4 Loop calling mSource->read() to fetch a compressed srcBuffer; on failure set mSignalledEOS to true, on success copy it into info->mData
1.3.2.5 The result is that info->mData holds the data accumulated from several mSource reads, with timestampUs set to the kKeyTime of the first chunk
1.3.2.6 Set flags to OMX_BUFFERFLAG_ENDOFFRAME, OR-ing in OMX_BUFFERFLAG_EOS if EOS was just seen
1.3.2.7 Call mOMX->emptyBuffer(mNode, info->mBuffer, 0, offset, flags, timestampUs);
1.3.2.8 If emptyBuffer returns OK, set info->mStatus to OWNED_BY_COMPONENT and return true; otherwise set mState to ERROR and return false
1.4 Then call fillOutputBuffers()
1.4.1 fillOutputBuffers() may only run while mState is EXECUTING or FLUSHING
1.4.2 Loop over every BufferInfo in mPortBuffers[output] and call fillOutputBuffer(info) for each whose info->mStatus is OWNED_BY_US
1.4.2.1 If mNoMoreOutputData is true, return
1.4.2.2 If info->mMediaBuffer is non-null, take its GraphicBuffer and call mNativeWindow->lockBuffer(mNativeWindow, graphicBuffer) to lock it; on error set mState to ERROR
1.4.2.3 Call mOMX->fillBuffer(mNode, info->mBuffer)
1.4.2.4 If fillBuffer returns OK, set info->mStatus to OWNED_BY_COMPONENT; otherwise set mState to ERROR; then return
1.5 If mState is not ERROR, mNoMoreOutputData is false, mFilledBuffers is empty, and mOutputPortSettingsChangedPending is false, call waitForBufferFilled_l() to wait on mBufferFilled
1.6 After waking from waitForBufferFilled_l(), if mFilledBuffers is still empty: if mOutputPortSettingsChangedPending is true, run the deferred onPortSettingsChanged(); otherwise return EOS
1.7 Take the first buffer from mFilledBuffers; its mStatus must be OWNED_BY_US at this point
1.8 Set it to OWNED_BY_CLIENT, add a reference via its mMediaBuffer->add_ref(), assign the mMediaBuffer to *buffer, and return it to AwesomePlayer
Audio Playback Flow (Brief Analysis)

audio playback

       As seen in the previous article, audio playback starts from AudioPlayer's start function, in frameworks/av/media/libstagefright/AudioPlayer.cpp:

status_t AudioPlayer::start(bool sourceAlreadyStarted) {

    ......
    // Handle any pending seek first
    MediaSource::ReadOptions options;
    if (mSeeking) {
        options.setSeekTo(mSeekTimeUs);
        mSeeking = false;
    }
    // Read the first buffer of decoded audio from mSource; this is the
    // same read() we analyzed in the Video Buffer flow above
    mFirstBufferResult = mSource->read(&mFirstBuffer, &options);
    if (mFirstBufferResult == INFO_FORMAT_CHANGED) {
        ALOGV("INFO_FORMAT_CHANGED!!!");

        CHECK(mFirstBuffer == NULL);
        mFirstBufferResult = OK;
        mIsFirstBuffer = false;
    } else {
        mIsFirstBuffer = true;
    }

    ... some checks omitted ...
    // If mAudioSink exists -- we set it up back in setDataSource; it is
    // MediaPlayerService's AudioOutput
    if (mAudioSink.get() != NULL) {
        ......
        // Call AudioOutput's open(); note the AudioPlayer::AudioSinkCallback
        // argument -- we will come back to this callback later
        status_t err = mAudioSink->open(
                mSampleRate, numChannels, channelMask, audioFormat,
                DEFAULT_AUDIOSINK_BUFFERCOUNT,
                &AudioPlayer::AudioSinkCallback,
                this,
                (audio_output_flags_t)flags,
                useOffload() ? &offloadInfo : NULL);

        if (err == OK) {
            mLatencyUs = (int64_t)mAudioSink->latency() * 1000;
            mFrameSize = mAudioSink->frameSize();

            if (useOffload()) {
                // If the playback is offloaded to h/w we pass the
                // HAL some metadata information
                // We don't want to do this for PCM because it will be going
                // through the AudioFlinger mixer before reaching the hardware
                sendMetaDataToHal(mAudioSink, format);
            }

            err = mAudioSink->start();
            // do not alter behavior for non offloaded tracks: ignore start status.
            if (!useOffload()) {
                err = OK;
            }
        }

        if (err != OK) {
            if (mFirstBuffer != NULL) {
                mFirstBuffer->release();
                mFirstBuffer = NULL;
            }

            if (!sourceAlreadyStarted) {
                mSource->stop();
            }

            return err;
        }

    } else {  // mAudioSink does not exist

        ......
        // Create an AudioTrack ourselves
        mAudioTrack = new AudioTrack(
                AUDIO_STREAM_MUSIC, mSampleRate, AUDIO_FORMAT_PCM_16_BIT, audioMask,
                0 /*frameCount*/, AUDIO_OUTPUT_FLAG_NONE, &AudioCallback, this,
                0 /*notificationFrames*/);

        ......
        mLatencyUs = (int64_t)mAudioTrack->latency() * 1000;
        mFrameSize = mAudioTrack->frameSize();
        // Then start the AudioTrack
        mAudioTrack->start();
    }

    mStarted = true;
    mPlaying = true;
    mPinnedTimeUs = -1ll;

    return OK;
}

       As before, let's look at it piece by piece:
       1) Read the first buffer of decoded audio data via mSource->read(); we analyzed this read method in the Video Buffer flow above;
       2) If mAudioSink exists -- we set it up back in setDataSource, and it is MediaPlayerService's AudioOutput -- open and start it; see the clock-setup part of the previous article if you need a refresher;
       3) If mAudioSink does not exist, create an AudioTrack and start it (we can skip this branch, since mAudioSink was already created in setDataSource).

       So during startup, AudioPlayer first fetches the first frame of decoded data and then opens the audio output. This process closely mirrors the video buffer flow.

       When opening the audio output, AudioPlayer also hands it a callback function pointer. From then on, every time the callback fires, AudioPlayer goes to the audio decoder and fetches more decoded data. Let's look at these callbacks:

// static
void AudioPlayer::AudioCallback(int event, void *user, void *info) {
    static_cast<AudioPlayer *>(user)->AudioCallback(event, info);
}

// static
size_t AudioPlayer::AudioSinkCallback(
        MediaPlayerBase::AudioSink * /* audioSink */,
        void *buffer, size_t size, void *cookie,
        MediaPlayerBase::AudioSink::cb_event_t event) {
    AudioPlayer *me = (AudioPlayer *)cookie;

    switch(event) {
    case MediaPlayerBase::AudioSink::CB_EVENT_FILL_BUFFER:
        return me->fillBuffer(buffer, size);

    case MediaPlayerBase::AudioSink::CB_EVENT_STREAM_END:
        ALOGV("AudioSinkCallback: stream end");
        me->mReachedEOS = true;
        me->notifyAudioEOS();
        break;

    case MediaPlayerBase::AudioSink::CB_EVENT_TEAR_DOWN:
        ALOGV("AudioSinkCallback: Tear down event");
        me->mObserver->postAudioTearDown();
        break;
    }

    return 0;
}

       These two callbacks correspond to the mAudioSink-present and mAudioSink-absent cases from the previous step. Either way, the call ends up in AudioPlayer's own fillBuffer method, which we look at next (most of the logic omitted):

size_t AudioPlayer::fillBuffer(void *data, size_t size) {
    ... most of the logic omitted ...
    err = mSource->read(&mInputBuffer, &options);
    ... most of the logic omitted ...
    memcpy((char *)data + size_done,
           (const char *)mInputBuffer->data() + mInputBuffer->range_offset(),
           copy);
}

       The reading of decoded audio data is thus driven entirely by the callback: each time it fires, fillBuffer copies decoded data (mInputBuffer) into data, and the audio output then consumes and plays that data.

       As for the audio decoder itself, it works the same way as the video decoder; refer to the video buffer flow above.
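       The pull model described above can be sketched in standalone C++. MiniAudioPlayer and queueDecoded are made-up names, and the real fillBuffer also handles seeking, EOS, and timestamp bookkeeping that are omitted here; the sketch only shows the core loop of copying queued decoded data into the sink-provided buffer:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <deque>
#include <vector>

// Hypothetical sketch of the callback-driven pull model: the audio sink
// invokes the fill callback, and the callback drains decoded data from
// the decoder's output queue (standing in for mSource->read()).
class MiniAudioPlayer {
public:
    // Decoder side: enqueue one buffer of decoded PCM data.
    void queueDecoded(std::vector<char> pcm) {
        mDecoded.push_back(std::move(pcm));
    }

    // Analogue of AudioPlayer::fillBuffer(): copy up to `size` bytes of
    // decoded data into the sink's buffer; return how many bytes were
    // written (the sink plays exactly that many).
    size_t fillBuffer(void* data, size_t size) {
        size_t done = 0;
        while (done < size && !mDecoded.empty()) {
            std::vector<char>& in = mDecoded.front();
            size_t copy = std::min(size - done, in.size() - mOffset);
            std::memcpy(static_cast<char*>(data) + done,
                        in.data() + mOffset, copy);
            done += copy;
            mOffset += copy;
            if (mOffset == in.size()) {  // front buffer fully consumed
                mDecoded.pop_front();
                mOffset = 0;
            }
        }
        return done;
    }

private:
    std::deque<std::vector<char>> mDecoded;  // decoded-but-unplayed data
    size_t mOffset = 0;                      // read offset into front buffer
};
```

       Note the design point this makes concrete: one decoded buffer may span several callback invocations (hence the offset), and one callback may drain several decoded buffers, so the two sides run at independent rates.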

Conclusion

       The real focus of this section was the Video Buffer transfer flow: how input buffers are drained and refilled, and how data in the output buffers is handed onward. My coverage is rough at best -- I am still a beginner teaching myself multimedia -- so if you spot any mistakes, please point them out and I will fix them right away.
