Android Multimedia Development (4) -- AwesomePlayer Data Source Handling

       We dug up quite a lot just by chasing setDataSource; this time we analyze AwesomePlayer's setDataSource flow.

Setting the Data Source

       From the previous articles, we know where the StageFright framework sits in the overall playback stack:
(Figure: the role of StageFright)

       The previous chapter covered the player's overall flow. To analyze data source setup, we tweak that chapter's diagram slightly:
(Figure: changes for data source setup)

       setDataSource specifies the player's data source, which can be a URI or an fd: an http:// or rtsp:// URL, a local path, or a local file descriptor. Ultimately the parameters passed down from the upper layer are converted into a DataSource, which supplies data for the demux step that follows.

       Let's follow AwesomePlayer's setDataSource; as before we pick the fd variant for ease of analysis. It lives in framework/av/media/libstagefright/AwesomePlayer.cpp:

status_t AwesomePlayer::setDataSource(
        int fd, int64_t offset, int64_t length) {
    Mutex::Autolock autoLock(mLock);

    reset_l();
    // Create a FileSource object, which mainly provides file-access methods
    sp<DataSource> dataSource = new FileSource(fd, offset, length);
    // Check that the file is OK
    status_t err = dataSource->initCheck();
    if (err != OK) {
        return err;
    }
    // Save it in a member variable
    mFileSource = dataSource;

    {
        Mutex::Autolock autoLock(mStatsLock);
        mStats.mFd = fd;
        mStats.mURI = String8();
    }
    // Finally this calls setDataSource_l(dataSource)
    return setDataSource_l(dataSource);
}

       So far the externally supplied fd has been wrapped in a FileSource object. FileSource.cpp implements various file operations and lives in framework/av/media/libstagefright/FileSource.cpp; take a look if you're interested. Let's continue into the setDataSource_l(dataSource) method:
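Conceptually, FileSource exposes a window of a file described by (fd, offset, length): reads are offset by the window start and clamped so the caller never sees bytes outside the window. As a rough illustration (an in-memory analogue, not the real FileSource API), the clamping logic looks like this:

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <string>

// Simplified analogue of FileSource's (fd, offset, length) window:
// reads are shifted by the window start and clamped to the window end.
class WindowedSource {
public:
    WindowedSource(std::string data, size_t offset, size_t length)
        : mData(std::move(data)), mOffset(offset), mLength(length) {}

    // Returns bytes copied, or -1 on an out-of-range read.
    long readAt(size_t pos, char* out, size_t size) const {
        if (pos >= mLength) return -1;                        // past the window
        size_t avail = std::min(size, mLength - pos);         // clamp to window end
        if (mOffset + pos + avail > mData.size()) return -1;  // underlying file too short
        memcpy(out, mData.data() + mOffset + pos, avail);
        return static_cast<long>(avail);
    }

private:
    std::string mData;  // stands in for the file behind the fd
    size_t mOffset;     // window start within the file
    size_t mLength;     // window size
};
```

This is why an apk can hand the player an fd pointing into the middle of a larger file: the window arithmetic hides everything outside [offset, offset+length).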

status_t AwesomePlayer::setDataSource_l(
        const sp<DataSource> &dataSource) {
    // Create a different MediaExtractor depending on the file type
    sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource);

    if (extractor == NULL) {
        return UNKNOWN_ERROR;
    }
    // Check whether this is a DRM-protected file
    if (extractor->getDrmFlag()) {
        checkDrmStatus(dataSource);
    }
    // Finally we end up here
    return setDataSource_l(extractor);
}

       This method creates a different MediaExtractor depending on the file type. Let's see how it classifies files, in framework/av/media/libstagefright/MediaExtractor.cpp:

// static
// The second parameter, mime, is optional and defaults to NULL
sp<MediaExtractor> MediaExtractor::Create(
        const sp<DataSource> &source, const char *mime) {
    sp<AMessage> meta;

    String8 tmp;
    if (mime == NULL) {  // no MIME type was passed in (defaults to NULL)
        float confidence;
        // Sniff the file's MIME type via the DataSource
        if (!source->sniff(&tmp, &confidence, &meta)) {
            ALOGV("FAILED to autodetect media content.");

            return NULL;
        }
        // Copy the sniffed MIME type into the local variable
        mime = tmp.string();
        ALOGV("Autodetected media content as '%s' with confidence %.2f",
              mime, confidence);
    }
    // Whether this is a DRM-protected file
    bool isDrm = false;
    // DRM MIME type syntax is "drm+type+original" where
    // type is "es_based" or "container_based" and
    // original is the content's cleartext MIME type
    // For DRM files the MIME string differs; check for the "drm+type+original" form
    if (!strncmp(mime, "drm+", 4)) {
        const char *originalMime = strchr(mime+4, '+');
        if (originalMime == NULL) {
            // second + not found
            return NULL;
        }
        ++originalMime;
        if (!strncmp(mime, "drm+es_based+", 13)) {
            // DRMExtractor sets container metadata kKeyIsDRM to 1
            return new DRMExtractor(source, originalMime);
        } else if (!strncmp(mime, "drm+container_based+", 20)) {
            mime = originalMime;
            isDrm = true;
        } else {
            return NULL;
        }
    }
    // Below, the MIME type identifies the container (e.g. audio/wav -> wav,
    // video/x-msvideo -> avi) and decides which Extractor (demuxer) to create
    MediaExtractor *ret = NULL;
    if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4)
            || !strcasecmp(mime, "audio/mp4")) {
        ret = new MPEG4Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
        ret = new MP3Extractor(source, meta);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)
            || !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_WB)) {
        ret = new AMRExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)) {
        ret = new FLACExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_WAV)) {
        ret = new WAVExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_OGG)) {
        ret = new OggExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MATROSKA)) {
        ret = new MatroskaExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG2TS)) {
        ret = new MPEG2TSExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_WVM)) {
        // Return now.  WVExtractor should not have the DrmFlag set in the block below.
        return new WVMExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC_ADTS)) {
        ret = new AACExtractor(source, meta);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG2PS)) {
        ret = new MPEG2PSExtractor(source);
    }
    // If the file is DRM-protected, set the flag accordingly
    if (ret != NULL) {
        if (isDrm) {
            ret->setDrmFlag(true);
        } else {
            ret->setDrmFlag(false);
        }
    }

    return ret;
}

       The logic above first obtains the file's MIME type, then creates the matching Extractor (file parser) based on it. MIME (Multipurpose Internet Mail Extensions) types should be familiar already. Some common ones (type/subtype  extension):

  • image/jpeg jpg
  • application/msword doc
  • audio/mpeg mp3
  • application/octet-stream flv // handled specially
  • video/x-ms-wmv wmv
  • video/mp4 mp4 // handled specially

       (If you don't know the MIME type, you can use the generic application/octet-stream. There are also patterns for types the platform can open directly, e.g. text: text/<extension>, audio: audio/<extension>, video: video/<extension>.)
       Common MIME types can be looked up on w3school.
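The dispatch in Create() boils down to a chain of case-insensitive MIME comparisons. A minimal sketch of the idea (the MIME constants are written out as literal strings here, and the extractor names are plain strings rather than the real classes):

```cpp
#include <cassert>
#include <string>
#include <strings.h>  // strcasecmp (POSIX)

// Simplified stand-in for MediaExtractor::Create()'s container dispatch:
// case-insensitive MIME comparison picks a demuxer.
std::string pickExtractor(const char* mime) {
    if (!strcasecmp(mime, "video/mp4") || !strcasecmp(mime, "audio/mp4"))
        return "MPEG4Extractor";
    if (!strcasecmp(mime, "audio/mpeg"))
        return "MP3Extractor";
    if (!strcasecmp(mime, "audio/x-wav"))
        return "WAVExtractor";
    if (!strcasecmp(mime, "application/ogg"))
        return "OggExtractor";
    return "";  // unsupported container
}
```

Because strcasecmp ignores case, a server that reports "VIDEO/MP4" still routes to the MP4 demuxer.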

       With the parser created (for example MPEG4Extractor, in framework/av/media/libstagefright/MPEG4Extractor.cpp), let's continue into the setDataSource_l(extractor) method:

status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
    // Attempt to approximate overall stream bitrate by summing all
    // tracks' individual bitrates, if not all of them advertise bitrate,
    // we have to fail.
    // The total bitrate is the sum of each audio/video/subtitle track's own
    // bitrate; if any track fails to report one, the estimate is abandoned
    int64_t totalBitRate = 0;

    mExtractor = extractor;
    // Iterate over every track
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        // Read the track's metadata; e.g. MPEG4Extractor reads it from the
        // file header (some container types keep metadata at the tail)
        sp<MetaData> meta = extractor->getTrackMetaData(i);
        // This track's own bitrate
        int32_t bitrate;
        if (!meta->findInt32(kKeyBitRate, &bitrate)) {
            const char *mime;
            CHECK(meta->findCString(kKeyMIMEType, &mime));
            ALOGV("track of type '%s' does not publish bitrate", mime);

            totalBitRate = -1;
            break;
        }
        // Accumulate the per-track bitrates; a higher bitrate generally
        // means a higher-quality, sharper stream
        totalBitRate += bitrate;
    }
    sp<MetaData> fileMeta = mExtractor->getMetaData();
    if (fileMeta != NULL) {
        int64_t duration;
        if (fileMeta->findInt64(kKeyDuration, &duration)) {
            mDurationUs = duration;
        }
    }

    mBitrate = totalBitRate;

    ALOGV("mBitrate = %lld bits/sec", mBitrate);

    {
        Mutex::Autolock autoLock(mStatsLock);
        mStats.mBitrate = mBitrate;
        mStats.mTracks.clear();
        mStats.mAudioTrackIndex = -1;
        mStats.mVideoTrackIndex = -1;
    }
    // Whether we have an audio track
    bool haveAudio = false;
    // Whether we have a video track
    bool haveVideo = false;
    // Iterate over every track in the file
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);

        const char *_mime;
        CHECK(meta->findCString(kKeyMIMEType, &_mime));

        String8 mime = String8(_mime);
        // Use each track's MIME type to tell audio from video, then split the tracks
        if (!haveVideo && !strncasecmp(mime.string(), "video/", 6)) {  // video track?
            // Split off the video track as mVideoTrack, a MediaSource
            // (e.g. MPEG4Source : MediaSource)
            setVideoSource(extractor->getTrack(i));
            // Mark that we have video
            haveVideo = true;

            // Set the presentation/display size
            int32_t displayWidth, displayHeight;
            bool success = meta->findInt32(kKeyDisplayWidth, &displayWidth);
            if (success) {
                success = meta->findInt32(kKeyDisplayHeight, &displayHeight);
            }
            if (success) {
                mDisplayWidth = displayWidth;
                mDisplayHeight = displayHeight;
            }

            {
                Mutex::Autolock autoLock(mStatsLock);
                mStats.mVideoTrackIndex = mStats.mTracks.size();
                mStats.mTracks.push();
                TrackStat *stat =
                    &mStats.mTracks.editItemAt(mStats.mVideoTrackIndex);
                stat->mMIME = mime.string();
            }
        } else if (!haveAudio && !strncasecmp(mime.string(), "audio/", 6)) {  // audio track
            // Split off the audio track as mAudioTrack
            setAudioSource(extractor->getTrack(i));
            // Mark that we have audio
            haveAudio = true;
            mActiveAudioTrackIndex = i;

            {
                Mutex::Autolock autoLock(mStatsLock);
                mStats.mAudioTrackIndex = mStats.mTracks.size();
                mStats.mTracks.push();
                TrackStat *stat =
                    &mStats.mTracks.editItemAt(mStats.mAudioTrackIndex);
                stat->mMIME = mime.string();
            }
            // Extra handling for Ogg/Vorbis audio
            if (!strcasecmp(mime.string(), MEDIA_MIMETYPE_AUDIO_VORBIS)) {
                // Only do this for vorbis audio, none of the other audio
                // formats even support this ringtone specific hack and
                // retrieving the metadata on some extractors may turn out
                // to be very expensive.
                sp<MetaData> fileMeta = extractor->getMetaData();
                int32_t loop;
                if (fileMeta != NULL
                        && fileMeta->findInt32(kKeyAutoLoop, &loop) && loop != 0) {
                    modifyFlags(AUTO_LOOPING, SET);
                }
            }
        } else if (!strcasecmp(mime.string(), MEDIA_MIMETYPE_TEXT_3GPP)) {
            // Split off the subtitle track
            addTextSource_l(i, extractor->getTrack(i));
        }
    }
    // If there is neither audio nor video, return an error
    if (!haveAudio && !haveVideo) {
        if (mWVMExtractor != NULL) {
            return mWVMExtractor->getError();
        } else {
            return UNKNOWN_ERROR;
        }
    }

    mExtractorFlags = extractor->flags();

    return OK;
}

       The method above is fairly long; in summary it:

  • Uses the MediaExtractor to read the source file's metadata and sums the bitrates of the individual tracks;
  • Splits the streams into audio/video/subtitles: it sets the video source mVideoTrack, sets the audio source mAudioTrack, and separates subtitle tracks. mVideoTrack and mAudioTrack are member variables of the AwesomePlayer; their concrete type might be, for example, MPEG4Source, which inherits from MediaSource.

       The inline comments should make the rest clear. That wraps up setDataSource for the file case, and mostly fills in the rabbit hole we dug for ourselves.
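The track-splitting loop reduces to prefix matching on each track's MIME type, keeping the first video track and the first audio track. A simplified sketch of just that selection logic (tracks are represented by their MIME strings only; the Selection struct is made up for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>
#include <strings.h>  // strncasecmp (POSIX)

// Sketch of the track-splitting loop in setDataSource_l(extractor):
// the first "video/..." track and the first "audio/..." track win.
struct Selection { int video = -1; int audio = -1; };

Selection splitTracks(const std::vector<std::string>& trackMimes) {
    Selection sel;
    for (size_t i = 0; i < trackMimes.size(); ++i) {
        const char* mime = trackMimes[i].c_str();
        if (sel.video < 0 && !strncasecmp(mime, "video/", 6))
            sel.video = static_cast<int>(i);   // becomes mVideoTrack
        else if (sel.audio < 0 && !strncasecmp(mime, "audio/", 6))
            sel.audio = static_cast<int>(i);   // becomes mAudioTrack
    }
    return sel;
}
```

The haveVideo/haveAudio guards in the real code play the same role as the `< 0` checks here: later tracks of the same kind are simply ignored.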

Network Data Sources

       If the path we pass to MediaPlayer's setDataSource is a network URL, e.g. http://xxxxxx......xx.mp4, then the stream cannot be fetched at setDataSource time; the media information only becomes available at prepareAsync. So let's follow that thread and see how network sources are handled.

       This actually touches another topic, MediaHTTPService. Here is the module map; interested readers can dig into the details themselves, since a full treatment is beyond the scope of this article.
(Figure: network model)

java:
\frameworks2\media\java\android\media
IMediaHTTPService.aidl
IMediaHTTPConnection.aidl

jni:
android_media_MediaHTTPConnection.cpp (.\framework\media\jni)
android_media_MediaHTTPConnection.h (.\framework\media\jni)

native:
Lib: libstagefright_http_support.so
frameworks2\av\media\libstagefright\http
IMediaHTTPConnection.aidl (.\framework\media\java\android\media)
IMediaHTTPService.aidl (.\framework\media\java\android\media)
MediaHTTPConnection.java (.\framework\media\java\android\media)
MediaHTTPService.java (.\framework\media\java\android\media)
IMediaHTTPConnection.h (.\framework\av\include\media)
IMediaHTTPService.h (.\framework\av\include\media)
IMediaHTTPConnection.aidl (.\framework\av\media\libstagefright)
IMediaHTTPService.aidl (.\framework\av\media\libstagefright)
MediaHTTP.cpp (.\framework\av\media\libstagefright\http)
MediaHTTP.h (.\framework\av\include\media\stagefright)
IMediaHTTPConnection.cpp (.\framework\av\media\libmedia)
IMediaHTTPService.cpp (.\framework\av\media\libmedia)

       As before, let's start with AwesomePlayer's other setDataSource overload:

status_t AwesomePlayer::setDataSource(
        const sp<IMediaHTTPService> &httpService,
        const char *uri,
        const KeyedVector<String8, String8> *headers) {
    Mutex::Autolock autoLock(mLock);
    // Delegates to the method below
    return setDataSource_l(httpService, uri, headers);
}

status_t AwesomePlayer::setDataSource_l(
        const sp<IMediaHTTPService> &httpService,
        const char *uri,
        const KeyedVector<String8, String8> *headers) {
    reset_l();
    // Merely stash the httpService in a member variable
    mHTTPService = httpService;
    // Save the URI passed in from the upper layer
    mUri = uri;

    if (headers) {
        mUriHeaders = *headers;

        ssize_t index = mUriHeaders.indexOfKey(String8("x-hide-urls-from-log"));
        if (index >= 0) {
            // Browser is in "incognito" mode, suppress logging URLs.

            // This isn't something that should be passed to the server.
            mUriHeaders.removeItemsAt(index);

            modifyFlags(INCOGNITO, SET);
        }
    }

    ALOGI("setDataSource_l(%s)", uriDebugString(mUri, mFlags & INCOGNITO).c_str());

    // The actual work will be done during preparation in the call to
    // ::finishSetDataSource_l to avoid blocking the calling thread in
    // setDataSource for any significant time.

    {
        Mutex::Autolock autoLock(mStatsLock);
        mStats.mFd = -1;
        mStats.mURI = mUri;
    }

    return OK;
}

       The code above merely stores the httpService (used later for network requests) and the URI passed in from the upper layer, so the data itself must be fetched in prepareAsync. Recall that prepare and prepareAsync differ in that one is a synchronous call and the other an asynchronous callback. Let's look at prepare first:

status_t AwesomePlayer::prepare() {
    ATRACE_CALL();
    Mutex::Autolock autoLock(mLock);
    // Delegates to prepare_l
    return prepare_l();
}

status_t AwesomePlayer::prepare_l() {
    if (mFlags & PREPARED) {  // already prepared, return
        return OK;
    }

    if (mFlags & PREPARING) {  // already preparing, return an error
        return UNKNOWN_ERROR;
    }

    mIsAsyncPrepare = false;  // clear the async flag
    // Ultimately still calls prepareAsync_l, just on the same thread
    status_t err = prepareAsync_l();

    if (err != OK) {
        return err;
    }
    // So prepare blocks here until preparation finishes
    while (mFlags & PREPARING) {
        mPreparedCondition.wait(mLock);
    }

    return mPrepareResult;
}

       The synchronous prepare is simple: it ends up calling prepareAsync_l as well, then just waits on the calling thread.
       Now let's look at the asynchronous prepareAsync:
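This prepare-wraps-prepareAsync pattern is a classic condition-variable idiom: kick off the asynchronous work, then block until a flag clears. A standalone sketch using std::condition_variable in place of Android's Condition, and a detached thread in place of the event queue (the Preparer class here is hypothetical):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch of how prepare() wraps the async path: start the work, then
// wait on a condition variable until the "preparing" flag is cleared.
class Preparer {
public:
    int prepare() {
        std::unique_lock<std::mutex> lock(mLock);
        mPreparing = true;
        std::thread([this] { doPrepare(); }).detach();  // stands in for the event queue
        mCond.wait(lock, [this] { return !mPreparing; });
        return mResult;
    }

private:
    void doPrepare() {
        std::lock_guard<std::mutex> lock(mLock);
        mResult = 0;          // 0 == OK; the real work would happen here
        mPreparing = false;   // corresponds to clearing the PREPARING flag
        mCond.notify_all();   // wakes the thread blocked in prepare()
    }

    std::mutex mLock;
    std::condition_variable mCond;
    bool mPreparing = false;
    int mResult = -1;
};
```

The wait's predicate mirrors the `while (mFlags & PREPARING)` loop in prepare_l: spurious wakeups re-check the flag before returning.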

status_t AwesomePlayer::prepareAsync() {
    ATRACE_CALL();
    Mutex::Autolock autoLock(mLock);
    // If already preparing, return an error
    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;  // async prepare already pending
    }

    mIsAsyncPrepare = true;
    // Calls prepareAsync_l
    return prepareAsync_l();
}

status_t AwesomePlayer::prepareAsync_l() {
    if (mFlags & PREPARING) {  // same check as above
        return UNKNOWN_ERROR;  // async prepare already pending
    }
    // Start the event-scheduling thread analyzed in the previous article
    if (!mQueueStarted) {
        mQueue.start();
        mQueueStarted = true;
    }
    // Set the PREPARING flag
    modifyFlags(PREPARING, SET);
    // Post an event wrapping onPrepareAsyncEvent onto the event queue
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);

    mQueue.postEvent(mAsyncPrepareEvent);

    return OK;
}

       Here we can see it starts the event-scheduling queue, which we analyzed last time; if it's unclear, go back to: Android Multimedia Development (3) -- From StageFright to AwesomePlayer.
       It then posts an event wrapping onPrepareAsyncEvent onto the queue. Let's follow that method:
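At its core, the event queue is a worker thread draining posted events, which is how onPrepareAsyncEvent ends up running off the caller's thread. A toy version (without the timing and ordering features of the real TimedEventQueue):

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Toy analogue of TimedEventQueue: a worker thread pops posted events
// and runs them outside the lock.
class EventQueue {
public:
    void start() {
        mRunning = true;
        mWorker = std::thread([this] { loop(); });
    }
    void stop() {  // drains remaining events, then joins the worker
        { std::lock_guard<std::mutex> l(mLock); mRunning = false; }
        mCond.notify_all();
        mWorker.join();
    }
    void postEvent(std::function<void()> ev) {
        { std::lock_guard<std::mutex> l(mLock); mEvents.push(std::move(ev)); }
        mCond.notify_all();
    }

private:
    void loop() {
        for (;;) {
            std::function<void()> ev;
            {
                std::unique_lock<std::mutex> l(mLock);
                mCond.wait(l, [this] { return !mEvents.empty() || !mRunning; });
                if (mEvents.empty()) return;  // stopped and fully drained
                ev = std::move(mEvents.front());
                mEvents.pop();
            }
            ev();  // run the event without holding the lock
        }
    }

    std::thread mWorker;
    std::mutex mLock;
    std::condition_variable mCond;
    std::queue<std::function<void()>> mEvents;
    bool mRunning = false;
};
```

Posting mAsyncPrepareEvent corresponds to postEvent() here: the caller returns immediately, and the worker thread invokes the callback later.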

void AwesomePlayer::onPrepareAsyncEvent() {
    Mutex::Autolock autoLock(mLock);
    // Calls beginPrepareAsync_l
    beginPrepareAsync_l();
}

void AwesomePlayer::beginPrepareAsync_l() {
    if (mFlags & PREPARE_CANCELLED) {
        ALOGI("prepare was cancelled before doing anything");
        abortPrepare(UNKNOWN_ERROR);
        return;
    }
    // The key step: set up the network data source
    if (mUri.size() > 0) {
        status_t err = finishSetDataSource_l();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }
    // Initialize the video decoder
    if (mVideoTrack != NULL && mVideoSource == NULL) {
        status_t err = initVideoDecoder();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }
    // Initialize the audio decoder
    if (mAudioTrack != NULL && mAudioSource == NULL) {
        status_t err = initAudioDecoder();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    modifyFlags(PREPARING_CONNECTED, SET);
    // Is this an HTTP stream?
    if (isStreamingHTTP()) {
        postBufferingEvent_l();  // start onBufferingUpdate callbacks, begin buffering
    } else {
        finishAsyncPrepare_l();  // finish the prepare
    }
}

bool AwesomePlayer::isStreamingHTTP() const {
    return mCachedSource != NULL || mWVMExtractor != NULL;
}

       Our focus here is setting up the network data source, in finishSetDataSource_l:

status_t AwesomePlayer::finishSetDataSource_l() {
    ATRACE_CALL();
    sp<DataSource> dataSource;

    bool isWidevineStreaming = false;
    // Widevine is the DRM scheme Google introduced in ICS; with it, devices
    // can download Google-encrypted, rights-managed content (video, apps,
    // etc.) from designated servers.
    if (!strncasecmp("widevine://", mUri.string(), 11)) {
        isWidevineStreaming = true;
        // For Widevine content, rewrite the URI with an http:// prefix
        String8 newURI = String8("http://");
        newURI.append(mUri.string() + 11);

        mUri = newURI;
    }
    // MIME type
    AString sniffedMIME;
    // For http/https streams, or Google's own Widevine
    if (!strncasecmp("http://", mUri.string(), 7)
            || !strncasecmp("https://", mUri.string(), 8)
            || isWidevineStreaming) {
        // The httpService we saved in setDataSource, used to issue HTTP requests
        if (mHTTPService == NULL) {
            ALOGE("Attempt to play media from http URI without HTTP service.");
            return UNKNOWN_ERROR;
        }
        // Create an HTTP connection
        sp<IMediaHTTPConnection> conn = mHTTPService->makeHTTPConnection();
        mConnectingDataSource = new MediaHTTP(conn);

        String8 cacheConfig;
        bool disconnectAtHighwatermark;
        NuCachedSource2::RemoveCacheSpecificHeaders(
                &mUriHeaders, &cacheConfig, &disconnectAtHighwatermark);

        mLock.unlock();
        // Connect to the network
        status_t err = mConnectingDataSource->connect(mUri, &mUriHeaders);
        // force connection at this point, to avoid a race condition between getMIMEType and the
        // caching datasource constructed below, which could result in multiple requests to the
        // server, and/or failed connections.
        String8 contentType = mConnectingDataSource->getMIMEType();
        mLock.lock();

        if (err != OK) {
            mConnectingDataSource.clear();

            ALOGI("mConnectingDataSource->connect() returned %d", err);
            return err;
        }

        if (!isWidevineStreaming) {  // regular HTTP streaming, not Widevine
            // The widevine extractor does its own caching.

#if 0
            mCachedSource = new NuCachedSource2(
                    new ThrottledSource(
                        mConnectingDataSource, 50 * 1024 /* bytes/sec */));
#else
            // NuCachedSource2 is a caching DataSource: it knows nothing about
            // media formats; it only manages a cache and drives the underlying
            // DataSource to read and buffer data. The cache can be queried
            // and manipulated.
            mCachedSource = new NuCachedSource2(
                    mConnectingDataSource,
                    cacheConfig.isEmpty() ? NULL : cacheConfig.string(),
                    disconnectAtHighwatermark);
#endif
            // Use the caching data source
            dataSource = mCachedSource;
        } else {  // Widevine
            dataSource = mConnectingDataSource;
        }

        mConnectingDataSource.clear();
        // If the content does not look audio-only
        if (strncasecmp(contentType.string(), "audio/", 6)) {
            // We're not doing this for streams that appear to be audio-only
            // streams to ensure that even low bandwidth streams start
            // playing back fairly instantly.

            ... some code omitted ...
            if (!dataSource->sniff(&tmp, &confidence, &meta)) {
                mLock.lock();
                return UNKNOWN_ERROR;
            }

            // We successfully identified the file's extractor to
            // be, remember this mime type so we don't have to
            // sniff it again when we call MediaExtractor::Create()
            // below.
            // Record the MIME type
            sniffedMIME = tmp.string();

            ... some code omitted ...

        }
    } else {  // not http/https and not Widevine
        // Fetch the data for other URI schemes
        dataSource = DataSource::CreateFromURI(
                mHTTPService, mUri.string(), &mUriHeaders);
    }

    if (dataSource == NULL) {
        return UNKNOWN_ERROR;
    }

    sp<MediaExtractor> extractor;
    // Widevine content
    if (isWidevineStreaming) {
        String8 mimeType;
        float confidence;
        sp<AMessage> dummy;
        bool success;

        // SniffWVM is potentially blocking since it may require network access.
        // Do not call it with mLock held.
        mLock.unlock();
        // Sniff the wvm resource type
        success = SniffWVM(dataSource, &mimeType, &confidence, &dummy);
        mLock.lock();

        if (!success
                || strcasecmp(
                    mimeType.string(), MEDIA_MIMETYPE_CONTAINER_WVM)) {
            return ERROR_UNSUPPORTED;
        }
        // Create the wvm extractor
        mWVMExtractor = new WVMExtractor(dataSource);
        mWVMExtractor->setAdaptiveStreamingMode(true);
        if (mUIDValid)
            mWVMExtractor->setUID(mUID);
        extractor = mWVMExtractor;
    } else {  // otherwise create an extractor matching the MIME type
        extractor = MediaExtractor::Create(
                dataSource, sniffedMIME.empty() ? NULL : sniffedMIME.c_str());

        if (extractor == NULL) {
            return UNKNOWN_ERROR;
        }
    }
    // Check for DRM protection
    if (extractor->getDrmFlag()) {
        checkDrmStatus(dataSource);
    }
    // setDataSource_l(extractor) is the method analyzed above: it splits the
    // audio/video/subtitle tracks
    status_t err = setDataSource_l(extractor);

    if (err != OK) {
        mWVMExtractor.clear();

        return err;
    }

    return OK;
}

       That is the network data source flow: 1) setDataSource stores a network connection service; 2) prepareAsync issues the network request, reads into the cache, and obtains the content's MIME type; 3) a MediaExtractor is created from that MIME type and the audio/video/subtitle tracks are split.
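The scheme handling at the top of finishSetDataSource_l is easy to isolate: widevine:// URIs are rewritten to http:// and remembered as Widevine, and the HTTP code path is taken for http, https, and Widevine. A sketch of just that classification (UriInfo is a made-up struct for illustration):

```cpp
#include <cassert>
#include <string>
#include <strings.h>  // strncasecmp (POSIX)

// Mirrors the scheme checks at the top of finishSetDataSource_l():
// widevine:// is rewritten to http:// and flagged; the HTTP path is
// taken for http, https, and widevine URIs.
struct UriInfo { std::string uri; bool isWidevine; bool useHttpPath; };

UriInfo classifyUri(const std::string& uri) {
    UriInfo info{uri, false, false};
    if (!strncasecmp(uri.c_str(), "widevine://", 11)) {
        info.isWidevine = true;
        info.uri = "http://" + uri.substr(11);  // rewrite the scheme
    }
    info.useHttpPath = info.isWidevine
            || !strncasecmp(info.uri.c_str(), "http://", 7)
            || !strncasecmp(info.uri.c_str(), "https://", 8);
    return info;
}
```

Anything else (e.g. rtsp://) falls through to DataSource::CreateFromURI in the real code, which is why the function only special-cases these three schemes.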

A First Look at the Decoders

       We have now finished setDataSource and entered the prepare phase. finishSetDataSource_l has run, but beginPrepareAsync_l has not finished yet: next come the audio and video decoder initializations.

Video Decoder Entry Point

       Back in beginPrepareAsync_l, execution reaches initVideoDecoder; let's look inside:

status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    ATRACE_CALL();

    // Either the application or the DRM system can independently say
    // that there must be a hardware-protected path to an external video sink.
    // For now we always require a hardware-protected path to external video sink
    // if content is DRMed, but eventually this could be optional per DRM agent.
    // When the application wants protection, then
    //   (USE_SURFACE_ALLOC && (mSurface != 0) &&
    //   (mSurface->getFlags() & ISurfaceComposer::eProtectedByApp))
    // will be true, but that part is already handled by SurfaceFlinger.

    ......
    if (mDecryptHandle != NULL) {
        flags |= OMXCodec::kEnableGrallocUsageProtected;
    }
    ......

    ALOGV("initVideoDecoder flags=0x%x", flags);
    // Create the video decoder
    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false,  // createEncoder
            mVideoTrack,
            NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);

    if (mVideoSource != NULL) {
        int64_t durationUs;
        // Get the video track's format, then read the duration from it
        if (mVideoTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }
        // Start reading the raw video data
        status_t err = mVideoSource->start();

        if (err != OK) {
            ALOGE("failed to start video source");
            mVideoSource.clear();
            return err;
        }
    }
    // Some sanity-check logic
    if (mVideoSource != NULL) {
        const char *componentName;
        CHECK(mVideoSource->getFormat()
                ->findCString(kKeyDecoderComponent, &componentName));

        {
            Mutex::Autolock autoLock(mStatsLock);
            TrackStat *stat = &mStats.mTracks.editItemAt(mStats.mVideoTrackIndex);

            stat->mDecoderName = componentName;
        }

        static const char *kPrefix = "OMX.Nvidia.";
        static const char *kSuffix = ".decode";
        static const size_t kSuffixLength = strlen(kSuffix);

        size_t componentNameLength = strlen(componentName);

        if (!strncmp(componentName, kPrefix, strlen(kPrefix))
                && componentNameLength >= kSuffixLength
                && !strcmp(&componentName[
                    componentNameLength - kSuffixLength], kSuffix)) {
            modifyFlags(SLOW_DECODER_HACK, SET);
        }
    }

    return mVideoSource != NULL ? OK : UNKNOWN_ERROR;
}

       The core of the video decoder entry point is OMXCodec::Create(), which creates the decoder. We'll analyze it in detail in the next article; this section only covers the entry point.

Audio Decoder Entry Point

       Now let's look at the audio decoder:

status_t AwesomePlayer::initAudioDecoder() {
    ATRACE_CALL();
    // Get the audio track's format
    sp<MetaData> meta = mAudioTrack->getFormat();

    const char *mime;
    CHECK(meta->findCString(kKeyMIMEType, &mime));
    // Check whether there is a hardware codec for this stream
    // This doesn't guarantee that the hardware has a free stream
    // but it avoids us attempting to open (and re-open) an offload
    // stream to hardware that doesn't have the necessary codec
    audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
    if (mAudioSink != NULL) {
        // Get the audio stream type
        streamType = mAudioSink->getAudioStreamType();
    }

    mOffloadAudio = canOffloadStream(meta, (mVideoSource != NULL),
                                     isStreamingHTTP(), streamType);
    // Raw PCM needs no decoding
    if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
        ALOGV("createAudioPlayer: bypass OMX (raw)");
        mAudioSource = mAudioTrack;
    } else {
        // If offloading we still create a OMX decoder as a fall-back
        // but we don't start it
        // Create the audio decoder: the same call as for video, except the
        // parameter is mAudioTrack instead of mVideoTrack
        mOmxSource = OMXCodec::Create(
                mClient.interface(), mAudioTrack->getFormat(),
                false,  // createEncoder
                mAudioTrack);

        if (mOffloadAudio) {
            ALOGV("createAudioPlayer: bypass OMX (offload)");
            mAudioSource = mAudioTrack;
        } else {
            mAudioSource = mOmxSource;
        }
    }

    if (mAudioSource != NULL) {
        int64_t durationUs;
        if (mAudioTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }
        // Start reading the raw audio data
        status_t err = mAudioSource->start();

        if (err != OK) {
            mAudioSource.clear();
            mOmxSource.clear();
            return err;
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_QCELP)) {
        // For legacy reasons we're simply going to ignore the absence
        // of an audio decoder for QCELP instead of aborting playback
        // altogether.
        return OK;
    }

    if (mAudioSource != NULL) {
        Mutex::Autolock autoLock(mStatsLock);
        TrackStat *stat = &mStats.mTracks.editItemAt(mStats.mAudioTrackIndex);
        const char *component;
        if (!mAudioSource->getFormat()
                ->findCString(kKeyDecoderComponent, &component)) {
            component = "none";
        }

        stat->mDecoderName = component;
    }

    return mAudioSource != NULL ? OK : UNKNOWN_ERROR;
}

       Audio decoder initialization is much the same as the video case; the core is still OMXCodec::Create(), just with different parameters.

       To summarize the flow so far:

  • setDataSource specifies the player's data source, a URI or an fd: an http:// or rtsp:// URL, a local path, or a local file descriptor. Ultimately the upper layer's parameters are converted into a DataSource, which supplies data for the demux step.
  • The real prepare work, in onPrepareAsyncEvent(), calls finishSetDataSource_l. The DataSource produced in step one is used to create an extractor; since there are many container formats, the DataSource's content decides which extractor gets created.
  • From the extractor, setVideoSource() and setAudioSource() produce the independent mVideoTrack (video) and mAudioTrack (audio) streams, each supplying its decoder with the data it needs.
  • Then initVideoDecoder() and initAudioDecoder() consume mVideoTrack and mAudioTrack to create the two decoders, mVideoSource and mAudioSource.

       The final step creates both decoders through the same interface:

mVideoSource = OMXCodec::Create(
        mClient.interface(), mVideoTrack->getFormat(),
        false,  // createEncoder
        mVideoTrack,
        NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);
mAudioSource = OMXCodec::Create(
        mClient.interface(), mAudioTrack->getFormat(),
        false,  // createEncoder
        mAudioTrack);

       mVideoSource and mAudioSource make up the decoder part of the player model.

       Android's codec layer is built on OpenMAX, which we'll dig into later. OpenMAX is a standard set of interfaces; each hardware vendor can implement it for its own chips, playing to its silicon's strengths, and then plug the result into Android. Hardware codec support is a strength of most set-top-box chipsets, and that strength folds neatly into the Android platform; hardware decoding of HD video on phones is clearly the trend as well.

       Once decoded, the data has to be output. AwesomePlayer uses mVideoRenderer for video output and mAudioPlayer for audio output, which call into Android's graphics and audio services respectively. mVideoRenderer and mAudioPlayer make up the output part of the player; both are major pieces of the Android platform that we'll explore later.

Conclusion

       With that, AwesomePlayer's overall structure and flow are clear; it still boils down to the four big parts: DataSource, demux, decoder, and output. Next we'll look at how each part is implemented.
       In the next article we'll start with OpenMAX.
