Android Multimedia Development (Part 1): Getting Started with the MediaPlayer Framework

       The Android multimedia framework includes support for playing a variety of common media types, so that you can easily integrate audio, video, and images into your applications. You can play audio or video from media files stored in your application’s resources (raw resources), from standalone files in the filesystem, or from a data stream arriving over a network connection, all using MediaPlayer APIs.

Preface (feel free to skip)

       Anyone doing multimedia development on Android will run into the MediaPlayer class. The Android market is fairly mature by now, and most developers are already comfortable with the API at the call level.
       This article will not walk through basic MediaPlayer usage again; there are plenty of examples online, and the official site gives a detailed introduction and usage guide.
       Official reference: https://developer.android.com/reference/android/media/MediaPlayer.html
       Official guide: https://developer.android.com/guide/topics/media/mediaplayer.html (the official site now also links ExoPlayer from the navigation page; see its GitHub project for details)

       A very simple demo looks like this:

MediaPlayer mediaPlayer = new MediaPlayer();      // create the player
mediaPlayer.setDataSource(path);                  // point it at a media file
mediaPlayer.setDisplay(surfaceView.getHolder());  // attach the output surface
mediaPlayer.prepare();                            // synchronous prepare (blocks the calling thread)
mediaPlayer.start();                              // start playback

       
We will start from this simple example and work through the entire playback flow step by step.
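       In a real app the blocking prepare() call is usually replaced by prepareAsync() plus listeners. The sketch below is only an illustration of that typical pattern; the videoPath value, the surfaceView field, and the Log/IOException imports are assumptions, not part of the original demo:

MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource(videoPath);                 // assumed path to a local media file
    player.setDisplay(surfaceView.getHolder());      // surfaceView is assumed to already have a valid Surface
    player.setOnPreparedListener(mp -> mp.start());  // start only once the player is ready
    player.setOnCompletionListener(mp -> mp.release());
    player.setOnErrorListener((mp, what, extra) -> {
        Log.e("MediaPlayerDemo", "error what=" + what + " extra=" + extra);
        return true;                                 // true = handled; onCompletion will not be called
    });
    player.prepareAsync();                           // non-blocking; onPrepared fires when ready
} catch (IOException e) {
    player.release();
}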

mediaserver startup

       As we know, Android is built on the Linux kernel, and in a Linux system every process is a descendant of the init process; in other words, every process is forked, directly or indirectly, from init. The Zygote process is no exception: it is created by init during system boot, and, as we will see next, so is mediaserver.
Startup flow

       In the boot script system/core/rootdir/init.rc we can find the service entry that starts the mediaserver process:

service media /system/bin/mediaserver
    class main
    user media
    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc mediadrm
    ioprio rt 4

       Once mediaserver starts, it registers a number of media-related services with servicemanager, among them MediaPlayerService. So by the time an application starts, MediaPlayerService is already up and running. This happens in the main function of frameworks/av/media/mediaserver/Main_mediaserver.cpp:

int main(int argc, char** argv)
{
    ......
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    // initialize MediaPlayerService
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

       Now let's look at how MediaPlayerService is instantiated, in frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp:

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}

       MediaPlayerService::instantiate() registers a named Binder service, "media.player", with ServiceManager. You can run dumpsys -l to see which named services are currently registered.
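       For completeness, an app process can also confirm that the name is registered by querying ServiceManager itself. The sketch below is purely illustrative and not part of the original flow: android.os.ServiceManager is a hidden (non-SDK) framework class, so it is reached via reflection, and recent Android versions may block this through the non-SDK interface restrictions.

// Illustrative only: ServiceManager.getService(String) is a hidden API.
try {
    Class<?> sm = Class.forName("android.os.ServiceManager");
    Object binder = sm.getMethod("getService", String.class)
                      .invoke(null, "media.player");
    Log.d("ServiceCheck", "media.player binder = " + binder);
} catch (Exception e) {
    Log.w("ServiceCheck", "could not query ServiceManager", e);
}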

Creating the MediaPlayer

       Next we move up to the application layer and look at what happens when a MediaPlayer is created.

The MediaPlayer constructor

       At the application layer: MediaPlayer mediaPlayer = new MediaPlayer().

public MediaPlayer() {
    // Build an EventHandler used to dispatch callback messages.
    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }
    ......

    /* Native setup requires a weak reference to our object.
     * It's easier to create it here than in C++.
     */
    // This is the key step: the JNI layer creates its own MediaPlayer and is
    // handed a weak reference to the Java object.
    native_setup(new WeakReference<MediaPlayer>(this));
}
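       As the comments show, the EventHandler binds to the Looper of the thread that constructs the MediaPlayer, falling back to the main Looper. One consequence, shown in the hedged sketch below (not from the original article), is that constructing the player on a HandlerThread moves its callbacks off the main thread:

// Assumed illustration: callbacks follow the Looper of the constructing thread.
HandlerThread playerThread = new HandlerThread("player-callbacks");
playerThread.start();
new Handler(playerThread.getLooper()).post(() -> {
    MediaPlayer player = new MediaPlayer();   // EventHandler binds to playerThread's Looper
    player.setOnPreparedListener(mp ->
            Log.d("PlayerDemo", "onPrepared on " + Thread.currentThread().getName()));
    // ... setDataSource / prepareAsync as usual
});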

       Before the constructor runs, the MediaPlayer class executes a static initializer block that loads the media_jni library and performs the JNI-side initialization. A static block runs when the class is loaded, before any constructor, which makes it the usual place for global, class-wide setup. See frameworks/base/media/java/android/media/MediaPlayer.java:

static {
    System.loadLibrary("media_jni");
    native_init();
}

private static native final void native_init();

       So the libmedia_jni shared library is loaded first; keep a mental note of this, because it comes up again in the next article.

       The static block also calls the native method native_init, whose JNI implementation is in frameworks/base/media/jni/android_media_MediaPlayer.cpp:


// This function gets some field IDs, which in turn causes class initialization.
// It is called from a static block in MediaPlayer, which won't run until the
// first time an instance of this class is used.
static void android_media_MediaPlayer_native_init(JNIEnv *env)
{
    jclass clazz;
    // Look up the Java-layer MediaPlayer class from native code.
    clazz = env->FindClass("android/media/MediaPlayer");
    if (clazz == NULL) {
        return;
    }
    // Cache the field ID of the Java-layer mNativeContext field (a long).
    // This is a common JNI trick: a native pointer is cast to a Java long and
    // stored in the Java object so that later calls can recover it.
    fields.context = env->GetFieldID(clazz, "mNativeContext", "J");
    if (fields.context == NULL) {
        return;
    }
    // Cache the Java-layer static method postEventFromNative.
    fields.post_event = env->GetStaticMethodID(clazz, "postEventFromNative",
            "(Ljava/lang/Object;IIILjava/lang/Object;)V");
    if (fields.post_event == NULL) {
        return;
    }
    // Cache the field ID of the Java-layer mNativeSurfaceTexture field.
    fields.surface_texture = env->GetFieldID(clazz, "mNativeSurfaceTexture", "J");
    if (fields.surface_texture == NULL) {
        return;
    }

    ......
}

       Here we can see that the JNI layer caches the Java-layer postEventFromNative method which, as the name suggests, is meant to be called from native code. Through this reverse call, native events are handed to the EventHandler, which posts them onto its Looper thread (usually the main thread). Only a weak reference to the Java-layer MediaPlayer is passed down, so the native side never keeps the Java object alive. The implementation of postEventFromNative is easy to follow:

private static void postEventFromNative(Object mediaplayer_ref,
        int what, int arg1, int arg2, Object obj)
{
    // This is the weak reference to the Java-layer MediaPlayer that was passed
    // down to native code in native_setup.
    MediaPlayer mp = (MediaPlayer)((WeakReference)mediaplayer_ref).get();
    if (mp == null) {
        return;
    }

    ......
    // Forward the native event through the EventHandler.
    if (mp.mEventHandler != null) {
        Message m = mp.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        mp.mEventHandler.sendMessage(m);
    }
}
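       The same pattern — a static callback that receives only a WeakReference and re-posts the event to a Handler — is a useful template for any JNI-backed Java class. A minimal, hypothetical sketch (the class and method names are mine, not from the framework):

// Hypothetical illustration of the native-callback pattern described above.
public final class NativeBackedThing {
    private final Handler mHandler = new Handler(Looper.getMainLooper());

    // Called from native code; holds only a weak reference so native code
    // cannot keep the Java object alive.
    private static void postFromNative(Object weakRef, int what) {
        NativeBackedThing thing = (NativeBackedThing) ((WeakReference<?>) weakRef).get();
        if (thing == null) {
            return;                              // Java object already collected; drop the event
        }
        thing.mHandler.post(() -> thing.onNativeEvent(what));
    }

    private void onNativeEvent(int what) {
        // handle the event on the Handler's thread
    }
}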

       That completes the preparation done by native_init. Next comes the JNI implementation of native_setup, also in frameworks/base/media/jni/android_media_MediaPlayer.cpp:

static void android_media_MediaPlayer_native_setup(JNIEnv *env, jobject thiz, jobject weak_this)
{
    ALOGV("native_setup");
    // Create a C++-layer MediaPlayer.
    sp<MediaPlayer> mp = new MediaPlayer();
    if (mp == NULL) {
        jniThrowException(env, "java/lang/RuntimeException", "Out of memory");
        return;
    }

    // create new listener and give it to MediaPlayer
    // The listener is what allows Java-layer callbacks registered through
    // setOnPreparedListener, setOnCompletionListener, etc. to fire.
    sp<JNIMediaPlayerListener> listener = new JNIMediaPlayerListener(env, thiz, weak_this);
    mp->setListener(listener);

    // Stow our new C++ MediaPlayer in an opaque field in the Java object.
    // The C++ MediaPlayer is opaque to the Java layer; neither side needs to
    // know the other's internals.
    setMediaPlayer(env, thiz, mp);
}

       So native_setup creates a C++-layer MediaPlayer and hooks up the listener callbacks. The pattern is similar to Android's Looper design: the Java layer has a Looper and the C++ layer has one of its own. The principles behind Android's message mechanism are worth reading up on separately.

The setDataSource flow

       With the MediaPlayer constructed, the next step is to set the data source. setDataSource has many overloads; to keep the analysis simple we pick the file-descriptor variant. The Java layer calls a native method that is again implemented in frameworks/base/media/jni/android_media_MediaPlayer.cpp. This time you will not find a function with a matching name, because the file registers its methods through a name-to-function mapping table, a JNINativeMethod array (a very common JNI technique; anyone who has read the Log system's source will recognize it, since that is where most people first learn JNI):


static JNINativeMethod gMethods[] = {
    {
        "nativeSetDataSource",
        "(Landroid/os/IBinder;Ljava/lang/String;[Ljava/lang/String;"
        "[Ljava/lang/String;)V",
        (void *)android_media_MediaPlayer_setDataSourceAndHeaders
    },

    {"_setDataSource", "(Ljava/io/FileDescriptor;JJ)V", (void *)android_media_MediaPlayer_setDataSourceFD},
    // many more method mappings omitted
    ......
};

       This array maps almost every MediaPlayer method to its native implementation. Since we only care about setDataSource, we follow the table to android_media_MediaPlayer_setDataSourceFD in the same file:

static void android_media_MediaPlayer_setDataSourceFD(JNIEnv *env, jobject thiz, jobject fileDescriptor, jlong offset, jlong length)
{
    // Retrieve the C++-layer MediaPlayer.
    sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
    if (mp == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return;
    }

    if (fileDescriptor == NULL) {
        jniThrowException(env, "java/lang/IllegalArgumentException", NULL);
        return;
    }
    // Extract the raw file descriptor of the data source.
    int fd = jniGetFDFromFileDescriptor(env, fileDescriptor);
    ALOGV("setDataSourceFD: fd %d", fd);
    // The real work happens in mp->setDataSource(fd, offset, length).
    process_media_player_call( env, thiz, mp->setDataSource(fd, offset, length), "java/io/IOException", "setDataSourceFD failed." );
}

       This function first retrieves the C++-layer MediaPlayer, then obtains the file descriptor of the data source, and finally calls process_media_player_call to check the returned status; note that the C++ MediaPlayer's setDataSource(fd, offset, length) is already invoked inside the argument list. Let's look at process_media_player_call first:

// If exception is NULL and opStatus is not OK, this method sends an error
// event to the client application; otherwise, if exception is not NULL and
// opStatus is not OK, this method throws the given exception to the client
// application.
static void process_media_player_call(JNIEnv *env, jobject thiz, status_t opStatus, const char* exception, const char *message)
{
    if (exception == NULL) {  // Don't throw exception. Instead, send an event.
        // If setDataSource did not return OK, notify MEDIA_ERROR.
        if (opStatus != (status_t) OK) {
            sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
            if (mp != 0) mp->notify(MEDIA_ERROR, opStatus, 0);
        }
    } else {  // Throw exception!
        if ( opStatus == (status_t) INVALID_OPERATION ) {  // invalid operation
            jniThrowException(env, "java/lang/IllegalStateException", NULL);
        } else if ( opStatus == (status_t) PERMISSION_DENIED ) {  // permission denied
            jniThrowException(env, "java/lang/SecurityException", NULL);
        } else if ( opStatus != (status_t) OK ) {
            if (strlen(message) > 230) {
                // if the message is too long, don't bother displaying the status code
                jniThrowException( env, exception, message);
            } else {
                char msg[256];
                // append the status code to the message
                sprintf(msg, "%s: status=0x%X", message, opStatus);
                jniThrowException( env, exception, msg);
            }
        }
    }
}

       As the code and comments show, process_media_player_call mainly does error and exception checking, and notifies the corresponding error state back to the application.
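       On the Java side, the MEDIA_ERROR notification raised here eventually surfaces through OnErrorListener. A small, hedged sketch of handling it (the player variable is assumed; the error constants are from the public API, not from this article):

player.setOnErrorListener((mp, what, extra) -> {
    // 'what' is the error category (e.g. MediaPlayer.MEDIA_ERROR_UNKNOWN),
    // 'extra' carries an implementation-specific status code.
    Log.e("PlayerDemo", "playback error: what=" + what + " extra=" + extra);
    mp.reset();      // return the player to the idle state
    return true;     // handled: suppress the OnCompletionListener callback
});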

       After that comes the call into the C++-layer MediaPlayer's setDataSource, which we will analyze in detail in the next article. For now, the takeaway is that the Java-layer setDataSource ends up in the C++ MediaPlayer's setDataSource.
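       For reference, the FD-based overload analyzed above is exactly what gets exercised when playing media bundled with the app. A small sketch, assuming a context reference and a raw resource named R.raw.sample (both are assumptions for illustration):

MediaPlayer player = new MediaPlayer();
try (AssetFileDescriptor afd = context.getResources().openRawResourceFd(R.raw.sample)) {
    // This maps to the _setDataSource(FileDescriptor, long, long) native call.
    player.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
    player.prepare();   // the player keeps its own copy of the descriptor, so the afd can be closed
    player.start();
} catch (IOException e) {
    player.release();
}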

The setDisplay flow

       The next step is the Java-layer setDisplay; again, looking at the Java MediaPlayer:

public void setDisplay(SurfaceHolder sh) {
    mSurfaceHolder = sh;               // keep a reference to the SurfaceHolder
    Surface surface;
    if (sh != null) {
        surface = sh.getSurface();
    } else {
        surface = null;
    }
    _setVideoSurface(surface);         // hand the Surface to the video output
    updateSurfaceScreenOn();           // update the surface's keep-screen-on state
}

private native void _setVideoSurface(Surface surface);

       It ends by calling the native method _setVideoSurface, whose JNI implementation we find next:

static void android_media_MediaPlayer_setVideoSurface(JNIEnv *env, jobject thiz, jobject jsurface)
{
    setVideoSurface(env, thiz, jsurface, true /* mediaPlayerMustBeAlive */);
}

static void setVideoSurface(JNIEnv *env, jobject thiz, jobject jsurface, jboolean mediaPlayerMustBeAlive)
{
    sp<MediaPlayer> mp = getMediaPlayer(env, thiz);  // retrieve the C++ MediaPlayer
    if (mp == NULL) {
        if (mediaPlayerMustBeAlive) {
            jniThrowException(env, "java/lang/IllegalStateException", NULL);
        }
        return;
    }
    // Drop the strong reference held on the old IGraphicBufferProducer.
    decVideoSurfaceRef(env, thiz);
    // IGraphicBufferProducer: the producer end of the graphics BufferQueue.
    sp<IGraphicBufferProducer> new_st;
    if (jsurface) {
        // Get the native Surface backing the Java-layer Surface.
        sp<Surface> surface(android_view_Surface_getSurface(env, jsurface));
        if (surface != NULL) {
            // Get its IGraphicBufferProducer.
            new_st = surface->getIGraphicBufferProducer();
            if (new_st == NULL) {
                jniThrowException(env, "java/lang/IllegalArgumentException",
                        "The surface does not have a binding SurfaceTexture!");
                return;
            }
            // Take a strong reference on the new IGraphicBufferProducer.
            new_st->incStrong((void*)decVideoSurfaceRef);
        } else {
            jniThrowException(env, "java/lang/IllegalArgumentException",
                    "The surface has been released");
            return;
        }
    }
    // native_init cached the field ID of the Java-layer mNativeSurfaceTexture;
    // here the new IGraphicBufferProducer is stored into that field.
    env->SetLongField(thiz, fields.surface_texture, (jlong)new_st.get());

    // This will fail if the media player has not been initialized yet. This
    // can be the case if setDisplay() on MediaPlayer.java has been called
    // before setDataSource(). The redundant call to setVideoSurfaceTexture()
    // in prepare/prepareAsync covers for this case.
    // Ultimately this calls the C++-layer setVideoSurfaceTexture, analyzed in the next article.
    mp->setVideoSurfaceTexture(new_st);
}

// Drop the strong reference held on the old IGraphicBufferProducer.
static void decVideoSurfaceRef(JNIEnv *env, jobject thiz)
{
    sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
    if (mp == NULL) {
        return;
    }

    sp<IGraphicBufferProducer> old_st = getVideoSurfaceTexture(env, thiz);
    if (old_st != NULL) {
        old_st->decStrong((void*)decVideoSurfaceRef);
    }
}

       This step saves the Surface used for video display, drops the strong reference held on the old IGraphicBufferProducer, obtains a new one, and finally hands it to the C++ MediaPlayer through setVideoSurfaceTexture.
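       In practice the Surface must already exist when setDisplay is called, which is why the call is usually made from SurfaceHolder.Callback. A small sketch of that wiring (the player and surfaceView fields are assumed):

surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        player.setDisplay(holder);   // the Surface is valid from this point on
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // nothing to do for this example
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        player.setDisplay(null);     // detach before the Surface goes away
    }
});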

       IGraphicBufferProducer belongs to the SurfaceFlinger side of the graphics stack. In the journey of a UI all the way to the display, SurfaceFlinger plays the central role, but its job is exactly what its name says, "flinging": it composites the final drawing results of every application in the system and puts them on the physical screen. The drawing process of each individual application is handled elsewhere, and that job falls to BufferQueue, which acts as a one-on-one tutor for every application, guiding it through requesting a drawing buffer, filling it, and queueing it back. The figure below describes the relationship between the three:

(figure: relationship between the app, BufferQueue, and SurfaceFlinger via IGraphicBufferProducer)

       Although three parties are involved, they span only two layers: the app lives in the Java layer, while BufferQueue and SurfaceFlinger live in the native layer. In other words, BufferQueue belongs to the SurfaceFlinger side, and all of the work revolves around SurfaceFlinger.
       IGraphicBufferProducer is the key bridge between the app and the BufferQueue: it carries the UI display needs of a single application process and is the object that actually talks to the BufferQueue.

       prepare and start go directly into the C++-layer MediaPlayer, so we will analyze them in the next article and stop here for now.

Summary

       This article only skimmed over the work done by the upper layers of MediaPlayer. Starting with the next article we will analyze the rest of the flow in detail: how different DataSource implementations are created for different data sources, how the system player is selected during prepare, how low-level decoding works and how vendors implement the OpenMAX interfaces, and of course the client/server architecture of the whole multimedia framework.
