Multiprocessing is one way to increase a script's concurrency, and Python supports this style of programming. On Unix-like systems, Python's os module provides the built-in fork function for creating child processes.
```python
import os

print "Process %s start ..." % (os.getpid())
pid = os.fork()
if pid == 0:
    print "This is child process and my pid is %d, my father process is %d" % (os.getpid(), os.getppid())
else:
    print "This is father process, and its child pid is %d" % (pid)
```

The output:

```
Process 4276 start ...
This is father process, and its child pid is 4277
This is child process and my pid is 4277, my father process is 4276
```
As the output shows, everything after pid = os.fork() runs twice: the first time in the parent process and the second time in the child. In the child, the return value of fork is always 0, so it can be used to tell the parent and the child apart.
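To make that distinction concrete, here is a minimal sketch (assuming a Unix-like system, since os.fork does not exist on Windows) in which the parent uses the pid returned by fork to wait for the child and read its exit status:

```python
import os

pid = os.fork()
if pid == 0:
    # child: fork returned 0; exit immediately with a distinctive status
    os._exit(7)
else:
    # parent: fork returned the child's pid; wait for it to finish
    _, status = os.waitpid(pid, 0)
    print("child %d exited with status %d" % (pid, os.WEXITSTATUS(status)))
```

os._exit is used in the child so it terminates without running any cleanup that belongs to the parent.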
So do variables in one process affect the other?

```python
import os

print "Process %s start ..." % (os.getpid())
pid = os.fork()
source = 10
if pid == 0:
    print "This is child process and my pid is %d, my father process is %d" % (os.getpid(), os.getppid())
    source = source - 6
    print "child process source value is " + str(source)
else:
    print "This is father process, and its child pid is %d" % (pid)
    source = source - 1
    print "father process source value is " + str(source)
print "source value is " + str(source)
```

The output:

```
Process 4662 start ...
This is father process, and its child pid is 4663
This is child process and my pid is 4663, my father process is 4662
father process source value is 9
child process source value is 4
source value is 9
source value is 4
```
Clearly, source started at 10 and was decremented by 1 to 9 in the parent, while the child still saw the initial value 10 and decremented it by 6 to 4. In other words, after fork each process has its own copy of the data, and the processes do not affect each other.
fork is an interface that exists only on Unix-like systems such as Linux; Windows does not have it. So how do we do multiprocessing on Windows? This is where the multiprocessing module comes in. Its Process class represents a process object: it can spawn a child process and run a specified function.
```python
from multiprocessing import Process
import os

def pro_do(name, func):
    print "This is child process %d from parent process %d, and name is %s which is used for %s" % (os.getpid(), os.getppid(), name, func)

if __name__ == "__main__":
    print "Parent process id %d" % (os.getpid())
    # Process takes the function the child will run (pro_do) and its
    # arguments args (must be a tuple whose elements match pro_do's parameters)
    pro = Process(target=pro_do, args=("test", "dev"))
    print "start child process"
    # start the child process
    pro.start()
    # join() blocks until the child finishes; without it the parent continues immediately
    pro.join()
    print "Process end"
```
The output:
```
Parent process id 4878
start child process
This is child process 4879 from parent process 4878, and name is test which is used for dev
Process end
```
Without pro.join() the parent does not block, so the final "Process end" may well be printed before pro_do has even run:
```
Parent process id 4903
start child process
Process end
This is child process 4904 from parent process 4903, and name is test which is used for dev
```
Creating child processes through multiprocessing's Process object also lets the main process pass arguments down to a child, as with pro_do's parameters in the example above. To manage a whole batch of worker processes at once, multiprocessing also provides Pool:
```python
from multiprocessing import Pool
import os, time

def pro_do(process_num):
    print "child process id is %d" % (os.getpid())
    time.sleep(6 - process_num)
    print "this is process %d" % (process_num)

if __name__ == "__main__":
    print "Current process is %d" % (os.getpid())
    p = Pool()
    for i in range(5):
        p.apply_async(pro_do, (i,))  # submit a new task to the pool
    p.close()  # no more tasks may be submitted
    p.join()   # wait for all worker processes to finish
    print "pool process done"
```
Output:
```
Current process is 19138
child process id is 19139
child process id is 19140
this is process 1
child process id is 19140
this is process 0
child process id is 19139
this is process 2
child process id is 19140
this is process 3
this is process 4
pool process done
```
The two lines

```
child process id is 19139
child process id is 19140
```

were printed immediately; the remaining lines appeared one by one as the sleep timers expired. The first two appear at once because a Pool starts, by default, as many worker processes as the machine has CPU cores. I ran this in a virtual machine with only two cores assigned, so two children started immediately and the remaining tasks had to wait for earlier ones to finish. You can also pass the number of workers explicitly, e.g. p = Pool(5), in which case the output becomes:
```
Current process is 19184
child process id is 19185
child process id is 19186
child process id is 19188
child process id is 19189
child process id is 19187
this is process 4
this is process 3
this is process 2
this is process 1
this is process 0
pool process done
```
and this time all of

```
Current process is 19184
child process id is 19185
child process id is 19186
child process id is 19188
child process id is 19189
child process id is 19187
```

are printed immediately.
A parent process can specify the function a child runs and its arguments, which achieves one-way, parent-to-child communication. But how do child processes communicate with each other, or send messages back to the parent?

A Queue is one way:
```python
from multiprocessing import Process, Queue
import os, time

def write_queue(q):
    for name in ["Yi_Zhi_Yu", "Tony", "San"]:
        print "put name %s to queue" % (name)
        q.put(name)
        time.sleep(2)
    print "write data finished"

def read_queue(q):
    print "begin to read data"
    while True:
        name = q.get()
        print "get name %s from queue" % (name)

if __name__ == "__main__":
    q = Queue()
    pw = Process(target=write_queue, args=(q,))
    pr = Process(target=read_queue, args=(q,))
    pw.start()
    pr.start()
    # join() only after both children have started: if pw.join() came right
    # after pw.start(), pr would not be running yet to read while pw writes
    pw.join()
    pr.terminate()  # the reader loops forever, so stop it forcibly
```
The output:
```
put name Yi_Zhi_Yu to queue
begin to read data
get name Yi_Zhi_Yu from queue
put name Tony to queue
get name Tony from queue
put name San to queue
get name San from queue
write data finished
```
There is also Pipe. For its underlying mechanism see http://ju.outofmemory.cn/entry/106041 ; a pipe can only serve as a channel between two processes:
```python
#!/usr/bin/env python
#encoding=utf-8
from multiprocessing import Process, Pipe
import os, time, sys

def send_pipe(p):
    names = ["Yi_Zhi_Yu", "Tony", "San"]
    for name in names:
        print "put name %s to Pipe" % (name)
        p.send(name)
        time.sleep(1)

def recv_pipe(p):
    print "Try to read data in pipe"
    while True:
        name = p.recv()
        print "get name %s from pipe" % (name)

if __name__ == "__main__":
    # Pipe() returns two connection objects, one for each end
    ps_pipe, pr_pipe = Pipe()
    ps = Process(target=send_pipe, args=(ps_pipe,))
    pr = Process(target=recv_pipe, args=(pr_pipe,))
    pr.start()
    ps.start()
    ps.join()
    pr.terminate()  # the receiver loops forever, so stop it forcibly
```
Instantiating Pipe() yields two endpoints, ps_pipe and pr_pipe, each a read-write Connection object. Either one can act as the sender or the receiver; once one end takes the sending role, the other naturally becomes the receiving end (which is presumably why a Pipe can only connect exactly two processes). The output:
```
Try to read data in pipe
put name Yi_Zhi_Yu to Pipe
get name Yi_Zhi_Yu from pipe
put name Tony to Pipe
get name Tony from pipe
put name San to Pipe
get name San from pipe
```
There are also the Array and Value forms, which I'll leave aside for now and dig into when I have time.
All of the above are Python study notes and exercises; if there are mistakes, corrections are welcome.