Author: 6324upup | Source: Internet | 2018-07-18 12:39
This article demonstrates, with a working example, how to scrape Baidu Baike pages in Python. It is shared for your reference; the details are as follows:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Filename: get_baike.py
import urllib2
import re

def getHtml(url, time=10):
    """Fetch a page and return its raw bytes."""
    response = urllib2.urlopen(url, timeout=time)
    html = response.read()
    response.close()
    return html

def clearBlank(html):
    """Strip line breaks and tabs, then collapse runs of spaces."""
    if len(html) == 0:
        return ''
    html = re.sub('\r|\n|\t', '', html)
    while html.find('  ') != -1:  # repeat until no double spaces remain
        html = html.replace('  ', ' ')
    return html

if __name__ == '__main__':
    html = getHtml('http://baike.baidu.com/view/4617031.htm', 10)
    html = html.decode('gb2312', 'replace').encode('utf-8')  # transcode to UTF-8
    # The tag patterns below were lost when this post was archived; these are
    # plausible reconstructions for the Baike page layout of that era.
    title_reg = r'<h1 class="title" id="[\d]+">(.*?)</h1>'
    content_reg = r'<div class="card-summary-content">(.*?)</p>'
    title = re.compile(title_reg).findall(html)
    content = re.compile(content_reg).findall(html)
    title[0] = re.sub(r'<[^>]*?>', '', title[0])      # strip any nested tags
    content[0] = re.sub(r'<[^>]*?>', '', content[0])
    print title[0]
    print '#######################'
    print content[0]
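The parsing side of the script does not depend on the network layer at all: it strips tags with a non-greedy regex and collapses whitespace. A minimal Python 3 sketch of just those two steps, run against a hard-coded stand-in fragment rather than a live fetch (the sample string is an assumption for illustration, not real Baike markup):

```python
# -*- coding: utf-8 -*-
import re

def clear_blank(html):
    # Remove line breaks and tabs, then collapse runs of spaces to one.
    html = re.sub(r'\r|\n|\t', '', html)
    while '  ' in html:
        html = html.replace('  ', ' ')
    return html

def strip_tags(fragment):
    # Non-greedy match of anything between < and >, the same idea as the
    # r'<[^>]*?>' pattern used in the article.
    return re.sub(r'<[^>]*?>', '', fragment)

# Hypothetical stand-in for a fetched page fragment.
sample = '<h1 class="title">Python\n  <b>爬虫</b></h1>'
print(strip_tags(clear_blank(sample)))  # -> Python 爬虫
```

Note that stripping tags after collapsing whitespace keeps the two concerns separate, so each helper can be reused on its own.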
I hope this article helps you with your Python programming.