
The Most Complete Round-Up of Common Python Crawler Code for 2019! (Self-Study Materials at the End)


Today I'd like to share a round-up of commonly used Python crawler code, organized so it is easy to look things up. I found it a very handy reference, so let's go through it together.

1. Parsing a page with BeautifulSoup

import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(htmltxt, "lxml")

# The three loaders
soup = BeautifulSoup("<span>Hello World", "html.parser")
### Tags with only an opening tag are completed automatically; tags with only a closing tag are ignored
### Result: <span>Hello World</span>
soup = BeautifulSoup("<span>Hello World", "lxml")
### Result: <html><body><span>Hello World</span></body></html>
soup = BeautifulSoup("<span>Hello World", "html5lib")
### With html5lib, the usual skeleton tags are completed as well
### Result: <html><head></head><body><span>Hello World</span></body></html>

# Finding tags by name, id, class, attributes, and so on
### Query by class, id, the value of the alog-action attribute, and the tag type
soup.find("a", class_="title", id="t1", attrs={"alog-action": "qb-ask-uname"})

### Read the value of an attribute on a tag
pubtime = soup.find("meta", attrs={"itemprop": "datePublished"}).attrs["content"]

### Get every tag whose class is title
for i in soup.find_all(class_="title"):
    print(i.get_text())

### Get a limited number of tags whose class is title
for i in soup.find_all(class_="title", limit=2):
    print(i.get_text())

### When extracting text you can choose the separator between tags and whether to strip leading and trailing whitespace
soup = BeautifulSoup('<p class="title" id="p1"><b>The Dormouses story</b><b>The Dormouses story</b></p>', "html5lib")
soup.find(class_="title").get_text("|", strip=True)
# Result: The Dormouses story|The Dormouses story

### Get the id of the p tag whose class is title
soup.find(class_="title").get("id")

### Match class names with a regular expression
soup.find_all(class_=re.compile("tit"))

### The recursive parameter: with recursive=False, find only looks at the direct children of the current tag
soup = BeautifulSoup("<html><head><title>abc</title></head></html>", "lxml")
soup.html.find_all("title", recursive=False)  # [] -- title is not a direct child of html
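The snippets above assume an htmltxt string is already at hand. As a minimal end-to-end sketch (the URL and the "title" class are placeholders, not from the original article), fetching a page with requests and pulling out the headline text could look roughly like this:

import requests
from bs4 import BeautifulSoup

# Hypothetical target page; replace it with whatever you actually crawl.
resp = requests.get("https://example.com/news", timeout=10)
resp.encoding = resp.apparent_encoding  # helps avoid mojibake on Chinese pages
soup = BeautifulSoup(resp.text, "lxml")

# "title" is a placeholder class name, matching the examples above.
for tag in soup.find_all(class_="title"):
    print(tag.get_text(strip=True))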

2. Filtering out HTML tags while keeping the text inside them

import re

htmls = "<p>abc</p>"
dr = re.compile(r"<[^>]+>", re.S)
htmls2 = dr.sub("", htmls)
print(htmls2)  # abc

Extracting content with a regular expression (usually when handling JSON-like responses)

rollback({
    "response": {"code": "0", "msg": "Success", "dext": ""},
    "data": {
        "count": 3,
        "page": 1,
        "article_info": [
            {"title": "“小库里”:适应比赛是首要任务 投篮终会找到节奏", "url": "http://sports.qq.com/a/20180704/035378.htm", "time": "2018-07-04 16:58:36", "column": "NBA", "img": "", "desc": ""},
            {"title": "首钢体育助力国家冰球集训队 中国冰球联赛年底启动", "url": "http://sports.qq.com/a/20180704/034698.htm", "time": "2018-07-04 16:34:44", "column": "综合体育", "img": "", "desc": ""},
            ...
        ]
    }
})

import re

# Extract the title and url of every news item in this JSON
# (.*?) marks the content to capture; .*? inside the pattern skips over the characters in between
reg_str = r'"title":"(.*?)",.*?"url":"(.*?)"'
pattern = re.compile(reg_str, re.DOTALL)
items = re.findall(pattern, htmls)
for i in items:
    title = i[0]
    url = i[1]
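When the response is actually valid JSON (here it arrives wrapped in a JSONP-style rollback(...) callback), letting the json module parse it is usually more robust than a regex. A small sketch, assuming the raw response text is held in a string named raw:

import json
import re

# Strip the rollback(...) wrapper, then parse the remaining JSON.
payload = re.sub(r"^rollback\((.*)\)$", r"\1", raw.strip(), flags=re.S)
data = json.loads(payload)
for article in data["data"]["article_info"]:
    print(article["title"], article["url"])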

3. Base64 encoding and decoding

import base64

# Encode
content = "测试转码文本123"
contents_base64 = base64.b64encode(content.encode("utf-8", "ignore")).decode("utf-8")
# Decode
contents = base64.b64decode(contents_base64)
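One detail the snippet glosses over: b64decode returns bytes, so one more decode is needed to get the original string back. A quick sketch:

import base64

contents_base64 = base64.b64encode("测试转码文本123".encode("utf-8")).decode("utf-8")
contents = base64.b64decode(contents_base64).decode("utf-8")
print(contents)  # 测试转码文本123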


4. Database operations

import pymysql

conn = pymysql.connect(host="10.0.8.81", port=3306, user="root", passwd="root", db="xxx", charset="utf8")
cur = conn.cursor()

insert_sql = "insert into tbl_name(id,name,age) values(%s,%s,%s)"
id = 1
name = "like"
age = 26
data_list = []
data = (id, name, age)

# Insert a single row
cur.execute(insert_sql, data)
conn.commit()

# Batch insert
data_list.append(data)
cur.executemany(insert_sql, data_list)
conn.commit()

# Handling special characters (when name contains special characters)
data = (id, pymysql.escape_string(name), age)

# Update (content comes from the surrounding crawl logic)
update_sql = "update tbl_name set content = %s where id = " + str(id)
cur.execute(update_sql % (pymysql.escape_string(content)))
conn.commit()

# Batch update (contents, title, is_spider and one_new come from the surrounding crawl logic)
update_sql = "UPDATE tbl_recieve SET content = %s, title = %s, is_spider = %s WHERE id = %s"
update_data = (contents, title, is_spider, one_new[0])
update_data_list.append(update_data)
if len(update_data_list) > 500:
    try:
        cur.executemany(update_sql, update_data_list)
        conn.commit()
    except Exception as e:
        conn.rollback()
        print(e)
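The batch-update fragment only shows the flush condition. A fuller sketch of the same pattern (the rows_to_update query result and the process() helper are hypothetical, not part of the original article), flushing every 500 rows and once more at the end:

import pymysql

conn = pymysql.connect(host="10.0.8.81", port=3306, user="root", passwd="root", db="xxx", charset="utf8")
cur = conn.cursor()

update_sql = "UPDATE tbl_recieve SET content = %s, title = %s, is_spider = %s WHERE id = %s"
update_data_list = []

for one_new in rows_to_update:                     # assumed to come from an earlier SELECT
    contents, title, is_spider = process(one_new)  # process() is a hypothetical helper
    update_data_list.append((contents, title, is_spider, one_new[0]))
    if len(update_data_list) >= 500:
        cur.executemany(update_sql, update_data_list)
        conn.commit()
        update_data_list = []

if update_data_list:  # flush whatever is left over
    cur.executemany(update_sql, update_data_list)
    conn.commit()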

5. Completely removing script and style tags

import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(htmls, "lxml")
for script in soup(["script", "style"]):
    script.extract()
print(soup)
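A common follow-up, sketched here on the assumption that htmls holds the fetched page: once script and style are stripped, get_text() returns only the visible text.

from bs4 import BeautifulSoup

soup = BeautifulSoup(htmls, "lxml")
for script in soup(["script", "style"]):
    script.extract()
# Join the remaining text nodes with spaces and trim surrounding whitespace.
text = soup.get_text(" ", strip=True)
print(text)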

, "html5lib")。html转义字符的文末解码

from html.parser import HTMLParserhtmls = "

"txt = HTMLParser().unescape(htmls)print(txt) . # 输出

7. URL encoding and decoding

from urllib import parse

# Encode
x = "中国你好"
y = parse.quote(x)
print(y)
# Decode
x = parse.unquote(y)
print(x)
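To build an entire query string rather than encode a single value, urllib.parse.urlencode quotes each parameter for you; a small sketch with made-up parameters:

from urllib import parse

params = {"kw": "中国你好", "page": 1}
query = parse.urlencode(params)
print(query)                  # kw=%E4%B8%AD%E5%9B%BD%E4%BD%A0%E5%A5%BD&page=1
print(parse.parse_qs(query))  # back to a dict of lists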

8. Time operations

import time
import datetime

# Get the current date
today = datetime.date.today()
print(today)  # 2018-07-05

# Get the current time and format it
time_now = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time()))
print(time_now)  # 2018-07-05 14:20:55

# Format a timestamp
a = 1502691655
time_a = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(a)))
print(time_a)  # 2017-08-14 14:20:55

# Parse a string into a datetime
s = "2018-07-01 00:00:00"
datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

# Convert a time string into a timestamp
time_line = "2018-07-16 10:38:50"
time_tuple = time.strptime(time_line, "%Y-%m-%d %H:%M:%S")
time_line2 = int(time.mktime(time_tuple))

# Tomorrow's date
today = datetime.date.today()
tomorrow = today + datetime.timedelta(days=1)
print(tomorrow)  # 2018-07-06

# Three days ago
today = datetime.datetime.today()
three_days_ago = today + datetime.timedelta(days=-3)
print(three_days_ago)  # 2018-07-02 13:37:00.107703

# Compute a time difference
start = "2018-07-03 00:00:00"
time_now = datetime.datetime.now()
b = datetime.datetime.strptime(start, "%Y-%m-%d %H:%M:%S")
minutes = (time_now - b).seconds / 60
days = (time_now - b).days
all_minutes = days * 24 * 60 + minutes
print(minutes)      # 821.7666666666667
print(days)         # 2
print(all_minutes)  # 3701.7666666666664
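A pitfall worth flagging in the snippet above: timedelta.seconds only holds the sub-day remainder, which is why the days have to be added back by hand. timedelta.total_seconds() gives the whole difference in one call, as this small sketch (using the same dates) shows:

import datetime

b = datetime.datetime.strptime("2018-07-03 00:00:00", "%Y-%m-%d %H:%M:%S")
now = datetime.datetime.strptime("2018-07-05 13:41:46", "%Y-%m-%d %H:%M:%S")
all_minutes = (now - b).total_seconds() / 60
print(all_minutes)  # ~3701.77, matching days*24*60 + minutes above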

9. Converting Unicode escapes back to Chinese

content = "\\u65f6\\u75c7\\u5b85"
content = content.encode("utf8", "ignore").decode("unicode_escape")
print(content)

10. Filtering out emoji

import re

def filter_emoji(desstr, restr=""):
    try:
        # Wide (UCS-4) Python builds can match the supplementary planes directly
        co = re.compile(u"[\U00010000-\U0010ffff]")
    except re.error:
        # Narrow (UCS-2) builds fall back to matching surrogate pairs
        co = re.compile(u"[\uD800-\uDBFF][\uDC00-\uDFFF]")
    return co.sub(restr, desstr)
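A quick usage sketch (the sample string is made up):

text = "今天心情不错😄🚀!"
print(filter_emoji(text))             # 今天心情不错!
print(filter_emoji(text, "[emoji]"))  # 今天心情不错[emoji][emoji]!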

That is the collection of commonly used Python crawler code I have put together for you today. I have also compiled a complete set of Python learning materials; readers who need them can come and find me. How to get them: share this article, follow me, and send me the private message "资料" to receive the full set of Python self-study tutorials plus PDF e-books for free!

