How to avoid duplicate crawling with a custom Scrapy middleware module in Python
Updated: 2015-04-07 16:36:56  Author: pythoner
This article shows how to avoid re-crawling pages in Python by writing a custom Scrapy middleware module. The technique is illustrated with a working example and is quite practical; readers who need it can refer to it.
The example presented below implements this idea as a Scrapy spider middleware. It is shared here for your reference; the details are as follows:
from scrapy import log  # Scrapy's log module (replaced by stdlib logging in later Scrapy versions)
from scrapy.http import Request
from scrapy.item import BaseItem
from scrapy.utils.request import request_fingerprint
from myproject.items import MyItem

class IgnoreVisitedItems(object):
    """Middleware to ignore re-visiting item pages if they
    were already visited before.

    The requests to be filtered must have the meta['filter_visited']
    flag enabled, and may optionally define an id to use for
    identifying them, which defaults to the request fingerprint,
    although you'd want to use the item id, if you already have it
    beforehand, to make it more robust.
    """
    FILTER_VISITED = 'filter_visited'
    VISITED_ID = 'visited_id'
    CONTEXT_KEY = 'visited_ids'

    def process_spider_output(self, response, result, spider):
        # The set of already-seen ids is kept in the spider's 'context' dict,
        # so the spider is expected to expose one that persists across calls.
        context = getattr(spider, 'context', {})
        visited_ids = context.setdefault(self.CONTEXT_KEY, {})
        ret = []
        for x in result:
            visited = False
            if isinstance(x, Request):
                # Only filter requests that explicitly opted in via meta.
                if self.FILTER_VISITED in x.meta:
                    visit_id = self._visited_id(x)
                    if visit_id in visited_ids:
                        log.msg("Ignoring already visited: %s" % x.url,
                                level=log.INFO, spider=spider)
                        visited = True
            elif isinstance(x, BaseItem):
                # An item was scraped: mark its source page as visited
                # and annotate the item itself.
                visit_id = self._visited_id(response.request)
                if visit_id:
                    visited_ids[visit_id] = True
                    x['visit_id'] = visit_id
                    x['visit_status'] = 'new'
            if visited:
                # Replace the dropped request with a marker item.
                ret.append(MyItem(visit_id=visit_id, visit_status='old'))
            else:
                ret.append(x)
        return ret

    def _visited_id(self, request):
        # Prefer an explicit visited_id from meta, else the request fingerprint.
        return request.meta.get(self.VISITED_ID) or request_fingerprint(request)
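
For completeness, here is a minimal sketch of how such a middleware might be wired into a project. It assumes the class above is saved as myproject/middlewares.py, that MyItem declares the visit_id and visit_status fields the middleware writes, and that the spider exposes a persistent context dict for the middleware to store seen ids in; the module path, middleware priority and example spider are illustrative assumptions, not part of the original snippet.

# myproject/items.py -- hypothetical item with the two fields the middleware sets
from scrapy.item import Item, Field

class MyItem(Item):
    visit_id = Field()
    visit_status = Field()
    # ... plus whatever data fields the spider actually scrapes ...

# myproject/settings.py -- register the spider middleware (path and order assumed)
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.IgnoreVisitedItems': 543,
}

# myproject/spiders/example.py -- opt requests into filtering via meta
import scrapy
from myproject.items import MyItem

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://www.example.com/']
    context = {}  # the middleware stores its visited_ids mapping here

    def parse(self, response):
        # Absolute item-page URLs are assumed; pass meta['visited_id'] as well
        # if you already know a stable item id for the page.
        for url in response.xpath('//a[@class="item"]/@href').extract():
            yield scrapy.Request(url, callback=self.parse_item,
                                 meta={'filter_visited': True})

    def parse_item(self, response):
        item = MyItem()
        # ... fill in the item's own fields from the response here ...
        return item

With this in place, requests for pages that were already scraped are dropped and replaced by a MyItem with visit_status='old', so a downstream pipeline can decide whether to skip or merely update such records.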
I hope this article proves helpful to readers doing Python programming.