
Analyzing CDN Logs with Python's pandas Library


Preface

A recent task at work required filtering data out of CDN logs: traffic and status-code statistics, TOP IPs, URLs, UAs, Referers, and so on. I used to do this with bash shell, but once the logs grow large (files of several GB, tens of millions of lines), shell struggles and the processing time gets out of hand. So I looked into Python's pandas data-processing library: ten million lines of log are processed in about 40 seconds.
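For a sense of the difference: a TOP-10 IP count in shell is typically a pipeline like awk '{print $1}' access.log | sort | uniq -c | sort -rn | head, which sorts the entire file; in pandas the same count is a single vectorized call. A minimal sketch, not from the original script ('cdn.log' is a placeholder file name):

import pandas as pd

# Read only the first space-separated field (the client IP).
# 'cdn.log' is a hypothetical path used for illustration.
ips = pd.read_csv('cdn.log', sep=' ', header=None, usecols=[0]).iloc[:, 0]
print(ips.value_counts().head(10))  # TOP 10 IPs by request count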

The code

#!/usr/bin/python
# -*- coding: utf-8 -*-
# sudo pip install pandas
__author__ = 'Loya Chen'

import sys
import pandas as pd
from collections import OrderedDict

"""
Description: This script is used to analyse qiniu cdn log.
================================================================================
Log format
IP - ResponseTime [time +0800] "Method URL HTTP/1.1" code size "referer" "UA"
================================================================================
Log example
 [0]           [1][2]          [3]            [4]              [5]
101.226.66.179 - 68 [16/Nov/2016:04:36:40 +0800] "GET http://www.qn.com/1.jpg -"
[6] [7] [8]                              [9]
200 502 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"
================================================================================
"""

if len(sys.argv) != 2:
    print('Usage:', sys.argv[0], 'file_of_log')
    sys.exit()
else:
    log_file = sys.argv[1]

# Positions of the fields to be analysed in the log
ip          = 0
url         = 5
status_code = 6
size        = 7
referer     = 8
ua          = 9

# Read the log into a DataFrame, chunk by chunk
reader = pd.read_table(log_file, sep=' ', names=[i for i in range(10)], iterator=True)
loop = True
chunkSize = 10000000
chunks = []
while loop:
    try:
        chunk = reader.get_chunk(chunkSize)
        chunks.append(chunk)
    except StopIteration:
        # Iteration is stopped.
        loop = False
df = pd.concat(chunks, ignore_index=True)

byte_sum = df[size].sum()                                      # total traffic
top_status_code = pd.DataFrame(df[status_code].value_counts()) # status code counts
top_ip = df[ip].value_counts().head(10)                        # TOP IP
top_referer = df[referer].value_counts().head(10)              # TOP Referer
top_ua = df[ua].value_counts().head(10)                        # TOP User-Agent
top_status_code['percent'] = pd.DataFrame(top_status_code / top_status_code.sum() * 100)
top_url = df[url].value_counts().head(10)                      # TOP URL
top_url_byte = df[[url, size]].groupby(url).sum().apply(lambda x: x.astype(float) / 1024 / 1024) \
    .round(decimals=3).sort_values(by=[size], ascending=False)[size].head(10)  # URLs serving the most traffic
top_ip_byte = df[[ip, size]].groupby(ip).sum().apply(lambda x: x.astype(float) / 1024 / 1024) \
    .round(decimals=3).sort_values(by=[size], ascending=False)[size].head(10)  # IPs requesting the most traffic

# Store the results in an ordered dict
result = OrderedDict([
    ("Total traffic [GB]:",           byte_sum / 1024 / 1024 / 1024),
    ("Status codes [count|percent]:", top_status_code),
    ("IP TOP 10:",                    top_ip),
    ("Referer TOP 10:",               top_referer),
    ("UA TOP 10:",                    top_ua),
    ("URL TOP 10:",                   top_url),
    ("URL TOP 10 by traffic [MB]:",   top_url_byte),
    ("IP TOP 10 by traffic [MB]:",    top_ip_byte),
])

# Print the results
for k, v in result.items():
    print(k)
    print(v)
    print('=' * 80)
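Two notes on the reading step, as an aside rather than part of the original script: pd.read_table has since been deprecated in favor of pd.read_csv, and the manual while/StopIteration loop can be replaced by the chunksize parameter, which makes the reader an iterator of DataFrames that pd.concat can consume directly. A minimal sketch under those assumptions ('cdn.log' is a placeholder path):

import pandas as pd

log_file = 'cdn.log'  # hypothetical path, for illustration only

# chunksize=N makes read_csv yield DataFrames of at most N rows each,
# replacing the explicit get_chunk()/StopIteration loop above.
reader = pd.read_csv(log_file, sep=' ', header=None,
                     names=list(range(10)), chunksize=10000000)
df = pd.concat(reader, ignore_index=True)

The script itself takes exactly one argument, the log file to analyse (e.g. python script.py access.log, file names illustrative); with any other argument count it prints the usage line and exits.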