云代码 - Python code library

Crawling NetEase Cloud Classroom Tutorials

2023-02-18 Author: Python自学

[python] code library

import os
import json
import urllib.request
from urllib.request import urlopen
from bs4 import BeautifulSoup
import multiprocessing

base_url = 'http://study.163.com/course/introduction/'
local_url = base_url + '100125090.htm'

res = urlopen(local_url)
soup = BeautifulSoup(res, 'html.parser')
# Collect the link of every course listed on the introduction page
video_urls = [item['href'] for item in soup.find_all('a', class_='f-thide f-fl')]
print(video_urls)
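The link-collection step above can be illustrated offline with an inline HTML snippet; the snippet and its hrefs are made up, and only the class string `f-thide f-fl` comes from the post:

```python
from bs4 import BeautifulSoup

# Hypothetical HTML standing in for the live introduction page
html = '''
<a class="f-thide f-fl" href="http://study.163.com/course/introduction/1.htm">Lesson 1</a>
<a class="f-thide f-fl" href="http://study.163.com/course/introduction/2.htm">Lesson 2</a>
<a class="other" href="http://example.com/ignored">ignored</a>
'''
soup = BeautifulSoup(html, 'html.parser')
# Passing a multi-class string to class_ matches the exact class attribute value
links = [a['href'] for a in soup.find_all('a', class_='f-thide f-fl')]
print(links)
```

The third anchor is skipped because its class attribute differs, which is how the crawler separates course links from the rest of the page.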


def crawl(video_url):
    # Fetch the course page and pull out the embedded JSON payload
    res = urlopen(video_url)
    soup = BeautifulSoup(res, 'html.parser')
    json_str = soup.find('script', type='application/json').text
    data = json.loads(json_str)
    # Give each course its own directory so parallel downloads don't collide
    course_id = video_url.split('/')[-1].split('.')[0]
    os.makedirs(course_id, exist_ok=True)
    # Download every video listed under 'mpath'
    videos = data.get('result', {}).get('mpath', [])
    for index, video in enumerate(videos):
        print('url:', video)
        urllib.request.urlretrieve(video, f'{course_id}/{index}.mp4')
    # Save the course description
    description = data.get('result', {}).get('crDescription', '')
    print('description:', description)
    with open(f'{course_id}/description.txt', 'w', encoding='utf-8') as f:
        f.write(description)
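The parsing inside crawl() assumes a payload with a 'result' object carrying 'mpath' (a list of video URLs) and 'crDescription'. A self-contained sketch with a hand-written payload of that shape — the field names come from the post, the values here are invented:

```python
import json

# Made-up JSON in the shape the crawler expects
json_str = ('{"result": {"mpath": ["http://example.com/a.mp4", '
            '"http://example.com/b.mp4"], "crDescription": "Intro course"}}')

data = json.loads(json_str)
# .get() with a default keeps the chain safe when a key is missing
videos = data.get('result', {}).get('mpath', [])
description = data.get('result', {}).get('crDescription', '')
print(videos)
print(description)
```

Using `{}` (rather than `None`) as the default for the outer `.get()` is what lets the second `.get()` run without raising AttributeError when 'result' is absent.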


# Create one sub-process per course page (the post uses four)
processes = [multiprocessing.Process(target=crawl, args=(url,))
             for url in video_urls[:4]]

# Start sub-processes
for p in processes:
    p.start()

# Wait for all sub-processes to finish
for p in processes:
    p.join()
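An alternative to spawning the Process objects by hand is multiprocessing.Pool, which spreads a URL list over a fixed number of workers and collects the results in input order. A runnable sketch with a stub standing in for crawl() so it works offline — the stub and its URLs are hypothetical:

```python
import multiprocessing

def crawl_stub(url):
    # Stand-in for crawl(): returns a value instead of downloading anything
    return url.upper()

if __name__ == '__main__':
    urls = ['a.htm', 'b.htm', 'c.htm', 'd.htm']
    # Four workers, one map call; results come back in the same order as urls
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(crawl_stub, urls)
    print(results)
```

The `if __name__ == '__main__'` guard matters on platforms that start workers with spawn (Windows, macOS): each worker re-imports the module, and the guard stops it from recursively creating pools.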

