Stud.IP Crawler

This is a program that downloads all files available to a given Stud.IP user. It only searches and downloads courses from the current semester. If you run the program again, it only downloads files that have changed since the last run.

Features/To-Dos

  • Downloads files of a given user's active semester via the command line
    • Keeps the file structure of Stud.IP
    • Specify username
    • Specify password
    • Specify Stud.IP URL
    • Specify output directory
    • Specify chunk size for downloading big files
    • Specify all important database variables
  • Only download files changed after a given date
    • Save and read the download date
    • Optional reset of the download date
  • Incremental file download
    • Store id and chdate of downloaded files (see the sketch after this list)
  • Logging
    • Console log
    • Log file
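
A minimal sketch of how such incremental bookkeeping can work, assuming a MySQL table files(id, chdate) and a mysql-connector-style connection; the table name, column names, and function names are assumptions for illustration, not taken from the crawler's actual schema:

    # Sketch only: assumes a table "files(id VARCHAR PRIMARY KEY, chdate INT)".
    # conn is an open MySQL connection, e.g. from mysql.connector.connect(...).

    def needs_download(conn, file_id, remote_chdate):
        """Return True if the file is new or changed since the stored chdate."""
        cur = conn.cursor()
        cur.execute("SELECT chdate FROM files WHERE id = %s", (file_id,))
        row = cur.fetchone()
        cur.close()
        return row is None or remote_chdate > row[0]

    def remember_download(conn, file_id, remote_chdate):
        """Store or update the chdate after a successful download."""
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO files (id, chdate) VALUES (%s, %s) "
            "ON DUPLICATE KEY UPDATE chdate = VALUES(chdate)",
            (file_id, remote_chdate),
        )
        conn.commit()
        cur.close()

The crawler may store additional metadata; the point is simply that a file is re-downloaded only when its remote chdate is newer than the stored one.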

Installation

  • create an instance of MySQL for the crawler to store its download state (a connection check is sketched after this list)
  • git clone https://github.com/tiyn/studip-crawler
  • cd studip-crawler/src/
  • pip3 install -r requirements.txt - install dependencies
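
Before the first run it can be useful to confirm that the database from the first step is reachable. A quick check, assuming the mysql-connector-python driver; the host, user, password, and database name below are placeholders, not the crawler's defaults:

    # Connectivity check only; use the same values you pass to the
    # crawler's database options.
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost",           # example host
        user="studip",              # example user
        password="secret",          # example password
        database="studip_crawler",  # example database name
    )
    print("MySQL connection OK:", conn.is_connected())
    conn.close()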

Usage

Just run the program via python3 run.py [options]. Alternatively, make the script executable with chmod +x run.py and run it with ./run.py [options]. Several options are required for the crawler to work; run python3 run.py -h for a help menu and to see which ones are relevant for you.
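
As a rough illustration of what the option handling looks like, here is an argparse sketch; the flag names below are made up for the example, and the authoritative list comes from python3 run.py -h:

    # Illustrative sketch only: the real flag names are shown by "python3 run.py -h".
    import argparse

    parser = argparse.ArgumentParser(description="Stud.IP crawler (sketch)")
    parser.add_argument("--user", required=True, help="Stud.IP username")
    parser.add_argument("--password", required=True, help="Stud.IP password")
    parser.add_argument("--url", required=True, help="base URL of the Stud.IP instance")
    parser.add_argument("--output", default="downloads", help="output directory")
    parser.add_argument("--chunk-size", type=int, default=1024 * 1024,
                        help="chunk size in bytes for large file downloads")
    args = parser.parse_args()
    print(args)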

Tested Stud.IP instances

  • Carl von Ossietzky Universität Oldenburg