Fork of Soblow's fork of Bhumbledler, a tool to manage bundles from Humble Bundle: https://git.bismuth.it/Soblow/Bhumbledler/

# Bhumbledler

A set of Python scripts to collect statistics about your bought Humble Bundle bundles and games, share a list of unused keys, and ease the process of claiming them.

It outputs a single HTML file holding a "ready-to-use" list, which can be uploaded to a web host along with the provided JavaScript file `main.js`.

## Installation

As described in the pyproject.toml file, this project requires Python 3.12 or above.

`pdm` is used for dependency management (and as the build backend, per `pyproject.toml`).

To install dependencies in a virtual environment, run:

```sh
pdm install
$(pdm venv activate)
```

The second command activates the matching virtual environment, so that tools like Scrapy can be run from inside it.

### Optional (for easier use, or for remote upload)

A `just` script (Justfile) is provided to ease common operations.

For remote uploading, the provided Justfile uses `rsync`, which must be installed on the system.
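
To see which recipes the Justfile provides, you can list them (standard `just` usage, not specific to this project):

```sh
# List the available recipes (fetch, import, build, upload, stats, urls, ...)
just --list
```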

## Usage

### Prerequisites

Several constants must be changed across the project (an illustrative sketch follows the list):

- `cookies_secret.py`: Replace the `SESSION_TOKEN` variable with the session token your browser holds when you are logged in to Humble Bundle's website.
- `data.py`: As shown in the sample file (`data.sample.py`), you may specify some already claimed keys in the `RESERVED_GAMES` dictionary; these produce the key-claiming links that are written to a file. Also ensure `CURRENCY` is set to the expected currency for your country.
- `main.py`: At the top of the file, the `DB_URL` constant should point to a valid database address. It is used when importing data (from the JSON collected on Humble Bundle's website) and to make queries easier. If in doubt, keep the provided SQLite value, which stores the database in a local file.
- `template.html`: The template used to generate the resulting HTML page with the key listing. For more information on how to customize it, refer to the Mako documentation.
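
As a purely illustrative sketch (the file names come from the project, but the exact variable formats are defined by `data.sample.py` and `main.py` themselves, so treat the values below as hypothetical):

```python
# cookies_secret.py -- replace with your own session token from the browser.
SESSION_TOKEN = "your-humble-bundle-session-cookie-value"

# data.py -- see data.sample.py for the authoritative format.
# Hypothetical structure: games whose claim links should be reserved/output.
RESERVED_GAMES = {
    "Some Game Title": "already claimed by a friend",
}
CURRENCY = "EUR"  # currency used for the statistics

# main.py (top of file) -- a local SQLite file is the simplest choice.
DB_URL = "sqlite:///games.db"  # hypothetical value; keep the one shipped in main.py if unsure
```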

### Retrieving the game list

Run `just fetch`, or directly use the provided scraper:

```sh
scrapy runspider -o ./games.json:json spider.py
```

This will output a `games.json` file with the required information from your Humble Bundle account.

If the download fails, make sure you properly updated the cookies as described in the Prerequisites section.

### Cleaning the game list

Run `python filter.py` on a file named `o.json` (TBC, still a PoC). It will generate the file `o.output.json`, with the monthly bundles merged.
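
For instance, since the input name is currently hard-coded, you can shuffle files around like this (assuming you want to clean the `games.json` produced in the previous step):

```sh
cp games.json o.json   # filter.py expects its input at ./o.json (PoC limitation)
python filter.py       # writes o.output.json with the monthly bundles merged
```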

### Importing the game list into the database

Run `just import` for a simple execution.

If you prefer to run the commands directly, use the following:

```sh
# Initialize the database.
# This will wipe all previously imported data.
python main.py init
# Import the data from the games.json file into the database.
python main.py import games.json
```

Please note that this step makes requests to an exchange rate API, which allows a better approximation of the amount of money spent when collecting statistics.

To avoid overloading the API, request results are stored in a cache.
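
The actual conversion logic lives in `utils.py`; purely as an illustration of the caching idea (the API URL, response shape, and cache layout below are hypothetical), it boils down to something like:

```python
# Minimal sketch of a cached exchange-rate lookup; illustrative only.
import json
import urllib.request
from pathlib import Path

CACHE_FILE = Path("currencies_cache.json")  # cache file name taken from the repository
API_URL = "https://api.example.com/rates?base={base}&target={target}"  # hypothetical

def convert(amount: float, base: str, target: str) -> float:
    """Convert `amount` from `base` to `target`, hitting the API only on cache misses."""
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    key = f"{base}->{target}"
    if key not in cache:
        with urllib.request.urlopen(API_URL.format(base=base, target=target)) as resp:
            cache[key] = json.load(resp)["rate"]  # hypothetical response shape
        CACHE_FILE.write_text(json.dumps(cache))
    return amount * cache[key]
```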

### Building the HTML listing page

Run `just build` to generate the HTML file. You may also run directly:

```sh
python main.py gen
```

The default resulting file is `games.html`.
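
The page is rendered from `template.html`, which is a Mako template (see Prerequisites). Purely to illustrate Mako syntax, a fragment iterating over a hypothetical list of games could look like:

```html
<!-- Illustrative Mako fragment; the variable names are hypothetical,
     refer to the real template.html for the actual context. -->
<ul>
% for game in games:
    <li>${game.name}</li>
% endfor
</ul>
```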

### Uploading the game list to a remote server

Run `just upload` to copy the game list (`games.html`) along with the required JavaScript (`main.js`) to the remote server (defined in the `remote` variable of the Justfile).
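
The destination is a plain Just variable assignment in the Justfile; a hypothetical value could look like:

```just
# Adapt to your own host and path (rsync destination syntax).
remote := "user@example.org:/var/www/bhumbledler/"
```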

### Retrieving some statistics

Run `just stats` to compute statistics about your game collection and print them to standard output. You may also run directly:

```sh
python main.py stats
```

### Claiming keys

Fill in the requested keys in the data file, as explained in the Prerequisites section.

Then run `just urls`. You may also run directly:

```sh
python main.py urls
```

The default resulting file is `codes.txt`.
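
Assuming `codes.txt` contains one claim URL per line, you could for instance open them all from a shell (GNU `xargs` and `xdg-open` shown; adapt to your platform):

```sh
# Open every claim URL from codes.txt in the default browser.
xargs -a codes.txt -n1 xdg-open
```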

## Licence

MIT