Configurations
PeerTube configuration
PeerTube configuration is loaded with node-config. You can provide multiple config files, which are selected according to a specific load order. In production, PeerTube usually loads the following configuration files, in this order:
- `default.yaml`: all default options set by PeerTube, you copy this file on install/upgrade
- `production.yaml`: custom options that override `default.yaml`, you update this file on install/upgrade
- `local-production.json`: options set by the admin using the web interface (PeerTube reloads options set in this file on the fly, and doesn't need to be restarted)
- environment variables: env variables set in Docker (if you use Docker)
The yaml configuration files (`default.yaml`, `production.yaml`) are parsed during application start, which means that PeerTube has to be restarted if you manually change these files. PeerTube automatically reloads some configuration keys on the fly when they are updated from the web interface.
You can find an exhaustive list of the configuration options in `default.yaml`.
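To illustrate the layering, a minimal `production.yaml` only needs the keys you want to override; everything else falls back to `default.yaml`. The keys below are illustrative, check your `default.yaml` for the authoritative list:

```yaml
# production.yaml: override only what differs from default.yaml
webserver:
  hostname: "peertube.example.com"
  port: 443

log:
  level: "warn"
```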
Environment variables
- `NODE_ENV`: specify server mode (`production`, `dev` or `test`) to choose the appropriate configuration
- `NODE_CONFIG_DIR`: specify PeerTube configuration directory
- `NODE_APP_INSTANCE`: specify application number. If set, PeerTube will use the chosen configuration app number (`production-1.yaml` for example)
- `PT_INITIAL_ROOT_PASSWORD`: set up an initial administrator password. It must be 6 characters or more
- `FFMPEG_PATH` and `FFPROBE_PATH`: use custom FFmpeg/FFprobe binaries
- `HTTP_PROXY` and `HTTPS_PROXY`: use proxy for HTTP requests
- `YOUTUBE_DL_DOWNLOAD_BEARER_TOKEN`: token to send in the `Authorization` HTTP header when downloading the latest youtube-dl binary
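As a sketch, on a classic install these variables can be exported in the shell or systemd environment that starts PeerTube. All paths and values below are placeholders, adapt them to your install:

```shell
# Placeholders: adapt paths and values to your install
export NODE_ENV=production
export NODE_CONFIG_DIR=/var/www/peertube/config
export NODE_APP_INSTANCE=1   # would load production-1.yaml
export FFMPEG_PATH=/usr/local/bin/ffmpeg
export FFPROBE_PATH=/usr/local/bin/ffprobe
```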
Security
Installing PeerTube following the production guide should be secure enough by default. We list here suggestions to tighten the security of some parts of PeerTube.
Set up an HTTP proxy
With ActivityPub federation and import features, PeerTube makes many HTTP requests to the external world. To prevent private network/URL access, we encourage you to use an HTTP proxy via the `HTTP_PROXY` and `HTTPS_PROXY` environment variables.
Systemd Unit with reduced privileges
A systemd unit template is provided at `support/systemd/peertube.service`. Some directives can be changed to improve security!
- `PrivateDevices=true`: sets up a new `/dev` mount for the PeerTube process and only adds API pseudo devices like `/dev/null`, `/dev/zero` or `/dev/random`, but not physical devices. This won't work on Raspberry Pi, which is why we don't enable it by default
- `ProtectHome=true`: sandboxes PeerTube so the service cannot access the `/home`, `/root` and `/run/user` folders. If your local PeerTube user has its home folder in one of the restricted places, either change the home directory of the user or set this option to `false`
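One way to apply these directives without editing the packaged unit is a systemd drop-in override (created with `sudo systemctl edit peertube`); whether you can enable each directive depends on your hardware and user layout, as noted above:

```ini
# /etc/systemd/system/peertube.service.d/override.conf
[Service]
PrivateDevices=true
ProtectHome=true
```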
Scalability
Here is some advice if you plan to manage a large PeerTube platform that may have many viewers or uploaders.
Many concurrent viewers
If you plan to have many concurrent viewers (~1000) on a PeerTube video, we recommend the following setup and configuration key changes:
- Use the recommended installation guide with nginx serving PeerTube public static files
- Have at least 4 CPU cores and 4GB of RAM
- Use the default PeerTube configuration (in your `production.yaml`)
- Use `warn` log level in PeerTube configuration (`log.level`) to reduce log overhead, or disable HTTP request logging (`log.log_http_requests`)
- Disable HTTP request duration metrics (`open_telemetry.metrics.http_request_duration.enabled`) if you enabled OpenTelemetry metrics
- Disable all PeerTube plugins
- Increase nginx `worker_connections` to `10000` and `worker_rlimit_nofile` to `30000`
- Increase live transcoding threads if you plan to generate multiple resolutions
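As a sketch, the nginx limits above map to the following directives in `nginx.conf` (tune the numbers to your hardware and open-file limits):

```nginx
worker_rlimit_nofile 30000;

events {
    worker_connections 10000;
}
```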
If you plan to have even more concurrent viewers, consider in addition:
- Disable client logs (`log.accept_client_log`)
- Disable OpenTelemetry metrics (`open_telemetry.metrics.enabled`)
- Help to distribute video static files using:
  - A CDN in front of PeerTube
  - Object Storage
  - PeerTube redundancy
- Forbid access to `/api/v1/videos/{videoID}/views` and `/api/v1/metrics/playback` in your reverse proxy so PeerTube does not handle these API calls (you'll lose views and viewers statistics)
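A minimal nginx sketch of that last point, assuming the standard PeerTube server block (requests are answered at the proxy with an empty `204` so they never reach PeerTube; as noted, views and viewer statistics are lost):

```nginx
# Answer view / playback-metric calls at the proxy so PeerTube never sees them
location ~ ^/api/v1/videos/[^/]+/views$ { return 204; }
location = /api/v1/metrics/playback    { return 204; }
```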
Many videos
To handle many videos uploaded/imported on your PeerTube platform, we recommend to:
- Set up Object Storage to store video files on a remote storage
- Set up Remote Runners to offload transcoding jobs
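As an illustration, object storage is configured under the `object_storage` section of `production.yaml`. The exact key names (especially the per-type bucket sections) vary between PeerTube versions, so treat this as a sketch and check your `default.yaml`:

```yaml
# Sketch only: verify key names against your PeerTube version's default.yaml
object_storage:
  enabled: true
  endpoint: "s3.example.com"   # any S3-compatible endpoint
  region: "us-east-1"
  credentials:
    access_key_id: "your-access-key"
    secret_access_key: "your-secret-key"
  web_videos:
    bucket_name: "peertube-videos"
  streaming_playlists:
    bucket_name: "peertube-streaming"
```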
HTTP video imports
PeerTube uses yt-dlp for HTTP video imports and channel synchronization.
Import reliability depends on:
- the remote platform
- the `yt-dlp` version
- the reputation of your server IPs
Some platforms, especially YouTube, may block imports with anti-bot checks, rate limits or IP restrictions.
Use proxies first
If imports fail because the remote platform limits or blocks your server IP, use proxies first.
PeerTube supports two proxy methods for HTTP video imports:
- `HTTP_PROXY` and `HTTPS_PROXY`: generic outbound proxies used by PeerTube and, by default, by `yt-dlp`
- `import.videos.http.proxies`: a dedicated proxy list for HTTP video imports. PeerTube randomly selects one proxy from this list for each import
These are outbound proxies used by PeerTube to reach remote platforms. They are different from the reverse proxy in front of your PeerTube web server.
HTTP_PROXY and HTTPS_PROXY affect all outbound HTTP(S) traffic generated by PeerTube, not only HTTP video imports.
You can also keep `import.videos.http.force_ipv4: true`. It is enabled by default because many supported sites strongly rate limit IPv6.
For example, in `production.yaml`:

```yaml
import:
  videos:
    http:
      force_ipv4: true
      proxies:
        - "http://username:password@proxy-1.example:3128"
        - "http://username:password@proxy-2.example:3128"
```

Or using environment variables:

```
HTTP_PROXY=http://username:password@proxy.example:3128
HTTPS_PROXY=http://username:password@proxy.example:3128
PEERTUBE_IMPORT_VIDEOS_HTTP_FORCE_IPV4=true
PEERTUBE_IMPORT_VIDEOS_HTTP_PROXIES=["http://username:password@proxy-1.example:3128","http://username:password@proxy-2.example:3128"]
```

For classic installs, set `HTTP_PROXY` and `HTTPS_PROXY` in the environment used to start the PeerTube service.
Use cookies for YouTube imports when needed (PeerTube >= 8.2)
PeerTube can also pass a Netscape-format cookie file to yt-dlp. This can help when imports need a real browser session, when your server IP is limited, or when no suitable outbound proxies are available.
Cookies are more sensitive than proxies because imports run with the account from which the cookies were exported. That account may be rate limited or temporarily blocked.
If enabled, PeerTube expects the cookie file at:
- Classic install: `${storage.tmp_persistent}/youtube-cookies.txt`
- Docker: `/data/tmp-persistent/youtube-cookies.txt`
For manual export instructions, see the yt-dlp FAQ section "How do I pass cookies to yt-dlp?".
Exported browser cookies may include more than the target site, so protect this file accordingly.
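For reference, a Netscape-format cookie file is plain text with one tab-separated line per cookie: domain, include-subdomains flag, path, secure flag, expiry as a Unix timestamp, name, value. The values below are made up:

```
# Netscape HTTP Cookie File
.youtube.com	TRUE	/	TRUE	1767225600	SID	example-value
.youtube.com	TRUE	/	TRUE	1767225600	HSID	example-value
```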
Enable cookies in PeerTube
```yaml
import:
  videos:
    http:
      cookies:
        enabled: true
```

Or using environment variables:

```
PEERTUBE_IMPORT_VIDEOS_HTTP_COOKIES_ENABLED=true
```

Restart PeerTube after changing `production.yaml`.
Linux auto-refresh example
WARNING
Enable import.videos.http.cookies.enabled only if you trust users who can trigger imports, because PeerTube will import videos using the account from which the cookies were exported.
This setup uses your own browser account for imports. Remote platforms may rate limit or temporarily ban this account.
You should also open YouTube in the selected browser profile every few days to refresh the cookies.
The following example is for Linux workstations only. If you already use a YouTube account in Firefox or Chrome on Linux, you can automate cookie refresh with SSH and cron.
How it works
The example uses two scripts:
- `export-youtube-cookies.py`: reads cookies from your browser profile, keeps only YouTube cookies, writes a Netscape-format file locally, uploads it to the PeerTube server over SSH and installs it with the correct owner and permissions
- `run-once-per-day.sh`: optional wrapper for cron, so you can run the job every hour but only upload once per day
Workflow:
- the Python script reads cookies from the selected browser profile
- it writes `~/.cache/youtube-cookies.txt`
- it uploads the file to the PeerTube server
- it installs the file at the path expected by PeerTube
- cron runs the wrapper regularly, and the wrapper limits uploads to once per day
Install the tools on Ubuntu
```shell
sudo apt update
sudo apt install -y python3 python3-pip openssh-client
python3 -m pip install --user browser-cookie3
mkdir -p ~/bin
```

Find the cookie database used by your browser
Choose the browser profile you actually use to access YouTube.
For Firefox:
```shell
cd ~/.mozilla/firefox
ls
find . -maxdepth 2 -name cookies.sqlite
```

You will usually find a path such as `xxxxxxxx.default-release/cookies.sqlite`.
For Chrome:
```shell
cd ~/.config/google-chrome
ls
find . -maxdepth 2 -name Cookies
```

You will usually find a path such as `Default/Cookies` or `Profile 1/Cookies`.
If these commands do not find the file, search your home directory:
```shell
find ~ -name cookies.sqlite -o -name Cookies 2>/dev/null
```

Then update the `COOKIE_FILE` variable in the script below with the full path of the browser profile you want to use.
Create the scripts
Save the following Python script as `~/bin/export-youtube-cookies.py`:

```python
#!/usr/bin/env python3

from pathlib import Path
import http.cookiejar
import os
import shlex
import stat
import subprocess
import sys

import browser_cookie3

# ====== CONFIG ======
REMOTE_USER = "username"
REMOTE_HOST = "your-server.example"

# Classic install example:
REMOTE_PATH = "/var/www/peertube/storage/tmp-persistent/youtube-cookies.txt"
# Docker host example:
# REMOTE_PATH = "/srv/peertube/docker-volume/data/tmp-persistent/youtube-cookies.txt"

REMOTE_OWNER = "peertube:peertube"  # Docker example: "999:999"
REMOTE_MODE = "600"

SSH_KEY = str(Path.home() / ".ssh" / "peertube-cookies")
OUTPUT_FILE = Path.home() / ".cache" / "youtube-cookies.txt"

BROWSER = "firefox"  # "firefox" or "chrome"

# Firefox example:
COOKIE_FILE = str(Path.home() / ".mozilla" / "firefox" / "YOUR_PROFILE.default-release" / "cookies.sqlite")
# Chrome example:
# COOKIE_FILE = str(Path.home() / ".config" / "google-chrome" / "Default" / "Cookies")


def to_netscape_line(cookie: http.cookiejar.Cookie) -> str:
    domain = cookie.domain or ""
    include_subdomains = "TRUE" if domain.startswith(".") else "FALSE"
    path = cookie.path or "/"
    secure = "TRUE" if cookie.secure else "FALSE"
    expires = str(cookie.expires or 0)

    # The Netscape cookie format requires tab-separated fields
    return "\t".join([
        domain,
        include_subdomains,
        path,
        secure,
        expires,
        cookie.name,
        cookie.value or "",
    ])


def load_browser_cookies():
    if BROWSER == "firefox":
        return browser_cookie3.firefox(cookie_file=COOKIE_FILE)
    if BROWSER == "chrome":
        return browser_cookie3.chrome(cookie_file=COOKIE_FILE)

    raise RuntimeError("BROWSER must be 'firefox' or 'chrome'")


def load_youtube_cookies():
    return [
        c for c in load_browser_cookies()
        if "youtube.com" in (c.domain or "")
    ]


def write_cookie_file(cookies, output_file: Path):
    if not cookies:
        raise RuntimeError("No YouTube cookies found in the configured browser profile")

    output_file.parent.mkdir(parents=True, exist_ok=True)

    with open(output_file, "w", encoding="utf-8", newline="\n") as f:
        f.write("# Netscape HTTP Cookie File\n")
        f.write("# This file was generated automatically.\n\n")

        for cookie in cookies:
            f.write(to_netscape_line(cookie) + "\n")

    os.chmod(output_file, stat.S_IRUSR | stat.S_IWUSR)


def upload_file(local_file: Path):
    remote_tmp = f"/tmp/{local_file.name}"

    scp_cmd = ["scp"]
    ssh_cmd = ["ssh"]

    if SSH_KEY:
        scp_cmd += ["-i", SSH_KEY]
        ssh_cmd += ["-i", SSH_KEY]

    scp_cmd += [str(local_file), f"{REMOTE_USER}@{REMOTE_HOST}:{remote_tmp}"]

    remote_cmd = f"""
set -e

tmp={shlex.quote(remote_tmp)}
dst={shlex.quote(REMOTE_PATH)}
owner={shlex.quote(REMOTE_OWNER)}
mode={shlex.quote(REMOTE_MODE)}

cleanup() {{
  sudo rm -f "$tmp"
}}
trap cleanup EXIT

sudo install -m "$mode" "$tmp" "$dst"
sudo chown "$owner" "$dst"
"""

    ssh_cmd += [f"{REMOTE_USER}@{REMOTE_HOST}", remote_cmd]

    subprocess.run(scp_cmd, check=True)
    subprocess.run(ssh_cmd, check=True)


def main():
    cookies = load_youtube_cookies()
    write_cookie_file(cookies, OUTPUT_FILE)
    upload_file(OUTPUT_FILE)
    print(f"OK: {OUTPUT_FILE} -> {REMOTE_USER}@{REMOTE_HOST}:{REMOTE_PATH} ({REMOTE_OWNER})")


if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        print(f"ERROR: {e}", file=sys.stderr)
        sys.exit(1)
```

Save the following shell script as `~/bin/run-once-per-day.sh`:
```shell
#!/usr/bin/env bash
set -euo pipefail

BASE_DIR="$(cd "$(dirname "$0")" && pwd)"
SCRIPT="$1"
NAME="$(basename "$SCRIPT")"
TODAY="$(date +%F)"
STATE_DIR="$BASE_DIR/.state"
STAMP="$STATE_DIR/${NAME}.${TODAY}"

mkdir -p "$STATE_DIR"
[ -f "$STAMP" ] && exit 0

if "$SCRIPT"; then
  touch "$STAMP"
fi
```

Then make both scripts executable:

```shell
chmod +x ~/bin/export-youtube-cookies.py ~/bin/run-once-per-day.sh
```

Add the SSH key to the server
Generate a dedicated SSH key for this job:
```shell
ssh-keygen -t ed25519 -f ~/.ssh/peertube-cookies -C peertube-cookies
```

Then install the public key on the remote server:

```shell
ssh-copy-id -i ~/.ssh/peertube-cookies.pub username@your-server.example
```

If you plan to run this from cron, you will usually want to use a dedicated key without a passphrase, or an SSH agent that is available in the cron environment.
Allow the remote user to update the cookie file
The example script uses `sudo install`, `sudo chown` and `sudo rm` on the remote server. If the remote user cannot already update the destination file directly, you should allow these commands without a password prompt.
For example, create a sudoers file on the server:
```shell
sudo visudo -f /etc/sudoers.d/peertube-cookies
```

And add:

```
username ALL=(root) NOPASSWD: /usr/bin/install, /usr/bin/chown, /usr/bin/rm
```

Then test the SSH access and the remote sudo access before running the script:

```shell
ssh -i ~/.ssh/peertube-cookies username@your-server.example 'sudo -n true'
```

If this command asks for a password or fails, cron will fail too.
Test the script manually
```shell
~/bin/export-youtube-cookies.py
```

If you see `No YouTube cookies found in the configured browser profile`, open YouTube in the selected browser profile, sign in with the account you want to use for imports, and try again.
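Beyond an end-to-end run, you can sanity check the tab-separated Netscape line format with a few lines of plain Python. This is a standalone sketch, independent of the export script, and the cookie values are made up:

```python
import http.cookiejar

# Build a cookie the way a browser cookie store would return it (fake values)
cookie = http.cookiejar.Cookie(
    version=0, name="SID", value="example-value",
    port=None, port_specified=False,
    domain=".youtube.com", domain_specified=True, domain_initial_dot=True,
    path="/", path_specified=True,
    secure=True, expires=1767225600, discard=False,
    comment=None, comment_url=None, rest={},
)

# Netscape format: 7 tab-separated fields per line
fields = [
    cookie.domain,
    "TRUE" if cookie.domain.startswith(".") else "FALSE",
    cookie.path,
    "TRUE" if cookie.secure else "FALSE",
    str(cookie.expires or 0),
    cookie.name,
    cookie.value,
]
line = "\t".join(fields)
print(line)
```

Each line in the generated `youtube-cookies.txt` should look like this output: exactly six tabs separating seven fields.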
Add the cron job
Open your crontab:
```shell
crontab -e
```

Then add:

```
0 * * * * /home/your-user/bin/run-once-per-day.sh /home/your-user/bin/export-youtube-cookies.py >> /home/your-user/.cache/peertube-cookies.log 2>&1
```

This example runs every hour, but the wrapper script ensures the upload only happens once per day. Running it every hour is useful if the machine is not always powered on at the same time every day.

