Remote storage (S3)

If your object storage provider supports the AWS S3 API, you can configure your PeerTube platform to move files there after transcoding. The bucket you configure should be public and have CORS rules to allow traffic from anywhere.

Live videos are still stored on the disk. If replay is enabled, they will be moved to object storage after transcoding.

PeerTube Settings

Endpoint and buckets

Here are some examples of how you can configure your PeerTube platform:

yaml
# Store all videos in one bucket on Backblaze B2
object_storage:
  enabled: true

  # Example Backblaze b2 endpoint
  endpoint: 's3.us-west-001.backblazeb2.com'

  upload_acl:
    # Backblaze doesn't support ACL
    private: null

  web_videos:
    bucket_name: 'MyCoolBucketName'
    prefix: 'web-videos/'

  # Use the same bucket as for web videos but with a different prefix
  streaming_playlists:
    bucket_name: 'MyCoolBucketName'
    prefix: 'streaming-playlists/'

  user_exports:
    bucket_name: 'MyCoolBucketName'
    prefix: 'user-exports/'

  original_video_files:
    bucket_name: 'MyCoolBucketName'
    prefix: 'original-video-files/'

  captions:
    bucket_name: 'MyCoolBucketName'
    prefix: 'captions/'
yaml
# Use two different buckets for Web videos and HLS videos on AWS S3
object_storage:
  enabled: true

  # Example AWS endpoint in the eu-central-1 region
  endpoint: 's3.eu-central-1.amazonaws.com'
  # Needs to be set to the bucket region when using AWS S3
  region: 'eu-central-1'

  streaming_playlists:
    bucket_name: 'streaming-playlists'
    prefix: 'hls/'
    base_url: '' # Only required if using a caching server, See #cache-server
    store_live_streams: false

  web_videos:
    bucket_name: 'web-videos'
    prefix: 'web-videos/'
    base_url: '' # Only required if using a caching server, See #cache-server

  user_exports:
    bucket_name: 'user-exports'
    prefix: 'user-exports/'
    base_url: '' # Only required if using a caching server, See #cache-server

  # Same settings but for original video files
  original_video_files:
    bucket_name: 'original-video-files'
    prefix: 'original-video-files/'
    base_url: '' # Only required if using a caching server, See #cache-server

  # Video captions
  captions:
    bucket_name: 'captions'
    prefix: 'captions/'
    base_url: '' # Only required if using a caching server, See #cache-server
yaml
object_storage:
  enabled: true

  endpoint: 's3.de.io.cloud.ovh.net'
  region: 'de'

  upload_acl:
    public: 'public-read'
    private: 'private'

  proxy:
    proxify_private_files: true

  credentials:
    # The access_key_id and secret_access_key are found under the bucket
    # user public-cloud => Object storage => Users => click the three dots
    # Click `View the secret key`   
    access_key_id: '$access_key_id'
    secret_access_key: '$secret_access_key'

  # Maximum amount to upload in one request to object storage
  max_upload_part: 450MB # The provider allows 500MB; to stay safe, stick to this number.

  streaming_playlists:
    bucket_name: 'MyCoolBucketName'
    prefix: 'hls/'
    base_url: '' # Only required if using a caching server, See #cache-server
    store_live_streams: false

  web_videos:
    bucket_name: 'MyCoolBucketName'
    prefix: 'web-videos/'
    base_url: '' # Only required if using a caching server, See #cache-server

  user_exports:
    bucket_name: 'MyCoolBucketName'
    prefix: 'user-exports/'
    base_url: '' # Only required if using a caching server, See #cache-server

  # Same settings but for original video files
  original_video_files:
    bucket_name: 'MyCoolBucketName'
    prefix: 'original-video-files/'
    base_url: '' # Only required if using a caching server, See #cache-server

  # Video captions
  captions:
    bucket_name: 'MyCoolBucketName'
    prefix: 'captions/'
    base_url: '' # Only required if using a caching server, See #cache-server

Credentials

You will also need to supply credentials to the S3 client. The official AWS S3 library is used in PeerTube, which supports multiple credential loading methods.

If you set the credentials in the configuration file, this will override credentials from the environment or a ~/.aws/credentials file. When loading from the environment, the usual AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables are used.
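For example, the credentials can be exported into the environment before starting PeerTube (the key values below are placeholders):

```shell
# Provide S3 credentials through the environment instead of the
# configuration file. The AWS SDK used by PeerTube reads these
# variables automatically. The values below are placeholders.
export AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
export AWS_SECRET_ACCESS_KEY='wJalrXUtnFXsnI/K7MDENG/bPxRfiCYEXAMPLEKEY'
```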

The buckets

To understand the bucket settings, you should know the following basics.

streaming_playlists:, web_videos:, user_exports:, original_video_files: and captions: are individual folders or buckets, each of which can be configured independently, giving you a matrix of possible setups. The examples below demonstrate a few of them.

bucket_name

The bucket_name is literally the name of the bucket as you created it on your S3 hosting provider.

The prefix

In the first example we use a single S3 bucket for all content, and therefore rely on sub-folders and the prefix to keep the file hierarchy organised.

In the second example we use individual buckets for each content type.

In the third example we use a mix of shared and individual buckets for storing and serving the content.

Note

The prefix is the literal string that is added to any files or folders created in your bucket.

This means that if you set the prefix to caption (without a trailing slash), all uploaded VTT files will be named like bucketname:caption1234-45678-90123.vtt and will reside in the root of your bucket.

If you add a / to the prefix, for example caption/, the captions are stored in a sub-folder called caption, and the file's location becomes bucketname:caption/1234-45678-90123.vtt.

Do you see the importance of the trailing /?

yaml
  streaming_playlists:
    bucket_name: 'MyCoolBucketName'
    prefix: 'hls/'
    store_live_streams: false

  web_videos:
    bucket_name: 'MyCoolBucketName'
    prefix: 'web_videos/'

  user_exports:
    bucket_name: 'MyCoolBucketName'
    prefix: 'user_exports/'

  original_video_files:
    bucket_name: 'MyCoolBucketName'
    prefix: 'original_video_files/'

  captions:
    bucket_name: 'MyCoolBucketName'
    prefix: 'captions/'
yaml
  streaming_playlists:
    bucket_name: 'MyCoolBucketName-hls'
    prefix: ''
    store_live_streams: false

  web_videos:
    bucket_name: 'MyCoolBucketName-webvideos'
    prefix: ''

  user_exports:
    bucket_name: 'MyCoolBucketName-userexports'
    prefix: ''

  original_video_files:
    bucket_name: 'MyCoolBucketName-sourcefiles'
    prefix: ''

  captions:
    bucket_name: 'MyCoolBucketName-captions'
    prefix: ''
yaml
  streaming_playlists:
    bucket_name: 'MyCoolBucketName-videos'
    prefix: 'hls/'
    store_live_streams: false

  web_videos:
    bucket_name: 'MyCoolBucketName-videos'
    prefix: 'web_videos/'

  user_exports:
    bucket_name: 'MyCoolBucketName-userexports'
    prefix: 'user_exports/'

  original_video_files:
    bucket_name: 'MyCoolBucketName-sourcefiles'
    prefix: ''

  captions:
    bucket_name: 'MyCoolBucketName-captions'
    prefix: ''

base_url

The base_url is only of use when some kind of mediator, such as a load balancer or other type of "CDN", sits between clients and the bucket itself, as in the Cache server example below.

Cache server

To reduce object storage cost, we strongly recommend setting up a cache server (CDN/external proxy).

Set your mirror/CDN URL in object_storage.{streaming_playlists,videos}.base_url and PeerTube will replace the object storage host by this base URL on the fly (so you can easily change the base_url configuration).
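For example, assuming a cache server reachable at https://mirror.example.org (a hypothetical hostname), the playlists section would look like:

```yaml
streaming_playlists:
  bucket_name: 'streaming-playlists'
  prefix: 'hls/'
  # PeerTube rewrites object storage URLs to this host on the fly
  base_url: 'https://mirror.example.org'
```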

Example of an nginx configuration for a cache server in front of a S3 bucket:

nginx
# Contribution from https://framacolibri.org/t/peertube-remote-storage-s3

proxy_cache_path /var/cache/nginx/s3 levels=1:2 keys_zone=CACHE-S3:100m inactive=48h max_size=10G;
proxy_cache_path /var/cache/nginx/s3-ts levels=1:2 keys_zone=CACHE-S3-TS:10m inactive=60s max_size=1G;

server {
  listen 80;
  server_name example.org;
  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl;
  http2 on;
  ## Enable on Nginx version >=1.26
  # listen 443 quic;
  # http3 on;

  server_name example.org;

  ## Only enable logging when needed to reduce disk IO and save diskspace
  ## You should also only log to system log and access by using journalctl
  access_log off;
  # access_log syslog:server=unix:/dev/log quic; # Default system logging
  # access_log /var/log/nginx/medias.access.log;
  # access_log /var/log/nginx/medias.access.log quic buffer=10m flush=5m; # reduce I/O

  error_log off;
  # error_log  syslog:server=unix:/dev/log error; # Default system logging
  # error_log  /var/log/nginx/medias.error.log;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

  keepalive_timeout 30;
  
  # If you come to need to specify a resolver, replace the following example
  # addresses with your required DNS resolvers.
  # https://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
  # By default you should comment out this line and use the system's default
  # resolver (/etc/resolv.conf) settings.
  resolver 9.9.9.9 149.112.112.112; # 2620:fe::fe 2620:fe::9;

  # Cache S3 files for a long time because their filenames change every 
  # time PeerTube updates their content
  location / {
    try_files $uri @s3;
  }

  # .ts files are live fragments
  # They can be cached, but not for too long to not break future streams 
  # in the same permanent live
  location ~ \.ts$ {
    try_files $uri @s3-ts;
  }

  # M3U8 and JSON files of live videos change regularly but keep the same filename
  # Don't cache them to not break PeerTube lives
  location ~ \.(json|m3u8)$ {
    try_files $uri @s3_nocache;
  }

  set $s3_backend 'https://my-bucket.s3.bhs.perf.cloud.ovh.net';

  location @s3 {
    limit_except GET OPTIONS {
        deny all;
    }

    proxy_set_header Host my-bucket.s3.bhs.perf.cloud.ovh.net;
    proxy_set_header Connection '';
    proxy_set_header Authorization '';
    proxy_set_header Range $slice_range;
    proxy_hide_header Set-Cookie;
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-meta-server-side-encryption;
    proxy_hide_header x-amz-server-side-encryption;
    proxy_hide_header x-amz-bucket-region;
    proxy_hide_header x-amzn-requestid;
    proxy_ignore_headers Set-Cookie;
    proxy_pass $s3_backend$uri;
    proxy_intercept_errors off;

    proxy_cache CACHE-S3;
    proxy_cache_valid 200 206 48h;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    slice              1m;
    proxy_cache_key    $host$uri$is_args$args$slice_range;
    proxy_http_version 1.1;

    expires 1y;
    add_header Cache-Control public;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Range,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    add_header X-Cache-Status $upstream_cache_status;
    add_header X-Content-Type-Options nosniff;
    add_header Content-Security-Policy "default-src 'none'; form-action 'none'";
  }

  location @s3-ts {
    limit_except GET OPTIONS {
        deny all;
    }

    proxy_set_header Host my-bucket.s3.bhs.perf.cloud.ovh.net;
    proxy_set_header Connection '';
    proxy_set_header Authorization '';
    proxy_set_header Range $slice_range;
    proxy_hide_header Set-Cookie;
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-meta-server-side-encryption;
    proxy_hide_header x-amz-server-side-encryption;
    proxy_hide_header x-amz-bucket-region;
    proxy_hide_header x-amzn-requestid;
    proxy_ignore_headers Set-Cookie;
    proxy_pass $s3_backend$uri;
    proxy_intercept_errors off;

    proxy_cache CACHE-S3-TS;
    proxy_cache_valid 200 206 2m;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    slice              1m;
    proxy_cache_key    $host$uri$is_args$args$slice_range;
    proxy_http_version 1.1;

    expires 1y;
    add_header Cache-Control public;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Range,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    add_header X-Cache-Status $upstream_cache_status;
    add_header X-Content-Type-Options nosniff;
    add_header Content-Security-Policy "default-src 'none'; form-action 'none'";
  }

  location @s3_nocache {
    limit_except GET OPTIONS {
        deny all;
    }

    proxy_set_header Host my-bucket.s3.bhs.perf.cloud.ovh.net;
    proxy_set_header Connection '';
    proxy_set_header Authorization '';
    proxy_set_header Range $http_range;
    proxy_hide_header Set-Cookie;
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-meta-server-side-encryption;
    proxy_hide_header x-amz-server-side-encryption;
    proxy_hide_header x-amz-bucket-region;
    proxy_hide_header x-amzn-requestid;
    proxy_ignore_headers Set-Cookie;
    proxy_pass $s3_backend$uri;
    proxy_intercept_errors off;

    expires 0;
    proxy_cache off;

    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Range,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    add_header X-Cache-Status $upstream_cache_status;
    add_header X-Content-Type-Options nosniff;
    add_header Content-Security-Policy "default-src 'none'; form-action 'none'";
  }
}

Max upload part

If uploads to object storage fail, you can try lowering the part size. object_storage.max_upload_part is set to 2GB by default; you can experiment with this value to optimize uploading. Multiple uploads can happen in parallel, but for one video the parts are uploaded sequentially.
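For example, to reduce the part size to 100MB (an arbitrary value for illustration):

```yaml
object_storage:
  # Lower this value if multipart uploads to your provider time out or fail
  max_upload_part: 100MB
```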

CORS settings

Because the browser will load the objects from object storage from a different URL than the local PeerTube platform, cross-origin resource sharing rules apply.

You can solve this either by serving the objects through some kind of caching CDN that you give access to the bucket, and setting object_storage.{streaming_playlists,videos}.base_url to that caching server, or by allowing access from all origins.

Allowing access from all origins on AWS S3 can be done in the permissions tab of your bucket settings. For example, you can set the policy to:

json
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
bash
aws s3api put-bucket-cors --bucket your-bucket-name --cors-configuration '{
    "CORSRules": [
        {
            "AllowedHeaders": [
                "*"
            ],
            "AllowedMethods": [
                "GET"
            ],
            "AllowedOrigins": [
                "*"
            ]
        }
    ]
}'
bash
b2 bucket update your-bucket-name allPublic --cors-rules '[
        {
            "allowedHeaders": [
                "range",
                "user-agent"
            ],
            "allowedOperations": [
                "b2_download_file_by_id",
                "b2_download_file_by_name"
            ],
            "allowedOrigins": [
                "*"
            ],
            "corsRuleName": "downloadFromAnyOrigin",
            "exposeHeaders": null,
            "maxAgeSeconds": 3600
        },
        {
            "allowedHeaders": [
                "range",
                "user-agent"
            ],
            "allowedOperations": [
                "s3_head",
                "s3_get"
            ],
            "allowedOrigins": [
                "*"
            ],
            "corsRuleName": "s3DownloadFromAnyOrigin",
            "exposeHeaders": null,
            "maxAgeSeconds": 3600
        }
    ]'

Migrate videos from filesystem to object storage

Use the create-move-video-storage-job script.
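For example, on a standard installation the script can be invoked like this (the path and environment variables assume the usual PeerTube layout; adjust them to your setup):

```bash
cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production \
  npm run create-move-video-storage-job -- --to-object-storage -v <videoUUID>
```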

Migrate videos from object storage to filesystem

Use the create-move-video-storage-job script.
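For the reverse direction, the same script accepts a target-filesystem flag; this sketch moves all videos back to local storage (again assuming the usual PeerTube layout):

```bash
cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production \
  npm run create-move-video-storage-job -- --to-file-system --all-videos
```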