Remote storage (S3)
If your object storage provider supports the AWS S3 API, you can configure your instance to move files there after transcoding. The bucket you configure should be public and have CORS rules that allow traffic from anywhere.
Live videos are still stored on disk. If replay is enabled, they are moved to object storage after transcoding.
DANGER
Your S3 provider must support virtual hosting of buckets: PeerTube does not support path-style requests.
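To illustrate the difference, assuming a hypothetical bucket `my-bucket` in the `us-east-1` AWS region, the two request styles look like this (PeerTube only supports the first):

```
# Virtual-hosted style (supported): the bucket name is part of the hostname
https://my-bucket.s3.us-east-1.amazonaws.com/web-videos/video.mp4

# Path style (NOT supported): the bucket name is part of the path
https://s3.us-east-1.amazonaws.com/my-bucket/web-videos/video.mp4
```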
PeerTube Settings
Endpoint and buckets
Here are two examples of how you can configure your instance:
```yaml
# Store all videos in one bucket on Backblaze b2
object_storage:
  enabled: true

  # Example Backblaze b2 endpoint
  endpoint: 's3.us-west-001.backblazeb2.com'

  web_videos:
    bucket_name: 'peertube-videos'
    prefix: 'web-videos/'

  # Use the same bucket as for web videos but with a different prefix
  streaming_playlists:
    bucket_name: 'peertube-videos'
    prefix: 'streaming-playlists/'

  user_exports:
    bucket_name: 'peertube-videos'
    prefix: 'user-exports/'

  original_video_files:
    bucket_name: 'peertube-videos'
    prefix: 'original-video-files/'
```
```yaml
# Use two different buckets for Web videos and HLS videos on AWS S3
object_storage:
  enabled: true

  # Example AWS endpoint in the us-east-1 region
  endpoint: 's3.us-east-1.amazonaws.com'

  # Needs to be set to the bucket region when using AWS S3
  region: 'us-east-1'

  web_videos:
    bucket_name: 'web-videos'
    prefix: ''

  streaming_playlists:
    bucket_name: 'hls-videos'
    prefix: ''

  user_exports:
    bucket_name: 'user-exports'
    prefix: ''

  original_video_files:
    bucket_name: 'original-video-files'
    prefix: ''
```
Credentials
You will also need to supply credentials to the S3 client. PeerTube uses the official AWS S3 library, which supports multiple credential loading methods.
If you set the credentials in the configuration file, they override credentials from the environment or from a `~/.aws/credentials` file. When loading from the environment, the usual `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` variables are used.
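For example, when starting PeerTube manually you could export the environment variables before launching the server. The key values below are placeholders; use the key pair issued by your S3 provider:

```shell
# Placeholder values: replace with the access key pair from your provider
export AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
export AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRcYEXAMPLEKEY'
```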
Cache server
To reduce object storage costs, we strongly recommend setting up a cache server (CDN/external proxy).
Set your mirror/CDN URL in `object_storage.{streaming_playlists,web_videos}.base_url` and PeerTube will replace the object storage host with this base URL on the fly (so you can easily change the `base_url` configuration later).
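A minimal sketch of this setting, assuming a hypothetical cache host `cache.example.com` sitting in front of the HLS bucket:

```yaml
object_storage:
  streaming_playlists:
    bucket_name: 'hls-videos'
    # PeerTube rewrites object storage URLs to this host on the fly
    base_url: 'https://cache.example.com'
```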
Example of an nginx configuration for a cache server in front of a S3 bucket:
```nginx
# Contribution from https://framacolibri.org/t/peertube-remote-storage-s3

proxy_cache_path /var/cache/s3 levels=1:2 keys_zone=CACHE-S3:100m inactive=48h max_size=10G;
proxy_cache_path /var/cache/s3-ts levels=1:2 keys_zone=CACHE-S3-TS:10m inactive=60s max_size=1G;

server {
  listen 80;
  server_name peertube.tld;
  root /var/www/html;

  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl;
  http2 on;
  server_name peertube.tld;

  access_log /var/log/nginx/medias.access.log; # reduce I/O with buffer=10m flush=5m
  error_log /var/log/nginx/medias.error.log;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;
  ssl_certificate /etc/letsencrypt/live/peertube.tld/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/peertube.tld/privkey.pem;

  root /var/www/html;
  keepalive_timeout 30;

  location = / {
    index index.html;
  }

  # Cache S3 files for a long time because their filenames change every time PeerTube updates their content
  location / {
    try_files $uri @s3;
  }

  # .ts files are live fragments
  # They can be cached, but not for too long to not break future streams in the same permanent live
  location ~ \.ts$ {
    try_files $uri @s3-ts;
  }

  # M3U8 and JSON files of live videos change regularly but keep the same filename
  # Don't cache them to not break PeerTube lives
  location ~ \.(json|m3u8)$ {
    try_files $uri @s3_nocache;
  }

  set $s3_backend 'https://my-bucket.s3.bhs.perf.cloud.ovh.net';

  location @s3 {
    limit_except GET OPTIONS {
      deny all;
    }

    resolver 1.1.1.1 8.8.8.8 208.67.222.222 208.67.220.220;

    proxy_set_header Host my-bucket.s3.bhs.perf.cloud.ovh.net;
    proxy_set_header Connection '';
    proxy_set_header Authorization '';
    proxy_set_header Range $slice_range;

    proxy_hide_header Set-Cookie;
    proxy_hide_header 'Access-Control-Allow-Origin';
    proxy_hide_header 'Access-Control-Allow-Methods';
    proxy_hide_header 'Access-Control-Allow-Headers';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-meta-server-side-encryption;
    proxy_hide_header x-amz-server-side-encryption;
    proxy_hide_header x-amz-bucket-region;
    proxy_hide_header x-amzn-requestid;
    proxy_ignore_headers Set-Cookie;

    proxy_pass $s3_backend$uri;
    proxy_intercept_errors off;

    proxy_cache CACHE-S3;
    proxy_cache_valid 200 206 48h;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    slice 1m;
    proxy_cache_key $host$uri$is_args$args$slice_range;
    proxy_http_version 1.1;

    expires 1y;
    add_header Cache-Control public;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Range,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    add_header X-Cache-Status $upstream_cache_status;
    add_header X-Content-Type-Options nosniff;
    add_header Content-Security-Policy "default-src 'none'; form-action 'none'";
  }

  location @s3-ts {
    limit_except GET OPTIONS {
      deny all;
    }

    resolver 1.1.1.1 8.8.8.8 208.67.222.222 208.67.220.220;

    proxy_set_header Host my-bucket.s3.bhs.perf.cloud.ovh.net;
    proxy_set_header Connection '';
    proxy_set_header Authorization '';
    proxy_set_header Range $slice_range;

    proxy_hide_header Set-Cookie;
    proxy_hide_header 'Access-Control-Allow-Origin';
    proxy_hide_header 'Access-Control-Allow-Methods';
    proxy_hide_header 'Access-Control-Allow-Headers';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-meta-server-side-encryption;
    proxy_hide_header x-amz-server-side-encryption;
    proxy_hide_header x-amz-bucket-region;
    proxy_hide_header x-amzn-requestid;
    proxy_ignore_headers Set-Cookie;

    proxy_pass $s3_backend$uri;
    proxy_intercept_errors off;

    proxy_cache CACHE-S3-TS;
    proxy_cache_valid 200 206 2m;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    slice 1m;
    proxy_cache_key $host$uri$is_args$args$slice_range;
    proxy_http_version 1.1;

    expires 1y;
    add_header Cache-Control public;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Range,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    add_header X-Cache-Status $upstream_cache_status;
    add_header X-Content-Type-Options nosniff;
    add_header Content-Security-Policy "default-src 'none'; form-action 'none'";
  }

  location @s3_nocache {
    limit_except GET OPTIONS {
      deny all;
    }

    resolver 1.1.1.1 8.8.8.8 208.67.222.222 208.67.220.220;

    proxy_set_header Host my-bucket.s3.bhs.perf.cloud.ovh.net;
    proxy_set_header Connection '';
    proxy_set_header Authorization '';
    proxy_set_header Range $http_range;

    proxy_hide_header Set-Cookie;
    proxy_hide_header 'Access-Control-Allow-Origin';
    proxy_hide_header 'Access-Control-Allow-Methods';
    proxy_hide_header 'Access-Control-Allow-Headers';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-meta-server-side-encryption;
    proxy_hide_header x-amz-server-side-encryption;
    proxy_hide_header x-amz-bucket-region;
    proxy_hide_header x-amzn-requestid;
    proxy_ignore_headers Set-Cookie;

    proxy_pass $s3_backend$uri;
    proxy_intercept_errors off;

    expires 0;
    proxy_cache off;

    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Range,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    add_header X-Cache-Status $upstream_cache_status;
    add_header X-Content-Type-Options nosniff;
    add_header Content-Security-Policy "default-src 'none'; form-action 'none'";
  }
}
```
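Once the proxy is running, you can check caching behaviour through the X-Cache-Status header that the configuration above adds: the first request for a file should report MISS, a repeated request HIT. The file path below is a placeholder; use a real object served by your instance:

```shell
# Inspect the cache status header on a repeated request (path is hypothetical)
curl -sI https://peertube.tld/web-videos/some-video-file.mp4 | grep -i x-cache-status
curl -sI https://peertube.tld/web-videos/some-video-file.mp4 | grep -i x-cache-status
```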
Max upload part
If uploads to object storage fail, you can try lowering the part size. `object_storage.max_upload_part` is set to `2GB` by default; you can experiment with this value to optimize uploading. Multiple uploads can happen in parallel, but for a single video the parts are uploaded sequentially.
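For example, to lower the part size (the `100MB` value below is only an illustration of the syntax, not a recommendation):

```yaml
object_storage:
  # Default is 2GB; try smaller parts if multipart uploads to your provider fail
  max_upload_part: '100MB'
```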
CORS settings
Because the browser loads objects from object storage at a different URL than the local PeerTube instance, cross-origin resource sharing rules apply.
You can solve this either by loading the objects through some kind of caching CDN that you give access to and setting `object_storage.{streaming_playlists,web_videos}.base_url` to that caching server, or by allowing access from all origins.
Allowing access from all origins on AWS S3 can be done in the permissions tab of your bucket settings. For example, you can set the policy to:
```json
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
```
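If you prefer the AWS CLI over the console, the same rule can be applied with `aws s3api put-bucket-cors`. Note that the CLI expects the rules array wrapped in a `CORSRules` object; the bucket and file names below are placeholders:

```shell
# cors.json wraps the rules array shown above, e.g.:
# { "CORSRules": [ { "AllowedHeaders": ["*"], "AllowedMethods": ["GET"],
#                    "AllowedOrigins": ["*"], "ExposeHeaders": [] } ] }
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration file://cors.json
```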
On Backblaze b2, you can apply equivalent CORS rules with the b2 CLI:

```shell
b2 update-bucket --corsRules '[
  {
    "allowedHeaders": [
      "range",
      "user-agent"
    ],
    "allowedOperations": [
      "b2_download_file_by_id",
      "b2_download_file_by_name"
    ],
    "allowedOrigins": [
      "*"
    ],
    "corsRuleName": "downloadFromAnyOrigin",
    "exposeHeaders": null,
    "maxAgeSeconds": 3600
  },
  {
    "allowedHeaders": [
      "range",
      "user-agent"
    ],
    "allowedOperations": [
      "s3_head",
      "s3_get"
    ],
    "allowedOrigins": [
      "*"
    ],
    "corsRuleName": "s3DownloadFromAnyOrigin",
    "exposeHeaders": null,
    "maxAgeSeconds": 3600
  }
]' bucketname allPublic
```
Migrate videos from filesystem to object storage
Use the create-move-video-storage-job script.
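As a sketch, on a standard installation the script can be invoked through npm. The paths below assume the usual `/var/www/peertube` layout and a `peertube` system user; adjust them to your setup:

```shell
# Move every video from the local filesystem to object storage
cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production \
  npm run create-move-video-storage-job -- --to-object-storage --all-videos
```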
Migrate videos from object storage to filesystem
Use the create-move-video-storage-job script.
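The same script handles the reverse direction. As a sketch, assuming the usual `/var/www/peertube` layout (the video UUID is a placeholder):

```shell
# Move a single video from object storage back to the local filesystem
cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production \
  npm run create-move-video-storage-job -- --to-file-system -v <videoUUID>
```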
Migrate to another object storage provider
PeerTube >= 6.2
PeerTube stores object URLs in the database but also creates signed URLs using your current object storage configuration.
This is why you must use the update-object-storage-url script to update internal PeerTube URLs after migrating to another object storage provider.