h1. Video Release Process

Unless the presenter has an objection, talks are recorded and published afterward. This page details the current process.

Note that the process uses "DaVinci Resolve":https://www.blackmagicdesign.com/products/davinciresolve/ for the editing. It's not OSS, but it's what I'm used to and have already set up for other jobs, so this is what I use.
h2. Source material

The raw stream dump from the BBB instance can be downloaded by looking up the link in the page source, or simply by using the script below and feeding it the meeting ID, which is in the URL when viewing the raw stream dump through the BBB interface.

For example, for this particular BBB URL:
https://meeting5.franken.de/presentation/10e07e078472a1876b5c23655123bfd50bc4b187-1705518054974/video/webcams.webm
In the script below, set MEETING_ID to "10e07e078472a1876b5c23655123bfd50bc4b187-1705518054974".

Once downloaded, ffmpeg is used to repackage the video streams into an @mp4@ container and to transcode the audio from Opus to FLAC, bringing them into containers/formats that the Linux version of Resolve can load. (Note: newer versions of Resolve should support MKV now, but that hasn't been tried; sticking with the tried-and-working process for now.)

<pre><code class="shell">
#!/bin/bash
set -e

ODC_SLUG="$1"
MEETING_ID="$2"

curl -o "tmp_webcams.webm"   "https://meeting5.franken.de/presentation/${MEETING_ID}/video/webcams.webm"
curl -o "tmp_deskshare.webm" "https://meeting5.franken.de/presentation/${MEETING_ID}/deskshare/deskshare.webm"

ffmpeg -i "tmp_webcams.webm"   -vn -c:a flac -sample_fmt s16 "${ODC_SLUG}-audio.flac"
ffmpeg -i "tmp_webcams.webm"   -an -c:v copy                 "${ODC_SLUG}-webcam-vp9.mp4"
ffmpeg -i "tmp_deskshare.webm" -an -c:v copy                 "${ODC_SLUG}-screen-vp9.mp4"

rm "tmp_webcams.webm" "tmp_deskshare.webm"
</code></pre>
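
As a usage illustration only (the script isn't named on this page, so @fetch_bbb.sh@ and the slug below are made-up placeholders), an invocation with the meeting ID from the example above would look like:

<pre><code class="shell">
# Placeholder script name and slug; the meeting ID is the one from the example URL above.
./fetch_bbb.sh some-talk-slug 10e07e078472a1876b5c23655123bfd50bc4b187-1705518054974
# Expected outputs: some-talk-slug-audio.flac, some-talk-slug-webcam-vp9.mp4, some-talk-slug-screen-vp9.mp4
</code></pre>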
h2. Edit

Just do a normal edit, trimming the start / stop points and cutting anything that shouldn't be in there (technical issues and the like).
Some notes / guidelines:

* Project Settings
** 1280x720 24fps
** DaVinci YRGB (not color managed)
** Fairlight -14 LUFS target loudness (see the loudness-check sketch after this list)
* Intro:
** Use a Fusion composition (two examples attached), tweak as needed (font size, ...)
** About 10 seconds long
** Start speaker audio intro ~ 1-2 seconds in
** Transition to slide: Dip-To-Color-Dissolve, 1s, Ease In-Out
* Q&A:
** If there is no Q&A slide in the presentation, use a Fusion composition for the video
** Cut out any long blanks
* Outro:
** Check there is a "thanks for watching" or similar
** Fade to black ~ 3 seconds
* Floating Head:
** Magic mask with expand and feather edge. Add alpha out on the color page from the magic mask node.
** Add a power window to control the crop
* Audio:
** FX: Dialogue processor filter with defaults for "Male VO"
** EQ: Lo cut at 100 Hz, Hi cut at 10 kHz
** Adjust audio level in the Q/A section for the individual speaker
* Master Render
** Video: GrassValley HQX 720p
** Audio: Linear PCM 16 bits
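
A possible way to double-check the -14 LUFS integrated loudness target outside of Resolve is ffmpeg's @loudnorm@ filter in measurement mode; this is only a verification aid, not part of the Resolve workflow itself, and the filename below is a placeholder:

<pre><code class="shell">
# Print integrated loudness (LUFS), loudness range and true peak of a rendered file.
# "osmodevcall-example_master.mov" is a placeholder filename.
ffmpeg -i osmodevcall-example_master.mov -af loudnorm=print_format=summary -f null -
</code></pre>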
h2. Local encode

Once the master render is done, the file is transcoded with ffmpeg into a few formats more suitable for online viewing, using the script below:
These are relatively low bitrates, but perfectly fine for "slides"-type content with voice-over in 720p.
(Note that the script is designed to run on a machine with an NVIDIA card and driver, to get hardware acceleration of the H.264 and H.265 encoding. A software-only fallback is sketched after the script.)

<pre><code class="shell">
#!/bin/bash
set -e

ODC_SLUG="$1"
ODC_RENDER_PATH="$2"
ODC_RENDER_MASTER="${ODC_RENDER_PATH}/osmodevcall-${ODC_SLUG}_master.mov"

ffmpeg \
	-hwaccel cuda -hwaccel_output_format cuda \
	-i "${ODC_RENDER_MASTER}" \
	-c:v h264_nvenc -b:v 1M -pix_fmt yuv420p \
	-c:a aac -b:a 96k \
	"${ODC_RENDER_PATH}/osmodevcall-${ODC_SLUG}_h264_420.mp4"

ffmpeg \
	-hwaccel cuda -hwaccel_output_format cuda \
	-i "${ODC_RENDER_MASTER}" \
	-c:v hevc_nvenc -b:v 512k -pix_fmt yuv420p \
	-c:a aac -b:a 96k \
	"${ODC_RENDER_PATH}/osmodevcall-${ODC_SLUG}_h265_420.mp4"

ffmpeg \
	-i "${ODC_RENDER_MASTER}" \
	-c:v libvpx-vp9 -b:v 400k \
	-c:a libopus -b:a 80k \
	"${ODC_RENDER_PATH}/osmodevcall-${ODC_SLUG}_vp9.webm"
</code></pre>
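
For machines without an NVIDIA GPU, a software-only fallback could look like the sketch below, using @libx264@ / @libx265@ with the same bitrates. This is an untested variant given for reference, not part of the current process, and it will be considerably slower:

<pre><code class="shell">
# Software-only variants of the two NVENC invocations above (libx264 / libx265).
ffmpeg -i "${ODC_RENDER_MASTER}" \
	-c:v libx264 -b:v 1M -pix_fmt yuv420p \
	-c:a aac -b:a 96k \
	"${ODC_RENDER_PATH}/osmodevcall-${ODC_SLUG}_h264_420.mp4"

ffmpeg -i "${ODC_RENDER_MASTER}" \
	-c:v libx265 -b:v 512k -pix_fmt yuv420p \
	-c:a aac -b:a 96k \
	"${ODC_RENDER_PATH}/osmodevcall-${ODC_SLUG}_h265_420.mp4"
</code></pre>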
h2. VOC upload

The final step is to feed the master render to the VOC rendering pipeline so the talk can be published on https://media.ccc.de .

* Generate schedule file
** Using a fork of the VOC tool: https://github.com/smunaut/voctosched in the @osmodevcall@ branch
** Edit the @osmodevcall.tsv@ to add the new talk
** Run @./schedule.py -c osmodevcall.ini@ to regenerate @schedule-extended.xml@
** Upload that file to some public-facing HTTP server (see the sketch after this list)
* Upload master
** The generated master file needs to be uploaded to some public-facing HTTP server for the VOC to fetch
** Currently the tool above assumes it will be at https://downloads.osmocom.org/videos/osmodevcall/ or https://downloads.osmocom.org/videos/retrodevcall/
* In the VOC tracker
** Use the "Import" button to import the updated XML from its source. Only import new tickets in the confirmation screen.
** Once imported, edit the top-level ticket for the talk and change its state from "staging" to "staged".
** The "Recording" subticket will then trigger and go to the "Recording" state while it's downloading the master file.
** When done, it'll go to the "Recorded" state, at which point you need to click the "cut" button. No actual cutting is needed; just select the language and confirm with the "I finished cutting" button.
** And it's done. Everything else should proceed automatically until all the versions are in the "Released" state.
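
For illustration, the schedule regeneration and upload steps from the list above might look roughly like this on the command line. The destination host and paths are placeholders, not the actual infrastructure:

<pre><code class="shell">
# In the voctosched fork (osmodevcall branch), after adding the talk to osmodevcall.tsv:
./schedule.py -c osmodevcall.ini

# Upload the regenerated schedule and the master render to a public-facing HTTP server.
# "user@example.org" and the target paths are placeholders.
scp schedule-extended.xml user@example.org:/var/www/html/odc/
scp "osmodevcall-${ODC_SLUG}_master.mov" user@example.org:/var/www/html/videos/osmodevcall/
</code></pre>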