Description
I've been looking into replacing the NVENC APIs with Vulkan Video for a while. Recently, Valve landed a couple of extensions, such as VK_VALVE_video_encode_rgb_conversion, that make Vulkan Video a fully suitable replacement for NVENC. (We currently submit RGB frames, and to my understanding RGB input was spotty in Vulkan Video until these extensions, even when the hardware natively supported RGB input or hardware conversion to YUV.) The alternative would be a mistake-ridden, nearly unusable API like VA-API. A big pro is that, unlike VA-API, all three GPU vendors fully support Vulkan Video out of the box without any wrappers (each broken in its own very fun ways), so we should only need to implement Vulkan Video plus software fallbacks. (It also appears that open-source support for the video units of Rockchip and other embedded SoCs is being written with the express intent of adding Vulkan Video support to their Mesa drivers, so my presumption is that hardware coverage will indeed expand over time well beyond VDPAU or VA-API.)
This will require a lot of work:
- Actually spinning up a Vulkan context (and providing helpers in `letsplay_gpu` to do so). In theory this context could also be used to support Vulkan cores, but for the time being it can be treated as solely for Vulkan Video. (I'm not against supporting Vulkan in cores, mind you; it's just that doing so alongside this would probably be a lot of work, and I intend this epic to stay relatively focused.)
- Implementing external texture object allocations in OpenGL. There's an OpenGL extension for this (GL_EXT_memory_object): we can allocate memory with Vulkan and use the extension to create a texture object that a Vulkan context and an OpenGL context can freely share. We'll have to do this for every resource that needs to be accessed from the Vulkan context, which for now should thankfully only be the texture backing the OpenGL FBO we use to render cores.
- Making GlFramebuffer actually take in a texture object (this texture object can have `new()` or `from_gpu_memory()` creation methods). This is most likely the best approach, since it shouldn't require any more GPU memory than we use now, and it still keeps letsplay_gpu (or at least the GL parts) from needing to know about Vulkan beyond a GPU memory handle. It will make resizing a bit annoying, though, since we'll have to recreate the texture object and add functionality to rebind it to the framebuffer. (Or, maybe better: manage the FBO backing texture separately, have GlFramebuffer hold an immutable reference to it, and add a `rebind()` method to reattach it to the FBO on recreation; we can use a `Cell` for interior mutability if that's a big deal.)
- Ripping out the NVENC code entirely and replacing it with Vulkan Video, including rewriting the compute swap/flip kernels for Vulkan compute. In theory this also means replacing our fork/patch of ffmpeg-the-third with the upstream crate from Cargo, since we won't need any special hardware support from it anymore.
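For the memory-sharing item above, the call ordering matters more than any single call. Below is a minimal sketch of the Vulkan-to-OpenGL flow: allocate exportable device memory in Vulkan, export an fd (VK_KHR_external_memory_fd's `vkGetMemoryFdKHR`), then import it on the GL side with the GL_EXT_memory_object / GL_EXT_memory_object_fd entry points (`glCreateMemoryObjectsEXT`, `glImportMemoryFdEXT`, `glTexStorageMem2DEXT`). The driver calls are stubbed to just record their names so the sequence can run without a GPU; everything below is an illustrative assumption, not existing letsplay code.

```rust
// Sketch of the Vulkan -> OpenGL memory-sharing flow. The real entry points
// are named in the recorded strings; the functions here are stubs that only
// record the call order, so no GPU or driver is required.

/// Records the call sequence instead of talking to a driver (stub).
#[derive(Default)]
struct Trace(Vec<String>);

impl Trace {
    fn call(&mut self, name: &str) {
        self.0.push(name.to_string());
    }
}

/// Allocate exportable device memory in Vulkan and hand back an fd.
/// Stub for vkAllocateMemory (with VkExportMemoryAllocateInfo chained in)
/// followed by vkGetMemoryFdKHR.
fn vk_allocate_exportable(trace: &mut Trace) -> i32 {
    trace.call("vkAllocateMemory(VkExportMemoryAllocateInfo)");
    trace.call("vkGetMemoryFdKHR");
    -1 // placeholder fd; a real call returns a live file descriptor
}

/// Import that fd into GL and build a texture on top of it.
/// Stubs for the GL_EXT_memory_object_fd entry points.
fn gl_import_texture(trace: &mut Trace, _fd: i32) -> u32 {
    trace.call("glCreateMemoryObjectsEXT");
    trace.call("glImportMemoryFdEXT"); // fd ownership transfers to GL here
    trace.call("glCreateTextures");
    trace.call("glTexStorageMem2DEXT"); // texture now aliases the VkDeviceMemory
    0 // placeholder GL texture name
}

fn main() {
    let mut trace = Trace::default();
    let fd = vk_allocate_exportable(&mut trace);
    let _tex = gl_import_texture(&mut trace, fd);
    for call in &trace.0 {
        println!("{call}");
    }
}
```

The key design point this encodes: Vulkan owns the allocation and GL only aliases it, which is what lets the FBO backing texture be read by the Vulkan Video encoder without a copy.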
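The "manage the backing texture separately with a `rebind()` method" idea can be sketched in plain Rust to check that the borrow story works. GL handles are modeled as plain integers in `Cell`s so this runs without a GPU; `GlTexture`, `resize()`, and `rebind()` are hypothetical names for illustration, not the current letsplay_gpu API.

```rust
use std::cell::Cell;

/// Stands in for a GL texture object; `id` would be a real GL texture name.
/// The fields live in `Cell`s so recreation can happen through `&self`,
/// which is what lets GlFramebuffer hold a plain shared reference.
struct GlTexture {
    id: Cell<u32>,
    size: Cell<(u32, u32)>,
}

impl GlTexture {
    fn new(id: u32, width: u32, height: u32) -> Self {
        Self { id: Cell::new(id), size: Cell::new((width, height)) }
    }

    /// Recreate the underlying texture on resize. Stub: the caller supplies
    /// the new name; real code would delete the old texture and get a fresh
    /// name from glCreateTextures (or re-import Vulkan memory).
    fn resize(&self, new_id: u32, width: u32, height: u32) {
        self.id.set(new_id);
        self.size.set((width, height));
    }
}

/// Holds an immutable reference to the separately managed backing texture.
/// The currently attached name sits in a `Cell` so `rebind()` works on `&self`.
struct GlFramebuffer<'a> {
    texture: &'a GlTexture,
    attached_id: Cell<u32>,
}

impl<'a> GlFramebuffer<'a> {
    fn new(texture: &'a GlTexture) -> Self {
        // Real code would call glCreateFramebuffers and attach the texture.
        Self { texture, attached_id: Cell::new(texture.id.get()) }
    }

    /// Reattach after the backing texture was recreated (e.g. on resize).
    fn rebind(&self) {
        // Real code would call glNamedFramebufferTexture with the new name.
        self.attached_id.set(self.texture.id.get());
    }
}

fn main() {
    let tex = GlTexture::new(1, 640, 480);
    let fbo = GlFramebuffer::new(&tex);

    tex.resize(2, 1280, 720); // recreation yields a new GL name
    assert_eq!(fbo.attached_id.get(), 1); // attachment is now stale
    fbo.rebind();
    assert_eq!(fbo.attached_id.get(), 2); // reattached to the new texture
    println!("rebound to texture {}", fbo.attached_id.get());
}
```

This shows why the `Cell` matters: without interior mutability, recreating the texture would need `&mut GlTexture` while the framebuffer still holds a shared borrow, which the borrow checker would reject.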