
Vulkan Video Provisional Release #1497

Open
4 tasks done
aabdelkh opened this issue Apr 12, 2021 · 38 comments

@aabdelkh aabdelkh commented Apr 12, 2021

Today the Khronos Vulkan Video Task Sub Group (TSG) is announcing the public release of the provisional Vulkan Video extensions, which consist of 3 core KHR extensions and 3 codec-operation-specific EXT extensions. See this blog post for an introduction to Vulkan Video and an overview of the extensions, along with this Deep Dive slide deck, NVIDIA Beta Drivers and Sample Decode App.

The task list for the release is:

We are actively working on IHV driver implementations, Vulkan CTS, samples, validation layer and other resources as we progress towards the final release of these extensions. Khronos welcomes your feedback on the Vulkan Video provisional extensions to enhance their functionality to meet your needs!

Updates for Encode H.265 extension:

@ReinUsesLisp ReinUsesLisp commented Apr 12, 2021

Hello, are there any plans to support VP9 decoding in the future? I assume the popular codecs not included here either had problems that delayed their support or were left out for undisclosed reasons.

@aabdelkh aabdelkh commented Apr 12, 2021

Hello, are there any plans to support VP9 decoding in the future? I assume the popular codecs not included here either had problems that delayed their support or were left out for undisclosed reasons.

Thanks for expressing interest in Vulkan Video!

We do plan to support VP9 decode and AV1 decode/encode in a future release. The first release will focus on the core extensions and h264/h265, and all the work to update CTS, samples, etc. for video. With this foundation in place, TSG will shift focus to additional codecs and features.

@nanokatze nanokatze commented Apr 13, 2021

VkVideoCodecOperationFlagsKHR, by virtue of being 32 bits, seems to only allow 16 encoders and 16 decoders. If we imagine that all bits were exhausted, is it expected that future codecs will end up in the pNext chain?

@martty martty commented Apr 13, 2021

What is the reasoning for including 2 in VkVideoQueueFamilyProperties2KHR?

@aabdelkh aabdelkh commented Apr 13, 2021

VkVideoCodecOperationFlagsKHR, by virtue of being 32 bits, seems to only allow 16 encoders and 16 decoders. If we imagine that all bits were exhausted, is it expected that future codecs will end up in the pNext chain?

Right, TSG discussed this and felt we probably don't need more for quite some time. But given sync2 already moved to 64 bits, it may be worth changing VkVideoCodecOperationFlagsKHR to VkFlags64 anyway. Otherwise it would be ugly whether we chain or do something similar to sync2. Thanks for pointing this out! Will take this back to the TSG.

Btw, the number of encoders vs. decoders doesn't need to match; it depends on HW adoption/popularity. E.g. we plan to add VP9 decode but not encode since most IHVs have shifted focus to AV1.

@aabdelkh aabdelkh commented Apr 13, 2021

What is the reasoning for including 2 in VkVideoQueueFamilyProperties2KHR?

Only because it's chained to VkQueueFamilyProperties2KHR. Apparently this is the proper convention - someone pointed it out at some point.
But I understand it looks a bit weird since I'd be looking for VkVideoQueueFamilyPropertiesKHR as soon as I see the 2 version :)
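
For anyone wondering how that chaining looks in practice, here is a minimal sketch (not taken from the spec text; structure and enum names follow the provisional vulkan_beta.h and may change before the final release) of querying per-queue-family video codec operations:

```cpp
// Sketch: discover which queue families expose video codec operations.
// VK_ENABLE_BETA_EXTENSIONS must be defined before including vulkan.h so the
// provisional video types are visible.
#define VK_ENABLE_BETA_EXTENSIONS
#include <vulkan/vulkan.h>
#include <vector>

void findVideoQueueFamilies(VkPhysicalDevice physicalDevice)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties2(physicalDevice, &count, nullptr);

    std::vector<VkQueueFamilyProperties2> props(count);
    std::vector<VkVideoQueueFamilyProperties2KHR> videoProps(count);
    for (uint32_t i = 0; i < count; ++i) {
        videoProps[i] = {};
        videoProps[i].sType = VK_STRUCTURE_TYPE_VIDEO_QUEUE_FAMILY_PROPERTIES_2_KHR;
        props[i] = {};
        props[i].sType = VK_STRUCTURE_TYPE_QUEUE_FAMILY_PROPERTIES_2;
        props[i].pNext = &videoProps[i];   // the "2" struct is chained here
    }
    vkGetPhysicalDeviceQueueFamilyProperties2(physicalDevice, &count, props.data());

    for (uint32_t i = 0; i < count; ++i) {
        if (videoProps[i].videoCodecOperations &
            VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_EXT) {
            // Queue family i can record H.264 decode operations.
        }
    }
}
```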

@190n 190n commented Apr 14, 2021

E.g. we plan to add VP9 decode but not encode since most IHVs have shifted focus to AV1.

Is there a chance of VP9 encode support being added? I'd like to see it both in the interest of completion and since there is hardware out there already (recent Intel iGPUs, and I believe some ARM SoCs) that can encode VP9.

@aabdelkh aabdelkh commented Apr 14, 2021

E.g. we plan to add VP9 decode but not encode since most IHVs have shifted focus to AV1.

Is there a chance of VP9 encode support being added? I'd like to see it both in the interest of completion and since there is hardware out there already (recent Intel iGPUs, and I believe some ARM SoCs) that can encode VP9.

There's a requirement for multiple IHVs to provide reference implementations for KHR/EXT extensions. So far IHV participation in the TSG doesn't meet this requirement for VP9 encode. That being said, any IHV may add their own vendor extension for VP9 encode, which should gracefully work with the core KHR video extensions. This way it would actually reflect the fact that only this IHV is supporting VP9 encode in Vulkan. Later the extension may be adopted as an EXT/KHR extension if additional vendors support it. I would also like that to happen!

Please also feel free to request additional features (beyond missing codec operations) as part of making Vulkan Video competitive. Video TSG has plans for some after the final release, but it's helpful to hear about your specific interests/priorities as well!

@ichlubna ichlubna commented Apr 15, 2021

From the article:

Note that some implementations may support using the same image resources for output images and DPB images while others may require or prefer decoupling output images from the decode operation from DPB images...

So there is no way to explicitly achieve zero-copy usage? So far I have noticed that usually at least one internal copy was necessary after decoding the picture when using the results for rendering with another API such as VDPAU + OpenGL. Direct usage of the result might be possible with NVDEC/CUVID and then consuming the result in kernels, though.
I've been trying to stream multiple video frames on the GPU and use them to render the final image. Being able to directly use the decoded images would speed up the process and lower the memory requirements. I understand that the DPB might also be limited in size, so it might not be able to contain all the images. How many frames can it hold?

@aabdelkh aabdelkh commented Apr 15, 2021

From the article:

Note that some implementations may support using the same image resources for output images and DPB images while others may require or prefer decoupling output images from the decode operation from DPB images...

So there is no way to explicitly achieve zero-copy usage? So far I have noticed that usually at least one internal copy was necessary after decoding the picture when using the results for rendering with another API such as VDPAU + OpenGL. Direct usage of the result might be possible with NVDEC/CUVID and then consuming the result in kernels, though.
I've been trying to stream multiple video frames on the GPU and use them to render the final image. Being able to directly use the decoded images would speed up the process and lower the memory requirements. I understand that the DPB might also be limited in size, so it might not be able to contain all the images. How many frames can it hold?

Yes, some implementations may allow using the DPB image also as an output image for zero-copy. Vulkan Video provides a way to query this support. Use vkGetPhysicalDeviceVideoFormatPropertiesKHR, specifying VkPhysicalDeviceVideoFormatInfoKHR.imageUsage with both the decode output VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR and decode DPB VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR bits set. If this is supported for the specified VkVideoProfilesKHR, the implementation will report a non-zero pVideoFormatPropertyCount; you can then call it again to retrieve the supported VkFormat(s) for such usage.

The maximum number of pictures the DPB can store is reported via VkVideoCapabilitiesKHR.maxReferencePicturesSlotsCount.
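
To illustrate the two-call pattern, here is a rough sketch (not from the spec; member and enum names follow the provisional vulkan_beta.h and may change, the video profile is assumed to be populated elsewhere, and the beta header plus <vector> are assumed to be included):

```cpp
// Sketch: check whether the implementation supports image formats usable as
// both the decode output (DST) and the DPB picture, i.e. the zero-copy case.
bool supportsCoincidentDstAndDpb(VkPhysicalDevice physicalDevice,
                                 const VkVideoProfileKHR* videoProfile)
{
    VkVideoProfilesKHR profiles{};
    profiles.sType        = VK_STRUCTURE_TYPE_VIDEO_PROFILES_KHR;
    profiles.profileCount = 1;
    profiles.pProfiles    = videoProfile;   // assumed already-populated profile

    VkPhysicalDeviceVideoFormatInfoKHR formatInfo{};
    formatInfo.sType          = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_FORMAT_INFO_KHR;
    formatInfo.imageUsage     = VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR |
                                VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR;
    formatInfo.pVideoProfiles = &profiles;  // member name per the provisional header

    uint32_t formatCount = 0;
    VkResult res = vkGetPhysicalDeviceVideoFormatPropertiesKHR(
        physicalDevice, &formatInfo, &formatCount, nullptr);
    if (res != VK_SUCCESS || formatCount == 0)
        return false;   // only decoupled DST/DPB images are supported

    // Second call retrieves the VkFormat list usable with both usages.
    std::vector<VkVideoFormatPropertiesKHR> formats(formatCount);
    for (auto& f : formats)
        f = { VK_STRUCTURE_TYPE_VIDEO_FORMAT_PROPERTIES_KHR };
    vkGetPhysicalDeviceVideoFormatPropertiesKHR(
        physicalDevice, &formatInfo, &formatCount, formats.data());
    return true;
}
```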

@ichlubna ichlubna commented Apr 15, 2021

Thank you @aabdelkh! That's great!
Another, maybe more specific, thing would be frame skipping. What I mean is that when using the DPB frames directly, I would also like to use multiple frames like that, but not necessarily consecutive ones. I see that there are those DPB slot states which might be useful, but I am not sure if it's possible to achieve this. For example, sending 10 packets to the decoder but wanting to keep, let's say, only frames 4, 8 and 10 in the DPB. Even though they have to be decoded to update the decoder context, it would be nice to be able to mark them as disposable so that the decoded picture would not end up in the DPB. Or should we free such a picture manually?
On the other hand, I guess that those frames in the DPB might be necessary to decode new ones, right? Is copying the only way to put some of the DPB frames aside to avoid losing them when decoding many frames but needing only a subset of them?

@krOoze krOoze commented Apr 15, 2021

VK_VIDEO_CODEC_OPERATION_INVALID_BIT_KHR is a bit of a weird name. Should 0 be named at all? And if it is, should it be called a bit?

PS: I see a precedent of VK_PIPELINE_STAGE_NONE_KHR, but it does not have BIT in the name. Maybe should follow up on it and name it VK_VIDEO_CODEC_OPERATION_NONE_KHR.

PPS: I think VkVideoQueueFamilyProperties2KHR should be marked returnedonly. There are bad implicit VUs generated there.

PPPS: vkGetPhysicalDeviceVideoCapabilitiesKHR returns VK_ERROR_EXTENSION_NOT_PRESENT, VK_ERROR_INITIALIZATION_FAILED, VK_ERROR_FEATURE_NOT_PRESENT, and VK_ERROR_FORMAT_NOT_SUPPORTED. Generally it is somewhat unconventional to have a VkResult here at all, and when there is one, the codes are usually just out-of-memory or VK_INCOMPLETE errors. VK_ERROR_FORMAT_NOT_SUPPORTED is reported if the profile is not supported. That is a bit of a hijack of the code, as it has nothing to do with formats. Also given that, it is not clear when the other return codes are supposed to be given.

PPPPS: Sometimes vendors suffer from featureitis. So it might be nice to have a query identifying whether there is some actual HW component to this, or whether the implementation is the same as could be achieved with other, more explicit features of Vulkan (e.g. compute).

@aabdelkh aabdelkh commented Apr 15, 2021

Thank you @aabdelkh! That's great!
Another, maybe more specific, thing would be frame skipping. What I mean is that when using the DPB frames directly, I would also like to use multiple frames like that, but not necessarily consecutive ones. I see that there are those DPB slot states which might be useful, but I am not sure if it's possible to achieve this. For example, sending 10 packets to the decoder but wanting to keep, let's say, only frames 4, 8 and 10 in the DPB. Even though they have to be decoded to update the decoder context, it would be nice to be able to mark them as disposable so that the decoded picture would not end up in the DPB. Or should we free such a picture manually?
On the other hand, I guess that those frames in the DPB might be necessary to decode new ones, right? Is copying the only way to put some of the DPB frames aside to avoid losing them when decoding many frames but needing only a subset of them?

No problem! The app manages the lifecycle for various resources, based on requirements for decoding the stream and how the images are used in the full pipeline. For decoding, you'll need to be careful not to remove images that may be needed for reference by future decode operations. Please refer to the spec section 39.2.6 Video Picture Subresources for some details and mention of how sparse memory may be helpful. I expect when the Vulkan Guide is updated for Video it would be helpful to describe such advanced use cases and also provide Vulkan samples for them.

@aabdelkh aabdelkh commented Apr 15, 2021

VK_VIDEO_CODEC_OPERATION_INVALID_BIT_KHR is a bit of a weird name. Should 0 be named at all? And if it is, should it be called a bit?

PS: I see a precedent of VK_PIPELINE_STAGE_NONE_KHR, but it does not have BIT in the name. Maybe should follow up on it and name it VK_VIDEO_CODEC_OPERATION_NONE_KHR.

Thanks for pointing this out! There's been some back-and-forth discussion on this. Reserving zero makes all valid choices non-zero and avoids unnecessary debugging when clearing and forgetting to set a valid value. Will check if NONE without the BIT is the convention for naming, which we should certainly follow.

PPS: I think VkVideoQueueFamilyProperties2KHR should be marked returnedonly. There are bad implicit VUs generated there.

Thanks for the catch! There's a lot missing in VUs. Expect updates in the coming weeks/months as we finalize the spec and update validation layer.

PPPS: vkGetPhysicalDeviceVideoCapabilitiesKHR returns VK_ERROR_EXTENSION_NOT_PRESENT, VK_ERROR_INITIALIZATION_FAILED, VK_ERROR_FEATURE_NOT_PRESENT, and VK_ERROR_FORMAT_NOT_SUPPORTED. Generally it is somewhat unconventional to have a VkResult here at all, and when there is one, the codes are usually just out-of-memory or VK_INCOMPLETE errors. VK_ERROR_FORMAT_NOT_SUPPORTED is reported if the profile is not supported. That is a bit of a hijack of the code, as it has nothing to do with formats. Also given that, it is not clear when the other return codes are supposed to be given.

For query APIs VkResult is useful, and we're trying to make more use of it to be more specific about what's going on. We should definitely document this properly, and I agree it's better to add a VK_ERROR_VIDEO_PROFILE_NOT_SUPPORTED or something rather than hijack an existing one. TSG discussed this at some point but never got around to it. Thanks for the reminder!

PPPPS: Sometimes vendors suffer from featureitis. So it might be nice to have a query identifying whether there is some actual HW component to this, or whether the implementation is the same as could be achieved with other, more explicit features of Vulkan (e.g. compute).

Hmmm, apart from verifying with some GPU profiling tool that there's actually a dedicated engine implementing video decode/encode, or the driver being open source, I'm not sure reporting this through a query would be sufficient. Interesting question; I'll take it to the TSG/main WG.

@lionirdeadman lionirdeadman commented May 1, 2021

I'd like to preface this with: I don't do Vulkan or video decoding, nor do I have any relevant experience at all. I'm just a simple Linux user.

I have a relatively bad idea and this might not get much support (if any), but would it be possible to have a Vulkan Video implementation without Vulkan 1.0 compliance, to support old HW and still get video acceleration? The idea is to make it possible to support all hardware, so as to make it easier to handle video acceleration on Linux, where there is currently no standard way to do so with VAAPI vs. VDPAU.

Otherwise, this will still be great for current and future hardware.

@aabdelkh aabdelkh commented May 5, 2021

I'd like to preface this with: I don't do Vulkan or video decoding, nor do I have any relevant experience at all. I'm just a simple Linux user.

I have a relatively bad idea and this might not get much support (if any), but would it be possible to have a Vulkan Video implementation without Vulkan 1.0 compliance, to support old HW and still get video acceleration? The idea is to make it possible to support all hardware, so as to make it easier to handle video acceleration on Linux, where there is currently no standard way to do so with VAAPI vs. VDPAU.

Otherwise, this will still be great for current and future hardware.

Thanks for your interest in Vulkan Video! The way Video functionality is added to Vulkan allows an implementation to only support whatever video codec-operation(s) it actually supports, without supporting other Vulkan functionality for graphics/compute/etc. This means if an IHV wants to enable legacy HW for video only they should be able to do this and be "compliant" with minimal effort. The decision is of course up to each IHV, but feel free to reach out to your preferred IHV(s) if you'd like to call out specific HW/codecs you'd like enabled.

@ibrandiay ibrandiay commented Jun 30, 2021

First of all, thank you for your good work. Is Vulkan planning to provide Raspberry Pi 4 VideoCore VI GPU support?

@krOoze krOoze commented Jun 30, 2021

@ibrandiay Vulkan does not plan anything. Either vendors adopt it or not. In this case you'd best ask Broadcom and/or the Raspberry Pi Foundation.

@eezstreet eezstreet commented Jul 26, 2021

Is there a schedule for the final release?

@aabdelkh aabdelkh commented Jul 28, 2021

Is there a schedule for the final release?

Unfortunately we cannot comment on a fixed schedule for the final release. We're updating the spec for H.265 encode and rate control, addressing spec inconsistencies and missing or incorrect language/VUs, developing drivers & internal test applications, developing CTS for the extensions, etc., all of which will take at least several months to stabilize enough for a final release that can be supported long term. We appreciate your patience!

@HugoOsornio HugoOsornio commented Jul 29, 2021

Hello!

Is there a sample application for Video encoding using the Video extensions?
From the deep dive, I see that's possible:
https://www.khronos.org/assets/uploads/apis/Vulkan-Video-Deep-Dive-Apr21.pdf

But it would be cool to have some samples to use as a reference.

@aabdelkh aabdelkh commented Jul 30, 2021

Is there a sample application for Video encoding using the Video extensions?

Currently, no :( Will definitely share with everyone when one is available!

@brigazvi brigazvi commented Aug 21, 2021

Is Vulkan Video going to make it possible (in the future) to encode video using the GPU even on GPUs that were not initially made with those capabilities?

@NikolaTesla13 NikolaTesla13 commented Sep 7, 2021

Will there be a stable release version of the Vulkan video extension in the coming months?

@aabdelkh aabdelkh commented Sep 9, 2021

Is Vulkan Video going to make it possible (in the future) to encode video using the GPU even on GPUs that were not initially made with those capabilities?

That should be possible, but I'm not sure how efficient it would be, in particular for the entropy coding stage. GPUs typically have dedicated IP for video acceleration to achieve reasonable efficiency. If it's about ensuring the Vulkan Video extensions work regardless of GPU, we may as well hook up a SW-based encoder to implement the extensions :)

@aabdelkh aabdelkh commented Sep 9, 2021

Will there be a stable release version of the Vulkan video extension in the coming months?

Yes, we're definitely working on this. Please also see related response here.

@brianpaul brianpaul commented Nov 4, 2021

Is there a sample application for Video encoding using the Video extensions?

Currently, no :( Will definitely share with everyone when one is available!

Any update/ETA on this? I'm trying to prototype a video encoder and I think it's practically impossible without having an example to follow.

@aabdelkh aabdelkh commented Nov 5, 2021

Is there a sample application for Video encoding using the Video extensions?

Currently, no :( Will definitely share with everyone when one is available!

Any update/ETA on this? I'm trying to prototype a video encoder and I think it's practically impossible without having an example to follow.

There's an ongoing effort on a sample app for encode as well as CTS. We don't expect availability before Jan 2022. Sorry for the delay - there's quite a bit of work left on the spec itself and the std headers, which is still the primary focus.

@brianpaul brianpaul commented Nov 10, 2021

A few minor things I've noticed in vulkan_beta.h:

  1. VkVideoPictureResourceKHR has:
    VkOffset2D codedOffset;
    VkExtent2D codedExtent;
    Maybe those should be combined into a VkRect2D? Similarly in VkVideoDecodeInfoKHR.

  2. VkVideoDecodeInfoKHR has a codedOffset and codedExtent but VkVideoEncodeInfoKHR only has a codedExtent and no offset. Should it have an offset?

  3. VkVideoEncodeH264CapabilitiesEXT has a few fields with "Num" in the name. But I think the Vulkan convention is to use "Count". So maybe maxNumL0ReferenceForP should be maxL0ReferenceForPCount.

@brianpaul brianpaul commented Nov 10, 2021

How would one create a command buffer with a sequence of vkCmdEncodeVideoKHR() commands such that the resulting bitstream data is concatenated into a single destination buffer? I don't see an efficient way to do that.

The problem is providing the right dstBitstreamBufferOffset value to each encode command. We don't know that offset until the previous encode command is done. And AFAICT, the only way to get that value is with a VK_QUERY_TYPE_VIDEO_ENCODE_BITSTREAM_BUFFER_RANGE_KHR query. And that can't be obtained inside the command buffer.

It seems like we'd want a special "VK_APPEND" value for dstBitstreamBufferOffset to indicate that the output should follow the previous command.

Does this make sense, or is it the intention that separate destination buffers should be used for each vkCmdEncodeVideoKHR()? Or one buffer with offsets sufficiently large to prevent output overlap?

@aabdelkh aabdelkh commented Nov 16, 2021

Thanks for your feedback and questions! Please see responses inline.

A few minor things I've noticed in vulkan_beta.h:

  1. VkVideoPictureResourceKHR has:
    VkOffset2D codedOffset;
    VkExtent2D codedExtent;
    Maybe those should be combined into a VkRect2D? Similarly in VkVideoDecodeInfoKHR.

The original intention of codedOffset is for Decode H.264 interlaced support. For other cases it would be (0,0). @zlatinski will kindly ensure the language/VU are clear about this.

  2. VkVideoDecodeInfoKHR has a codedOffset and codedExtent but VkVideoEncodeInfoKHR only has a codedExtent and no offset. Should it have an offset?

Since we don't support H.264 interlaced encoding, we didn't add it there.

We do intend to support reuse of images for dynamic resolution changes while decoding/encoding, with the requirement of always starting from (0,0). So maybe using VkRect2D still makes sense for both decode/encode? Will let TSG decide.

The idea of decoding to or encoding from non-zero 2D offset (perhaps with some restrictions) is interesting :) But covering that is probably beyond the scope of the first release.

  3. VkVideoEncodeH264CapabilitiesEXT has a few fields with "Num" in the name. But I think the Vulkan convention is to use "Count". So maybe maxNumL0ReferenceForP should be maxL0ReferenceForPCount.

Good catch! Will rename to end with Count. Note in general we're doing quite a lot of updates for encode caps. We do appreciate all your feedback!

@aabdelkh aabdelkh commented Nov 16, 2021

How would one create a command buffer with a sequence of vkCmdEncodeVideoKHR() commands such that the resulting bitstream data is concatenated into a single destination buffer? I don't see an efficient way to do that.

The problem is providing the right dstBitstreamBufferOffset value to each encode command. We don't know that offset until the previous encode command is done. And AFAICT, the only way to get that value is with a VK_QUERY_TYPE_VIDEO_ENCODE_BITSTREAM_BUFFER_RANGE_KHR query. And that can't be obtained inside the command buffer.

It seems like we'd want a special "VK_APPEND" value for dstBitstreamBufferOffset to indicate that the output should follow the previous command.

Does this make sense, or is it the intention that separate destination buffers should be used for each vkCmdEncodeVideoKHR()? Or one buffer with offsets sufficiently large to prevent output overlap?

This is an interesting topic and was the subject of discussion on several occasions within the TSG.

In general, since the app needs to estimate how much buffer space it needs for individual vkCmdEncodeVideoKHR, and to enable appropriate use of memory barriers for the individual bitstream segments generated, the default usage is what you last described (separate buffers or one with sufficiently large offsets/ranges...).

Originally the bitstream buffer was provided for encode via vkCmdBeginVideoCodingKHR, with the intention of using the same buffer by multiple vkCmdEncodeVideoKHRs between the begin/end cmds (with the implementation appending to the same buffer as you described). You can still find some remnants of this thinking under the description for VkVideoEncodeH264OutputModeFlagBitsEXT for example.

We're still revising these sections, and may formalize a way for append. It's not really important for frame-based encoding, which is the target of the initial release. We may decide to postpone this for a future update post-first release just to limit the scope of development/validation as it's already quite large! :)
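
For reference, a rough sketch of that default usage (one buffer, non-overlapping per-frame ranges). This is not from the spec: field names follow the provisional headers and may change, the per-frame size estimate is hypothetical, and codec-specific setup (pNext chain, coded extent, picture resources, reference slots) plus barriers are elided.

```cpp
// Sketch: record several encodes into one bitstream buffer using conservative,
// non-overlapping per-frame ranges.
void recordEncodes(VkCommandBuffer cmdBuffer, VkBuffer bitstreamBuffer,
                   uint32_t frameCount)
{
    const VkDeviceSize maxPerFrameBytes = 512 * 1024;  // hypothetical worst-case estimate

    for (uint32_t frame = 0; frame < frameCount; ++frame) {
        VkVideoEncodeInfoKHR encodeInfo{};
        encodeInfo.sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_INFO_KHR;
        // ... codec-specific pNext chain and picture/reference setup here ...
        encodeInfo.dstBitstreamBuffer         = bitstreamBuffer;
        encodeInfo.dstBitstreamBufferOffset   = frame * maxPerFrameBytes;  // no overlap
        encodeInfo.dstBitstreamBufferMaxRange = maxPerFrameBytes;

        vkCmdEncodeVideoKHR(cmdBuffer, &encodeInfo);
    }
    // After execution, VK_QUERY_TYPE_VIDEO_ENCODE_BITSTREAM_BUFFER_RANGE_KHR
    // queries report the offset/size actually written per encode, so the
    // segments can be compacted if a contiguous bitstream is needed.
}
```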

@Conan-Kudo Conan-Kudo commented Feb 17, 2022

In general, I'm super excited about Vulkan Video. I do have a few questions, though:

  • Why did Vulkan Video focus on H.264 and H.265 for the initial spec instead of royalty-free accessible codecs like AV1 and VP9?
  • When will the VP9 and AV1 extensions become available?
  • Why no VP9 encoding support?
  • Will this work with any GPU that supports Vulkan? That is, can any GPU that offers Vulkan support offer Vulkan Video acceleration?

@eezstreet eezstreet commented Feb 18, 2022

Why did Vulkan Video focus on H.264 and H.265 for the initial spec instead of royalty-free accessible codecs like AV1 and VP9?

I’m not affiliated with the Khronos Group, but this is probably because the hardware vendors are targeting H.264/H.265. I know that the NVENC cores for example only target those two formats.

@ichlubna ichlubna commented Feb 18, 2022

I know that the NVENC cores for example only target those two formats.

Exactly, here is, for example, the NV support matrix for the en/decoders. The new RTX 30 series also supports AV1 decoding. It all depends on the HW support. The new formats will probably be supported in Vulkan if the vendors decide to implement them in their products.

@aabdelkh aabdelkh commented Feb 24, 2022

In general, I'm super excited about Vulkan Video. I do have a few questions, though:

Very happy to hear this! :)

  • Why did Vulkan Video focus on H.264 and H.265 for the initial spec instead of royalty-free accessible codecs like AV1 and VP9?

That's due to the much more mature HW support for these codecs among the participating vendors, so much more of the video HW out there will be exposed in Vulkan once the H.264/H.265 decode/encode extensions are available.

  • When will the VP9 and AV1 extensions become available?

We expect to start working on these right after the release of H.264/H.265 extensions. Note at that point the focus should only be on the VP9/AV1 codec-specific extensions (relying on the then already published core codec-independent extensions), so it should be quicker.

  • Why no VP9 encoding support?

We need at least 3 participating vendors to support VP9 encoding to warrant the work. AV1 came out so quickly after VP9 that some vendors just skipped adding VP9 encode HW and went straight to AV1. Of course any vendor is free to expose a vendor-specific VP9 encode extension compatible with the Khronos core Vulkan Video extensions.

  • Will this work with any GPU that supports Vulkan? That is, can any GPU that offers Vulkan support offer Vulkan Video acceleration?

That's up to each GPU (or APU, etc.) vendor that supports Vulkan. Vendors can still support Vulkan without supporting the Vulkan Video extensions. Feel free to contact your favorite vendor if they have Video HW acceleration & Vulkan drivers but they don't expose Vulkan Video!

@WonskuisYu WonskuisYu commented Apr 6, 2022

Are there any plans to add a video post-processing API definition?
And is there an official test suite, like WHQL for Windows, to verify driver quality?

@aabdelkh aabdelkh commented Apr 6, 2022

Are there any plans to add video post processing API definition?

The priority will be to add other codecs, but any inline pre/post-processing that's supported by multiple IHVs will be considered.
Please identify specific processing that interests you so we may try to support it.

And is there a official test case like WHQL for windows to verify the quality of driver?

Vulkan CTS is being updated to include video extensions coverage. We may continue adding test cases post release to keep raising the quality bar for drivers.
