
Re: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async



With Wasm multithreading we can turn any kind of Promise into a pollable event in the WebAssembly heap, along the lines of "if (didTheAsyncMapBufferCompleteYet) ...", or into something that a Web Worker running Wasm can synchronously wait on. So technically a Promise-based mechanism will not keep the feature away from Wasm content that runs its own main loop in a Worker. However, the machinery needed to implement that is quite heavyweight, so I am not sure it would be ideal.
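
For illustration, here is a minimal sketch of what that polling could look like on the C side, assuming the JS-side Promise handler flips a flag in the (shared) Wasm heap when the readback completes; the flag name and frame() function are made up for the example:

   #include <stdatomic.h>

   /* Set from the JS Promise handler, e.g. Atomics.store(HEAP32, flagAddr>>2, 1). */
   _Atomic int readbackComplete = 0;

   void frame(void)
   {
      if (atomic_load(&readbackComplete))
      {
         atomic_store(&readbackComplete, 0);
         /* the readback data is now safe to consume */
      }
      /* ... render the rest of the frame as usual ... */
   }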

(((As a sidenote, this is an example of a scenario where, in an ideal world, the problem would be solved in the OpenGL specification itself, then trickle down from there to OpenGL ES, and then WebGL could adopt it by "natively" targeting the newly exposed feature. I can appreciate how difficult and time-consuming that is, since there are so many companies in play, but overall this is one of the painfully bad issues for native codebases targeting the web: WebGL sits on the "bottom rung" as the last tier of APIs in the OpenGL->OpenGL ES->WebGL chain (not to mention the "should work on top of D3D" issue), and the higher-up APIs have largely been treated as immutable when WebGL-related problems are being tackled.

For the next-gen Web GPU API on the web, I'd love to have a magic wand to get the close attention of driver vendors and native spec writers. If, say, Vulkan has an issue where it doesn't cater well to the web due to e.g. security, then it'd be great to fix up the Vulkan spec itself to provide that security, after which WebVulkan could leverage it; rather than regarding the Vulkan spec as immutable as it shipped and having WebVulkan attempt to layer security on top afterwards on its own. That kind of development mode would ensure the best performance - I routinely see that all this after-the-fact WebGL-related validation and emulation accounts for a substantial % of CPU time in Unity and Unreal Engine 4 content; no doubt a driver with better guarantees would allow slimming that down.)))

Though that's definitely a separate conversation, and I don't want to derail this thread. As for the WEBGL_get_buffer_sub_data_async extension:

1) Are there any requirements with respect to dstBuffer.BYTES_PER_ELEMENT and the content of the buffer that is being read, or can the user choose their favorite ArrayBufferView type, with the filled content being "reinterpreted" into that destination byte buffer?

2) I am not sure I understand how the Promise<ArrayBuffer> and the input parameter ArrayBufferView dstBuffer relate to each other. Will the ArrayBuffer in the promise always be the same underlying ArrayBuffer that the passed-in dstBuffer is viewing? Or have I misunderstood something? There is no "automatic resizing to the received size" kind of machinery happening, if I understand correctly? (Not that there probably would need to be.)

3) Can the ArrayBufferView dstBuffer be a view onto a WebAssembly.Memory heap object? I presume this should not be an issue?

4) Could there exist a way to cancel a getBufferSubDataAsync() call? I imagine that if an application is running its render loop, it would perform, in C code, a sequence of events such as

   char *buf = malloc(numBytesNeededToHoldTheGetBufferSubData);
   int asyncOperationId = emscripten_webgl_getBufferSubDataAsync(target, srcByteOffset, buf, dstOffset, length, completionCallback, errorCallback);

   void completionCallback(int asyncOperationId, char *buf, size_t len)
   {
      /* The readback finished: the data in buf is now valid. */
      for(size_t i = 0; i < len; ++i)
      {
         /* access buf[i] */
      }
      free(buf);
   }

   void errorCallback(int asyncOperationId, char *buf, size_t len)
   {
      /* The readback failed; buf is no longer pinned, so release it. */
      free(buf);
   }

Now imagine that after such a call, but before completionCallback has fired, the user has issued some kind of exit command in the application, which ends up unloading a scene, or returning the application to its main menu, or something of that sort, which is supposed to tear down the application and its GL resources.

In this scenario, the memory area in buf is in something of a pinned-down state, since there is a pending memcpy from the browser coming to that memory area. So one can't free(buf) before that happens, but has to wait for one of the above callbacks to fire. Would it be possible, or would it make sense, to have some kind of GL.cancelBufferSubDataAsync(the promise object(?)) call that would enable a

    if (OopsINeedToCancelThePendingBufferSubData)
    {
      emscripten_webgl_cancelBufferSubDataAsync(asyncOperationId);
      free(buf);
    }

in case neither of the callbacks has fired yet (and if they have, make the cancel a no-op), so that applications have an easy, audited path to synchronously tear down their resources? This would be analogous to setTimeout() and clearTimeout() on the web. That way any memory passed to getBufferSubDataAsync() would not need to be considered scarily "tainted" until the promise resolves.

Otherwise apps may need to specially track all the blocks of memory that have been pinned down like this, and since these blocks can become very large (as they might), a render->deinit->reinit->render sequence could transiently play havoc with an application's allocated memory if something ends up stalling the promises from firing, and this could cause the app to OOM in a sudden spike. Being able to synchronously cancel and free() all this memory would prevent that kind of OOM scenario from ever happening.
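
To illustrate, here is a rough sketch of the bookkeeping an app would otherwise need; the pool size and helper names are made up for the example:

   #include <stdlib.h>

   #define MAX_PENDING_READBACKS 64

   /* Buffers currently pinned by an in-flight getBufferSubDataAsync() call. */
   static char *pendingReadbackBufs[MAX_PENDING_READBACKS];

   /* Record a buffer as pinned when the readback is issued. */
   static void pinReadbackBuffer(char *buf)
   {
      for(int i = 0; i < MAX_PENDING_READBACKS; ++i)
         if (!pendingReadbackBufs[i]) { pendingReadbackBufs[i] = buf; return; }
   }

   /* Called from completionCallback/errorCallback, possibly long after the
      scene that allocated the buffer has otherwise been torn down. */
   static void unpinAndFreeReadbackBuffer(char *buf)
   {
      for(int i = 0; i < MAX_PENDING_READBACKS; ++i)
         if (pendingReadbackBufs[i] == buf)
         {
            pendingReadbackBufs[i] = NULL;
            free(buf);
            return;
         }
   }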

5) Are there scenarios where the Promise would be allowed to never fire, remaining in an unresolved state? E.g. if there's an intervening GL context teardown, a deleted resource, GL context loss or similar? I presume not, but it would be good to check. If such a Promise never resolved, that could mean a memory leak in the Wasm heap.

Cheers,
   Jukka


On Fri, Nov 3, 2017 at 4:51 AM, Ken Russell <[email protected]> wrote:
Kai's written a nice design doc of the various options which is complete and has already been vetted within the working group. He and Corentin are cranking on some work for early next week, but expect to see it mid-next week. It addresses all of the concerns which have been raised.

-Ken


On Thu, Nov 2, 2017 at 3:38 AM, Florian Bösch <[email protected]> wrote:
Is there any progress on settling the form of the API for asm.js/WebAssembly users? I (and, as Kenneth mentioned, many others) could make good use of the functionality.

On Sat, Sep 30, 2017 at 7:24 PM, Florian Bösch <[email protected]> wrote:
Right, but this is a little weird. So basically it means you're allocating a memory region for use by the async readback upon every invocation. It's usually not a problem, but if you have a pipeline stall you could end up with hundreds of allocated buffers waiting for the GPU to catch up.

The way that's usually done by graphics programmers is to pre-allocate a limited number of such buffers (like 3) and when you're 3 buffers deep and the first not resolved, you don't emit more readbacks. You could emulate that behavior with the async readback extension, but you have to be aware that you have to, and the memory cost is hidden from you.
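
A minimal sketch of such a fixed pool (illustrative only; the slot count and bookkeeping here are assumptions, not part of the extension):

   #include <stddef.h>

   #define NUM_READBACK_SLOTS 3

   typedef struct { char *buf; int inFlight; } ReadbackSlot;
   static ReadbackSlot readbackSlots[NUM_READBACK_SLOTS];

   /* Returns a free pre-allocated slot, or NULL when all slots are still in
      flight, in which case the caller simply skips this frame's readback. */
   static ReadbackSlot *acquireReadbackSlot(void)
   {
      for(int i = 0; i < NUM_READBACK_SLOTS; ++i)
         if (!readbackSlots[i].inFlight) { readbackSlots[i].inFlight = 1; return &readbackSlots[i]; }
      return NULL;
   }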

On Sat, Sep 30, 2017 at 7:04 PM, Kenneth Russell <[email protected]> wrote:
On Sat, Sep 30, 2017 at 5:12 AM, Florian Bösch <[email protected]> wrote:
On Sat, Sep 30, 2017 at 1:17 AM, Kenneth Russell <[email protected]> wrote:
Different regions of shared memory between Chrome's renderer and GPU processes are used if there are multiple pipelined calls to getBufferSubDataAsync.
Effectively a readback buffer cache. How do you know when you can discard those copies you keep around?

As soon as the data is copied out into the client's ArrayBufferView, just before resolving the Promise.