> on Apple Silicon, a WebAssembly module's linear memory can be shared directly with the GPU: no copies, no serialization, no intermediate buffers
enhance
> no copies, no serialization, no intermediate buffers
Would it kill people to write their own stuff? Why are we doing this? Of all the things people immediately cede to AI, they cede their human ability to communicate and convey/share ideas. This timeline is bonkers.
I'm curious what this offers over just building the host side code to be native?
This sort of obvious pattern ("no copies, no serialization, no intermediate buffers") is an instant AI giveaway that I keep seeing in hundreds of blogs and code posted on this site.
Another tell in code is deducing the author's experience: did they become an expert in a different language since... yesterday? There will come a time when over-reliance on AI becomes a problem for people who then struggle in on-site interviews with whiteboard tests.
I think the days of on-site interviews with whiteboard tests may be drawing to a close faster than you suspect
I also think we will never go back to the good old days.
This works in wasmtime, not in browsers.
Why would it not work in a browser?
It would be hard to share the same memory location with the GPU, right?
If the browser supported it, it could expose it via a buffer view or something, but that'd be quite the security surface area, one would think.
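For context on what the host side looks like outside the browser, here is a minimal sketch of the zero-copy path in a wasmtime embedder, assuming the `wasmtime`, `metal` (metal-rs), and `anyhow` crates. The `guest.wasm` file, the `"memory"` export name, and the exact metal-rs call signatures are illustrative assumptions, not the article's actual code; the key idea is that on Apple Silicon's unified memory the guest's linear memory can be wrapped as a Metal buffer without copying.

```rust
// Sketch: expose a wasmtime guest's linear memory to the GPU via Metal.
// Assumes the `wasmtime`, `metal`, and `anyhow` crates; signatures are
// approximate and worth checking against the crate docs.
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Instantiate a guest module that exports its linear memory as "memory".
    let engine = Engine::default();
    let module = Module::from_file(&engine, "guest.wasm")?; // placeholder module
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let memory = instance
        .get_memory(&mut store, "memory")
        .expect("guest must export `memory`");

    // Raw pointer/length of the guest's linear memory in host address space.
    // Wasm memory is allocated in 64 KiB pages, so it satisfies Metal's
    // page-alignment requirement for newBufferWithBytesNoCopy.
    let ptr = memory.data_ptr(&store) as *const std::ffi::c_void;
    let len = memory.data_size(&store) as u64;

    // Wrap the same allocation as a Metal buffer: no copy, since the CPU and
    // GPU share physical memory on Apple Silicon.
    let device = metal::Device::system_default().expect("no Metal device");
    let buffer = device.new_buffer_with_bytes_no_copy(
        ptr,
        len,
        metal::MTLResourceOptions::StorageModeShared,
        None, // no deallocator: wasmtime still owns the memory
    );

    // `buffer` can now be bound to a compute pipeline; whatever the guest
    // writes into its linear memory is visible to GPU kernels.
    // Caveat: if the guest memory grows and is moved, the buffer must be
    // recreated from the new base pointer.
    println!("shared {} bytes with the GPU", buffer.length());
    Ok(())
}
```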