
As an embedded systems dev, I'm wondering if these async features can be implemented on bare metal (i.e. without a runtime)? Maybe I'm dumb and it might be a wild thought, but it would be great if I could easily integrate async language features with hardware interrupts.


Definitely. The built-in await is currently blocked on its use of thread-local storage, but that’s planned to be removed and replaced with something that will work without an OS before stabilisation.

I have had the old macro-based async code in Rust running on a Cortex-M device, completely runtime-free. Once the TLS stuff is sorted I plan to port this forward to work with the built-in syntax.


I don't understand why people don't just support TLS in non-userspace code. It's so convenient for a bunch of things, and sadly Rust, for now, has nothing in between "fully explicit argument passing" and "scoped global state".
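For context on what that "scoped global state" convenience looks like in userspace Rust, here's a small sketch using std's `thread_local!` (which is exactly what isn't available without OS support); the names `with_depth` and `depth` are illustrative, not from any real API:

```rust
use std::cell::Cell;

// "Scoped global state" via TLS: a per-thread value that can be set for the
// duration of a closure, instead of threading it through every call.
thread_local! {
    static CURRENT_DEPTH: Cell<u32> = Cell::new(0);
}

// Run `f` with the thread-local value temporarily overridden.
fn with_depth<R>(depth: u32, f: impl FnOnce() -> R) -> R {
    CURRENT_DEPTH.with(|c| {
        let prev = c.get();
        c.set(depth);
        let out = f();
        c.set(prev); // restore the previous value on the way out
        out
    })
}

// Read the current thread-local value from anywhere on this thread.
fn depth() -> u32 {
    CURRENT_DEPTH.with(|c| c.get())
}

fn main() {
    assert_eq!(depth(), 0);
    with_depth(3, || assert_eq!(depth(), 3));
    assert_eq!(depth(), 0); // the override is scoped, not permanent
}
```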


Super excited about that! This will be a major feature for embedded devices which perform network I/O, be it Internet of Things or industrial control over Ethernet.


What’s the TLS stuff to be sorted out?


IIRC, the initial implementation of async/await requires TLS. Eventually it won’t.


Oh! For some reason I jumped to Transport Layer Security, not Thread Local Storage... duh.

Yeah, the pinning stuff is supposed to help with this, as I understand it.


Ah! Super reasonable, yeah. It can be confusing.


I'm not familiar with the way Rust is going, but in general async can be implemented purely at compile time by turning an async function into a set of functions tail-calling each other, or into a switch statement. The stack frames need to go somewhere, but with allocator support this does not need to be exclusively handled by the runtime.
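To make that transform concrete, here's a hand-rolled sketch of the kind of state machine a compiler might emit for something like `async fn add_later(a: u32) { let b = read().await; a + b }`. All names (`AddLater`, `Step`, etc.) are illustrative, not real compiler output:

```rust
// Each variant is a suspension point; each holds the locals still live there.
enum AddLater {
    Start { a: u32 },        // not yet started; holds the captured argument
    AwaitingRead { a: u32 }, // suspended at the `.await`
    Done,
}

enum Step {
    Pending,
    Ready(u32),
}

impl AddLater {
    // One "poll": run until the next suspension point or completion.
    // `read_result` stands in for the awaited sub-future's readiness.
    fn step(&mut self, read_result: Option<u32>) -> Step {
        loop {
            match *self {
                AddLater::Start { a } => {
                    // Fall through to the first await point.
                    *self = AddLater::AwaitingRead { a };
                }
                AddLater::AwaitingRead { a } => match read_result {
                    None => return Step::Pending, // still waiting; suspend
                    Some(b) => {
                        *self = AddLater::Done;
                        return Step::Ready(a + b);
                    }
                },
                AddLater::Done => panic!("polled after completion"),
            }
        }
    }
}

fn main() {
    // Drive the state machine by hand: first poll suspends, second completes.
    let mut task = AddLater::Start { a: 40 };
    assert!(matches!(task.step(None), Step::Pending));
    assert!(matches!(task.step(Some(2)), Step::Ready(42)));
}
```

Note the state enum only needs as much storage as the largest set of live locals, which is why these machines can live in static memory or on a caller's stack rather than requiring a runtime-managed heap.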


Async/await is basically sugar for generators and futures.


This is likely doable if no I/O is involved in the scheduling of coroutines. Usually it is.


That's completely orthogonal though.


If you're interested in async features for the sake of concurrency, you might want to give synchronous concurrency a look - Céu is a very recent entry into this paradigm[0]. It specifically targets the embedded space due to its minimal overhead compared to other concurrency solutions. Also, there is a version that supports libuv for added async stuff[1].

[0] http://ceu-lang.org/

[1] https://github.com/ceu-lang/ceu-libuv


Pretty sure you need at least an operating system and memory allocation.


Futures have landed in libcore, and do not inherently allocate.

You can write an executor with a fixed-size queue as well, and not require dynamic allocation at all.

Being able to do this is a hard constraint on the design.
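As a minimal sketch of that, here's an allocation-free `block_on` built on the stable `core::task` types, using a no-op waker and a busy-poll loop (a real fixed-queue executor would park between polls; the name `block_on` and the no-op waker are this sketch's choices, not anything prescribed by the proposal):

```rust
use core::future::Future;
use core::pin::Pin;
use core::ptr;
use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: good enough for an executor that simply
// re-polls in a loop (on real hardware, an interrupt would end the wait).
unsafe fn noop_clone(_: *const ()) -> RawWaker {
    noop_raw_waker()
}
unsafe fn noop(_: *const ()) {}
static NOOP_VTABLE: RawWakerVTable = RawWakerVTable::new(noop_clone, noop, noop, noop);

fn noop_raw_waker() -> RawWaker {
    RawWaker::new(ptr::null(), &NOOP_VTABLE)
}

// Run a single future to completion with no dynamic allocation at all.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    // SAFETY: `fut` is shadowed, so the unpinned original can't be moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    // SAFETY: the no-op vtable trivially upholds the RawWaker contract.
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        // On a Cortex-M this is where you would `wfi` until an interrupt.
    }
}

fn main() {
    assert_eq!(block_on(async { 21 * 2 }), 42);
}
```

The future lives on the caller's stack and the waker carries no data, so nothing here touches a heap; a multi-task version would keep its tasks in a fixed-size array instead of spawning onto one.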


If I’m reading the proposal correctly, it’ll be possible and even straightforward, because the standard library only provides the interface. You’ll be able to swap implementations by replacing the package with whatever fulfills the interface requirements.



