It’s not as simple as that.
Roc does static reference counting too, otherwise it wouldn't be able to do opportunistic in-place mutation. It can do static reference counting up to a known compile-time bound, whereas Rust can only count to one. Both of them can do runtime reference counting, but it's implicit in Roc and explicit with Rc and Arc in Rust.
For example, consider the pseudocode:

    {
        h = "Hello, "
        hw = h + "world."
        hm = h + "Mum!"
    }
In real life, this could be something far harder to avoid writing.
Roc counts, at compile time: 1, 2, 3, 0, drop. No problem.

Depending on how you declare these variables (with what additional keywords, symbols, string types and concepts), Rust counts, at compile time: 1, release, 1, 2! No no no, stop, broken! Bad programmer!

In this case that was an unnecessary premature optimisation. That's what I mean when I say Rust counts references, but only counts up to 1.

The borrow checker is a static reference counter that allows an arbitrary number of immutable references, which you must declare explicitly, and a maximum of one mutable reference, which you declare explicitly with mut or &mut depending on the circumstances. Arc and Rc are runtime reference counters that you declare explicitly. This is all essentially tracked in the type system.
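For the curious, here is a minimal sketch of that pseudocode as actual Rust (my own rendering, using String and the + operator, which consumes its left-hand side):

    fn main() {
        let h = String::from("Hello, ");
        let hw = h + "world."; // `+` takes the String by value, so `h` is moved ("released") here
        let hm = h + "Mum!";   // error[E0382]: use of moved value: `h`
        println!("{hw} {hm}");
    }

The usual fixes are an explicit h.clone() + "world." or borrowing with format!("{h}world."), which is roughly the bookkeeping Roc's counting handles for you.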
Roc does the static reference counting, and if the total never rises above Rust's maximum of 1, it uses in-place mutation (as opposed to the default immutability). If the count is bounded, it can use static (compile-time) reference counting, so that when, for example, all of the local references fall out of scope, the memory is dropped. If the number is unbounded (e.g. recursion through parameter passing that can't be tail-call optimised or similarly removed), runtime reference counting is used. This is all essentially tracked in the runtime system, but calls to clone are automated in Roc. A beginner absolutely can write a memory hog in Roc, but the same beginner is likely to overuse clone in Rust and write a similar memory hog.
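For contrast, here is a minimal sketch (mine, not anything from Roc's implementation) of doing that runtime counting explicitly in Rust with Rc, which is roughly the bookkeeping Roc automates:

    use std::rc::Rc;

    fn main() {
        // Runtime reference counting in Rust is opt-in and explicit.
        let h = Rc::new(String::from("Hello, "));
        let hw = format!("{}world.", h); // borrows h; the count is untouched
        let hm = Rc::clone(&h);          // count goes 1 -> 2, at runtime
        println!("{hw} {hm} ({} refs)", Rc::strong_count(&h));
    } // both handles drop here, the count reaches 0, and the String is freed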
I don't know what to call whatever that language is doing, but it's not reference counting. It's doing some kind of static code analysis, and then it falls back to reference counting.

If you call that reference counting, what stops you from calling garbage collectors reference counting too? They certainly count references! Is the stack a reference counter too? It keeps track of all the data in a stack frame, and some of it might be references!
Garbage collection is pausing the main thread while you go searching the heap for memory to free up. It's slow and unpredictable about when it'll happen and how long it'll take. That's a very different process indeed, and Roc doesn't do it.
Whether you call it static reference counting or not, when Roc chooses in-place mutation it's because the code would have satisfied the borrow checker. It can do a wider class of such things when values go out of scope. There's a webserver platform that does arena allocation, often avoiding cache misses as a result, but crucially it frees the entire arena in one step. Freeing all the tiny individual allocations one by one as you go along, as Rust would, would be far slower.
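As a rough illustration of the arena idea (my own sketch, using the bumpalo crate rather than whatever allocator that Roc platform actually uses):

    use bumpalo::Bump;

    // Everything allocated while handling one request lands in the arena;
    // nothing is freed individually along the way.
    fn handle_request(arena: &Bump) -> &str {
        arena.alloc_str("Hello, world.")
    }

    fn main() {
        let arena = Bump::new();
        println!("{}", handle_request(&arena));
        // The arena is dropped here and all of its memory is released in one
        // step, rather than walking and freeing each allocation separately.
    }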
Calling that kind of thing garbage collection is, I think, very misleading indeed.
Optimising your memory management for each problem domain/platform actually gives you real memory-management efficiencies.