ameliaquining a day ago

"No more [...] slow compile times with complex ownership tracking."

Presumably this is referring to Rust, which has a borrow checker and slow compile times. The author is, I assume, under the common misconception that these facts are closely related. They're not; I think the borrow checker runs in linear time (though I can't find confirmation of this), and in any event profiling reveals that it accounts for only a small fraction of compile times. Rust compile times are slow because the language has a bunch of other non-borrow-checking-related features that trade off compilation speed for other desiderata (monomorphization, LLVM optimization, procedural macros, crates as a translation unit). Also because the rustc codebase is huge and fairly arcane and not that many people understand it well; while there's a lot of room for improvement in principle, it's mostly not low-hanging fruit, requiring major architectural changes and thus a large investment of resources which no one has put up.
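
For readers unfamiliar with the monomorphization point: the compiler stamps out a separate copy of each generic function for every concrete type it's instantiated with, so codegen and LLVM work scale with the number of instantiations. A minimal Rust sketch (illustrative only):

    // Each distinct T produces a separate compiled copy of `largest`,
    // so the backend sees (and optimizes) one function per instantiation.
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut max = items[0];
        for &item in &items[1..] {
            if item > max {
                max = item;
            }
        }
        max
    }

    fn main() {
        // Two instantiations are compiled: largest::<i32> and largest::<f64>.
        println!("{}", largest(&[1, 5, 3]));
        println!("{}", largest(&[1.0, 5.0, 3.0]));
    }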

  • unscaled a day ago

    I know very little about how rustc is implemented, but watching what kinds of things make Rust compile times slower, I tend to agree with you. The borrow checker rarely seems to be the culprit here. Compile times tend to spike on exactly the things you've mentioned: procedural macro use, generics use (monomorphization) and release builds (optimization).

    There are other legitimate criticisms you can raise against the Rust borrow checker, such as cognitive load and a higher cost of refactoring, but the compilation speed argument is just baseless.

    • SkiFire13 a day ago

      Procedural macros are not really _that_ slow themselves; the issue is more that they tend to generate enormous amounts of code that then has to be compiled, and _that_'s slow.
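
      As an illustration, consider a serde derive (a sketch; it assumes serde with the "derive" feature and serde_json as dependencies). The one-line attribute expands into a full trait impl that rustc must then parse, type-check, and codegen for every derived type:

          // The derive below expands (roughly) into an `impl Serialize for Point`
          // that walks every field; rustc compiles all of that generated code
          // just as if you had written it by hand.
          use serde::Serialize;

          #[derive(Serialize)]
          struct Point {
              x: f64,
              y: f64,
          }

          fn main() {
              let p = Point { x: 1.0, y: 2.0 };
              println!("{}", serde_json::to_string(&p).unwrap());
          }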

      • norman784 6 hours ago

        The issue with proc macros most commonly noted by devs is that they slow down incremental compilation, because proc macros are re-run on each build.

      • Arnavion 11 hours ago

        Proc macros themselves are slow too. If you compile them in debug mode, they run slowly. If you compile them in release mode, they run faster but take longer to compile. This is especially noticeable with behemoth macros like serde's derives, which use the complicated syn parser.

        Compiling them in release mode does have an advantage if the proc macro is used a lot in your dep tree, since the faster invocations compensate for the increased compile time. Another option is shipping pre-compiled macros like the serde maintainer tried to do at one point, but there was sufficient (justified) backlash to shipping blobs in that case that it will probably never take off.
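
        For what it's worth, Cargo can also give you the fast-invocation half of that trade-off without a full release build: profile overrides let you optimize proc macros and build scripts even in dev builds. A minimal Cargo.toml sketch:

            # Optimize build scripts and proc macros even in dev builds,
            # so macro invocations run fast while your own code stays unoptimized.
            [profile.dev.build-override]
            opt-level = 3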

        Here's a comparison of using serde proc macros for (De)Serialize impls vs pre-generated impls: https://github.com/Arnavion/k8s-openapi/issues/4#issuecommen... In other words the amount of code that is compiled in the end is the same; the time difference is entirely because of proc macro invocation. 5m -> 3m for debug builds, 3.5m -> 3m for release builds. It's from 2018, but the situation is largely unchanged as of today.

      • dathinab 7 hours ago

        Yes and no. They require you to compile a binary (or multiple, when nested) before your own binary can be compiled, and depending on a lot of factors that can add quite a bit of overhead, especially for non-incremental, non-release builds. (That could probably be fixed by adding sandboxing for reproducibility, making most proc macros pure, cacheable functions and allowing distributed caching of both their binaries and their output; theoretically, that is, and I'm not sure Rust will ever end up there.)

        And the majority of procedural macros don't produce that much code, and like you said, their execution isn't the biggest problem.

        E.g. the recent article about a DB system ending up with ~30 min compile times and then cutting them down to 4 min was a case of auto-generating a whole (enormously huge) crate (no idea if proc macros were involved; it didn't really matter there anyway).

        So yeah, kinda what you said, proc macros can and should be improved, but rarely are they the root cause.

      • ameliaquining a day ago

        Also the procedural macro library itself and all of its dependencies have to be compiled. Though this only really affects initial builds, as the library can be cached on subsequent ones.

  • josh11b a day ago

    https://learning-rust.github.io/docs/lifetimes/

    > Lifetime annotations are checked at compile-time. ... This is the major reason for slower compilation times in Rust.

    This misconception is being perpetuated by Rust tutorials.

  • torginus 9 hours ago

    Generally, the reason I think Rust compilation is unfixably slow is the decision to rely on compile-time static dispatch and heavy generic specialization, which means there's a LOT of code to compile and the resulting binary is large.

    Many, many people have remarked that this is the wrong approach in today's world, where CPUs are good at predicting dynamic dispatch but cache sizes (especially L1 and the instruction cache) are very limited; for most code (with the exception of very hot tight loops), fetching code into cache is going to be the bottleneck.

    Not to mention that, for a systems programming language, I'd expect a degree of neatness of the generated machine code (e.g. no crazy name mangling, no copies of the same generic method appearing in 30 places in the assembly, etc.).

    • nagisa 4 hours ago

      There are techniques that can mitigate the compile-time cost of monomorphization to a degree: optimizing on a generic IR (MIR) and polymorphization (merging instantiations that produce equivalent bodies) come to mind as immediate examples that have been discussed or implemented to a degree in rustc.

    • dathinab 7 hours ago

      > is unfixably slow

      It's not at all unfixable. Sure, there is a limit to speed improvements, but many of the things you mention aren't really as fundamental as they seem.

      On one hand, you don't have to go crazy with generics: `dyn` is a thing, and not being generic is often just fine. It's actually not rare to find project code guidelines that avoid unnecessary monomorphization, e.g. use `&mut dyn FnMut()` over `impl FnMut()` and similar. And sure, there is some issue with people spreading "always use generics, it's faster, dynamic dispatch is evil" FUD, but that's more a people problem than a language problem.
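
      A small Rust sketch of that guideline: the `dyn` version is compiled once and dispatches through a vtable, while the generic version is monomorphized again for every distinct closure type it's called with:

          // Dynamic dispatch: one compiled copy, called through a vtable.
          fn run_dyn(f: &mut dyn FnMut()) {
              f();
          }

          // Static dispatch: a fresh copy is generated per closure type.
          fn run_generic(mut f: impl FnMut()) {
              f();
          }

          fn main() {
              let mut a = 0;
              run_dyn(&mut || a += 1);
              let mut b = 0;
              run_generic(|| b += 1);
              println!("{a} {b}");
          }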

      On the other hand, Rust gives very limited guarantees about how exactly a lot of stuff happens under the hood, including the Rust calling convention, struct layout, etc. As long as Rust doesn't change "observed" side effects, it can do whatever it wants. Dynamic vs. static dispatch is in general not counted as an observed side effect, so the compiler is free not to monomorphize things if it can make that work. It already somewhat avoids monomorphizing in some cases (e.g. T=usize and T=u64 on 64-bit systems), and there is a lot of untapped potential. Sure, there are big limits on how far this can go. But combined with not obsessing over generics and other improvements, I think Rust can have very reasonable compile times, especially in the dev->unit test loop. And many people are already fine with them now, so it's nothing I'm overly worried about tbh.

      > neatness of the generated machine code

      Why would you care about that in a language where you almost never have to look at the generated assembly? It's also not really something other languages pursue; even in modern C it's more a side effect than an intent.

      Though, without question, kilobytes-large type signatures are an issue (the mangling isn't, IMHO; if you don't use a tool to demangle symbols on the fly, that's a you problem).

      • torginus 5 hours ago

        It's unfixable in the sense that the problem isn't how fast the compiler is; it's that you give it a ton of extra work. You could try to convince library devs to use more dyn, but it'd require a culture shift. I don't think the compiler going behind the user's back and second-guessing whether to use static dispatch or inlining is something a low-level language should do. Java, sure.

        In fact I define a systems language as something that allows the dev to describe intended machine behavior more conveniently, as opposed to a higher-level language, where the user describes desired behavior and the compiler figures out the rest.

  • pjmlp 13 hours ago

    Rust is really getting hurt by not having at least some kind of interpreter like OCaml and Haskell have, to dispel the perpetual urban myths from devs without a background in compilers.

    • dathinab 7 hours ago

      Fun fact: there is work in progress on a Cranelift-based backend, which isn't exactly an interpreter but more like an AOT compiler built for WASM.

      It does compile things much faster, at the cost of fewer optimizations. (That doesn't mean no optimizations, or that the output is slow per se; it's still designed to run WASM performantly in situations where fast/low-latency AOT is needed. But WASM programs are normally already pre-optimized, and mainly certain low-level instruction optimizations still have to be done, which it does.)

      AFAIK the goal is to use it by default for the dev->unit test loop, as very often you don't care about high-performance code execution there, but about getting feedback with low latency.

      Though I don't know the current state of it.

      • pjmlp an hour ago

        All the best for those efforts.

    • _bent 5 hours ago

      well there's MIRI

      • pjmlp an hour ago

        However you cannot use it as a general purpose implementation.

  • brodo 12 hours ago

    Also, Rust compile times aren't that bad the last time I checked. Maybe they got better and people just don't realize it?

    • zimpenfish 9 hours ago

      > Also, Rust compile times aren't that bad the last time I checked.

      I dunno - I've got a trivial webui that queries an SQLite3 database and outputs a nice table and from `cargo clean`, `cargo build --release` takes 317s on my 8G Celeron and 70s on my 20G Ryzen 7. Will port it to Go to test but I'd expect it to take <15s from clean even on the Celeron.

      • fn-mote 8 hours ago

        I don’t think build time from `clean` is the right metric. A developer is usually using incremental compilation, so that’s where I want whatever speed I can get.

        Nobody likes a 5m build time, but that’s a very old slow chip!

        • lerno 24 minutes ago

          "Incremental compilation is fast", is something people only talk about when normal compilation speeds are abysmal. Sadly, C++ set the expectations here, which made both Rust and Swift think that compilation times in the minutes is fine.

          If your code compiles in 1 second from scratch then what do you need incremental compilation for?

  • sim7c00 7 hours ago

    Maybe the borrow checker takes most of the compile time if you take an average of how often it runs vs. how often the later compile phases are triggered over a codebase's lifespan :') (yes, ok, so I don't do well with lifetimes hah)

  • GoblinSlayer 10 hours ago

    Also bloat. Why is ripgrep 2MB gzip-compressed?

    • burntsushi 7 hours ago

      If you're talking about the release binary, that has an entire libc (musl) statically linked into it. And all of PCRE2. And all of its Rust dependencies including the entire standard library.

    • torginus 9 hours ago

      Because generic monomorphization generates a massive amount of machine code.

      • dathinab 6 hours ago

        That can be a reason, but this is a very bad example.

        It's quite unlikely that it would be _that_ much smaller had it been written in C or C++ with the _exact_ same goals, features, etc. in mind.

        grep and ripgrep seem quite similar on the surface (grep for something, multiple different regex engines, etc.), but if you go into the details they are quite different (not just because rg has file walking and gitignore resolution built in, but also wrt. the goals and features of their regex engines, performance goals, terminal syntax highlighting, etc.).

        • burntsushi 6 hours ago

          Responding narrowly:

          ripgrep doesn't do "terminal syntax highlighting." It has some basic support for colors, similar to GNU grep.

          GNU grep and ripgrep share a lot of similarities, even beyond superficial ones. There are also some major differences. But I've always said that the venn diagram of GNU grep and ripgrep has a much bigger surface area in their intersection than the area of their symmetric difference.

        • GuB-42 3 hours ago

          ugrep, which is C++ and similar in scope to ripgrep, is 0.9 MB on my machine; ripgrep is 4.4 MB and GNU grep is 0.2 MB. They all depend on libc and libpcre2.

          Ugrep however depends on libstdc++ and a bunch of libraries for compressed file support (libz,...).

          So yeah a bit bloated but we are not at Electron level yet.

          • burntsushi 2 hours ago

            It's not clear to me that you're accounting for the difference in size that results from static vs dynamic linking. For example, if I build `ugrep` with `./build.sh --enable-static --without-brotli --without-lzma --without-zstd --without-lz4 --without-bzlib`, then I get a `ugrep` binary that is 4.5MB. (I added all of those `--without-*` flags because I couldn't get the build to work otherwise.) If I add `--without-pcre2`, I get a 3.9MB binary.

            ripgrep is only a little bigger here when you do an apples to apples comparison. To get a static build without PCRE2, run `cargo build --profile release-lto --target x86_64-unknown-linux-musl`. That gets me a 4.6MB `rg` binary. Running `PCRE2_SYS_STATIC=1 cargo build --profile release-lto --target x86_64-unknown-linux-musl --features pcre2` gets a fully static binary with PCRE2 at a 5.4MB `rg` binary.

            Popping up a level, a fair criticism is that it is difficult to get ripgrep to dynamically link most of its dependencies. You can make it dynamically link libc and PCRE2 (that's just `cargo build --profile release-lto --features pcre2`) and get a 4.1MB binary, but getting it to dynamically link all of its Rust crate dependencies is an unsupported build configuration for ripgrep. But I don't know how much tools like ugrep or GNU grep rely on that level of granular dynamic linking anyway. GNU grep doesn't seem to do so on my system (only dynamically linking with libc and PCRE2).

            Additionally, the difference in binary size may be at least partially attributable to a difference in Unicode support:

                $ echo ♥ | rg '\p{Emoji}'
                ♥
                $ echo ♥ | ugrep-7.5.0 '\p{Emoji}'
                ugrep: error: error at position 6
                (?m)\p{Emoji}
                      \___invalid character class
        • torginus 6 hours ago

          I don't know the reason, but having worked in embedded, I was on a project that had drivers, app logic, filesystem support, a TCP stack, and then some more, and it fit in less than 64KB of ROM (written in C) without much trouble. 2MB for such a tool seems excessive; I'd love to see a breakdown of what's in there and how much space it takes up.

          • GoblinSlayer 4 hours ago

            I wrote an encrypted file exchange tool: 26KB. External dependencies are files, sockets, memcpy and malloc. It's client and server in one file, so it's two times bigger than it could be. It also has complex and almost useless features like traffic obfuscation, probing resistance and Hertzbleed resistance, because why not, so it's not a minimal implementation.

    • Philpax 6 hours ago

      Is that a lot for an application that does what it does?

  • dathinab 8 hours ago

    > They're not;

    It's complicated, and simple.

    The simple answer is that Rust compile times are not dominated by the borrow checker at all, so "it's fast" and you can say they're not overly related to there being borrow checking.

    The other simple answer is that a simple, reasonably well implemented borrow checker is pretty much always fast.

    The complicated answer is that Rust's borrow checker isn't simple: there is a _huge lot_ of code that a simple borrow checker wouldn't allow but that is safe and that people want to write, and to support all those edge cases the borrow checker Rust needs has to basically run a constraint solver. (Which (a) is quite slow in big-O terms, and (b) is something CS has researched optimizations and heuristics for over decades, so it is often quite fast in practice.) And as far as I remember, Rust currently does (did? wanted to?) run this in two layers: the simple checker checks most code, and the more powerful one only engages for the cases where the simple checker fails. But, like mentioned, as compilation still isn't dominated by the borrow checker, this doesn't exactly mean it's slow.

    So the borrow checker isn't an issue, and if you create a C-like language with a Rust-like borrow checker it will compile speedily, at least theoretically. If you then also have a ton of codegen and large compilation units, you might run into similar issues as Rust does ;)

    Also, most of the "especially bad cases" (wrt. compile times, AFAIK) that projects in Rust have recently run into had the same kind of pattern: an enormous amount of code (often auto-generated, often huge even before monomorphization) being squeezed into very few (often a single) LLVM compilation units, leading to LLVM struggling hard with optimizations and then the linker drowning, too. And here is the thing: that can happen to you in C too, and then your compilation times will be terrible, too. Though people tend to very rarely run into it in C.

    > not low-hanging fruit, requiring major architectural changes and thus a large investment of resources which no one has put up.

    It still happens from time to time (e.g. Polonius), and there are still many "hard" but very useful improvements that don't require any large-scale architectural changes, as well as some bigger issues that wouldn't be fixed by large-scale architectural improvements. So I'm not sure we're anywhere close to needing a large-scale architectural overhaul of rustc; probably not.

    E.g. in a somewhat recent article about way-too-long Rust compile times, many HN comments assumed that rustc had some major architectural issues wrt. parallelization, but the issue was that Rust failed to properly subsection a massive auto-generated crate when handing code units to LLVM, and that isn't an architectural issue. Or, e.g., replacing LLVM with Cranelift (where viable) for the change->unit test loop is a good example of a change that can greatly improve dev experience and decrease compile times where it matters most (technically it does change the architecture of the stack, and it needed many, many small changes to allow a non-LLVM backend, but it's not a major architectural rewrite of the rustc compiler code).

UncleMeat a day ago

The core benefit of the borrow checker is not "make sure to remember to clean up memory to avoid leaks." The core benefits are "make sure that you can't access memory after it has been destroyed" and "make sure that you can't mutate something that somebody else needs to be constant." This is fundamentally a statement about the relationship between many objects, which may have different lifetimes and which are allocated in totally different parts of the program.

Lexically scoped lifetimes don't address this at all.
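
To make that concrete, here's a minimal Rust sketch of both rejections (the commented-out lines are the ones the borrow checker refuses to compile); note that neither has anything to do with scope-based cleanup:

    fn main() {
        // 1) Access after destruction is rejected at compile time.
        let r;
        {
            let s = String::from("temp");
            r = &s;
        } // `s` is destroyed here...
        // println!("{r}");
        // ^ uncommenting this fails: error[E0597]: `s` does not live long enough

        // 2) Mutating something somebody else needs to stay constant is rejected.
        let mut v = vec![1, 2, 3];
        let first = &v[0];
        // v.push(4);
        // ^ uncommenting this fails: error[E0502]: cannot borrow `v` as mutable
        //   because it is also borrowed as immutable
        println!("{first}");
    }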

  • lerno a day ago

    Well, the title (which is poorly worded as has been pointed out) refers to C3 being able to implement good handling of lifetimes for temporary allocations by baking it into the stdlib. And so it doesn't need to reach for any additional language features. (There is for example a C superset that implements borrowing, but C3 doesn't take that route)

    What the C3 solution DOES do is provide a way to detect at runtime when an already-freed temporary allocation is used. That's of course not the level of compile-time checking that Rust does. But then Rust has a lot more in the language in order to support this.

    Conversely, C3 does have contracts as a language feature, which Rust doesn't have, so C3 is able to do static checking with the contracts to reject contract violations at compile time, which runtime contracts like those some Rust crates provide can't do.

    • SkiFire13 a day ago

      > What the C3 solution DOES do is provide a way to detect at runtime when an already-freed temporary allocation is used.

      The article makes no mention of this, so in the context of the article the title remains very wrong. I could also not find a page in the documentation claiming this is supported (though I have to admit I did not read all the pages), nor an explanation of how this works, especially in relation to the performance hit it would result in.

      > C3 is able to do static checking with the contracts to reject contract violations at compile time

      I tried searching for how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts. Even worse, violating them when not using safe mode results in "unspecified behaviour", but really it's undefined behaviour (violating contracts is even in their list of undefined behaviour! [2])

      [1]: https://c3-lang.org/language-common/contracts/

      [2]: https://c3-lang.org/language-rules/undefined-behaviour/#list...

      • lerno a day ago

        > The article makes no mention of this, so in the context of the article the title remains very wrong

        The temp allocator implementation isn't guaranteed to detect it, and the article doesn't go into implementation details and guarantees (which is good, because capabilities will be added on the road to 1.0).

        > I tried searching for how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts.

        No, there is no guarantee at the language level, because doing so would make a conforming implementation of the compiler harder than it needs to be. In addition, setting exact limits may hamper innovation by compilers that wish to add more analysis but would then hesitate to reject code that can be statically shown to violate contracts.

        At higher optimizations, the compiler is allowed to assume that the contracts evaluate to true. This means that code like `assert(i == 1); if (i != 1) return false;` can be reduced to a no-op.

        So the danger here is if you rely on the function giving you a valid result even when the input is not one that the function should work with.

        And yes, it will be optional to have those "assumes" inserted.

        Already today, in the current compiler, something trivial like writing `foo(0)` for a function that requires the parameter to be > 1 is caught at compile time. And it's not doing any real analysis yet, but that will definitely happen.

        • UncleMeat a day ago

          Just my opinion, but I think that having contracts that might be checked is a really really really dangerous approach. I think it is a much better idea to start with a plan for what sorts of things you can check soundly and only do those. "Well we missed that one because we only have intraprocedural constant propagation" is not going to be the sort of thing most users understand and will catch people by surprise.

          • GoblinSlayer 10 hours ago

            Safety is a spectrum. You add +1 and safety goes up.

          • lerno 18 hours ago

            Well, we've already tried that, and no one used it.

        • SkiFire13 14 hours ago

          > The temp allocator implementation isn't guaranteed to detect it, and the article doesn't go into implementation details and guarantees

          Understandable, but then why are you mentioning the borrow checker if you avoided mentioning _anything_ that could be compared to it?

          > No, there is no guarantee at the language level

          Then don't go around claiming they are statically checked, that's false. What you have is a basic linter, not a statically enforced contract system.

        • rowanG077 17 hours ago

          Oof that sounds incredibly dangerous and basically means it doesn't really offer much of an improvement over C imo in terms of safety.

          • lerno 22 minutes ago

            What is "incredibly dangerous"? Having contracts that can catch errors at compile time?

    • pjmlp 11 hours ago

      That was already available in languages like Modula-2 and Object Pascal, as the blog post acknowledges the idea is quite old, and was also the common approach to manage memory originally with Objective-C on NeXTSTEP, see NSZone.

      Hence why all these wannabe C replacements, but not like Rust, should bring more to the table.

    • fanf2 a day ago

      > What the C3 solution DOES do is provide a way to detect at runtime when an already-freed temporary allocation is used.

      I looked at the allocator source code and there’s no use-after-free protection beyond zeroing on free, and that is in no way sufficient. Many UAF security exploits work by using a stale pointer to mutate a new allocation that re-uses memory that has been freed, and zeroing on free does nothing to stop these exploits.

      • lerno 18 hours ago

        It doesn't zero on free, that's not what the code does. But if you're looking for something to prevent exploits, then no, this is not it, nor does it try to be.

        How would you want that implemented?

        • lmm 13 hours ago

          > But if you're looking for something to prevent exploits, then no, this is not it, nor does it try to be.

          > How would you want that implemented?

          Any of the usual existing ways of managing memory lifetimes (i.e. garbage collection or Rust-style borrow checking) prevents that particular kind of exploitation (subject to various caveats) by ensuring you can't have a pointer to memory that has already been freed. So one would expect something that claims to solve the same problem to solve that problem.

          • lerno 2 hours ago

            All of that is out of scope for a C-like though. Once you set the constraints around C, there will be trade-offs. Rust is a high level language.

  • imtringued 7 hours ago

    Agreed. I personally am interested in Rust after doing some research on parallel semantics. The state-of-the-art language there is usually some pure functional programming language coupled with a garbage collector, which is not suitable for the type of embedded development I'm thinking of.

    Doing alias analysis on mutable pointers seems to be inevitable in so many areas of programming and Rust is just one of the few programming languages brave enough to embark on this idea.

hvenev a day ago

I'm struggling to understand how this has anything to do with borrow checking. Borrow checking is a way to reason about aliasing, which doesn't seem to be a concern here.

This post is about memory management and doesn't seem to be concerned much about safety in any way. In C3, does anything prevent me from doing this:

  fn int* example(int input)
  {
      @pool()
      {
          int* temp_variable = mem::tnew(int);
          *temp_variable = input;
          return temp_variable;
      };
  }
  • cayley_graph a day ago

    Yes, this has little to nothing to do with borrow checking or memory/concurrency safety in the sense of Rust. Uncharitably, the author appears not to have a solid technical grasp of what they're writing about, and I'm not sure what this says about the rest of the language.

  • lerno a day ago

    No, that is quite possible. You will not be able to use the memory you just returned, though. What actually happens is an implementation detail, but it ranges from having the memory overwritten (but still writable) on the platforms with the least support, to it being neither readable nor writable, to throwing an exact error with ASAN on. Crashing on every use is often a good sign that there is a bug.

    • unscaled a day ago

      It might not be on every use though. The assignment could very well be conditional. If a dangling reference could escape from the arena in which it was allocated, you cannot claim to have memory safety. You can claim that the arena prevents memory leaks (if you remember to allocate everything correctly within the arena), but it doesn't provide memory safety.

      • lerno a day ago

        Memory safety as in the full toolset that Rust provides? C3 clearly doesn't, I fully agree.

unscaled a day ago

The post's title is quite hyperbolic and I don't think it serves the topic right.

Memory arenas/pools have been around for ages, and binding arenas to a lexical scope is also not a new concept. C++ was doing this with RAII, and you could implement it in Go with defer and in other languages by wrapping the scope with a closure.

This post discusses how arenas are implemented in C3 and what they're useful for, but as other people have said this doesn't make sense to compare arenas to reference counting or a borrow checker. Arenas make memory management simpler in many scenarios, and greatly reduce (but don't necessarily eliminate - without other accompanying language features) the chances of a memory leak. But they contribute very little to memory safety and they're not nearly as versatile as a full-fledged borrow checker or reference counting.
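
For comparison, here is roughly what the scope-bound arena pattern looks like in Rust with the bumpalo crate (a sketch; bumpalo is an assumed dependency). Note that it's the borrow checker, not the arena itself, that stops references escaping the scope, which is exactly the hole a plain runtime arena leaves open:

    use bumpalo::Bump;

    // Run `f` with a fresh arena; everything allocated in the arena is freed
    // in one bulk deallocation when it drops at the end of this function.
    fn with_arena<R>(f: impl FnOnce(&Bump) -> R) -> R {
        let arena = Bump::new();
        f(&arena)
    }

    fn main() {
        let total: i32 = with_arena(|a| {
            let xs = a.alloc_slice_fill_copy(1_000, 1i32);
            xs.iter().sum()
            // Returning `xs` itself would not compile: it borrows from the arena.
        });
        println!("{total}");
    }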

  • LiamPowell 12 hours ago

    As another point of reference, Ada did this in 1983, before C89 even existed, and I'm sure other languages did it before that.

    (I have not actually checked the standard, but I'm reasonably sure pools were there.)

    > But they contribute very little to memory safety [...]

    They do solve memory safety if designed properly, as in Ada, but they're not designed in a way that does anything useful here. In Ada the pointer type has to reference the memory pool, so it's simply impossible for a pointer to a pool to exist once the pool is out of scope because the pointer type will also be out of scope. This of course assumes that you only use memory pools and never require explicit allocation/deallocation, which you often do in real world programs.

    • pjmlp 11 hours ago

      One should note that Ada has evolved quite a lot since 1983, and there are many improvements upon how to manage resources in the language in a safe way.

      There is its own version of RAII (controlled types), unbounded collections, dynamic stack allocation at runtime (with exception/retry) as a way to do arena-like stuff, SPARK proofs, and, as of recent ongoing standards work, some affine-types magic dust as well.

rq1 a day ago

What core type theory is C3 actually built on?

The blog claims that @pool "solves memory lifetimes with scopes" yet it looks like a classic region/arena allocator that frees everything at the end of a lexical block… a technique that’s been around for decades.

Where do affine or linear guarantees come in?

From the examples I don’t see any restrictions on aliasing or on moving data between pools, so how are use‑after‑free bugs prevented once a pointer escapes its region?

And the line about having "solved memory management" for total functions... bravo indeed.

Could you show a non‑trivial case, say, a multithreaded game loop where entities span multiple frames, or a high‑throughput server that streams chunked responses, where @pool prevents leaks that a plain arena allocator would not?

  • sirwhinesalot a day ago

    It is unfortunate that the title mentions borrow checking which doesn't actually have anything to do with the idea presented. "Forget RAII" would have made more sense.

    This doesn't actually do any compile-time checks (it could, but it doesn't). It will do runtime checks on supported platforms by using page protection features eventually, but that's not really the goal.

    The goal is actually extremely simple: make working with temporary data very easy, which is where most memory management messes happen in C.

    The main difference between this and a typical arena allocator is the clearly scoped nature of it in the language. Temporary data that is local to the function is allocated in a new @pool scope. Temporary data that is returned to the caller is allocated in the parent @pool scope.

    Personally I don't like the precise way this works too much because the decision of whether returned data is temporary or not should be the responsibility of the caller, not the callee. I'm guessing it is possible to set the temp allocator to point to the global allocator to work around this, but the callee will still be grabbing the parent "temp" scope which is just wrong to me.

    • Sesse__ a day ago

      > "Forget RAII" would have made more sense.

      For memory only, which is one of the simplest kinds of resource. What about file descriptors? Graphics objects? Locks? RAII can keep track of all of those. (So does refcounting, too, but tracing GC usually not.)

      • sirwhinesalot 12 hours ago

        You deal with those the same way you deal with them in any language without RAII, some sort of try-with-resource block or defer.

        Not making a value judgment if that is better or worse than RAII, just pointing out that resources of different kinds don't have to be handled by the same mechanism. This blog post is about memory management in C3. Other resource management is already handled by defer.

        • Sesse__ 12 hours ago

          Why would you treat the two differently, though? What benefit does it bring? (defer is such an ugly, manual solution in general; it becomes very cumbersome once you may want to give control of the resource to anyone else.)

          • sirwhinesalot 11 hours ago

            Well, the answer is obvious in garbage collected languages: because a GC excels at managing memory but sucks at managing other resources.

            Here, the answer is that ownership semantics are disliked by the language designer (doesn't fit the design goals), so they're not in the language.

            • pjmlp 5 hours ago

              Not really obvious, given that there are garbage collected languages with RAII, like D, to quote one example.

              And even stuff like try/using/with can be made RAII-like via static analysis, which defer-like approaches usually can't, because defer can take any expression, unlike those other approaches, which rely on specific interfaces/magic methods being present and thus can be tracked via the type system.

              So it can be turned into a compiler error if such a tagged type goes out of lexical scope without the respective try/using/with having been called on the variable declaration.

              • sirwhinesalot 4 hours ago

                Not disagreeing, just pointing out the reasoning.

                • pjmlp an hour ago

                  Fair enough; I also wanted to make a point about the possibilities, given that too many people place all GC languages in the same basket.

duped 18 hours ago

It took me some time to collect my thoughts on this.

One: I don't believe they have solved use-after-free. Marking memory freed and crashing at runtime is as good as checked bounds indexing. It turns RCE into DOS which is reasonable, but what would be much better is solving it provably at compile time to reject invalid programs (those that use memory after it has been deallocated). But enough about that.

I want to write about memory leaks. The hard part of solving memory leaks is not that automatically cleaning up memory is hard. That is a solved problem, the domain of automatic memory management/reclamation, aka garbage collection. However, I don't think they've gone through the rigor to prove why this is significantly different than, say, segmented stacks (where each stack segment is your arena). By "significantly different" you should be able to prove this enables language semantics that are not possible with growable stacks, not just nebulous claims about performance.

No, the hard part of solving memory leaks is that they need to be solved for a specific class of program: one that must handle resource exhaustion (otherwise - assume infinite memory; leaks are not a bug). The actual hard thing is when there are no memory leaks in the sense that your program has correctly cleaned itself up everywhere it is able and you are still exhausting resources and must selectively crash tasks (in O(1) memory, because you can't allocate), those tasks need to be able to handle being crashed, and they must not spawn so many more tasks as to overwhelm again. This is equivalent to the halting problem, by the way, so automatic solutions for the general case are provably impossible.

I don't believe that can be solved by semantically inventing an infinite stack. It's a hard architectural problem, which is why people don't bother to solve it - they assume infinite memory, crash the whole program as needed, and make a best effort at garbage collection.

All that said, this is a very interesting design space. We are trapped in the malloc/free model of the universe which are known performance and correctness pits and experimenting with different allocation semantics is a good thing. I like where C3 and Zig's heads are at here, because ignoring allocators is actually a huge problem in Rust in practice.

  • masklinn 16 hours ago

    > One: I don't believe they have solved use-after-free. Marking memory freed and crashing at runtime is as good as checked bounds indexing.

    It’s also something allocators commonly implement already.

    • pjmlp 11 hours ago

      One recurring theme with these new wannabe C and C++ replacements, but not like Rust, is that their solution is basically to use what has already existed in C and C++ for the last 30 years, if only people actually bothered to learn how to use their tools.

      Unfortunately it is a bit like debugging: let's keep doing printf() instead of taking the effort to learn better approaches.

donperignon 10 hours ago

I am having a hard time connecting the title with what they present. Is this just freeing memory when the program leaves the current context? How does that match the Rust lifetime borrow checker? This is a defer function that frees whatever has been allocated within a given scope; not that useful, unless I am missing something…

hrhrdorhrvfbf 3 days ago

Rust’s interface for using different allocators is janky, and I wish it had something like this, or had moved forward with the proposal to make allocators part of a flexible implicit context mechanism passed along with function calls.

But mentioning the borrow checker raises an obvious question that I don’t see addressed in this post: what happens if you try to take a reference to an object in the temporary allocator, and use it outside of the temporary allocator’s scope? Is that an error? Rust’s borrow checker has no runtime behavior; it only exists to create errors in cases like that, so the title invites the question of how this mechanism handles that case but doesn’t answer it.

  • lerno 3 days ago

    A dangling pointer will generally still be possible to dereference (this is an implementation detail that might get improved; temp allocators aren't using virtual memory on platforms that support it yet), but in safe mode that data will be scratched out with a fill value, I believe we use 0xAA by default. So as soon as this data is used out of scope you'll find out.

    This is of course not as good as ASAN or a borrow checker, but it interacts very nicely with C.

    • Filligree a day ago

      So, would you say the title overstates its case slightly?

      • lerno a day ago

        I would say that the title is easily misread. If you open the blog post and just read the title and a few lines into the intro, I think it's clear it's about C3 not having to implement any recently popular language features in order to solve the problem of memory lifetimes for temporary objects as they arise in a language with C-like semantics.

        Now clearly people are misreading the title when it stands on its own as "borrow checkers suck, C3 has a way of handling memory safety that is much better". That is very unfortunate, but the chance to fix that title has already passed.

        It should also be clear from the rest of the blog post that it doesn't try to make any claims that it's a novel technique (it's something that has been around for a long time). What's novel is that it's well integrated into the stdlib.

        • hrhrdorhrvfbf 17 hours ago

          > Now clearly people are misreading the title

          This is so fucking obnoxious. There is no misreading. There is no misunderstanding. Any attempt to spin this as even in part a failure of the reader is so rude.

          The title is nonsense. Nobody is misreading it, the author was either willfully misleading for clicks (eww) or was just ignorant (excusable, but they need to own it).

          > That is very unfortunate, but the chance to fix that title has already passed.

          …the CMS doesn’t let them edit it? What nonsense is this.

          This is a lovely example of what professional communication does NOT look like. Incredibly disingenuous all around.

          (I’d love better arena syntax in more languages though. They don’t get enough support.)

          • lerno 6 hours ago

            Let me paste the introduction of the post, and let's see how much it claims that C3 has memory safety:

            Modern languages offer a variety of techniques to help with dynamic memory management, each one a different tradeoff in terms of performance, control and complexity. In this post we’ll look at an old idea, memory allocation regions or arenas, implemented via the C3 Temp allocator, which is the new default for C3.

            The Temp allocator combines the ease of use of garbage collection with C3’s unique features to give a simple and (semi)-automated solution within a manual memory management language. The Temp allocator helps you avoid memory leaks, improve performance, and simplify code compared to traditional approaches.

            Memory allocations come in two broad types stack allocations which are compact, efficient and automatic and heap allocations which are much larger and have customisable organisation. Custom organisation allows both innovation and footguns in equal measure, let’s explore those.

            • hrhrdorhrvfbf 6 hours ago

              I read the post multiple times before commenting. The more I read it the worse it looks.

    • SkiFire13 a day ago

      > C3 not having to implement any recently popular language features in order to solve the problem of memory lifetimes for temporary objects as they arise in a language with C-like semantics.

      But you said it yourself in your previous message:

      > A dangling pointer will generally still be possible to dereference (this is an implementation detail that might get improved; temp allocators aren't using virtual memory on platforms that support it yet)

      So the issue is clearly not solved.

      And to be complete about the answer:

      > in safe mode that data will be scratched out with a fill value, I believe we use 0xAA by default. So as soon as this data is used out of scope you'll find out.

      I can see multiple issues with this:

      - it's only in safe mode

      - it's safe only as long as the memory is never used again for a different purpose, which seems to imply that either this is not safe (if it's written again) or that it leaks massive amounts of memory (if it's never written to again)

      > Now clearly people are misreading the title when it stands on its own as "borrow checkers suck, C3 has a way of handling memory safety that is much better". That is very unfortunate, but the chance to fix that title has already passed.

      Am I still misreading the title if I read it as "C3 solves the same issues that the borrow checker solves"? To me that way of reading seems reasonable, but the title still looks plainly wrong.

      Heck, even citing the borrow checker *at all* seems wrong, this is more about RAII than lifetimes (and RAII in Rust is solved with ownership, not the borrow checker).

      • lerno a day ago

        > So the issue is clearly not solved.

        You can use --sanitize=address to get this today, or use the Vmem-based temp allocator (which is only in the 0.7.4 prerelease and only for 64 bit POSIX) if you're curious how it feels and works in practice.

        > I can see multiple issues with this:

        There is a constant trade-off, and being as safe as possible is obviously great, but there is also the question of performance.

        The context matters though, it's a C-like language, an evolution of C. So it doesn't try to be a completely new language with new semantics, and that creates a lot of constraints.

        The "safe-C" C-dialects usually add a lot of additional annotations that doesn't seem particularly palatable to most developers.

        > Am I still misreading the title if I read it as "C3 solves the same issues that the borrow checker solves"?

        Yes I am afraid you do. But that's my fault (since I suggested the title, even though I didn't write the article), and not yours.

        • SkiFire13 14 hours ago

          > You can use --sanitize=address to get this today

          By the same argument you could say that C/C++ also solved memory safety then. Do you compile production code with `--sanitize=address`? Note that certain sanitizers can be unsafe to use in production due to e.g. reading some environment variables.

          > or use the Vmem-based temp allocator (which is only in the 0.7.4 prerelease and only for 64 bit POSIX)

          FYI it would be useful to pair claims of features with documentation that describes how they work, otherwise we may be just talking past each other. Seeing "vmem" mentioned this seems like it is just going to leak virtual memory address space.

          > There is a constant trade-off, and being as safe as possible is obviously great, but there is also the question of performance.

          You're changing the argument. You can claim that C3 does not aim to solve memory safety for these reasons, _and they can be understandable_, but then don't go and claim you solved memory safety anyway, because that's plain false.

          > Yes I am afraid you do. But that's my fault (since I suggested the title, even though I didn't write the article), and not yours.

          Some more argumentation would be nice. How am I misreading the title? If I'm misreading it then there should be another way of reading it that's more obvious and makes sense. I have yet to see a reasonable way of reading it where the mention of the borrow checker makes sense.

          • lerno 6 hours ago

            > You can claim that C3 does not aim to solve memory safety for these reasons, _and they can be understandable_,

            This seems to be where we speak past each other. What the blog post talks about is how C3 handles the problem of memory lifetimes for temporary data, which is a major lack of ergonomics in C (and arguably also C-likes, such as Zig).

            The title refers to how C3 does this in userland without having to add any of the common solutions, such as GC, ARC, or RAII. Recently a superset of C called "Cake" added ownership annotations to solve exactly such problems.

            C3 doesn't have anything like Rust memory safety. Nor is the blog post about memory safety, but on memory lifetimes.

            • SkiFire13 3 hours ago

              > The title refers to how C3 does this in userland without having to add any of the common solutions, such as GC, ARC, or RAII.

              No, the title does not mention any of those. Instead it mentions "borrow checking" and that solves a completely different problem that C3 does not even attempt to tackle.

Philpax a day ago

I feel like "solved" is a strong word for what's described here. This works for some - possibly even many - scenarios, but it does not solve memory lifetime in the general case, especially when data from different scopes needs to interact.

smcameron a day ago

Seems overly simplistic and doesn't seem to cover extremely common cases such as a thread allocating some memory then putting it into a queue to be consumed by other threads which then eventually free the memory, or any allocation lifetime that isn't simply the scope of the enclosing block.

  • lerno a day ago

    Well the latter is covered: you can make temp allocations out of order when having nested "@pool"s. There are examples in the blog post.

    It doesn't solve the case where lifetimes are indeterminate. But often they are well known. Consider "foo(bar())" where "bar()" returns an allocated object that we wish to free after "foo" has used it. In something like C it's easy to accidentally leak such a temporary object, and doing it properly means several lines of code, which might be bad if it's intended for an `if` statement or `while`.

gorjusborg 6 hours ago

Whether I like this feature or not depends on the low-level details of how @pool behaves and whether and how I can control it. I can't tell what @pool is going to do to my program, unlike when I'm using an arena (or another allocator) directly.

It seems that @pool is providing context to the allocator function(s), but is the memory in the pool contiguous? What is the initial capacity of the pool? What happens when the pool needs to grow?

I think I prefer the explicit allocator passing in Zig. I don't need to ask these questions because I'm choosing and passing the allocator myself.

  • Windeycastle 5 hours ago

    You can actually see the whole implementation of `@pool` inside the standard library (link: https://github.com/c3lang/c3c/blob/f082cac762939d9b43f7f7301...) if you're interested. You'll have to follow a few function calls, but the whole implementation is defined within the standard library and thus easily modified for your needs.

    I believe that the memory inside the pool is indeed contiguous, and you can ask for a custom initial capacity. The default capacity depends on `env::MEMORY_ENV` at compile-time, but for normal use it's 256kB (256 * 1024, in bytes).

    About the explicit allocator passing, that's also a theme throughout the C3 standard library. Functions that need to allocate memory take an `Allocator` as their first argument. For those kinds of functions there is always a `t<function>` variant which does not take an allocator but calls the regular function with the temporary allocator. It's a nice naming convention that really helps together with `@pool`. Examples are `String format(Allocator allocator, String fmt, args...)` and `String tformat(String fmt, args...)`.

    I hope that clears up some "concerns", and maybe you'll also find some joy in programming in C3 =)

cogman10 a day ago

I really do not see the benefit of this over C++ destructors and or facilities like `unique_ptr` and `shared_ptr`.

@pool appears to be exactly what C++ does automatically when objects fall out of scope.

  • sirwhinesalot a day ago

    The advantage is that the allocations are grouped: they're allocated in the same memory region (good memory locality) and freed in bulk. The tradeoff is needing to explicitly create these scopes and not being able to have custom deallocation logic like you can in a destructor.

    (This doesn't seem to have anything to do with borrow checking though, which is a memory safety feature not a memory management feature. Rust manages memory with affine types which is a completely separate thing, you could write an entire program without a single reference if you really wanted to)
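
    A tiny Rust sketch of that last point, with memory managed purely by moves (affine types) and no references in any function signature:

        // Ownership moves in and back out; the heap buffer is freed exactly
        // once, when `s` finally goes out of scope in main.
        fn exclaim(mut s: String) -> String {
            s.push('!');
            s
        }

        fn main() {
            let s = String::from("hello");
            let s = exclaim(s); // `s` is moved, not borrowed
            println!("{s}");
        } // `s` dropped here; its buffer is freed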

    • ameliaquining a day ago

      You can also do those things in an RAII language with an arena library. Is the complaint just that it's too syntactically verbose?

      • jdcasale a day ago

        I am also struggling to see the difference between this and language-level support for an arena allocator with RAII.

        • lerno a day ago

          You can certainly do it with RAII. However, what if a language lacks RAII because it prioritizes explicit code execution? Or simply wants to retain simple C semantics?

          Because that is the context. It is the constraint that C3, C, Odin, Zig, etc. maintain, where RAII is out of the question.

          • imtringued 7 hours ago

            If you want RAII to be explicit, then show an error if you fail to call the destructor. That's it.

            • lerno 2 hours ago

              Ok then I understand what you mean (I couldn't respond directly to your answer, maybe there is a limit to nesting in HN?).

              Let me respond in some more detail then, to at least answer why C3 doesn't have RAII: it tries to follow the principle that data is inert. That is, data doesn't have behaviour in itself, but is acted on by functions. (Even though C3 has methods, they are more a namespacing detail, allowing methods that derive data from the value or mutate it. They are not intended as organizational units.)

              To simplify what the goal is: data should be possible to create or destroy in bulk, without executing code for each individual element. If you create 10000 objects in a single allocation it should be as cheap to free (or create) as a single object.

              We can imagine things built into the type system, but then we will need these unsafe constructs where a type is converted from its "unsafe" creation to its "managed" type.

              I did look at various cheap ways of doing this through the type system, but it stopped resembling C and seemed to put the focus on resource management rather than the problem at hand.

              So that is why it's closer to C than Rust.

            • lerno 6 hours ago

              You lost me there I'm afraid.

              • ameliaquining 2 hours ago

                The idea is, you could have a language like Rust, but with linear rather than affine types. Such a language would have RAII-like idioms, but no implicit destructors; instead, it'd be a compile-time error to have a non-Copy local variable whose value is not always moved out of it before its scope ends (i.e., to write code that in Rust could include an implicit destructor call). So you would have explicit deallocation functions like in C, but unlike in C you could not have resource leaks from forgetting to call them, because the compiler would not let you.

                To the extent that you subscribe to a principle like "invisible function calls are never okay", this solves that without undermining Rust's safety story more broadly. I have no idea whether proponents of "better C" type languages have this as their core rationale; I personally don't see the appeal of that flavor of language design.
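
                A sketch of how close today's Rust can get to that (all names hypothetical): the explicit-destroy style is expressible, but the "forgot to call it" check can only fire at runtime, whereas the linear-types language described above would reject it at compile time:

                    // Hypothetical sketch: a resource that demands an explicit close().
                    struct Conn {
                        open: bool,
                    }

                    impl Conn {
                        fn connect() -> Self {
                            Conn { open: true }
                        }

                        // Explicit destruction: consumes the value and disarms the Drop check.
                        fn close(mut self) {
                            self.open = false;
                            // ... explicit teardown would go here ...
                        }
                    }

                    impl Drop for Conn {
                        fn drop(&mut self) {
                            // With linear types this would be unreachable; here it's
                            // only a debug-time runtime check.
                            debug_assert!(!self.open, "Conn dropped without close()");
                        }
                    }

                    fn main() {
                        let c = Conn::connect();
                        c.close(); // removing this line trips the debug assertion
                    }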

      • sirwhinesalot a day ago

        I think the point is that it is the blessed/default way of doing things, rather than opt-in, as in C++ or Rust.

        Rust doesn't even have a good allocator interface yet, so libraries like bumpalo have a parallel implementation of some stdlib types.

      • cogman10 a day ago

        It seems like exactly the same verbosity as what you'd do with a custom allocator.

        I think the only real grace is you don't have to pass around the allocator. But then you run into the issue where now anyone allocating needs to know about the lifetimes of the pool of the caller. If A -> B (pool) -> C and the returned allocation of C ends up in A, now you potentially have a pointer to freed memory.

        Sending around the explicit allocator would allow C to choose when it should allocate globally and when it should allocate on the pool sent in.

  • lerno a day ago

    The benefit is that it (a) works in a language without RAII, which C-like languages usually don't have, (b) involves no individual heap allocations and frees, and (c) groups allocations together.

    • littlestymaar a day ago

      > (a) works in a language without RAII

      I'm confused: how is it not exactly RAII?

      • lerno a day ago

        Well, there are no objects, no constructors and no destructors.

        • littlestymaar 14 hours ago

          There are no “objects” in the OOP sense, and as such you can say it has “no constructors and no destructors”, but Rust doesn't have “objects” either, and neither does it have constructors, yet there's zero doubt Rust is using RAII for memory management. (Rust does have destructors, though they generally aren't used for “memory management” but for broader “resource management”, like network sockets or file descriptors.)

          If you really don't want to call them objects, there are at least “items” that are “created” and then “deallocated” based on their lexical scope, which is exactly what Rust does too for its RAII.

          • sirwhinesalot 11 hours ago

            There are a few misconceptions in your comment that are very common, so I hope you'll allow me to clarify them and in the process hopefully explain why this sort of "parallel stack on the heap" approach is not the same as RAII at all.

            Let's ignore the horrifically named C++ feature for a moment and speak more generally of "ownership".

            Ownership (in the domain of programming languages) is a kind of relation between resources. If resource A is the sole owner of resource B, then relinquishing resource A also relinquishes resource B, but not vice versa. It gives a form of agency to resources: "Hey A, it's your job to take care of B, alright?"

            Note that there's nothing about "lexical scope" in the description above. In fact, tying ownership to lexical scope isn't necessary at all. For example, I could do `new std::vector<std::string>()` in C++, fill it up with a bunch of strings, and then call `delete` on it at my leisure. No lexical scope involved.
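
            The same point in Rust terms (a sketch, nothing C3-specific): a heap allocation whose release follows the owner, not the block it was created in.

              fn make() -> Box<Vec<String>> {
                  // Created in this scope...
                  Box::new(vec!["a".to_string(), "b".to_string()])
              }

              fn main() {
                  let v = make();   // ...moved out of `make`'s scope...
                  let moved = v;    // ...moved again...
                  drop(moved);      // ...and freed wherever the owner decides.
              }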

            Tying freeing to lexical scope is a convenience, a way to automate the freeing of resources, but it's not required by "ownership semantics" at all. In fact, the idea that freeing is tied to "lexical scope" is incorrect even when considering the automation provided by Rust and C++.

            Both Rust and C++ have the concept of "moving". If an object is moved, then its lifetime is now tied to a different scope (if it was returned for example) or to another resource.

            If you move a resource in C++, its destructor call at the end of scope becomes a no-op. If you move a resource in Rust, the compiler won't emit a destructor call at the end of scope at all. If the move was conditional (it only happens on a subset of branches), Rust will actually add a little boolean variable tracking this, and then check that variable before calling the destructor at the end of the scope.
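
            That drop flag is observable in a sketch like this (plain Rust; the flag is inserted by the compiler, not written by you):

              fn maybe_consume(cond: bool) {
                  let v = vec![1, 2, 3];
                  if cond {
                      drop(v); // moved out on this branch only...
                  }
                  // ...so a hidden boolean decides at runtime whether the
                  // Vec's destructor still needs to run at end of scope.
              }

              fn main() {
                  maybe_consume(true);
                  maybe_consume(false);
              }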

            That little boolean is actually only necessary because Rust wants to tie the freeing of the resource to the end of scope. Freeing at end of scope isn't necessary at all; Rust could free the resource right at the point of last use, since the language has excellent lifetime tracking it could use for this.

            The reason Rust places the call to the destructor at the end of the scope is to make unsafe less unsafe. If there's a raw pointer to that data and it might get deleted at any point then, well, that's very dangerous. If you know the free will only happen at end of scope, then you can work with that.

            So what do I want to get across with all of this? Ownership (RAII) is a way to tie resources together that is only incidentally related to lexical scopes. It gives resources agency by allowing them to manage other resources, via custom destructors.

            What C3 has (for memory management only, it uses defer for other resources) is just a way to have a parallel stack on the heap. There's no "ownership" in the same way the stack isn't really an "owner" in the same sense as other resources. There are no destructors, data is inert.

            All allocations that happen within that parallel stack are freed in bulk (not one by one, a single big free), because the only real "resource" is the parallel stack itself. You cannot "move" this parallel stack, and it is not managed by another parallel stack; there's no ownership.

            If you want the same functionality in Rust you need to use a crate like Bumpalo. In C++ you'd use std::pmr::monotonic_buffer_resource. RAII is strictly more powerful since you can model these parallel stacks in it, but it's also not equivalent since you can't have cycles without implementing this first.

            • littlestymaar 10 hours ago

              Thanks a lot for your excellent comment.

              • sirwhinesalot 8 hours ago

                If you're interested, the Austral language (not my creation, but I'm a fan of the author's work) has ownership semantics where freeing is always explicit and in the hands of the programmer, but it's still safe like Rust.

                Because both the destructor calls and the "borrowing" scopes have to be written explicitly in the language, it really helps in understanding how it all fits together.

  • vineethy a day ago

    My first thoughts also

huhtenberg 10 hours ago

> Enter The Temp Allocator

  alloca: https://man7.org/linux/man-pages/man3/alloca.3.html :)

  • lerno 6 hours ago

    Not possible to nest, and it's possible to run out of stack memory quickly. That said, C3 has `@stack_mem(1024; Allocator mem) { ... }`, which lets you allocate part of the stack and use that as an allocator with fallback.
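
    A rough Rust analogue of the idea, as a sketch (fixed stack buffer, bump-style carving, and no heap fallback, unlike C3's version):

      // Bump allocator over a fixed buffer on the caller's stack.
      struct StackArena<'b> {
          remaining: &'b mut [u8],
      }

      impl<'b> StackArena<'b> {
          fn new(buf: &'b mut [u8]) -> Self {
              StackArena { remaining: buf }
          }

          // Carve `n` bytes off the front; None when the buffer is exhausted.
          fn alloc(&mut self, n: usize) -> Option<&'b mut [u8]> {
              if self.remaining.len() < n {
                  return None;
              }
              let buf = std::mem::take(&mut self.remaining);
              let (head, tail) = buf.split_at_mut(n);
              self.remaining = tail;
              Some(head)
          }
      }

      fn main() {
          let mut buf = [0u8; 1024]; // the stack-allocated backing store
          let mut arena = StackArena::new(&mut buf);
          let a = arena.alloc(16).unwrap();
          a[0] = 1;
      } // "freeing" is just the stack frame ending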

gorjusborg 6 hours ago

Why add a default '@pool' for main?

Operating systems free resources upon process exit. That's one of the fundamental responsibilities of an OS. You can use malloc without free all you want if you are just going to exit the process.

  • Windeycastle 6 hours ago

    If I had to take a guess, it is for embedded applications.

    You could argue that people should then really do their own memory management, but in the end you might end up just recreating the temp allocator with `@pool` anyway. It's a neat feature. (Btw, `@pool` is just a macro from the standard library, not a fancy compiler builtin.)

Alifatisk 21 hours ago

Wow, this is such a fascinating concept. The syntax keeps reminding me of @autoreleasepool from ObjC. I'll definitely try this out on a small project soon.

Also, since D usually implements all kinds of concepts and mechanisms from other languages, I would love to see this implemented there as well! D already has a borrow checker now, so why not add this too? It would be very cool to play with.

  • DmitryOlshansky 10 hours ago

    I think there was a discussion back in 2010-2011 about adding SuperStack, a thread-local buffer to be used for arena-style allocation. While the standard library won't have it, I see no problem implementing one yourself.

ac130kz a day ago

The post doesn't even mention how this works or improves DX in a multi-threaded environment, and that is the use case borrow checkers specifically target.

timeon a day ago

I don't think technical writing needs this kind of rage-bait. They could have presented just the features of the language. Borrow-checker is clearly unrelated here.

ltbarcly3 a day ago

This literally doesn't solve any actual problem. If all memory allocation patterns were lexical, this would be the easiest and most obvious thing to do. That is why stack allocation is the default and works exactly like this.

  • lerno a day ago

    Imagine we have a function "foo" which returns an allocated object Bar; we want to pass this to a function "bar" and then have it released.

    Now we usually cannot do "bar(foo())" because it then leaks. We could allocate a buffer on the stack, and then do "bar(foo(&buffer))", but this relies on us safely knowing that the buffer does not overflow.

    If the language has RAII, we can use that to return an object which will release itself after going out of scope e.g. std::unique_ptr, but this relies on said RAII and preferably move semantics.

    If the context is RAII-less semantics, this is not trivial to solve. Languages that run into this are C3, Zig, C, and Odin.

    With the temp allocator solution, we can write `bar(foo())` if `foo` always allocates a temp variable, or `bar(foo(tmem))` if it takes an allocator.
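
    A Rust sketch of that pattern, with bumpalo standing in for the temp allocator (`Bar`, `foo`, and `bar` are the hypothetical names from above):

      use bumpalo::Bump;

      struct Bar { n: u64 }

      fn foo(tmem: &Bump) -> &Bar {
          tmem.alloc(Bar { n: 42 }) // no leak: the arena owns it
      }

      fn bar(b: &Bar) {
          println!("{}", b.n);
      }

      fn main() {
          let tmem = Bump::new(); // plays the role of the temp allocator
          bar(foo(&tmem));
      } // everything `foo` allocated is released here in one go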

    • ltbarcly3 a day ago

      Wait, you are implying this is some kind of algorithmic 'solution' to a long-standing problem. It's not. This is notable because it's an implementation that works in C++. The 'concept' of tracking allocations in a lexical way is trivially obvious.

      • lerno 18 hours ago

        It is a problem for manual memory management. I am not quite sure where you're coming from. In modern C++ this is managed by RAII, but if you look at C instead, there is no solution that doesn't involve additional code.

        You see the same in "C alternatives" such as Zig.

        But obviously if you're coming from a language which already has some form of automatic memory management it won't seem like a big thing.

        But in the context of manual memory management it is solving a very real problem.

  • amelius a day ago

    Well, it solves the problem of destructors/deallocation wasting a lot of time.

    • imtringued 7 hours ago

      You still have to call the destructor.

      • lerno 6 hours ago

        What destructor?

amelius a day ago

Smart compilers already do this with escape analysis.

  • lerno a day ago

    No, I don't think they do.

    Given a function `foo` that allocates an object `o` and returns it to the calling scope, how would you do "escape analysis" to determine that it should be freed and HOW it should be freed? What is the mechanism if you do not have RAII, ARC, or GC?

    • throwawaymaths a day ago

      You track how the variable is used in the compilation unit, which should give a finite set of possibilities?

      • lerno a day ago

        This is about tracking allocated memory, which is different. I know V claimed it could solve this with static analysis, but in practice it didn't work, and V had to fall back to a GC.

        This is true for all similar schemes: they handle simple-to-track allocations cheaply and then have to fall back on something generic.

        But even that usually assumes the language somehow has a built-in notion of memory allocation and freeing.

        • dnautics a day ago

          It should be possible in Zig! Here's a proof of concept; I would guess that if V failed, it was because they tried to do it at the language level. If you analyse intermediate representations, the work is much, much easier.

          https://youtu.be/ZY_Z-aGbYm8?feature=shared

bbminner a day ago

Ok, now give me an example of a resource manager (e.g. in a game) that has methods for loading resources into memory and also for releasing them. All of a sudden, if a system needs to give away pointer access to its buffers, things become more complicated and arena allocators are not enough.

  • Calavar a day ago

    For that scenario you can use a pool allocator backed by a fixed size allocation from an arena. That gives you the flexibility to allocate and free resources on the fly, but with a fixed upper limit to the lifetime (e.g. the lifetime of the level or chunk). Once you're ready to unload a level or a chunk, you can rewind the arena, which is a very cheap operation (as opposed to calling free in a loop, which can be expensive if the free implementation tries to defragment the freelist)
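
    A sketch of that pool shape in Rust (a `Vec` stands in for the fixed-size slab you'd carve out of the level arena; rewinding the arena discards the whole pool at once):

      struct Pool<T> {
          slots: Vec<Option<T>>,
          free: Vec<usize>, // indices of vacant slots
      }

      impl<T> Pool<T> {
          fn new(capacity: usize) -> Self {
              Pool {
                  slots: (0..capacity).map(|_| None).collect(),
                  free: (0..capacity).rev().collect(),
              }
          }

          // O(1) allocate: pop a vacant slot, or fail when the pool is full.
          fn insert(&mut self, value: T) -> Option<usize> {
              let i = self.free.pop()?;
              self.slots[i] = Some(value);
              Some(i)
          }

          // O(1) free: vacate the slot and push it back on the free list.
          fn remove(&mut self, i: usize) -> Option<T> {
              let v = self.slots[i].take()?;
              self.free.push(i);
              Some(v)
          }
      }

      fn main() {
          let mut textures: Pool<String> = Pool::new(2);
          let id = textures.insert("grass.png".to_string()).unwrap();
          textures.remove(id); // O(1), the slot goes back on the free list
      }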

  • lerno a day ago

    I am not sure how this would be a problem. Certainly the resource manager should manage the memory itself in some manner.

    It has very little to do with trying to manage temporary memory lifetimes.

codedokode a day ago

I had never heard of this language, so I quickly looked through the docs, and here is what I didn't like:

- integers use names like "short" instead of names with numbers like "i16"

- they use printf-like formatting functions instead of Python's f-strings

- it seems that there is no exception in case of integer overflow or floating point errors

- it seems that there is no pointer lifetime checking

- functions are public by default

- "if" statement still requires parenthesis around boolean expression

Also I don't think scopes solve the problem when you need to add and delete objects, for example, in response to requests.

caim a day ago

Funny thing: malloc also behaves like an arena. When your program starts, malloc reserves a lot of memory, and when your program ends, all of this memory is released. A memory leak ends up not being a memory-safety problem.

So you will still need a borrow checker, for the same reasons Rust needs one and C/C++ needed one.
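
In Rust terms, the gap the borrow checker closes even when arenas handle the freeing (a sketch using bumpalo):

  use bumpalo::Bump;

  fn main() {
      let mut arena = Bump::new();
      let s = arena.alloc_str("hello");
      // arena.reset(); // rejected: cannot reset while `s` still borrows
      //                // the arena; in C this would be a dangling pointer
      println!("{s}");
      arena.reset(); // fine once `s` is no longer used
  }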

turnsout a day ago

Is this different from NSAutoreleasePool, which has been around for over 30 years?

  • lerno a day ago

    NSAutoreleasePool keeps a list of autoreleased objects, that are given a "release" message when the pool goes out of scope.

    `@pool` flushes the temp allocator and all allocations made by the temp allocator are freed when the pool goes out of scope.

    There are similarities, but NSAutoreleasePool is for refcounting and an object released by the autoreleasepool might have other objects retaining it, so it's not necessarily freed.

  • sirwhinesalot a day ago

    Implementation-wise yes, very different, idea-wise not really. The author of C3 is a fan of Objective-C.

knorker 5 hours ago

This does not seem like it solves any hard problem. It's just a 10x better alloca() with allocator integration?

  • lerno 2 hours ago

    Alloca would not allow you to pass data from the current scope up to a parent scope.

Windeycastle 3 days ago

Nice read, although a small section on how it's implemented exactly would've been nice.