Last week, <a href="https://lobste.rs/~icefox" rel="ugc">icefox</a> and I exchanged some messages and here’s the result! (<a href="https://lobste.rs/~hwayne" rel="ugc">Hwayne</a> and <a href="https://lobste.rs/~matklad" rel="ugc">matklad</a> are next.) <hr> <blockquote> Introduce yourself a bit! (Work? Common threads?) </blockquote> Sooooo the common threads in my life have mostly been “I like building stuff” and “I make bad decisions that inexplicably work out well in the end”.  I’ve been a gamer and computer nerd since forever, growing up coaxing DOS into being able to run games that always required juuuuust a little more base memory than it had.  So of course I went to university for computer science.  I failed out horribly the first year, took a year off to rethink my life, went back and got a degree in geology, worked in IT, went to grad school and got a Master’s degree in geology, worked in the field drilling for oil and gas for a couple years, hated it and went back to computers, spent a couple years at a research lab at CMU, spent a while in software contracting, ended up doing software integration and flight testing for drones for five years, loved it but got sick of the stress, and now work in IT again helping my drone-inclined coworkers actually make reproducible systems and stuff.  It’s been a fun roller coaster. Throughout it all I’ve always been programming stuff, and always wanted to make languages, operating systems and video games.  I love the idea of building a world up from nothing, and languages, OS’s and games are all programs that let you do that.  It turns out if you do something as a hobby with a fair amount of dedication for most of your life, you end up pretty good at it I guess? Really my only skills are analysis and stubbornness.  If you hand me a problem then I will take it apart into pieces, and then bludgeon my brain against each of those pieces until it’s solved. 
<blockquote> How does your “small Rust” (<a href="https://hg.sr.ht/~icefox/garnet" rel="ugc">Garnet</a>) compare to an ML/OCaml? The <a href="https://conservatory.scheme.org/schemers/Documents/Standards/" rel="ugc">Scheme standards</a> mention that they’re exploring how to remove features to add power, are you doing similar? </blockquote> IMO it’s pretty accurate to consider Rust as a descendant of OCaml, and Garnet a descendant of Rust.  So Garnet is another step down a slightly odd branch on the ML family tree.  When I think of ML I think “garbage collected functional language with strong static types and uniform data representation,” and Garnet is not any of that except for “strong static types” and maybe “functional language”.  But it also tends to support my long-held suspicion that the right ML would really make a very good systems language. As for removing features…  At its heart I want Garnet to have three concepts it builds most other concepts out of: closures, structures and generic types.  It turns out that when you try to build something like traits out of closures, structures and generic types then you accidentally get ML’s module system anyway, so Garnet has actually grown closer to ML than Rust purely via convergent evolution.  And it’s also an opportunity to make ML’s modules better, ‘cause frankly they never made much sense to me in OCaml and there’s <a href="https://mpi-sws.org/~rossberg/1ml/" rel="ugc">some good research out there</a> waiting to be applied to the real world. Annoyingly though, those three concepts alone don’t seem to be enough to make a really good low-level language, because you will want more powerful and convenient ways to express things like binary data layout, linear types, spooky operations like stack unwinding, etc.  
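As a rough illustration of that convergence (a hypothetical sketch in today’s Rust, not actual Garnet code): if you represent a “trait” as a plain generic structure whose fields are functions, then an implementation is just a value, and generic code taking one as an argument starts to look exactly like an ML functor taking a structure.

```rust
// Hypothetical sketch: a "trait" as an ordinary generic struct of functions.
// A value of `Show<T>` plays the role of an ML structure matching a signature.
struct Show<T> {
    show: fn(&T) -> String,
}

fn show_i32(x: &i32) -> String {
    format!("i32: {x}")
}

// The "module" implementing Show for i32 is just a value.
const SHOW_I32: Show<i32> = Show { show: show_i32 };

// Generic code takes the module explicitly, the way an ML functor takes a
// structure (Rust's trait system passes the equivalent dictionary implicitly).
fn show_all<T>(m: &Show<T>, items: &[T]) -> Vec<String> {
    items.iter().map(|x| (m.show)(x)).collect()
}

fn main() {
    let out = show_all(&SHOW_I32, &[1, 2]);
    assert_eq!(out, vec!["i32: 1", "i32: 2"]);
}
```

The difference between this and Rust’s traits is mostly who supplies the `Show` value and when, which is exactly the design space ML-style modules explore.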
You get into the problem of being minimalistic, which is that you can do anything you want but none of it is very convenient: past a certain point minimalism requires unsafety or more runtime abstraction, and I’m reluctant to add either.  If everything is represented in a Lisp-y fashion through pointers, functions and RTTI then life as a programmer is very simple and minimal, and in principle you can write the sort of runtime-free code that you get with C and Rust… but doing it needs a really deep understanding of a quite complicated set of assumptions the compiler and runtime make, so you can figure out how to subvert those assumptions safely.  So these days I am thinking of Garnet in terms of a “core” language that is just those three features and the things that directly support them: borrow checking, namespaces, destructors, etc.  Then I want a small set of “bonus” features that make life a lot easier for the things I want Garnet to be particularly good at, which currently is just two things: Erlang’s bitwise pattern-matching, and the vaguely effect-like constraints on data and functions that have the working name of “properties”.  The core language is pretty fixed and I know it’ll be good, and we’ll see if the bonus features end up mattering. Oh, and a macro system.  I haven’t really thought about Garnet’s macro system at all, I’ve just assumed I’m going to need one sooner or later.  Macros are just too good at solving the small annoying problems a type system has trouble covering, such as <code>println!()</code> formatting and funky compiler directives like <code>unreachable!()</code>.  Though I suppose that something like Zig’s builtin functions would be another valid way of doing that.  We’ll see. <blockquote> Your idea of compiler as a library is cool! I’m curious how Garnet and/or ML compares to Lisp for making a compiler etc. (BTW, have you seen how Plan9 <a href="https://lobste.rs/s/ab86gm/r9_plan_9_rust#c_3xham6" rel="ugc">sped up</a> C compilation?) 
</blockquote> (I have seen that link but I don’t know what to make of it; C’s includes are so broken by design that revamping them to be better and faster is just part of the price of admission in making a new language.  It’s not even hard.  Every single language made since the 1980’s has used some minor variation of Modula-2’s module system, and give or take some tedium and design warts they all work just fine.) I feel like ML and Lisp are two ends of a spectrum for “how do I want to write a compiler?”  The ML end is very rigid and strong and static and formal and goes through very concrete steps to turn a source language into a target language, and the Lisp end is very fluid and flexible and generally compiles a language just by turning it into Lisp, or Lisp into the language, or both.  The difference is like making a sculpture by carving it out of stone vs. building it up out of clay.  You can get the same result out of either of them, but they tend to favor different approaches. Frankly I haven’t actually thought much about “compiler as a library” in aaaaages, even though I suppose it’s still in Garnet’s README file.  The problem really comes down to linking and execution model.  In a Lisp runtime, it’s as though everything is dynamically linked.  Every function is treated as a function pointer, which means any function can be swapped out by the runtime if they are recompiled, and taking a running program and slapping new code into it is very easy.  You can treat the world as if it were Python, where every function reference is a dynamic hashtable lookup, even if the compiler is smarter than that in practice.  Having your compiler operate as a library is then very easy: you feed it source code, and it gives you compiled functions back and it can essentially re-link them into your program without anything else needing to care about them. 
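To make that concrete, here is a toy sketch (hypothetical names, single-threaded, nothing Garnet- or Lisp-specific) of the “everything is a function pointer” model: because every call goes through a name-to-pointer table, “re-linking” a freshly compiled function is one table update, and no caller has to care.

```rust
use std::collections::HashMap;

// Toy "Lisp-style runtime": every function is reached through a
// name -> function-pointer table instead of a fixed address.
struct Runtime {
    table: HashMap<&'static str, fn(i32) -> i32>,
}

impl Runtime {
    // Every call site is effectively a dynamic lookup.
    fn call(&self, name: &str, arg: i32) -> i32 {
        (self.table[name])(arg)
    }
    // "Re-linking" a recompiled function is a single pointer swap.
    fn relink(&mut self, name: &'static str, f: fn(i32) -> i32) {
        self.table.insert(name, f);
    }
}

fn double_v1(x: i32) -> i32 { x * 2 }
fn double_v2(x: i32) -> i32 { x + x } // the "recompiled" version

fn main() {
    let mut rt = Runtime { table: HashMap::new() };
    rt.relink("double", double_v1);
    assert_eq!(rt.call("double", 21), 42);
    rt.relink("double", double_v2); // hot-swap; callers are untouched
    assert_eq!(rt.call("double", 21), 42);
}
```

The cost falls out directly: every `call` is a table lookup plus an indirect jump, and nothing can be inlined across it.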
The downside of this is speed: every function call needs to be able to be treated as if it were a function pointer, and the runtime needs a lot of information about the running program so that it can modify and re-link it.  If you want fast Rust/C-like code, you instead get the Unix process model.  The code is mostly immutable, heavily inlined, and you don’t really know much about it apart from a mapping of names to addresses.  This lets you optimize code far more easily and in much larger chunks, but it’s really difficult to modify.  Instead of fixing up a few function pointers you have to stop the program, rewrite a bunch of code in large chunks, and figure out where to restart it and how to change the state it needs.  A compiler as a library needs to do much more work and is still less powerful.  Dynamic linking in this model is an awkward and error-prone process with lots of limitations. In the end I think the C linking model is obsolete and we need something better, but I have only vague ideas about what that would look like.  Zig is doing a lot of neat work exploring this space, like doing incremental builds by making every function called through a function pointer and updating them on the fly, then inlining them into oblivion in optimized builds.  I want Garnet to be able to express something sort of like that through its interface types, where whether an interface is a literal vtable of function pointers, and whether it can be used to “re-link on the fly”, is something the consumer of the API can decide rather than something the producer has to support. No idea how the details will shake out yet. <blockquote> What are the core problems/operations/concerns in systems? How can a language/design best present those (meaningful decisions)? (Or what feature set?) Big question(s) I know. There are many approaches e.g. 
Alan Kay with <a href="https://www.reddit.com/r/lisp/comments/acid7a/how_innovative_is_clojure_as_a_lisp_dialect_from/ed93ay3/" rel="ugc">DSL</a>’s or Van Roy on <a href="http://www.info.ucl.ac.be/people/PVR/paradigmsDIAGRAMeng.pdf" rel="ugc">features</a>: but what thoughts have you found exploring, what led to your decisions so far? What are you most and least sure about? </blockquote> Ooh that’s broad.  The most important thing I can say is that a good solution is always always always context-dependent.  (Working on drones and robots is fun ‘cause it’s a context very unlike most other software; you get unusual design constraints.)  I spent a long time in life looking for The Perfect Programming Language, or The Perfect Operating System, or The Perfect Whatever, and only very slowly realized that they can never exist.  Perfection needs to take the nature of the problem into account.  Lisp is not the perfect language for writing device drivers, Rust is not the perfect language for exploring piles of loosely-organized data. So I like the concept of design patterns, as descriptive collections of “how to solve problem X” that are broadly useful but also mutable.  The Portland Pattern Repository <a href="https://c2.com/" rel="ugc">Wiki</a> taught me a huge amount about software development, but if I had to choose one design pattern as the most important, it might be “alternating hard and soft layers”.  By programming computers, we’re really building machines, and machines have to exist in the same world that humans do.  So you need hard bits that will metaphorically stamp your metal or extrude your wires, but you also need soft bits that are comfy for a human to hold, control and manipulate.  A good solution is context dependent, but also context and goals will always shift in a changing world, so it’s useful to be able to pull apart a machine into pieces and rearrange them in ways you don’t expect from the start. 
Really I feel like the biggest problems in software are cultural right now. Researchers need to accept that tooling matters.  In one of those links, someone says “From a pessimistic view, what innovations have any dynamic languages had over the products of the 60’s-80’s? Python, Ruby are the popular ones that come to mind. Outside of aesthetic and the opinion of the designers, it’s hard for me to quantify the inherent innovation, but the broad adoption is what put them on the map.”  Okay then, try building and running any software ever built in the 1980’s or 1990’s.  Try modifying it and adding or removing subsystems.  Try maintaining it continually for a few years.  Add complicated features such as video decoding or network protocols.  Have it talk to a database.  These are all things where life is far, far better than it was in 2000 when I started programming, and it’s entirely because people have spent 25 years making better tools for these things.  I have a friend who works on IBM systems that actually are the products of the 60’s-80’s, and the books people there learn programming from were written during those times.  Half of her coworkers literally have never heard of regular expressions.  What innovation has any dynamic language had over the products of the 60’s-80’s?  Fire up your PDP-11 emulator and start a CLU interpreter and write the equivalent of Python’s <code>import re</code> or <code>import urllib3</code>.  Go ahead, I’ll wait. Meanwhile, engineers need to accept that proofs are practical.  Rust is helping with this, by giving everyone a little proof system that’s more powerful than anything they’ve encountered before but still reasonably easy to use, and letting them apply it to the really hard and really important problem of compile-time memory safety.  Before that existed the answer was always garbage collection, i.e., “enforce your proof at runtime”.  
So I feel like this wall has been cracked a little and people are now starting to seriously ask “what other useful stuff can we prove?”  Nobody really knows yet, and by its nature science tends to give unhelpful answers such as “nothing that matters” or “anything if you try hard enough”.  It’s up to the engineers to come up with innovative little corners where “possible” and “practical” overlap, like Rust’s borrow checker, and to do that they’re gonna need to start learning about proofs.  Software engineering needs to step up to be on par with other fields of engineering and learn how to make software that actually works, within some concrete specification, instead of software that appears to work until something disturbs it. We have so much infrastructure built by now that rewriting the world in one fell swoop is not going to happen.  It also should not.  People need to think more about how to phase out things gracefully, even if it’s hard.  This is unpopular because it requires actually understanding the problems that the existing solutions solve.  Much easier to pretend those problems don’t matter and the last generation of people who worked on them just had a Skill Issue.  But you’re not going to be able to replace large and complicated software with something new unless you actually care about that liminal in-between state where both systems need to cooperate, humans need to be re-trained, and workflows need to be modified. <blockquote> What attracts you to systems languages in particular? Do you have thoughts/hopes for the system then implemented, e.g. inspired by minimal systems like 9Front, STEPS, SPIN etc.? </blockquote> Like I said, I like being able to write worlds from nothing.  🙂  Systems languages are an important part of this, because they let you lay the foundations of your world by talking to hardware.  (I have to very palpably prevent myself from getting interested in FPGA’s, ‘cause then I would never get anything done.)  
Ironically, like many others with this tendency, I am so <a href="https://en.wiktionary.org/wiki/not_invented_here#English" rel="ugc">NIH</a>-y about it that I have not gotten very deep into anything like 9front/plan9, Oberon, Inferno, etc.  They’re fun to read about but I very seldom sit down and start hacking on them.  I also haven’t thought too hard about further steps; writing an OS or something is fun, but Garnet is important, so I’ve really chopped down my ambitions outside of it. One big reason I’m working on Garnet is that it seemed like a very strong conjunction of “right place and right time”, and that doesn’t happen very often.  Rust has finally broken the 25-year monopoly that C/C++ had on everyone’s brains, but it also has some very real downsides that just aren’t in scope for it to fix.  There are very valid reasons to write C code instead of Rust in 2025, and those reasons are pretty difficult for Rust to tackle because of very deep design decisions it has made.  So, having a “little brother” to Rust seems like an important tool to make.  Even if it never goes anywhere, it can explore design space that is useful to the next languages, the same way that Cyclone, Alef and MLkit have. If you really want to make an operating system that is both useful and novel, I see two good approaches right now: take an existing Linux or BSD kernel and write a novel userland atop it, or take an existing L4 kernel and turn it into an actually usable capability-based userland.  (Or maybe work on Fuchsia, which might have a real stab at making a fresh OS out of whole cloth, but Google cut Fuchsia’s team to the bone during the Great Layoff Of 2023-2024 and it’s Google so I just expect it to get axed sooner or later.)  For example we have <a href="https://nerves-project.org/" rel="ugc">Nerves</a> and <a href="https://en.wikipedia.org/wiki/Genode" rel="ugc">Genode</a> doing each of these a little bit, but there’s plenty of space for exploration out there.  
We have good(ish) kernels that are sorely underused at the moment, and honestly it’s a lot easier to write a good userland atop a bad kernel than it is to actually write a good kernel. (Oh, or just write it in Webassembly.  There’s another wide-open opportunity for you to re-invent the world and make something useful in the process.  Get involved in WASI and figure out a killer app for it, ’cause they desperately need one.) Turns out that userlands are boring to write though, at least for most people.  I used to be one of the “Lisp Machines were the peak of existence and everything has been downhill from there” crowd, until I discovered that, you know what?  It’s trivial to make a Linux kernel just boot directly to Emacs, SBCL, or whatever else you want.  There’s your LispM, go ahead and start writing.  I know it’s not a real LispM, but everything’s gonna need some layers of spackle between it and reality, so just suck it up and start writing anyway.  <a href="https://lfe.io/" rel="ugc">Erlang</a> and <a href="https://fennel-lang.org/" rel="ugc">Lua</a> also make very convincing Lisp runtimes under the hood, more than good enough for a hobby project or special-purpose standalone tool.  But nobody ever writes <code>init=/boot/sbcl</code> into their Linux command line and starts hacking, ‘cause it’s a lot more fun to write blog posts about the Good Old Days than it is to write basic-but-necessary crap like logging frameworks and network configuration managers.  Which is a shame, ‘cause I think you could do a lot of good stuff with a fresh userland.  We’re at a time where it really doesn’t matter how an operating system works to most applications as long as you can treat it as though it were a Docker container running a web server, so there’s very little stopping some enterprising people from pulling the rug out from under the POSIX ecosystem.  On the internet nobody knows you’re actually a <del>dog</del> Lisp Machine. 
<blockquote> What is your workflow like? (Choosing projects, tasks for the day, keeping track of notes, communication etc.) </blockquote> oh gods.  I’ve started keeping a “lab notebook” recently when working on Garnet and it’s really made me realize just how dogshit my workflow is right now.  But whatever I start the day off doing tends to be what my brain gravitates back towards for the rest of the day, so I try to settle into Whatever I Should Work On early in the day.  In reality, life has been so interruption-prone and full of random problems and stress for the last six months that I really have barely managed to do anything specific for three days in a row. So just this month I got a kitten.  That’ll help me focus and not be distracted, right? In theory, I have an issue tracker for Garnet or a set of design notes for whatever other project I’m working on.  Usually I have a long-running subproject of some kind and know what the next 2-3 steps for it are, and each day I write those steps down somewhere labelled “what to do today”, and then I get 0-2 of them done that day and come up with a couple more in the process.  Things that are larger or would interrupt that chain get issues made on the issue tracker, ‘cause otherwise I’m going to forget about them.  Discussions go on the issue tracker, decisions that come out of those discussions get written down somewhere (if I’m lucky), and once in a blue moon I actually get to look at the state of a project and ask myself “what should I do next?” In terms of “what to work on in Garnet”, I’ve been on a pretty straightforward path for a long time.  The Big Barrier is making type checking good enough to implement and compile the interface system, and so I work on whatever the next step is along the path.  Every once in a while I hit a checkpoint in that process and take a step back to breathe and refactor. Communication happens mostly on Discord.  
The Programming Language Theory, Development and Implementation Discord server is a goldmine of weirdos with odd opinions and skill sets, and I’ve been there long enough that I have a circle of friends there to ramble about weird design concerns with. <blockquote> How do you approach a new project/problem? What steps lie between a problem and a working solution? </blockquote> The key fact here is that I’m a little bipolar, which really screws with getting anything useful done.  Every two weeks or so I go from “life is awesome and I want to make stuff” to “this is stupid, why did I do this, this code I wrote sucks and I just don’t want to bother” and back. Medication and therapy have helped a lot, but before those I went through a good 15 years of my life starting fresh on everything I ever wanted to do at least once a month.  So it’s very very useful to be able to tell myself “yes it would be good to reinvent how .tar archives work, translate Modula-2 into Lisp, write a really good Scheme implementation for Rust, or fix email, but all of those are secondary and I should just work on <code>$CHOSEN_PROJECT</code> instead”.  When I’m down and feeling useless then it’s really helpful to know that this is happening: life doesn’t suck, the code I wrote doesn’t suck, everything’s pretty much like it was two days ago when everything was great.  It’s just my brain playing tricks on me, and it’s okay to relax and take it easy when that happens instead of feeling like I need to force myself into being productive anyway. Like I said, I’ve really trimmed down on side-projects a lot to focus on Garnet.  It’s painful but worth it.  I really work best focusing on one project at a time.  Before Garnet it was <a href="https://ggez.rs/" rel="ugc">ggez</a>, and before that it was an attempt at making a <a href="https://noctis.itch.io/rocket-kiwi" rel="ugc">commercial video game</a>.  
The key to all of those was feeling that it was both the right place and the right time for that project to happen. Results have been mixed.  🙂 That said, you gotta take breaks as well.  Burnout is no fun.  So the first thing I ask myself with a new project is “how much effort do I really want to put into this?” and “how important is this to me?”  ggez was important to me; it needed to exist.  Garnet is important to me; it needs to exist.  I have a toy OS project that plays with making some bad decisions, which is fun but really not important.  So sometimes I think of something fun to do right at the start of the manic phase of my cycle and it’s something small enough or silly enough that I won’t feel bad playing with it for a couple weeks and then putting it down forever, and I just say “you know, let’s just do it.”  Knowing your scope, and being able to limit your scope, is also essential.  If you have no boundaries to an idea then there’s really nothing to explore, and you just wander off forever. Game jams are really good at this, if they happen at the right time.  I need to do more game jams. <blockquote> Bonus topic: Datalog/logic programming! </blockquote> When I was in my ill-fated undergrad language class I learned Prolog and, like almost every other undergrad who learns Prolog, said “this is neat but how do I do anything real with it?”  I couldn’t figure out the answer and so basically ignored logic languages for a long time.  Then I started noticing people talking about Datalog and <a href="https://github.com/vmware-archive/differential-datalog" rel="ugc">Differential Datalog</a> and it went back into my mental “check this out sometime” bin.  Eventually Datalog became one of the side-projects I spent a couple weeks playing with, and I tried seeing if I could write a type checker in Datalog ’cause it seemed a lot like the kind of follow-the-chain-of-rules reasoning that HM type inference does.  
Turns out the answer was “<a href="https://hg.sr.ht/~icefox/pancake" rel="ugc">yes, if you work at it a bit</a>” and so I started looking for other problems that fit Datalog or other logic languages. Turns out that lots of awkward little DSL’s in life are actually shitty logic languages, and would be far better served by just being Datalog or something in the first place.  So I think the answer to “how do I do anything real with a logic language?” is “embed it in something else as a way to answer queries about stuff.” My favorite example right now is Gitlab’s CI scripts.  They are made of complicated, ad-hoc piles of rules for checking, setting, substituting, and generally mangling all sorts of variables for all sorts of wonky semi-structured symbolic data: CPU architectures, CI runner tags, datetimes, git branches etc.  And they are rife with ugly special cases: there are complicated phasing rules about what data is available when, you can make a job run on a CI machine that has a particular tag but not on one that doesn’t have a particular tag, some variables can be overridden but others can’t, stuff like that. And all of it really is just a bad job at trying to express <code>do_i_run(This_Job) :- commit_branch("main"), cpu_arch(arm64), phase_of_moon_is(full).</code> Other things logic languages would be good at: finding paths through loosely-structured chains or webs of information, such as cooking recipes or Factorio progression paths.  Kubernetes or Ansible are basically examples of this: you have a current state and a desired end state, and need to generate a list of steps to move from one to the other.  You could use logic langs for Discord/IRC bots as well, where you have a bunch of state (previous messages) and want to answer a question about it (is this user being abusive?)  Type checkers.  Compiler optimization phasing.  Decision trees in video game AI.  Tax law.  More specialized data-science-y things like supply chain analysis.  
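The embedded-query idea is small enough to hand-roll: here is a sketch in Rust (hypothetical CI-flavored predicate names; conjunction-of-ground-facts only, none of real Datalog’s variables, joins or recursion) of evaluating that kind of rule against a fact database inside a host program.

```rust
use std::collections::HashSet;

// A fact is a (predicate, argument) pair: ("commit_branch", "main"), etc.
type Fact = (&'static str, &'static str);

// A rule body is a conjunction of goals; the rule fires iff every goal
// is a known fact.  Real Datalog adds variables, joins and recursion.
fn query(facts: &HashSet<Fact>, body: &[Fact]) -> bool {
    body.iter().all(|goal| facts.contains(goal))
}

// Hypothetical facts a CI system might assert about the current job.
fn ci_facts() -> HashSet<Fact> {
    [
        ("commit_branch", "main"),
        ("cpu_arch", "arm64"),
        ("runner_tag", "gpu"),
    ]
    .into_iter()
    .collect()
}

fn main() {
    // do_i_run :- commit_branch("main"), cpu_arch("arm64").
    let do_i_run = [("commit_branch", "main"), ("cpu_arch", "arm64")];
    assert!(query(&ci_facts(), &do_i_run));
    // Wrong architecture: the rule just fails, no special case needed.
    assert!(!query(&ci_facts(), &[("cpu_arch", "riscv64")]));
}
```

The point isn’t this toy, it’s that each ugly special case in the ad-hoc DSL becomes one more goal in a rule body instead of a new feature.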
I think this is one of the areas where the researchers/math people and the engineers/tool people really need to talk to each other more, ’cause the implementations for most logic languages really suck to use and are difficult to embed in larger systems, and those larger systems would be far better served by a logic solver with some actual principles behind it than by whatever cobbled-together solution people have invented from scratch. Plus, logic languages are basically databases, and modern databases deserve something far better than SQL.  SQL is legendarily useful and powerful, but hoo boy is it absolute dogshit to try to apply any kind of software engineering to it.  Modularity?  Abstraction?  What are those?  Now get to work implementing tagged unions from scratch via joins and indexes for the 10,000th time!  I have hopes for Datomic shaking up the world here, but haven’t had time to dig into it for anything concrete.  I don’t actually use databases in practice very often, but it’s always interesting when I do.