Hi <a href="https://lobste.rs/~susam" rel="ugc">@susam</a>, I primarily know you as a Lisper, what other things do you use? Yes, I use Lisp extensively for my personal projects, and much of what I do in my leisure is built on it. I ran a <a href="https://github.com/susam/mathb" rel="ugc">mathematics pastebin</a> for close to thirteen years. It was quite popular on some IRC channels. The pastebin was written in Common Lisp. My <a href="https://susam.net/" rel="ugc">personal website</a> and blog are generated using a tiny static site generator written in Common Lisp. Over the years I have built several other personal tools in it as well. I am an active Emacs Lisp programmer too. Many of my software tools are in fact Emacs Lisp functions that I invoke with convenient key sequences. They help me automate repetitive tasks as well as improve my text editing and task management experience. I use plenty of other tools as well. In my early adulthood, I spent many years working with C, C++, Java, and PHP. My first substantial open source contribution was to the Apache Nutch project, which was in Java, and one of my early original open source projects was Uncap, a C program to remap keys on Windows. These days I use a lot of Python, along with some Go and Rust, but Lisp remains important to my personal work. I also enjoy writing small standalone tools directly in HTML and JavaScript, often with all the code in a single file in a readable, unminified form. How did you first discover computing, then end up with Lisp, Emacs and mathematics? As I mentioned earlier while discussing what makes computing fun for me, I got introduced to computers through the Logo programming language as a kid. Using simple arithmetic, geometry, logic, and code to manipulate a two-dimensional world had a lasting effect on me. I still vividly remember how I ended up with Lisp. It was at an airport during a long layover in 2007. 
I wanted to use the time to learn something, so I booted my laptop running Debian GNU/Linux 4.0 (Etch) and then started <a href="https://www.gnu.org/software/clisp/" rel="ugc">GNU CLISP</a> 2.41. In those days, Wi-Fi in airports was uncommon. Smartphones and mobile data were also uncommon. So it was fortunate that I had CLISP already installed on my system and my laptop was ready for learning Common Lisp. I had it installed because I had wanted to learn Common Lisp for some time. I was especially attracted by its simplicity, by the fact that the entire language can be built up from a very small set of special forms. I use <a href="https://www.sbcl.org/" rel="ugc">SBCL</a> these days, by the way. I discovered Emacs through Common Lisp. Several sources recommended using the <a href="https://slime.common-lisp.dev/" rel="ugc">Superior Lisp Interaction Mode for Emacs (SLIME)</a> for Common Lisp programming, so that’s where I began. For many years I continued to use Vim as my primary editor, while relying on Emacs and SLIME for Lisp development. Over time, as I learnt more about Emacs itself, I grew fond of Emacs Lisp and eventually made Emacs my primary editor and computing environment. I have loved mathematics since my childhood days. What has always fascinated me is how we can prove deep and complex facts using first principles and clear logical steps. That feeling of certainty and rigour is unlike anything else. Over the years, my love for the subject has been rekindled many times. As a specific example, let me share how I got into number theory. One day I decided to learn the RSA cryptosystem. As I was working through the <a href="https://people.csail.mit.edu/rivest/Rsapaper.pdf" rel="ugc">RSA paper</a>, I stumbled upon the Euler totient function φ(n), which gives the number of positive integers not exceeding n that are relatively prime to n. The paper first states that φ(p) = p - 1 for prime numbers p. 
That was obvious since p has no factors other than 1 and itself, so every integer from 1 up to p - 1 must be relatively prime to it. But then it presents φ(pq) = φ(p) · φ(q) = (p - 1)(q - 1) for primes p and q. That was not immediately obvious to me back then. After a few minutes of thinking, I managed to prove it from scratch. By the inclusion-exclusion principle, we count how many integers from 1 up to pq are not divisible by p or q. There are pq integers in total. Among them, there are q integers divisible by p, and p integers divisible by q. So we need to subtract p + q from pq. But since one integer (pq itself) is counted in both groups, we add 1 back. Therefore φ(pq) = pq - (p + q) + 1 = (p - 1)(q - 1). Next I could also obtain the general formula for φ(n) for an arbitrary positive integer n using the same idea. There are several other proofs too, but that is how I derived the general formula for φ(n) when I first encountered it. And just like that, I had begun to learn number theory! You’ve said you prefer computing for fun. What is fun to you? Do you have an idea of what makes something fun or not? For me, fun in computing began when I first learnt IBM/LCSI PC Logo when I was nine years old. I had very limited access to computers back then, perhaps only about two hours per month in the computer laboratory at my primary school. Most of my Logo programming happened with pen and paper at home. I would “test” my programs by tracing the results on graph paper. Eventually I would get about thirty minutes of actual computer time in the lab to run them for real. So back then, most of my computing happened without an actual computer. But even with that limited access to computers, a whole new world opened up for me: one that showed me the joy of computing, and more importantly, the joy of sharing my little programs with my friends and teachers. 
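The inclusion-exclusion argument for the totient above is easy to check numerically. As a minimal sketch (this brute-force <code>phi</code> is my own illustration, not code from the RSA paper):

```python
from math import gcd

def phi(n):
    # Count the positive integers not exceeding n that are
    # relatively prime to n (a brute-force Euler totient).
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

p, q = 5, 7
assert phi(p) == p - 1                  # phi(p) = p - 1 for prime p
assert phi(p * q) == (p - 1) * (q - 1)  # phi(pq) = (p - 1)(q - 1)
assert phi(p * q) == phi(p) * phi(q)    # multiplicativity for distinct primes
```

Any pair of distinct primes works here; the assertions mirror the identities derived above.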
One particular Logo program I still remember very well drew a house with animated dashed lines, where the dashes moved around the outline of the house. Everyone around me loved it, copied it, and tweaked it to change the colours, alter the details, and add their own little touches. For me, fun in computing comes from such exploration and sharing. I enjoy asking “what happens if” and then seeing where it leads me. My Emacs package <a href="https://elpa.nongnu.org/nongnu/devil.html" rel="ugc">devil-mode</a> comes from such exploration. It came from asking, “What happens if we avoid using the <code>Ctrl</code> and <code>Meta</code> modifier keys and use the comma key (<code>,</code>) or another suitable key as a leader key instead? And can we still have a non-modal editing experience?” Sometimes computing for fun may mean crafting a minimal esoteric drawing language, making a small game, or building a tool that solves an interesting problem elegantly. It is a bonus if the exploration results in something that works well enough to share with others on the World Wide Web, and a further bonus if others find it fun too. How do you choose what to investigate? Which most interest you, with what commonalities? For me, it has always been one exploration leading to another. For example, I originally built <a href="https://github.com/susam/mathb" rel="ugc">MathB</a> for my friends and myself who were going through a phase in our lives when we used to challenge each other with mathematical puzzles. This tool became a nice way to share solutions with each other. Its use spread from my friends to their friends and colleagues, then to schools and universities, and eventually to IRC channels. Similarly, I built <a href="https://github.com/susam/texme" rel="ugc">TeXMe</a> when I was learning neural networks and taking a lot of notes on the subject. I was not ready to share the notes online, but I did want to share them with my friends and colleagues who were also learning the same topic. 
Normally I would write my notes in LaTeX, compile them to PDF, and share the PDF, but in this case, I wondered, what if I took some of the code from MathB and created a tool that would let me write plain Markdown (<a href="https://github.github.com/gfm/" rel="ugc">GFM</a>) + LaTeX (<a href="https://www.mathjax.org/" rel="ugc">MathJax</a>) in a <code>.html</code> file and have the tool render the file as soon as it was opened in a web browser? That resulted in TeXMe, which has surprisingly become one of my most popular projects, receiving millions of hits in some months according to the CDN statistics. Another example is <a href="https://susam.github.io/muboard/" rel="ugc">Muboard</a>, which is a bit like an interactive mathematics chalkboard. I built this when I was hosting an <a href="https://susam.net/journey-to-prime-number-theorem.html" rel="ugc">analytic number theory book club</a> and I needed a way to type LaTeX snippets live on screen and see them immediately rendered. That made me wonder: what if I took TeXMe, made it interactive, and gave it a chalkboard look-and-feel? That led to Muboard. So we can see that sharing mathematical notes and snippets has been a recurring theme in several of my projects. But that is only a small fraction of my interests. I have a wide variety of interests in computing. 
I also engage in random explorations, like writing IRC clients (<a href="https://github.com/susam/nimb" rel="ugc">NIMB</a>, <a href="https://github.com/susam/tzero" rel="ugc">Tzero</a>), ray tracing (<a href="https://github.com/susam/pov25" rel="ugc">POV-Ray</a>, <a href="https://github.com/spxy/java-ray-tracing" rel="ugc">Java</a>), writing Emacs guides (<a href="https://github.com/susam/emacs4cl" rel="ugc">Emacs4CL</a>, <a href="https://github.com/susam/emfy" rel="ugc">Emfy</a>), developing small single-HTML-file games (<a href="https://susam.net/invaders.html" rel="ugc">Andromeda Invaders</a>, <a href="https://susam.net/myrgb.html" rel="ugc">Guess My RGB</a>), purely recreational programming (<a href="https://susam.net/fxyt.html" rel="ugc">FXYT</a>, <a href="https://github.com/susam/may4" rel="ugc">may4.fs</a>, <a href="https://susam.net/self-printing-machine-code.html" rel="ugc">self-printing machine code</a>, <a href="https://susam.net/primegrid.html" rel="ugc">prime number grid explorer</a>), and so on. When it comes to hobby computing, I don’t think I can pick just one domain and say it interests me the most. I have a lot of interests. What is computing, to you? Computing, to me, covers a wide range of activities: programming a computer, using a computer, understanding how it works, even building one. For example, I once built a tiny 16-bit CPU along with a small main memory that could hold only eight 16-bit instructions, using VHDL and a Xilinx CPLD kit. The design was based on the Mano CPU introduced in the book Computer System Architecture (3rd ed.) by M. Morris Mano. It was incredibly fun to enter instructions into the main memory, one at a time, by pushing DIP switches up and down, and then watch the CPU I had built myself execute an entire program. For someone like me, who usually works with software at higher levels of abstraction, that was a thrilling experience! 
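To give a flavour of what such a machine does, here is a hypothetical sketch (my own invention, not the Mano design or the VHDL mentioned above) of a tiny accumulator machine with a fetch-decode-execute loop:

```python
# A hypothetical miniature accumulator machine (illustration only, not
# the Mano CPU): each instruction is an (opcode, operand) pair.
LOAD, ADD, STORE, HALT = range(4)

def run(program, memory):
    acc, pc = 0, 0              # accumulator and program counter
    while True:
        op, arg = program[pc]   # fetch
        pc += 1
        if op == LOAD:          # decode and execute
            acc = memory[arg]
        elif op == ADD:
            acc += memory[arg]
        elif op == STORE:
            memory[arg] = acc
        elif op == HALT:
            return memory

# Compute memory[2] = memory[0] + memory[1].
result = run([(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)], [3, 4, 0])
```

The real machine, of course, did all of this in hardware, with the program entered bit by bit through DIP switches.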
Beyond such experiments, computing also includes more practical and concrete activities, such as installing and using my favourite Linux distribution (Debian), writing software tools in languages like Common Lisp, Emacs Lisp, Python, and the shell command language, or customising my Emacs environment to automate repetitive tasks. To me, computing also includes the abstract stuff like spending time with abstract algebra and number theory and getting a deeper understanding of the results pertaining to groups, rings, and fields, as well as numerous number-theoretic results. Browsing the On-Line Encyclopedia of Integer Sequences (OEIS), writing small programs to explore interesting sequences, or just thinking about them is computing too. I think many of the interesting results in computer science have deep mathematical foundations. I believe much of computer science is really discrete mathematics in action. And if we dive all the way down from the CPU to the level of transistors, we encounter continuous mathematics as well, with non-linear voltage-current relationships and analogue behaviour that make digital computing possible. It is fascinating how, as a relatively new species on this planet, we have managed to take sand and find a way to use continuous voltages and currents in electronic circuits built with silicon, and convert them into the discrete operations of digital logic. We have machines that can simulate themselves! To me, all of this is fun. To study and learn about these things, to think about them, to understand them better, and to accomplish useful or amusing results with this knowledge is all part of the fun. How do you view programming vs. domains? I focus more on the domain than the tool. Most of the time it is a problem that catches my attention, and then I explore it to understand the domain and arrive at a solution. The problem itself usually points me to one of the tools I already know. 
For example, if it is about working with text files, I might write an Emacs Lisp function. If it involves checking large sets of numbers rapidly for patterns, I might choose C++ or Rust. But if I want to share interactive visualisations of those patterns with others, I might rewrite the solution in HTML and JavaScript, possibly with the use of the Canvas API, so that I can share the work as a self-contained file that others can execute easily within their web browsers. When I do that, I prefer to keep the HTML neat and readable, rather than bundled or minified, so that people who like to ‘View Source’ can copy, edit, and customise the code themselves, and immediately see their changes take effect. Let me share a specific example. While working on a game, I first used <code>CanvasRenderingContext2D.fillText()</code> to display text in the game. However, dissatisfied with the text rendering quality, I began looking for IBM PC OEM fonts and similar retro fonts online. After downloading a few font packs, I wrote a little Python script to convert them to bitmaps (arrays of integers), and then used the bitmaps to draw text on the canvas using JavaScript, one cell at a time, to get pixel-perfect results! These tiny Python and JavaScript tools were good enough that I felt comfortable sharing them together as a tiny toolkit called <a href="https://susam.github.io/pcface/src/demo.html" rel="ugc">PCFace</a>. This toolkit offers JavaScript bitmap arrays and tiny JavaScript rendering functions, so that someone else who wants to display text on their game canvas using PC fonts and nothing but plain HTML and JavaScript can do so without having to solve the problem from scratch! Has the rate of your making new Emacs functions diminished over time (as if everything’s covered) or do the widening domains lead to more? I’m curious how applicable old functionality is for new problems and how that impacts the APIs! My rate of making new Emacs functions has definitely decreased. 
There are two reasons. One is that over the years my computing environment has converged into a comfortable, stable setup I am very happy with. The other is that at this stage of life I simply cannot afford the time to endlessly tinker with Emacs as I did in my younger days. More generally, when it comes to APIs, I find that well-designed functionality tends to remain useful even when new problems appear. In Emacs, for example, many of my older functions continue to serve me well because they were written in a composable way. New problems can often be solved with small wrappers or combinations of existing functions. I think APIs that consist of functions that are simple, orthogonal, and flexible age well. If each function in an API does one thing and does it well (the Unix philosophy), it will have long-lasting utility. Of course, new domains and problems do require new functions and extensions to an API, but I think it is very important to not give in to the temptation of enhancing the existing functions by making them more complicated with optional parameters, keyword arguments, nested branches, and so on. Personally, I have found that it is much better to implement new functions that are small, orthogonal, and flexible, each doing one thing and doing it well. What design methods or tips do you have, to increase composability? For me, good design starts with good vocabulary. Clear vocabulary makes abstract notions concrete and gives collaborators a shared language to work with. For example, while working on a network events database many years ago, we collected data minute by minute from network devices. We decided to call each minute of data from a single device a “nugget”. So if we had 15 minutes of data from 10 devices, that meant 150 nuggets. Why “nugget”? Because it was shorter and more convenient than repeatedly saying “a minute of data from one device”. Why not something less fancy like “chunk”? Because we reserved “chunk” for subdivisions within a nugget. 
Perhaps there were better choices, but “nugget” was the term we settled on, and it quickly became shared terminology among the collaborators. Good terminology naturally carries over into code. With this vocabulary in place, function names like <code>collect_nugget()</code>, <code>open_nugget()</code>, <code>parse_chunk()</code>, <code>index_chunk()</code>, <code>skip_chunk()</code>, etc. immediately become meaningful to everyone involved. Thinking about the vocabulary also ensures that we are thinking about the data, concepts, and notions we are working with in a deliberate manner, and that kind of thinking also helps when we design the architecture of software. Too often I see collaborators on software projects jump straight into writing functions that take some input and produce some desired effect, with variable names and function names decided on the fly. To me, this feels backwards. I prefer the opposite approach. Define the terms first, and let the code follow from them. I also prefer developing software in a layered manner, where complex functionality is built from simpler, well-named building blocks. It is especially important to avoid layer violations, where one complex function invokes another complex function. That creates tight coupling between two complex functions. If one function changes in the future, we have to reason carefully about how it affects the other. Since both are already complex, the cognitive burden is high. A better approach, I think, is to identify the common functionality they share and factor that out into smaller, simpler functions. To summarise, I like to develop software with a clear vocabulary, consistent use of that vocabulary, a layered design where complex functions are built from simpler ones, and by avoiding layer violations. I am sure none of this is new to the Lobsters community. Some of these ideas also occur in <a href="https://en.wikipedia.org/wiki/Domain-driven_design" rel="ugc">domain-driven design</a> (DDD). 
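To make the idea concrete, here is a small sketch (with invented behaviour; these are not the actual functions from that codebase) of how such vocabulary shapes a layered design:

```python
# Hypothetical sketch: a "nugget" is one minute of data from one device,
# and a "chunk" is a subdivision within a nugget.

def parse_chunk(raw):
    # Lowest layer: decode a single chunk.
    return raw.strip()

def open_nugget(nugget):
    # Middle layer: split a nugget into chunks and parse each one.
    return [parse_chunk(c) for c in nugget.split(';')]

def index_nuggets(nuggets):
    # Top layer: composed only from the simpler layers below it,
    # with no layer violations.
    return {i: open_nugget(n) for i, n in enumerate(nuggets)}

index = index_nuggets(['a; b', 'c'])
```

Every function name draws on the shared vocabulary, so the code reads like the team's own language.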
DDD defines the term ubiquitous language to mean, “A language structured around the domain model and used by all team members within a bounded context to connect all the activities of the team with the software.” If I could call this approach of software development something, I would simply call it “vocabulary-driven development” (VDD), though of course DDD is the more comprehensive concept. Like I said, none of this is likely new to the Lobsters community. In particular, I suspect Forth programmers would find it too obvious. In Forth, it is very difficult to begin with a long, poorly thought-out monolithic word and then break it down into smaller ones later. The stack effects quickly become too hard to track mentally with that approach. The only viable way to develop software in Forth is to start with a small set of words that represent the important notions of the problem domain, test them immediately, and then compose higher-level words from the lower-level ones. Forth naturally encourages a layered style of development, where the programmer thinks carefully about the domain, invents vocabulary, and expresses complex ideas in terms of simpler ones, almost in a mathematical fashion. In my experience, this kind of deliberate design produces software that remains easy to understand and reason about even years after it was written. Not enhancing existing functions but adding new small ones seems quite lovely, but how do you come back to such a codebase later with many tiny functions? At points, I’ve advocated for very large functions, particularly traumatized by Java-esque 1000 functions in 1000 files approaches. When you had time, would you often rearchitecture the conceptual space of all of those functions? The famous quote from Alan J. 
Perlis comes to mind: “It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.” Personally, I enjoy working with a codebase that has thousands of functions, provided most of them are small, well-scoped, and do one thing well. That said, I am not dogmatically opposed to large functions. It is always a matter of taste and judgement. Sometimes one large, cohesive function is clearer than a pile of tiny ones. For example, when I worked on parser generators, I often found that lexers and finite state machines benefited from a single top-level function containing the full tokenisation logic or the full state transition logic in one place. That function could call smaller helpers for specific tasks, but we still need the overall <code>switch</code>-<code>case</code> or <code>if</code>-<code>else</code> or <code>cond</code> ladder somewhere. I think trying to split that ladder into smaller functions would only make the code harder to follow. So while I lean towards small, composable functions, the real goal is to strike a balance that keeps code maintainable in the long run. Each function should be as small as it can reasonably be, and no smaller. Like you, I program as a tool to explore domains. Which do you know the most about? For me too, the appeal of computer programming lies especially in how it lets me explore different domains. There are two kinds of domains in which I think I have gained good expertise. The first comes from years of developing software for businesses, which has included solving problems such as network events parsing, indexing and querying, packet decoding, developing parser generators, database session management, and TLS certificate lifecycle management. The second comes from areas I pursue purely out of curiosity or for hobby computing. This is the kind I am going to focus on in our conversation. 
Although computing and software are serious business today, for me, as for many others, computing is also a hobby. Personal hobby projects often lead me down various rabbit holes, and I end up learning new domains along the way. For example, although I am not a web developer, I learnt to build small, interactive single-page tools in plain HTML, CSS, and JavaScript simply because I needed them for my hobby projects over and over again. An early example is <a href="https://susam.net/quickqwerty.html" rel="ugc">QuickQWERTY</a>, which I built to teach myself and my friends touch-typing on QWERTY keyboards. Another example is <a href="https://susam.net/cfrs.html" rel="ugc">CFRS[]</a>, which I created because I wanted to make a total (non-Turing complete) drawing language that has turtle graphics like Logo but is absolutely minimal like P′′. How do you approach learning a new domain? When I take on a new domain, there is of course a lot of reading involved from articles, books, and documentation. But as I read, I constantly try to test what I learn. Whenever I see a claim, I ask myself, “If this were wrong, how could I demonstrate it?” Then I design a little experiment, perhaps write a snippet of code, or run a command, or work through a concrete example, with the goal of checking the claim in practice. Now I am not genuinely hoping to prove a claim wrong. It is just a way to engage with the material. To illustrate, let me share an extremely simple and generic example without going into any particular domain. Suppose I learn that Boolean operations in Python short-circuit. I might write out several experimental snippets like the following: <pre><code class="language-python">def t(): print('t'); return True
def f(): print('f'); return False
f() or t() or f()
</code></pre> And then verify that the output does indeed demonstrate short-circuit evaluation (<code>f</code> followed by <code>t</code> in this case). 
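A variant of the same experiment records the order of calls explicitly, which makes the short-circuit behaviour easy to assert (this rephrasing is mine, not from the original snippet):

```python
calls = []

def t():
    calls.append('t')
    return True

def f():
    calls.append('f')
    return False

f() or t() or f()   # the third call is skipped once t() returns True
assert calls == ['f', 't']

calls.clear()
f() and t()         # t() is never called because f() returns False
assert calls == ['f']
```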
At this point, one could say, “Well, you just confirmed what the documentation already told you.” And that’s true. But for me, the value lies in trying to test it for myself. Even if the claim holds, the act of checking forces me to see the idea in action. That not only reinforces the concept but also helps me build a much deeper intuition for it. Sometimes these experiments also expose gaps in my own understanding. Suppose I didn’t properly know what “short-circuit” means. Then the results might contradict my expectations. That contradiction would push me to correct my misconception, and that’s where the real learning happens. Occasionally, this process even uncovers subtleties I didn’t expect. For example, while learning socket programming, I discovered that a client can successfully receive data using <code>recv()</code> even after calling <code>shutdown()</code>, contrary to what I had first inferred from the specifications. See my Stack Overflow post “<a href="https://stackoverflow.com/q/39698037/303363" rel="ugc">Why can recv() in the client program receive messages sent to the client after the client has invoked shutdown(sockfd, SHUT_RD)?</a>” for more details if you are curious. Now this method cannot always be applied, especially if it is very expensive or unwieldy to do so. For example, if I am learning something in the finance domain, it is not always possible to perform an actual transaction. One can sometimes use simulation software, mock environments, or sandbox systems to explore ideas safely. Still, it is worth noting that this method has its limitations. In mathematics, though, I find this method highly effective. When I study a new branch of mathematics, I try to come up with examples and counterexamples to test what I am learning. Often, failing to find a counterexample helps me appreciate more deeply why a claim holds and why no counterexamples exist. Do you have trouble not getting distracted with so much on your plate? 
I’m curious how you balance the time commitments of everything! Indeed, it is very easy to get distracted. One thing that has helped over the years is the increase in responsibilities in other areas of my life. These days I also spend some of my free time studying mathematics textbooks. With growing responsibilities and the time I devote to mathematics, I now get at most a few hours each week for hobby computing. This automatically narrows down my options. I can explore perhaps one or at most two ideas in a month, and that constraint makes me very deliberate about choosing my pursuits. Many of the explorations do not evolve into something solid that I can share. They remain as little experimental code snippets or notes archived in a private repository. But once in a while, an exploration grows into something concrete and feels worth sharing on the Web. That becomes a short-term hobby project. I might work on it over a weekend if it is small, or for a few weeks if it is more complex. When that happens, the goal of sharing the project helps me focus. I try not to worry too much about making time. After all, this is just a hobby. Other areas of my life have higher priority. I also want to devote a good portion of my free time to learning more mathematics, which is another hobby I am passionate about. Whatever little spare time remains after attending to the higher-priority aspects of my life goes into my computing projects, usually a couple of hours a week, most of it on weekends. How does blogging mix in? What’s the development like of a single piece of curiosity through wrestling with the domain, learning and sharing it etc.? Maintaining my personal website is another aspect of computing that I find very enjoyable. My website began as a loose collection of pages on a LAN site during my university days. Since then I have been adding pages to it to write about various topics that I find interesting. 
It acquired its blog shape and form much later when blogging became fashionable. I usually write a new blog post when I feel like there is some piece of knowledge or some exploration that I want to archive in a persistent format. Now what the development of a post looks like depends very much on the post. So let me share two opposite examples to describe what the development of a single piece looks like. One of my most frequently visited posts is <a href="https://susam.net/lisp-in-vim.html" rel="ugc">Lisp in Vim</a>. It started when I was hosting a Common Lisp programming club for beginners. Although I have always used Emacs and SLIME for Common Lisp programming myself, many in the club used Vim, so I decided to write a short guide on setting up something SLIME-like there. As a former long-time Vim user myself, I wanted to make the Lisp journey easier for Vim users too. I thought it would be a 30-minute exercise where I write up a README that explains how to install <a href="https://github.com/kovisoft/slimv" rel="ugc">Slimv</a> and how to set it up in Vim. But then I discovered a newer plugin called <a href="https://github.com/vlime/vlime" rel="ugc">Vlime</a> that also offered SLIME-like features in Vim! That discovery sent me down a very deep rabbit hole. Now I needed to know how the two packages were different, what their strengths and weaknesses were, how routine operations were performed in both, and so on. What was meant to be a short note turned into a nearly 10,000-word article. As I was comparing the two SLIME-like packages for Vim, I also found a few bugs in Slimv and contributed fixes for them (<a href="https://github.com/kovisoft/slimv/pull/87" rel="ugc">#87</a>, <a href="https://github.com/kovisoft/slimv/pull/88" rel="ugc">#88</a>, <a href="https://github.com/kovisoft/slimv/pull/89" rel="ugc">#89</a>, <a href="https://github.com/kovisoft/slimv/pull/90" rel="ugc">#90</a>). Writing this blog post turned into a month-long project! 
At the opposite extreme is a post like <a href="https://susam.net/elliptical-python-programming.html" rel="ugc">Elliptical Python Programming</a>. I stumbled upon Python <a href="https://docs.python.org/3/library/constants.html#Ellipsis" rel="ugc">Ellipsis</a> while reviewing someone’s code. It immediately caught my attention. I wondered if, combined with some standard obfuscation techniques, one could write arbitrary Python programs that looked almost like Morse code. A few minutes of experimentation showed that a genuinely Morse code-like appearance was not possible, but something close could be achieved. So I wrote what I hope is a humorous post demonstrating that arbitrary Python programs can be written using a very restricted set of symbols, one of which is the ellipsis. It took me less than an hour to write this post. The final result doesn’t look quite like Morse code as I had imagined, but it is quite amusing nevertheless! What draws you to post and read online forums? How do you balance or allot time for reading technical articles, blogs etc.? The exchange of ideas! Just as I enjoy sharing my own computing-related thoughts, ideas, and projects, I also find joy in reading what others have to share. As I mentioned earlier, other areas of my life take precedence over hobby projects. Similarly, I treat the hobby projects as higher priority than reading technical forums. After I’ve given time to the higher-priority parts of my life and to my own technical explorations, I use whatever spare time remains to read articles, follow technical discussions, and occasionally add comments. What’re your favorite math textbooks? I have several favourite mathematics books, but let me share three I remember especially fondly. The first is Advanced Engineering Mathematics by Erwin Kreyszig. I don’t often see this book recommended online, but for me it played a major role in broadening my horizons. I think I studied the 8th edition back in the early 2000s. 
It is a hefty book with over a thousand pages, and I remember reading it cover to cover, solving every exercise problem along the way. It gave me a solid foundation in routine areas like differential equations, linear algebra, vector calculus, and complex analysis. It also introduced me to Fourier transforms and Laplace transforms, which I found fascinating. Of course, the Fourier transform has a wide range of applications in signal processing, communications, spectroscopy, and more. But I want to focus on the fun and playful part. In the early 2000s, I was also learning to play the piano as a hobby. I used to record my amateur music compositions with <a href=“https://github.com/audacity/audacity” rel=“ugc”>Audacity</a> by connecting my digital piano to my laptop with a line-in cable. It was great fun to plot the spectrum of my music on Audacity, apply high-pass and low-pass filters, and observe how the Fourier transform of the audio changed and then hear the effect on the music. That kind of hands-on tinkering made Fourier analysis intuitive for me, and I highly recommend it to anyone who enjoys both music and mathematics. The second book is Introduction to Analytic Number Theory by Tom M. Apostol. As a child I was intrigued by the prime number theorem but lacked the mathematical maturity to understand its proof. Years later, as an adult, I finally taught myself the proof from Apostol’s book. It was a fantastic journey that began with simple concepts like the Möbius function and Dirichlet products and ended with quite clever contour integrals that proved the theorem. The complex analysis I had learnt from Kreyszig turned out to be crucial for understanding those integrals. Along the way I gained a deeper understanding of the Riemann zeta function ζ(s). The book discusses zero-free regions where ζ(s) does not vanish, which I found especially fascinating. Results like ζ(-1) = -1/12, which once seemed mysterious, became obvious after studying this book. 
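As a quick sketch of where that value comes from (using the reflection form of the functional equation, which the book develops), setting s = -1 gives:

```latex
\zeta(s) = 2^{s} \pi^{s-1} \sin\!\left(\tfrac{\pi s}{2}\right) \Gamma(1-s)\, \zeta(1-s)

\zeta(-1) = 2^{-1} \pi^{-2} \sin\!\left(-\tfrac{\pi}{2}\right) \Gamma(2)\, \zeta(2)
          = \frac{1}{2} \cdot \frac{1}{\pi^{2}} \cdot (-1) \cdot 1 \cdot \frac{\pi^{2}}{6}
          = -\frac{1}{12}
```

Here Γ(2) = 1 and ζ(2) = π²/6, so the famous -1/12 falls out in a few lines once the analytic continuation is in hand.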
The third is Galois Theory by Ian Stewart. It introduced me to field extensions, field homomorphisms, and solubility by radicals. I had long known that not all quintic equations are soluble by radicals, but I didn’t know why. Stewart’s book taught me exactly why. In particular, it demonstrated that the polynomial t⁵ - 6t + 3 over the field of rational numbers is not soluble by radicals. This particular result, although fascinating, is just a small part of a much larger body of work, which is even more remarkable. To arrive at this result, the book takes us through a wonderful journey that includes the theory of polynomial rings, algebraic and transcendental field extensions, impossibility proofs for ruler-and-compass constructions, the Galois correspondence, and much more. One of the most rewarding aspects of reading books like these is how they open doors to new knowledge, including things I didn’t even know that I didn’t know. How does the newer math jell with or inform past or present computing, compared to much older stuff? I don’t always think explicitly about how mathematics informs computing, past or present. Often the textbooks I pick feel very challenging to me, so much so that all my energy goes into simply mastering the material. It is arduous but enjoyable. I do it purely for the fun of learning without worrying about applications. Of course, a good portion of pure mathematics probably has no real-world applications. As G. H. Hardy famously wrote in A Mathematician’s Apology: <blockquote> I have never done anything ‘useful’. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world. </blockquote> But there is no denying that some of it does find applications. Were Hardy alive today, he might be disappointed that number theory, his favourite field of “useless” mathematics, is now a crucial part of modern cryptography. Electronic commerce wouldn’t likely exist without it. 
Similarly, it is amusing how something as abstract as abstract algebra finds very concrete applications in coding theory. Concepts such as polynomial rings, finite fields, and cosets of subspaces in vector spaces over finite fields play a crucial role in error-correcting codes, without which modern data transmission and storage would not be possible. On a more personal note, some simpler areas of mathematics have been directly useful in my own work solving problems for businesses. Information entropy, combinatorics, and probability theory were crucial when I worked on gesture-based authentication about one and a half decades ago. Similarly, when I was developing Bloom filter-based indexing and querying for a network events database, probability theory was crucial in determining the parameters of the Bloom filters (such as the number of hash functions, bits per filter, and elements per filter) to ensure that the false positive rate remained below a certain threshold. Subsequent testing with randomly sampled network events confirmed that the observed false positive rate matched the theoretical estimate quite well. It was very satisfying to see probability theory and the real world agreeing so closely. Beyond these specific examples, studying mathematics also influences the way I think about problems. Embarking on journeys like analytic number theory or Galois theory is humbling. There are times when I struggle to understand a small paragraph of a book, and it takes me several hours (or even days) to work out the arguments in detail with pen and paper (lots of it) before I really grok them. That experience of grappling with dense reasoning teaches humility and also makes me sceptical of complex, hand-wavy logic in day-to-day programming. Several times I have seen code that bundles too many decisions into one block of logic, where it is not obvious whether it would behave correctly in all circumstances. 
Explanations may sometimes be offered about why it works for reasonable inputs, but the reasoning is often not watertight. The experience of working through mathematical proofs, writing my own, making mistakes, and then correcting them has taught me that if the reasoning for correctness is not clear and rigorous, something could be wrong. In my experience, once such code sees real-world usage, a bug is nearly always found. That’s why I usually insist either on simplifying the logic or on demonstrating correctness in a clear, rigorous way. Sometimes this means doing a case-by-case analysis for different types of inputs or conditions, and showing that the code behaves correctly in each case. There is also a bit of an art to reducing what seem like numerous or even infinitely many cases to a small, manageable set of cases by spotting structure, such as symmetries, invariants, or natural partitions of the input space. Alternatively, one can look for a simpler argument that covers all cases. These are techniques we employ routinely in mathematics, and I think that kind of thinking and reasoning is quite valuable in software development too. When you decided to stop with MathB due to moderation burdens, I offered to take over/help and you mentioned others had too. Did anyone end up forking it, to your knowledge? I first thought of shutting down the <a href=“https://github.com/susam/mathb” rel=“ugc”>MathB</a>-based pastebin website in November 2019. The website had been running for seven years at that time. When I announced my thoughts to the IRC communities that would be affected, I received a lot of support and encouragement. A few members even volunteered to help me out with moderation. That support and encouragement kept me going for another six years. However, the volunteers eventually became busy with their own lives and moved on. After all, moderating user content for an open pastebin that anyone in the world can post to is a thankless and tiring activity. 
So most of the moderation activity fell back on me. Finally, in February 2025, I realised that I no longer wanted to spend time on this kind of work. I developed MathB with a lot of passion for myself and my friends. I had no idea at the time that this little project would keep a corner of my mind occupied even during weekends and holidays. There was always a nagging worry. What if someone posted content that triggered compliance concerns and my server was taken offline while I was away? I no longer wanted that kind of burden in my life. So I finally decided to shut it down. I’ve written more about this in <a href=“https://susam.net/mathbin-is-shutting-down.html” rel=“ugc”>MathB.in Is Shutting Down</a>. To my knowledge, no one has forked it, but others have developed alternatives. Further, the <a href=“https://wiki.archiveteam.org/” rel=“ugc”>Archive Team</a> has <a href=“https://web.archive.org/web/*/https://mathb.in/” rel=“ugc”>archived</a> all posts from the now-defunct MathB-based website. A member of the Archive Team reached out to me over IRC and we worked together for about a week to get everything successfully archived. re: QWERTY touch typing, you use double spaces after periods, which I’d only experienced from people who learned touch typing on typewriters, unexpected! Yes, I do separate sentences by double spaces. It is interesting that you noticed this. I once briefly learnt touch typing on typewriters as a kid, but those lessons did not stick with me. It was much later, when I used a Java applet-based touch typing tutor that I found online about two decades ago, that the lessons really stayed with me. Surprisingly, that application taught me to type with a single space between sentences. By the way, I disliked installing Java plugins into the web browser, so I wrote <a href=“https://susam.net/quickqwerty.html” rel=“ugc”>QuickQWERTY</a> as a similar touch typing tutor in plain HTML and JavaScript for myself and my friends. 
I learnt to use double spaces between sentences first with Vim and then later again with Emacs. For example, in Vim, the <code>joinspaces</code> option is on by default, so when we join sentences with the normal mode command <code>J</code>, or format paragraphs with <code>gqap</code>, Vim inserts two spaces after full stops. We need to disable that behaviour with <code>:set nojoinspaces</code> if we want single spacing. It is similar in Emacs. In Emacs, the <code>delete-indentation</code> command (<code>M-^</code>) and the <code>fill-paragraph</code> command (<code>M-q</code>) both insert two spaces between sentences by default. Single spacing can be enabled with <code>(setq sentence-end-double-space nil)</code>. Incidentally, I spent a good portion of the README for my Emacs quick-start DIY kit named <a href=“https://github.com/susam/emfy” rel=“ugc”>Emfy</a> discussing sentence spacing conventions under the section <a href=“https://github.com/susam/emfy#single-space-for-sentence-spacing” rel=“ugc”>Single Space for Sentence Spacing</a>. There I explain how to configure Emacs to use single spaces, although I use double spaces myself. That’s because many new Emacs users prefer single spacing. The defaults in Vim and Emacs made me adopt double spacing. The double spacing convention is also widespread across open source software. If we look at the Vim help pages, Emacs built-in documentation, or the Unix and Linux man pages, double spacing is the norm. Even inline comments in traditional open source projects often use it. For example, see Vim’s <a href=“https://github.com/vim/vim/blob/v9.1.1752/runtime/doc/usr_01.txt” rel=“ugc”>:h usr_01.txt</a>, Emacs’s <a href=“https://cgit.git.savannah.gnu.org/cgit/emacs.git/tree/doc/emacs/emacs.texi?h=emacs-30.2#n1556” rel=“ugc”>(info “(emacs) Intro”)</a>, or the comments in the <a href=“https://gcc.gnu.org/git/?p=gcc.git;f=gcc/cfg.cc;hb=releases/gcc-15.2.0” rel=“ugc”>GCC source code</a>. 
How and why do you use reference-style links? I’ve only seen them unrendered on HN with confusion. I type out the reference-style links manually, with a little help from Emacs. For example, if I type the key sequence <code>C-c C-l</code> (actually <code>, c , l</code> with <code>devil-mode</code>), Emacs invokes the <code>markdown-insert-link</code> command. Then I type the key sequence <code>[] M-j example RET https://example.com/ RET RET</code> to have Emacs insert the following for me: <a href=“https://example.com/” rel=“ugc”>example</a>. I normally use <a href=“https://spec.commonmark.org/0.31.2/#reference-link” rel=“ugc”>reference links</a> in Markdown to save horizontal space in my text. As you can see, I hard-wrap my paragraphs so that no line exceeds 70 characters in length. Long URLs could break this rule, since some are longer than 70 characters, but reference-style links solve that problem. They let me keep paragraphs neatly wrapped, and they also collect all URLs together at the bottom of the section. I like the aesthetics of this style. Of course, you are welcome to reformat the links however you like while publishing your post on Lobsters! As a reader on Lobsters, I don’t think I can tell which style you use. I’d also like to suggest adding another link: <a href=“https://oeis.org/” rel=“ugc”>https://oeis.org/</a> for “On-Line Encyclopedia of Integer Sequences”. A small portion of my Emacs setup is shared here: <a href=“https://github.com/susam/dotfiles/blob/main/.emacs” rel=“ugc”>https://github.com/susam/dotfiles/blob/main/.emacs</a> But not all of my setup is in the form of <code>.emacs</code>. Many of my Emacs Lisp functions are spread out across numerous <code>.org</code> files. Each <code>.org</code> file is like a little workspace for a specific aspect of my life. For example, there is one <code>.org</code> file for bookmarks, another for checklists, another to keep track of my utility bills, another to plan my upcoming trips, and so on. 
I have several Emacs Lisp source blocks in these <code>.org</code> files to perform computations on the data, generate tables with derived values, and so on.
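To give a flavour of what these look like, here is a made-up fragment (not from any of my actual files) in which a named table is summed by a source block:

```org
* Utility bills
#+NAME: bills
| Month | Amount |
|-------+--------|
| Jan   |    120 |
| Feb   |     95 |

#+BEGIN_SRC emacs-lisp :var rows=bills
  ;; Sum the Amount column of the named table above.  With the
  ;; default colnames handling, the header row is stripped before
  ;; the rows reach this block.
  (apply #'+ (mapcar #'cadr rows))
#+END_SRC
```

Evaluating the block with <code>C-c C-c</code> inserts the total as a result right below it, which is exactly the kind of small computation these workspaces exist for.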