Preamble
Below are some thoughts from my first attempt to understand programming language design. An axiomatic or vector-based approach makes the most intuitive sense to me, so I try to explain my thinking here. I can’t promise that all of these thoughts are complete, self-consistent, or even much more than emotions. But if we knew where we were going, we wouldn’t have to make the journey in the first place.
Introduction
My friends will sometimes get into debates about which programming languages are better than others. Other times, there will seemingly be an unspoken consensus that certain languages are not worth the silicon they run on.
At the risk of revealing my ignorance, I confess I had little to no idea what substance there was to fuel this debate1. I mostly chalked it up to tribalism or a ritual to delineate true CS people from imposters: if you can contribute intelligently to the conversation, then you belong. If not, get out. Don’t get me wrong, social rituals have a place in society, but sometimes they’re fucking annoying. Doesn’t everything more or less compile from an imperative style down to assembly? Isn’t every language just C but with X (where X is some new feature, such as Rust’s borrow checker or Python’s simple syntax)?
So, for a while, it felt like the emperor had no clothes and I wasn’t going to be the fool to point it out. But then I listened to an interview with Chris Lattner, one of the leaders behind the Swift programming language and LLVM compiler technology. Chris is a quality speaker. He has a deep understanding of language design (having developed one helps, I’m sure) and speaks on the topic with straightforward observations that make me go “wait a minute, he has a point there.” Maybe this whole language design thing has something to it after all?
So I decided to write this post. I just wanted to jot down some thoughts and observations, from one layman to another. This post will mostly focus on imperative languages, because I have to constrain the dimensions of inquiry in some way and that seems as good as any. Even then, I’ve barely scratched the surface. But perfection is the enemy of the good, so let’s get started.
My background with programming languages
I learned how to program in high school through Codecademy, then college through my CS degree. My initial understanding of programming languages reflected the academic, algorithmic style in which I first learned how to program: write a program as a set of imperative steps to accomplish X, turn it in, and never look at it again.
Observation 1: When scalability was discussed for code in the classroom, it was in the context of algorithms and Big-O notation, not on the structural complexity or size of code artefacts.
For sure, algorithm time complexity is important for understanding computer science. It has a whole field of theory dedicated to it. But most of the scale, demands, and situations of programming are stretched by software engineering rather than theoretical CS. Put another way, you have to put a tool through fire to see what it’s made of. So this led me to an understanding that you can’t benchmark a language by writing a small program. If that’s all we did, then we’d (probably) never need to evolve past C.
Observation 2: If you treat every language like C, it will act like C. Because it’s capable of doing C stuff. But then you’re not programming in language X, you’re programming in C with different syntax.
And yet, programming languages HAVE evolved past C. Why? Well, let’s look at history.
A brief history of programming languages
One of the recurring amusements of life is that by studying the history of a topic, one can learn the present state of that topic better than by starting with the present state of the topic. Instead of history providing additional complexity to the topic, such as arbitrary information like the name-of-the-inventor’s-pet-dog, well-synthesized history2 provides a continual loop of
problem->solution->innovation->problem->...
or maybe it’d be more accurate to say
problem->innovation->dissemination and adoption of innovation->problem->...
providing us with a reconstruction from first principles and motivations on the present state of affairs. Therefore, we can start with the dissemination and adoption of new languages and trace it back to understand the problems of the languages before them.
While there are many sources of authority on programming language history and ways of delineating it historically, some of which contradict each other, I’m sure, I’m electing to use a summary taken from Brian Kernighan’s podcast interview with Lex Fridman. For context, Kernighan is one of the authors of The C Programming Language textbook and contributed to the development of the historically important Unix OS. The summary is taken from around 47 minutes into the episode:
Complexity:   Language:
-----------   ---------
(low)  ^      Machine language
       |      Assembly
       |      -- High level languages --
       |      Imperative/procedural
       |      System programming
       |      Object-oriented
       |      Parallel/CSP
(high) V
and here’s a richer version, annotated by yours truly by searching Wikipedia
Complexity:  First introduced*:  Language:              Example:   Summary of improvements:
-----------  ------------------  ---------              --------   ------------------------
(low)  ^     1849                Machine language                  Literal code run by the CPU
       |     1947                Assembly               x86, ARM   Human-readable syntax; virtualizes certain convenient operations such as no-op; virtualizes labels
       |                         -- High level languages** --
       |     1956                Imperative/procedural  FORTRAN    Allowed easy porting of programs to different architectures
       |     1960                System programming     C          Easier access to and manipulation of low-level primitives like the heap
       |     1962                Object-oriented        C++        Introduced class relationships to reduce the complexity of ever-growing large code bases
       |     1967                Parallel/CSP           Golang     Hardware switched from predominantly single-core to multi-core; adds corresponding language primitives to abstract and handle parallelization (e.g. mutex)
(high) V
*There's endless ambiguity on which language introduced the new paradigm. So I just picked the first prominent example. Details below:
--------------------------------------------------------------------------------------------------------------------------------------
1849 - Ada Lovelace publishes her translation of Menabrea's paper on the Analytical Engine, with notes that include a program to calculate Bernoulli numbers, widely considered to be the world's first published program
1947 - Coding for A.R.C
1956 - Fortran
1960 - ESPOL
1962 - Simula
1967 - IBM's MVT variant of OS/360
(these are quick references and not definitive answers)
**High level language here simply means hardware-independent
The above description may imply the development of languages was linear. This is not the case.
There are also other notable ways of representing the history of languages, including
![Diagram of programming language history]()
grabbed from this research article on the history of programming languages. Shoutout to Prolog, alone in the world.
If we cross-examine language development with the technological innovations of each era, a few trends emerge. Early language development was constrained and driven by the computational power of the computers that ran the languages. The earliest computers did not have the memory or computational power to run assemblers or compilers (themselves programs which require memory and computation). As computational power improved, languages could adopt symbolic representations that separate the program’s logic from the machine’s execution, such as abstract labels instead of concrete line numbers, first enabling assembly and later functions.
Eventually, computers gained enough memory that dynamic allocation became useful to programmers, so new virtual primitives were created in languages to represent and manipulate this dynamic memory. Notably, these did not replace or fundamentally change the constructs of earlier languages, such as type systems or functions; rather, they were built on top of them. Computers continued to grow, software was written at a larger and larger scale, and decoupling interfaces from implementations became a way to work on parts in isolation and scale up production: object-oriented programming.
For each paradigm, many new languages with varying approaches were created, of which a few were widely adopted and maintained. In reaction to these changes in environment, languages had to create new abstractions, often on top of existing abstractions, to handle the increasing complexity of logic.
To exemplify this trend, here’s a snippet of Kernighan discussing Golang, a language developed at Google and released in 2009:
[01:00:20]
Kernighan: And so in some ways Go captures the good parts of C. It looks sort of like C. It’s sometimes characterized as C for the 21st century.3
On the surface, it looks very, very much like C. But at the same time, it has some interesting data structuring capabilities.
…
And goroutines are, to my mind, a very natural way to talk about parallel computation. And in the few experiments I’ve done with them, they’re easy to write, and typically it’s going to work and be very efficient as well.
So I think that’s one place where go stands out that that model of parallel computation is very, very easy and nice to work with.
Fridman: Just to comment on that, do you think they foresaw, in the early Unix days, the need for more threads and massively parallel computation?
Kernighan: I would guess not really. I mean, maybe it was seen, but not at the level where it was something you had to do anything about for a long time.
Processor’s got faster and then processors stopped getting faster because of things like power consumption and heat generation.
And so what happened instead was that instead of processors just getting faster, there started to be more of them. And that’s where that parallel thread stuff comes in.
Notably, when C first came out, it did not have to concern itself with multiple processors; computers were not that advanced yet. Therefore, the C language did not create abstract primitives to deal with such scenarios. Of course, it can and did add primitives for some of these abstractions after the fact, either in the language itself or in libraries. However, other languages were built with these thoughts informing the language design from the beginning, rather than tacked on. As we’ll see in the next section, small changes can ripple through a language, making it either simple and internally consistent, or a hot mess.
Understanding a language as a set of axioms
I would argue that my academic misunderstanding of language design stems from considering a programming language as a single level: a function to convert imperative statements to assembly. In fact, a language is an ecosystem in and of itself, with multiple abstractions built on top of each other within the language. Instead of one layer of abstraction, we find several. Even the number of layers of abstraction can itself be a point of contention between languages.
We should look at a language not as a closed box, such as
+-----------+
| Java code |
+-----------+
|
V
+----------+
| Assembly |
+----------+
but as a set of complex virtual relationships within a system
(Language ecosystem)
+-----------+
| Java code |
+-----------+
|
|----------------. Level 4: Parallel computing
| | Level 3: Object-oriented interface
V | Level 2: Systems programming
+---------------------+ | Level 1: Imperative statements
,>| Java multithreading | | (Level 4)
| +---------------------+ |
| | |
| V |
| +--------------------+ | (Level 3)
->| Java classes and |<----'
| | interfaces |------------.
| +--------------------+ |
| | V
| | +---------------+
| | ,--------| Memory and GC | (Level 2)
| | | +---------------+
| V V
| +-----------------------+ +----------------------+
`-| Java statement syntax |---->| Java virtual machine | (Level 1)
+-----------------------+ +----------------------+
Different layers of abstraction, with mostly well-defined interfaces, sit on top of each other. These layers both enable and contain complexity, letting a programmer hold the mechanisms of a single component or abstraction layer in their mind at once. However, the interfaces between layers are not perfect. For example, if you’ve ever had to understand why Java copies primitive values but copies only references to objects, you’ve seen the intersection of two levels of abstraction: the class level and the memory level. Perhaps this leakage is why new programmers have such difficulty with the concept. Some languages, such as Swift, make the distinction explicit (value semantics for structs, reference semantics for classes) to help smooth this crease.
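To make that crease concrete, here is a minimal Swift sketch (the `Point` and `Counter` types are hypothetical, chosen only for illustration) of how the value/reference distinction surfaces in ordinary assignment, even though it is really a property of the memory level:

```swift
struct Point {          // value type: assignment copies the whole value
    var x = 0
}

class Counter {         // reference type: assignment copies only the reference
    var count = 0
}

var p1 = Point()
var p2 = p1             // p2 is an independent copy
p2.x = 10
print(p1.x)             // prints 0  -- the original value is untouched

let c1 = Counter()
let c2 = c1             // c2 refers to the same object as c1
c2.count = 10
print(c1.count)         // prints 10 -- both names observe the mutation
```

The same `=` syntax means two different things depending on which layer the type lives in.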
Using this model, it then becomes easier to discuss language design in concrete terms. For example, it provides a framework to compare different languages’ multithreading models and trace their ramifications through the rest of each language. We could have had this discussion before as well, but I find this mental model useful for breaking down a language into its core design decisions. Maybe everyone else already does this and I’m just catching up.
It also explains why, despite all compiling to assembly, languages are not analogous to one another: the language itself builds up new primitives and rules that the compiler checks. For example, you could define logic in a language that would technically make sense at the assembly level, but the compiler won’t allow it because, at a higher level, it makes no sense. Therefore, code that runs on the same computer can have vastly different capabilities.
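As a small sketch of this (the values are arbitrary, and the point is not specific to Swift): at the machine level, adding an integer to a pointer-sized value is perfectly representable, but the compiler rejects it because a higher-level rule says it is meaningless.

```swift
let count: Int = 3
let label: String = "items"

// let total = count + label
// ^ rejected at compile time: '+' cannot be applied to operands of type
//   'Int' and 'String'. Nothing at the assembly level forbids adding an
//   integer to a pointer; the rule lives entirely in the language.

let total = "\(count) \(label)"   // the language forces an explicit conversion
print(total)                       // prints "3 items"
```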
The system also explains why simple design decisions can lead to complex results. Take, for example, Python’s decision not to include static type declarations in its syntax. While this decision operates at a low level, it echoes through the rest of the language in unexpected ways:
(using the level definitions above)
3.+--------------------------+
Level 4: ,--------------- | Multithreading ambiguity |
| +--------------------------+
| ^
V |
4.+--------------------+ +-------------------+
Level 2: | Global interpreter | 2.| Lax memory safety |
| lock | | |
+--------------------+ +-------------------+
^
,----------------'
+----------+ | +---------------+
Level 1: 0.| No types | --------->1.| Simple syntax |
+----------+ +---------------+
Steps:
------
0. Python makes the decision to have no static types
1. The decision makes Python's syntax simpler, which many people view positively
2. However, without static type information, safety checks that other languages perform at compile time must happen at runtime, inside the interpreter
3. Therefore, in multithreaded contexts, Python cannot guarantee ahead of time that its runtime bookkeeping stays consistent
4. Python implements the GIL, which prevents Python threads from running in parallel and thus limits performance
A simple decision on the level of syntax has significant and latent reverberations in other parts of a system. While these tradeoffs are largely hidden from the programmer, the language designer must intentionally map out each vector and ensure the different layers interact as little as possible. That’s one reason why Python is well regarded for its syntactic convenience and mental ergonomics, but suffers from performance issues.
Therefore, the seemingly arbitrary fight over whether you include a type or not (surely it shouldn’t matter that much if I write `int x = 7` versus `x = 7`) has sub-surface ramifications in other parts of the language, including parts we may care about more (such as multithreading performance). At the very least, this framing makes such tradeoffs explicit in conversation.
When I first found out that, in order to run a game on my computer, the game must contain all the code required to function, it surprised me. We can even disassemble the binary and look at the code, like in this really cool video. Nowadays, that makes complete sense: if we didn’t have all the code, how would we run the game? Similarly, we might think that after we define our vectors, we can let whatever edge cases happen, happen. But because the compiler defines the exact function from vectors to assembly, we can’t be wishy-washy about our definition. We can, however, go into the code and examine every rough edge.
If we consider a language to be a function or a vector space, then the compiler is the function definition. As such, we can interactively query the compiler to see the function definition and any interesting edge cases. Just as your computer, by definition, must have access to the code of a program to run it (and therefore you can dig through that code yourself), so too must the compiler handle every case (and therefore we can dig through it).
One might imagine we could simply define more vectors, so the programmer can always choose ones that play well together. One issue is that by defining many representations, the language does not take a stance on which is the correct one. You cause the user confusion by allowing multiple similar paths without indicating why they should choose one over the other; you’re dropping the complexity baggage onto the user rather than reducing it for them.
In addition, if your language takes multiple stances on multiple vectors, then it has to account for all of their intersections. For example, if your language has two different ways of representing nullable data, then you had better make damn sure that the rest of your language plays nicely with both representations. Chances are, certain representations will pair naturally with certain others, which will either cause users to adopt dialects of your language or leave them with unexpected, incongruous behavior.
Here’s another snippet for why C has survived so long, while most other languages have died out:
[00:53:12]
Fridman: …important languages in the history of programming languages. If you look at impact, what do you think is the most elegant or powerful part of C? Why did it survive? Why did it have such a long-lasting impact?

Kernighan: I think it found a sweet spot of expressiveness, that you could write things in a pretty natural way, and efficiency, which was particularly important when computers were not nearly as powerful as they are today.
Now that we know that our choices of representation will bite us in the butt if we choose the wrong ones, we’re motivated to pick good ones. For example, here’s a snippet of Kernighan discussing the design of the Unix interface from the same podcast:
[00:29:51]
And so there’s a ripple effect that all the faculty and students can go up and they’re the one throughout the world and permitting that kind of way. So what kind of features do you think makes for good operating system? If you take the lessons of Unix, you said, you know, make it easy for programmers like that seems to be an important one. But also Unix turned out to be exceptionally robust and efficient. Right. So is that an accident when you focus on the programmer or is that a natural outcome? I think part of the reason for efficiency was that it began on extremely modest hardware, very, very, very tiny. And so you couldn’t get carried away.
You couldn’t do a lot of complicated things because you just didn’t have the resources, i.e. the processor speed or memory.
And so that enforced a certain minimal of mechanisms and maybe a search for generalizations so that you would find one mechanism that served for a lot of different things rather than having lots of different special cases. I think the file system in Unix is a good example of that file system interface in its fundamental form is extremely straightforward, and that means that you can write code very, very effectively for the file system.
And then one of those ideas, one those generalisations, is that, gee, that file system interface works for all kinds of other things as well. And so in particular, the idea of reading and writing to devices is the same as reading and writing to a disk that has a file system and then that gets carried further in other parts of the world, processes become.
In effect, files in a file system in the Plan nine operating system, which came along, I guess in the late 80s or something like that, took a lot of those ideas from the original Unix and tried to push the generalization even further so that in planning a lot of different resources or file systems, they all share that interface. So that would be one example where finding the right model of how to do something means that an awful lot of things become simpler.
And it means, therefore, that more people can do useful, interesting things with them without having to think as hard about it.
The simplicity of file interfaces has led to some interesting results, such as using Tetris to represent a hard drive. What’s notable is that if you substitute the words “programming language” for “operating system” and re-read the passage, you find a useful description of good programming languages. Of course, these observations are not exclusive to programming languages or computer design; they are useful notes for all good design.
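As a hedged sketch of the “one mechanism for many things” idea (the paths assume a Unix-like system), the same file-reading call works on a regular file and on a device:

```swift
import Foundation

// The Unix file interface generalizes: a plain file and a device
// are read through exactly the same calls.
let regularFile = FileHandle(forReadingAtPath: "/etc/hosts")
let device      = FileHandle(forReadingAtPath: "/dev/urandom")

let fileBytes   = regularFile?.readData(ofLength: 16)  // bytes from a plain file
let randomBytes = device?.readData(ofLength: 16)       // bytes from a device, same call

print(fileBytes as Any, randomBytes as Any)
```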
I consider these abstraction layers to be vector spaces and the design decisions (aka opinions) of the language to be a collection of vectors in that vector space. They intersect in weird and unexpected ways, which is one of the reasons language design is so hard.
A theoretical example: axiomatic basis of arrays versus linked lists
This example comes from some offhand thoughts on why doubly-linked lists are used so often in operating systems while arrays are not. While not strictly an example of language design (languages often support both data structures), it shows how constraints in different dimensions of a system can affect the larger system design.
Let’s consider the simple task of representing a list of indexable data. Two representations immediately come to mind: an array and a linked list. Each provides the same interface for the data: the ability to read/write the nth data element. However, each has different underlying properties based on its implementation. For example, the doubly linked list relaxes the memory-contiguity requirement, which doesn’t change the interface but does change the side effects, i.e. how this vector interacts with other vectors. That property makes it easier to fit into an OS kernel, for example.
| Array | Linked list |
|---|---|
| ![]() | ![]() |
Notice that the array offers arbitrary data access in constant time, while for a linked list we have to walk the nodes to get the nth element. However, along another vector, memory contiguity, there is no requirement for a linked list to be contiguous, while an array must be laid out sequentially. These vectors do not sit in isolation; picture them shooting off infinitely far across the complex terrain of a problem and intersecting other vectors at unexpected points. Each of these vectors, in turn, can be broken down further. Type systems, for instance, are usually incredibly complex and can themselves be represented as a collection of many cohesive, well-defined vectors.
For example, when we run on an OS which virtualizes paging, we can fragment and re-orient our vector space further, allowing some things to work and others to break.
(Doubly) linked list axioms:
- Access the nth element (by walking the nodes)
Array axioms:
- Access the nth element (in constant time)
- Memory: must be contiguous
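As a sketch of “same interface, different axioms” (the `Node` and `LinkedList` types below are hypothetical, written just for this post rather than taken from any library): both representations answer “give me the nth element,” but only the array pins down memory layout and gets constant-time access in return.

```swift
// Linked list: nodes may live anywhere on the heap.
final class Node<T> {
    var value: T
    var next: Node<T>?
    init(_ value: T) { self.value = value }
}

struct LinkedList<T> {
    var head: Node<T>?

    // O(n): walk the nodes to reach index i.
    func element(at i: Int) -> T? {
        var node = head
        for _ in 0..<i { node = node?.next }
        return node?.value
    }
}

let list = LinkedList(head: Node(10))
list.head?.next = Node(20)

// Array: contiguous storage, so indexing is constant-time address arithmetic.
let array = [10, 20]

print(list.element(at: 1) ?? -1)   // 20, found by walking the nodes
print(array[1])                    // 20, found by address arithmetic
```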
A real world example: Swift
This theorizing is all nice and good, but still somewhat abstract. If possible, we’d like a concrete language example to poke, prod, and validate or counter our conjectures. So I’ll revisit a real-world example mentioned earlier in the post: Swift.
Like most modern languages, Swift is open source. That is somewhat notable by itself, because Apple is known for strict control over its technology. Because Swift is open source, its evolution is run by a committee with a transparent proposal process and roadmap. How a programming language handles each part of its complex ecosystem changes how it handles the other parts as well.
We can study the following change sequence:
The first change introduces `some` (the opaque type keyword) as a valid parameter type. Closely related are existentials: values guaranteed to implement an interface (a protocol, in Swift) without exposing their concrete type. A code example:
protocol P {
    func funcName() -> Double
}

// `some P` accepts any concrete type that conforms to P
// (it is shorthand for a generic parameter <T: P>).
func f(_ p: some P) { }
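For instance (a hedged sketch; `S` is a hypothetical conforming type), a caller can then pass any concrete conforming value directly:

```swift
struct S: P {
    func funcName() -> Double { 1.0 }
}

f(S())   // the concrete type S satisfies `some P`
```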
While this change is technically syntactic sugar, as detailed in the change document, it’s a useful starting point to understand the subsequent changes.
Tangential to existentials are optionals. Optionals are analogous to nullables in other languages (as the `?` syntax suggests). Basically, an optional is either a value or nothing (`nil`).
In Swift’s type system, once a parameter’s type is wrapped into an optional, such as `(some P)?`, a plain, non-optional existential argument could not be passed to it; Swift would not “open” the existential into the optional parameter. This change allows Swift to do exactly that.
In the earlier change, the decision was made not to allow an argument (the data passed into a function) that is an existential to match a parameter (the function’s declared input type) that is an optional. This change reverses that decision and allows such an opening to occur.
The reasoning for this reversal is interesting. From the document:
The rationale for not opening the existential p in the first call was to ensure consistent behavior with the second call, in an effort to avoid confusion…However, experience with implicitly-opened existentials has shown that opening an existential argument in the first case is important, because many functions accept optional parameters. It is possible to work around this limitation, but doing so requires a bit of boilerplate, using a generic function that takes a non-optional parameter as a trampoline to the one that takes an optional parameter:
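The quoted passage leads into a code listing in the original document; here is a hedged reconstruction of what such a trampoline looks like (the names are hypothetical, and it assumes a Swift 5.7-era toolchain):

```swift
protocol P { func funcName() -> Double }
struct S: P { func funcName() -> Double { 1.0 } }

// The function we actually want to call takes an *optional* parameter.
func takesOptional<T: P>(_ p: T?) { _ = p?.funcName() }

// The workaround: a generic trampoline with a non-optional parameter.
// The existential argument can be opened here, then wrapped into an Optional.
func trampoline<T: P>(_ p: T) { takesOptional(p) }

let boxed: any P = S()
trampoline(boxed)   // works via the trampoline, even before the rule change
```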
The design decision not to unwrap optionals for existentials ended up being the wrong one in practice. Even after the design change went through a peer-reviewed process and was approved, it was wrong. Even the smartest and best-intentioned designer can get it wrong. Moreover, that was determined only after the feature was implemented and had been used extensively.
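With the reversal in place (a minimal sketch, assuming a toolchain new enough to include the change; the names are again hypothetical), the trampoline is no longer needed:

```swift
protocol P { func funcName() -> Double }
struct S: P { func funcName() -> Double { 1.0 } }

func takesOptional<T: P>(_ p: T?) { _ = p?.funcName() }

let boxed: any P = S()
takesOptional(boxed)   // the existential is opened and wrapped into the
                       // optional parameter directly; no trampoline required
```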
The choice of “existentials” is orthogonal to the concept of “optionals.” It’s not always clear how or when these vectors will intersect, or what the behavior will be (or ought to be) when they do. As such, there’s a constant re-definition of “what should happen in this case?”
Intersecting vectors:
^ ^ Optionals
\_ |
+ What should happen here??
|\_
| \_
| \_ Unwrapping
| \_
<--+----------\-- Existentials
|
|
(beautiful, I know \s)
My point is, it’s difficult to design a language so that all vectors intersect all other vectors in a predictable and well-designed way. One example of this framing is this post on “orthogonal” vs “diagonal” languages, specifically this line:
Back then, I regarded this diagonality analogy as somewhat ill-formed, an over-zealous application of the term “orthogonal”. But now, years of real-world experience later, I see that Larry is right. Also, I am now more comfortable with mathematics, and I no longer think of the “orthogonality” idea as an ill-fitting metaphor. Rather, I think it’s a fairly accurate description, perhaps even an isomorphism rather than an analogy.
I want to be careful about adopting an existing term such as “orthogonal.” Reading the Wikipedia page for orthogonality, I’m not sure it fits the same framework as my intuition. Another question is: does topology or category theory have any existing insight into this? I started looking into whether these intuitions had an existing mathematical basis, stumbled on a number of questions and comments on the CS Theory Stack Exchange, and was eventually led to this post titled ‘Category Theory Screws You Up!’, where I found this terrifying message:

Maybe I’ll hold off on learning category theory after all.
Why haven’t we developed a universal programming language by now?
You might say to yourself: okay then, either there exists one useful representation of a vector (or maybe a handful), or there doesn’t. If there is one, we should have homed in on it by now; if there isn’t, then the whole pursuit of that vector is pointless and we shouldn’t waste time arguing over a subjective topic.
Here are a few plausible explanations as to why, even if an optimal (or at least preferable) representation of a vector exists, it may not have been found yet:
- One explanation as to why we haven’t reached such an equilibrium is that language design itself takes time and innovation. Rust’s borrow checker is one such example.
- Another is, as new technology/forms of computing are introduced and software scales to new sizes, new vectors need to be developed to virtualize the change, which in turn cascades and causes the entire lower levels to be re-designed. An example would be parallelization due to multiple CPU cores.
- A third is that languages are expensive to build, require a lot of expertise, and need a large community to provide feedback to push the language to the (or one of many) logical end(s).
- Another explanation is that we have language design mostly figured out, and what we consider to be changes are the fine tuning, fiddly bits. Put another way, we’re at the long tail where most of the work has already been done.
- A final explanation is that the user/system is sensitive. This is similar to the previous explanation, except that one suggests the small changes don’t matter, while this one suggests the small changes have an outsized impact on either the user working in the system or the system itself. It’s sort of a chaos theory of language design, where a small tweak to this or that cascades through the whole system, and we end up with an unpredictable yet inevitable result in the form of a language definition. Another way of thinking about it: while the “long tail” theory assumes we’re gradient-descending toward a finely tuned ideal language, this theory proposes that the underlying function is so complex that gradient descent can converge on local optima, but there’s an impossibly low chance we converge on an “ideal language” before the heat death of the universe.
And here are a couple of explanations for why, even if there is no universal representation of a vector, it’s still worth rolling the boulder up the Sisyphean hill of language design:
- Languages are inherently social constructs; as our society changes (and it never stops changing), so do our languages. Therefore, while there will never be a finished language, it’s still useful to keep our languages in tune with our evolution.
Or it could be some combination thereof mixed with a million other things. I don’t know. Moving on…
Conclusion
Considered this way, a programming language is nothing more than a set of axioms from which to build complex systems that model the topologies of problems. For the sake of reducing complexity, we often partition the axioms into categories, grouping ones that frequently intersect each other but rarely intersect the rest. We can consider these partitions abstraction layers. We naturally assume that such systems are consistent with themselves and more-or-less perfect; after all, they’re usually developed by smart people. In fact, languages are neither perfectly consistent with themselves nor static. Most languages maintain an active repository of bugs and community suggestions on how to change the language to improve usability. Simply put, programming languages are designed and implemented by humans, often without full knowledge of how the internal axioms will combine or what complex situations will come up.
One way of measuring language design would be along three axes:
- internal consistency
- representative of the problem space
- mental ergonomics for the human programmer4, such as syntax
When we use words like “good” or “bad” to describe languages, there’s usually a lot of ambiguity hiding behind these generic words. Perhaps instead of “bad” we mean “inconsistent with regard to the different interactions of vectors in the language,” but that meaning gets lost behind the singular term “bad.”
I’m sure many or all of the above observations are well-worn into existing Programming Language textbooks. In fact, arguably I should have just read one or more before writing this, to see page 7 covering this exact topic. Still I find it useful to think and write from first principles, from my own experience as a software engineer. In addition, I’m sure it helps reinforce my understanding to actively write and inquire about it than it would to passively read a textbook passage. In which case, you should probably stop reading this and go explore yourself!
This topic also sometimes falls into the category of everyone thinking “it’s so obvious, we shouldn’t need to explain it or put it into writing.” Which is questionable, because when one actually explains something or puts it into writing, stating every implicit logical step is not as easy as it initially seems. Certainly I haven’t always found it obvious. Furthermore, even if these statements are obvious, making them explicit provides a rigorous foundation for discussing and analyzing languages in the future.
I’d like to do another post on functional programming (which have interesting math implications for the mathematical axiom system theory) or declarative programming (Prolog?), or maybe take a few languages and analyze the key vectors of the language, examine the effect that propogates through the system, and compare/contrast the result. There’s always more to do.
Footnotes
-
Especially in the modern age, where intense scrutiny should quickly pick up and address issues, so that no language should be “so obviously bad as to deserve scorn.” ↩︎
-
One might argue, then, that well-synthesized history is just fiction providing an over-simplified narrative, to which I kindly refer them to talk to a historian. But I digress. ↩︎
-
Of course, the programming language community’s opinion of Go is less universally positive than Kernighan’s. Kernighan notes that he’s only played around with Golang a bit, despite being one of the coauthors of the Golang textbook. See “I want off Mr. Golang’s Wild Ride” for a dissenting view. Notably, the issues raised in Wild Ride focus on system calls across Linux and Windows, default HTTP server handling, etc., not Golang’s parallel computation model. So a language like Go can be lauded for one part of its design but criticized for another, and these parts matter differently to different people and applications. ↩︎
-
If phenomenology has taught us anything, it’s that you can never assume an objective observer. ↩︎

