Programming languages geek out

Rebecca Parsons, Thoughtworks

Related topics: programming languages, programming skills

Programming languages are simultaneously a creative and an engineering medium. So how do you find the programming language that most facilitates you being creative, while enabling you to produce code that others can read? In this episode, two of our regular co-hosts, Rebecca Parsons and Neal Ford, take a deep dive into the world of programming languages.

Podcast transcript

Neal Ford:

Hello, and welcome to the Thoughtworks Technology Podcast. This is one of your regular hosts, Neal Ford.

 

Rebecca Parsons:

And I'm another one of your regular hosts, Rebecca Parsons.

 

Neal Ford:

And generally, on the podcast, we have guests who are Thoughtworkers with interesting things to talk about, but today it's just me and Rebecca, because we have something interesting to talk about. We don't preclude the possibility of the podcast hosts just talking about something they are particularly interested in or have some perspectives on, and that is definitely the case today.

 

Now, Rebecca and I are both quite passionate about the subject of programming languages in general, and about some perspectives on it very specifically. When we were chatting about podcast topics, one that came up, and one I've been talking about for years, was a realization I had several years ago, back when I was writing my book Functional Thinking: there is no one true programming language to rule them all, even though a lot of developers seem to be chasing that, and there are lots of interesting reasons why.

 

So I'll start with a famous quote from William Faulkner, the Nobel Prize-winning author, who very famously said, "I'm a failed poet." He said, "I tried to write poetry and I was really bad at it. So then I tried to write short stories, and I was really bad at that, so I guess I'm a novelist."

 

And the point of that quote is that I believe programming languages are a creative medium. It's interesting because programming is both a creative medium and an engineering medium, which makes it one of those interesting pursuits. Who was it that said that? Was it Fred Brooks who said programming is like building castles in the air, or something equally eloquent? Programming languages are a creative medium, and I think one of the things you have to do, as a programmer, is find the language that most facilitates you being creative, but doesn't annoy you or get in your way by putting too many roadblocks in front of that creativity.

 

Rebecca Parsons:

Well, and one of the things that I think is important in that context too is, different programming languages have different underpinning characteristics. What are the fundamental building blocks that you use within the language to build up a particular program? And when I think about programming, I think about, what are the things that I am trying to represent in the world? And similarly, what are the constructs and what are the fundamental building blocks that that programming language makes available to me? And I want to match those.

 

And again, I agree completely, Neal, that there is never going to be one true programming language to rule them all, because the kinds of problems we're trying to solve are so broad. The only way you could get one true language to rule them all would be a kitchen-sink language like, unfortunately, C++, which has a little bit of objects and a little bit of C and a little bit of this and a little bit of that. And I've heard similar things about Scala, where there's too much mixing of paradigms within the language. But there's a reason we have different kinds of languages: we have different kinds of problems, and people think in different kinds of ways.

 

Neal Ford:

Yeah. One of our colleagues, actually, and I wish I had come up with this quote, but one of our colleagues, and I can't even remember who it was, said that they thought one of the problems with Scala was that it was too flexible, that it was flexible in too many ways. Its philosophy is very much like C++'s: let's support all the paradigms and then let developers choose which paradigm they want to pick, which path they want to go down today. That can be very empowering, for exactly the reason you're talking about, because it allows you to shape the language very tightly toward the exact problem you're working on, but it also requires incredible discipline if you have a large programming team.

 

Because one of the dysfunctions I've seen on Scala projects is that some people are coding in an object-oriented style and some are coding in a functional style. And some are doing their own wacky thing with a lot of metaprogramming and other powerful facilities, writing in their own language they've created, and it becomes very, very difficult to understand. Of course, it's a creative medium, but one of the interesting things about programming languages is that other people have to read what you've created. Clarity is one of the primary goals of code, and being able to write readable code in a particular language is its own challenge sometimes.

 

Rebecca Parsons:

Well, and also, there are different people who conceive of solutions to problems in different ways, and I think that complicates it as well, getting to the clarity point that you talked about. One of the most frightening pieces of code I've ever laid eyes on was written by someone who was an excellent Fortran programmer, but that was the only language he had ever programmed in, and he thought, in terms of problem solution, very much in the Fortran imperative style. But he went to an organization that wanted to use Lisp, and so he wrote Fortran programs in Lisp syntax. He could understand the syntax of Lisp and he knew what he had to do to get his Fortran program to work in the Lisp programming language, but he never really grasped what Lisp was all about, and therefore, he literally wrote Fortran programs in Lisp syntax.

 

Now, that's obviously an extreme case, but I think it does get back to what you're talking about with the problems of these kitchen-sink languages on large teams. Different people are going to think about how to solve the problem differently, and if you have someone who thinks object-oriented and the language supports both functional and object-oriented styles, they're going to approach the problem and the code trying to interpret it as OO code, even if it's actually drawing more on the functional characteristics of Scala, and that is just going to increase the dissonance.

 

It's sort of like looking at a piece of text and expecting to see English, and actually it's filled with Latin phrases. Even if you know Latin, if you're going in expecting to read English, it's going to take a moment to make that shift and say, "Ah, actually, no. This isn't how it's intended."

 

Neal Ford:

Well, there are two interesting things about that, because what you're talking about goes all the way down to the fundamental levels of language. When you think in an object-oriented way, you think iteration, but when you think in a functional way, you think map and reduce, and those are fundamentally different ways to attack a problem, below the level of everything else you have to consider about it. It's the beginning phase of solving a problem, not the last little bit, because it's a fundamental way of thinking about it.
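
To make that contrast concrete, here is a minimal sketch, written in Scala because it comfortably supports both styles discussed here; the function names and data are invented purely for illustration:

```scala
// Imperative/OO habit: iterate and mutate an accumulator.
def totalImperative(prices: List[Double]): Double = {
  var total = 0.0
  for (p <- prices) {          // "think iteration"
    if (p > 0) total += p
  }
  total
}

// Functional habit: describe the result as a pipeline of transformations.
def totalFunctional(prices: List[Double]): Double =
  prices.filter(_ > 0).sum     // "think map/filter/reduce"
```

Both produce the same answer; the difference is the mental model the reader is expected to bring to the code.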

 

And everybody listening to this who has programmed in more than one programming language knows that you always pick up a new programming language by programming it in the old style, exactly like your exemplar there. I know when I first started coding in Ruby, I was very much writing Java code in Ruby, and then I gradually learned the idioms of Ruby. But, and I think this is one of the huge benefits of embracing being a polyglot programmer and knowing many programming languages, learning a new language and its idioms inevitably makes you better in whatever language you happen to be working in.

 

Rebecca Parsons:

Exactly. And it's one of the reasons why I think, in learning a language, you should start from the fundamental semantic constructs. The syntax is just how you express them. Learn first: what are those fundamental building blocks? What are the constructs I have to build up my program? And what are the idioms, as you say, the organizing principles that I'm going to use to construct it? And then you learn the syntax.

 

And then, eventually, if you learn enough languages within the same family, you can start to really delve into the differences between OO programs expressed in Smalltalk versus Java versus C# versus C++, as examples. How are those expressed differently? And then you can start to tease apart the subtleties of different language decisions. For so long, we've had very dominant languages. First it was Fortran. Actually, first it was assembly, but then it was Fortran, and then COBOL. COBOL was everywhere.

 

Neal Ford:

And still is.

 

Rebecca Parsons:

Right, exactly. And then you had C as kind of the systems language. But when I look at the language landscape now, it is so much richer, with languages like Go and Rust and Julia, this entire proliferation, and I have my own theories about it. But I wondered, Neal, why do you think we are seeing this explosion in the availability of different kinds of languages, languages that are not just toys, not just used on the edges, but actually getting serious consideration?

 

Neal Ford:

I think there are a couple of things behind that. Part of it is the maturity of our understanding of how programming languages work and what they're useful for, so language design has come on in leaps and bounds. In particular, one of the benefits of Java and its ecosystem is the way they've studied both the underlying machinery of the virtual machine and the language design, and they've actually done a really good job, and of course C# added to that. But I think, just in general, there's a much richer ecosystem of languages.

 

But I think, too, that languages are much smarter now. This is one of the points I want to make, and it's easy to circle back to. One of the things that you and I called out in the Building Evolutionary Architectures book was the 4GL languages that were all the rage back in the 1990s. There was a Cambrian explosion of these 4GLs back then: dBASE and FoxPro and Clipper and Access and Paradox and PowerBuilder, a whole family of them. They were a huge dominant force in the industry for years.

 

But it goes exactly to the thing you were saying before about the building blocks of a programming language. Their building blocks were too coarse-grained, too chunky. And this is what we could call the last 10% trap. I use Access as the problem child here, but it's true for all of these 4GLs: when you build something in Access, you can get about 80% of what you want really fast and easily, the next 10% is possible with some difficulty, but that last 10% is always out of your reach. And so we always go back to general-purpose programming languages.

 

And to your point about modern languages, they haven't, for the most part, fallen into that trap. If you look at Go and Rust and Clojure and Scala, those are all very general-purpose programming languages, but with a very distinct philosophy behind them. So I think, as we learn more about programming languages, we learn what creates a truly capable blank canvas versus what is useful for limited problems but quickly runs out of steam. And of course, the current incarnation of that is the low-code environment stuff that you see popping up. That's just fourth-generation languages reborn yet again, and I expect the same fate for those environments.

 

Rebecca Parsons:

Exactly. And one of the things that Martin and I were trying to get at with the Domain-Specific Languages book addresses that same kind of 80/20 trap. People, in the past, when they've tried to design these domain-specific languages, have fallen into that same trap: "Okay, I can think about 80% of my problem this way, but then I need a notion of iteration, and I need a notion of this and a notion of that." And so they build into their domain-specific language these general-purpose programming language constructs, and then the entire thing blows up.

 

What we were trying to do, rather than allow that to happen, is to think, from the perspective of your domain: why do you need this concept of iteration? Perhaps it is something more along the lines of a map/reduce style, but you can create constructs that don't replicate the general-purpose capability of iteration and instead give you the semantically relevant iteration that you need within the context of that domain. And I think that's one of the problems, again, that the low-code environments are going to run up against: where do you draw that boundary?
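
As a rough sketch of that idea, here is a hypothetical internal DSL in Scala; the Invoice domain and every name in it are invented for this example. The DSL exposes iteration only as a domain concept, "for each overdue invoice," and keeps the general-purpose loop hidden in the implementation:

```scala
// Hypothetical billing domain, invented purely for illustration.
final case class Invoice(id: String, daysOverdue: Int, amount: BigDecimal)

object Billing {
  // The only traversal the DSL offers is the semantically relevant one;
  // the general-purpose loop stays inside the implementation.
  def forEachOverdueInvoice(invoices: Seq[Invoice])(action: Invoice => Unit): Unit =
    invoices.filter(_.daysOverdue > 0).foreach(action)
}

object Example extends App {
  val invoices = Seq(
    Invoice("A-1", daysOverdue = 12, amount = BigDecimal(250)),
    Invoice("A-2", daysOverdue = 0, amount = BigDecimal(90))
  )
  // The domain author never sees an index variable or a raw while loop.
  Billing.forEachOverdueInvoice(invoices) { inv =>
    println(s"Send a reminder for ${inv.id}: ${inv.amount}")
  }
}
```

The domain author gets the iteration they need without being handed the whole general-purpose construct, which is where the boundary Rebecca describes gets drawn.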

 

And I've heard many of the things that we've heard for years about generators and automatic code translators: "Well, you can always just have it generate source code and fiddle with it from there." And it's like, A, that kind of defeats the purpose of having a more simplified programming environment in this low-code environment, and B, is that code in any way, shape, or form maintainable in its new state? Fundamentally, any language with a few basic constructs can do everything. It's just a question of how much work and how much complexity you are introducing to try to do something that the language really isn't well suited for. Back to my Fortran programmer and the Lisp example.

 

Neal Ford:

In fact, if I remember correctly, one of the things you called out in the DSL book was DSLs that were too ambitious, that were creeping too far toward Turing completeness, which is an anti-pattern in that space.

 

Rebecca Parsons:

Exactly. Exactly. And going back to your point about language design, one of the problems we have when we think about domain-specific languages is that, outside a few very basic design principles, I don't think we yet have a shared understanding of what constitutes a good domain-specific language. And it's conceivable that there is no general definition of what constitutes a good domain-specific language beyond things like don't be overambitious, have a well-defined domain, and that sort of thing. But we have gotten much more sophisticated about what constitutes good language design for a general-purpose programming language, and even more so about what a good OO language looks like, versus what a good functional language looks like, or what a good declarative language looks like. So I agree. We have gotten a lot more sophisticated.

 

The other part of my hypothesis on why more of these languages are coming into being is that the computational viability of bytecode interpretation is a big part of it. The decision by Sun, and then Oracle, and by Microsoft to treat the JVM and the CLR, respectively, as language implementation platforms has made a big difference, because it significantly reduces the barrier to implementing a new programming language. You don't have to do all of the work around compilation, optimization, and getting down to what's going to run on the hardware; you just have to take it down to JVM bytecode or the CLR's intermediate language.

 

So the reduction in the difficulty of language implementation, I think, is important. It's certainly not simple to implement a new language, even targeting a platform like the JVM, but it's easier than it has been in the past.
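
As a sketch of what "taking it down to the JVM" can look like, here is a small Scala program using the ASM bytecode library; the choice of ASM, and the tiny hypothetical language back end, are assumptions for illustration, not anything the hosts mention. It emits a runnable Hello class directly as bytecode, and the resulting program inherits the JVM's garbage collector and JIT without the language implementer building either:

```scala
import org.objectweb.asm.{ClassWriter, Opcodes}
import java.nio.file.{Files, Paths}

// Hypothetical back end for a tiny language: emit bytecode equivalent to
//   public class Hello { public static void main(String[] args) { System.out.println("..."); } }
object TinyBackend extends App {
  val cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES)
  cw.visit(Opcodes.V1_8, Opcodes.ACC_PUBLIC, "Hello", null, "java/lang/Object", null)

  val mv = cw.visitMethod(Opcodes.ACC_PUBLIC | Opcodes.ACC_STATIC, "main",
    "([Ljava/lang/String;)V", null, null)
  mv.visitCode()
  mv.visitFieldInsn(Opcodes.GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;")
  mv.visitLdcInsn("hello from a tiny language")
  mv.visitMethodInsn(Opcodes.INVOKEVIRTUAL, "java/io/PrintStream", "println",
    "(Ljava/lang/String;)V", false)
  mv.visitInsn(Opcodes.RETURN)
  mv.visitMaxs(0, 0) // actual sizes are recomputed because of COMPUTE_FRAMES
  mv.visitEnd()
  cw.visitEnd()

  // `java Hello` will now run this class with the JVM's GC and JIT behind it.
  Files.write(Paths.get("Hello.class"), cw.toByteArray)
}
```

Everything underneath, the garbage collector, the JIT, the class loading, comes for free from the platform, which is exactly the reduction in effort being described.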

 

Neal Ford:

Yeah. And I think that speaks exactly to your point about evolution as well, and to the code generator thing you were talking about. PowerBuilder, back in the day, was a very popular 4GL, and it was an interpreted language back when interpreters weren't that powerful, because the machinery wasn't that powerful. So in PowerBuilder's last gasp of relevance, what they started doing was code-genning all the PowerBuilder code into C code and handing that to a C compiler as the optimization step. And for exactly the reasons you were talking about, that was a nightmare, because the generated C code was incomprehensible, but there were some things you had to go tweak in it, and of course, as soon as you tweaked it, you couldn't go backwards. So what you ended up with was a massive, generated, unmaintainable C code base.

 

But the split they made when they designed the Java virtual machine cleaved things in a really useful place, between the fundamental operations and behavior that a programming language needs and the syntax that produces that behavior. That has proved incredibly fruitful in our industry, because it has allowed this explosion of really sophisticated languages that can compete as first-class languages, because they can rely on the underlying garbage collection and performance and all the other tuning that happens on a regular basis in those virtual machines.

 

Rebecca Parsons:

I also do think another aspect of computation today that is feeding this expansion of the programming language landscape is, we are solving a much broader range of problems than we used to. I mean, back when I was studying, you had two classes of programs. You had business programs that were written in COBOL, and then you had scientific programs that were written in Fortran, and those communities never really interacted very much. They solved very different kinds of problems and the fundamental languages were quite different between those.

 

When you start to think about implementing on a mobile device, implementing a rich, immersive user experience, but also dealing with embedded computation in cars or in networks of IoT devices, there's a real breadth to that landscape of problem types, how you might want to approach those problems, and also what you have to optimize for. You will think about memory utilization or power utilization or code footprint in a very different way if you're running on a desktop versus running on an IoT device that's sitting out somewhere in the middle of a field.

 

Neal Ford:

Yeah. And the capabilities of these tiny platforms are amazing too. I mean, you can reasonably run high-level Java code on something like a Raspberry Pi now, where a few years ago you would have had to write assembly language to target a platform that compact. So just the rising tide of hardware capabilities has made our job as programmers a lot more convenient.

 

Rebecca Parsons:

So shall we speculate a little bit?

 

Neal Ford:

Sure.

 

Rebecca Parsons:

Where, if at all, do you see a gap in the landscape of languages?

 

Neal Ford:

That's a really interesting question, because the new languages that have come out are opinionated, but don't always pick up on a lot of the things that other well-established languages have been doing. Rust is a good example of one that has chosen a very deliberate "we want to replace C" perspective, a very low-level kind of systems programming, while Go was designed for these very small command-line utilities. And Go purposely left out some things around concurrency that modern Java currently does, around deadlock prevention and that kind of thing, and some things like support for generics in data structures, because they wanted to keep the language quite simple. So I think that kind of opinionated language design will continue, and I think we'll see some interesting perspectives on it going forward.

 

I think that, as you were saying, the fundamental capabilities we're seeing are going to keep spawning new ways to write stuff. So I think, very soon, you'll be able to write in a very high-level language and easily target some environment like a watch or an IoT device, and not have to worry about a lot of the things you have to worry about now. I think connectivity, like Bluetooth, will get much better over time, and we can stop thinking about locality quite so much. And of course, with the cloud, resources like memory are becoming a fuzzier and fuzzier concept in desktop applications. I think that's going to continue, and we can see a day, and we're actually starting to see this, where everything with electricity also has an IP address.

 

So the reach that you can have with these programs, I think, is going to be astounding. And one of the interesting questions is going to be: how do we take all these different programming languages, technology stacks, and platforms, which all have an IP address, and get them to talk to one another and do useful stuff? So I think there are going to be some interesting challenges in integration architecture, maybe even some languages targeted purely at integrating all this stuff together. We're still writing those things at a very low level right now, but it would be really nice to have a higher-level programming language that lets you wire together your Alexa and your HomePod and your phone and your washing machine to do some sort of useful thing. The capability is technically there right now; we just haven't invented the language yet to get those things to easily talk to one another.

 

Rebecca Parsons:

Yeah. That was the same direction I was going, where you got to towards the end: a higher-level language that allows the proper modeling of distributed computation, when we think about microservices, IoT devices, these kinds of ecosystems. Though I'm not sure about my washing machine, and I still have never figured out why I would want an IP address on a toaster. But I'm sure that's just my limited thinking. I do think we need some higher-level constructs that help us more properly express the communication and coordination aspects of distributed computation, that allow an easier way to express some of those notions of distributed systems, and that solve some of the problems that are inherent as soon as you start to run across a network of distributed devices, problems that only get more complicated as the devices become more heterogeneous.

 

Neal Ford:

Yeah. In fact, my sous vide has an IP address and it is actually useful, because it tells me when things are done. And I've even played around a little bit with what I was talking about before. There are some utilities, like If This Then That, where you can set things up on your phone to say, "When the sous vide has been cooking for four hours, notify me that it's done and start the toaster," because I need to start the rest of dinner. So we are starting to see some very rudimentary ways of wiring things together, but it's very, very primitive, and it reminds me of the early days of assembly language. So I think we'll eventually get to a much more interesting place with those kinds of things.

 

Rebecca Parsons:

And it is certainly far more satisfying to do programming that makes changes in the real world than programming that just generates a number or spits out a report.

 

Neal Ford:

Yeah, it's totally more fun to click a button on your phone and watch something happen in the real world, or react to it. It's always a real treat.

 

So we'll wrap up here. The premise of this podcast was that there is no one true programming language, but we're both kind of language geeks, so I feel compelled to answer a question I get a lot, because one of my hobbies has always been looking at programming languages: "Well, you know a lot of programming languages. Do you have a favorite?" And, in fact, I have two favorites, for different purposes.

 

Ruby is still my all-time favorite programming language for getting little stuff done, because it is so developer friendly. In fact, what I say is that it's so developer friendly that it's dangerous: it will let you do really, really dumb things, because it's trying to be as friendly and as accommodating as possible. So I still write a lot of Ruby. I still use Rakefiles and little Ruby utilities to automate all kinds of tasks on my machine.

 

But for really serious work, I've looked at C++. I spent many years programming in C++, and I've looked deeply at a lot of other programming languages, but I think I've come around to my favorite. If I had the choice, which you never do, of course, but if it were solely my choice of programming language on a project, I think it would probably end up being Clojure, for the reasons we talked about earlier with the JVM. You get all that first-class support. But I really like the perspective that Clojure has, which is kind of the opposite of the one Scala took.

 

Like Go, it's a very opinionated language, designed by Rich Hickey, and it is a Lisp, and I have a deep abiding love for Lisp anyway. One of the things I like most about Lisp is that, unlike a language like Scala, or even Ruby, which gives you a million different ways to extend it, there's basically one way to extend a Lisp, and that's the macro facility; everything you look at is either a thing or a macro on a thing. So it has a very, very well-defined extension mechanism, and I think that, long term, leads to better readability and understandability.

 

You do have to bend your mind around the notation of Lisp, but one of the things I like about it, and the reason I say I think it's one of the better engineering languages, is that there's no ambiguity when you read that code. There is very often the opportunity for ambiguity in other languages, and sometimes egregious ambiguity in really powerful languages that let you rewrite every part of them; you start mistrusting what you are seeing. So I really like that combination of things in Clojure: the opinionation and the single-paradigm approach that still supports others, but definitely leans into one opinion about how to do things correctly.

 

Rebecca Parsons:

I'm sure Rich Hickey will be pleased.

 

Neal Ford:

Yeah, I'm sure he will. Well, I know you have a deep abiding love for Scheme and the Lisp family of languages as well.

 

Rebecca Parsons:

Yes. We should probably not get into the Lisp versus Scheme debate.

 

Neal Ford:

Yes.

 

Rebecca Parsons:

But I do tend more towards the Scheme side of the fence. For many of the reasons you stated for your fondness for Clojure, though, that all transfers from Scheme just as readily as it transfers from Lisp. And I guess I actually had the most fun programming in C, which I know most people would find horrendous. I'm still looking for a language that gives me the freedom of C but gets rid of some of the landmines that were so easy to step on.

 

When I was doing a lot of that programming, I was programming by myself, solving particular problems for me, so I didn't have to worry about what other people were going to do with the code. I was in a position, therefore, to maintain the discipline that's required, because in a language like C, or Ruby as you described, it can all go horribly wrong if you have people with different levels of discipline trying to maintain the same code base. But I guess that makes me very old-fashioned.

 

Neal Ford:

No, no. I do have a great fondness for C. Not C++, which I think is kind of a train wreck, but C. K&R C was a beautiful language, because they managed to hit just the right level: higher than assembly language, but low-level enough to write really astoundingly performant things. And as much as people bad-mouth C, most of the code we're running right now is still written in C. Unix is fundamentally still written in C, and of course everything is now running some flavor of Unix, and Windows is still largely written in C and C++, I suspect. So I thought it was a beautiful language, and I think one of the things that really helped C tremendously was the original K&R book, because it was, I think, secondarily a reference on C programming, but primarily a guide on how to program in C idiomatically.

 

Rebecca Parsons:

Yes, I agree.

 

Neal Ford:

And that was brilliant, because it really showed people how to think in C in a way that I think very few introductory books on a programming language have ever managed to do.

 

Rebecca Parsons:

Agreed.

 

Neal Ford:

All right. So that's Rebecca and me geeking out about programming languages. To the small subset of people who enjoyed that, thank you; the others probably turned this podcast off a little bit earlier. But it's certainly something that she and I are interested in. So thanks very much for listening, we hope you enjoyed it, and come back for our next episode.
