Like human languages, programming languages evolve over time. However, this process is unstructured, inconsistent, and influenced by a variety of forces. While new language features and versions trigger fiery debates across the industry, the underlying forces that shape programming languages, and ultimately their footprint on technology, are discussed less frequently.
Language developers publish extensive documentation justifying why particular modifications were introduced across versions. A tremendous amount of ink has also been spilled on the lineage of programming languages, such as the ancestral tree of Go, which includes Oberon, Pascal, C, and ALGOL 60. That said, little has been written that generalizes the isolated decisions leading to extensions within a language, let alone the ideas that gave way to a new language altogether, into a canonical set of principles applicable to all languages. This is primarily because no such universal “standard” exists; even if it did, it might not be especially useful or nuanced, since programming languages, along with their uses and applications, are so vast and diverse. This variability is also why certain languages have gained more popularity than others: it mirrors the enormous range of domain-specific applications for each language, its user base, its community, differences in schools of thought, preferences and taste, and the social and economic factors that influence its success or decline.
Programming language theory (PLT) is a sub-discipline of computer science concerned precisely with the study, design, and implementation of languages. Despite being an active area of research, it is not well understood by most developers in a pragmatic sense. The goal of this post is to familiarize software developers with basic concepts underlying PLT, and to reveal why subjectivity and chaos surround software creation. Understanding these ideas is useful for a few reasons:
- Making informed technical decisions. Building the vocabulary and tools to examine languages provides a sophisticated way to reason about technical decisions.
- Thinking about developer experience improvements. Understanding language features empowers us to innovate more thoughtfully, attending to developer experience alongside technical correctness.
- Becoming better at learning new languages. Various features and properties are shared between multiple languages. Being acquainted with constructs in one language helps us form themes and recognize patterns and design principles that transcend languages.
- Being less prone to systematic error. Decisions in computing are not free from the cognitive bias of their implementers. Understanding foundational principles can provide awareness and enhance meta-cognition about why certain language features were designed in a particular way, or how different approaches impacted mindshare and adoption.
Standards committees and human error
In the absence of a “hard science”, decisions about how language features are extended, eliminated, or modified are left to standards committees. These individuals are responsible for evaluating a language, surveying its usage, understanding its community and ecosystem, and reacting to needs. As a body of trusted experts, they are the ultimate authority on how a programming language evolves. However, they are no less susceptible to the cognitive bias and subjectivity that add error to any human decision-making endeavor. This is why some programming languages follow a structured evolutionary path (such as Racket, whose change process is well organized), while other languages take a looser approach (such as PHP, which has evolved mostly organically).
Languages are as diverse as people
Humans vary. Some prefer iteration, while others appreciate the recursive life. We come equipped with our unique neuroses and proclivities for what feels most intuitive. Before diving into PLT, it is essential to acknowledge that while there are thousands of languages, only some gain popularity. The next section examines potential factors that have influenced the success of a few languages that are popular today.
What makes a language “successful”?
Programming languages are all more or less Turing equivalent, yet they remain religiously debated among software developers. This dogmatism arises for several reasons: languages represent different design philosophies, and language communities create different cultures, offering varying levels of social support that color a developer’s opinion. In spite of this parochialism, it is difficult to discern what makes one language “good” over another. Here are some factors that have traditionally influenced mindshare, perceptions, and adoption:
Expressiveness. Language features and abstraction facilities influence a programmer’s ability to write concise, maintainable code. Programming is, after all, a communication tool.
Open source. Making source code freely available means it can be used and improved upon by anyone, customized and adapted by a wider range of people, and ported to different architectures (e.g. Go’s Windows port was entirely community driven). C was developed in conjunction with Unix, known as the world’s most portable operating system, and happens to be the language Linux is written in as well.
Economics. Microsoft developed C#. IBM developed Fortran. Ada was backed by the Department of Defense. The explosive growth and survival of many languages have benefited from powerful sponsors. New and “better” languages continue to spring into existence; however, we tend to lean toward the devils we know, because replacing an existing large code base and accumulated programmer expertise is non-trivial.
Low learning curve. While Basic may not be considered the most sophisticated language, much of its success is attributed to how easy it is to learn. Logo was designed for education and is used even at the elementary school level. Java, while more complex than Basic or Logo, gained popularity in universities for being more beginner-friendly than the alternative, C++.
Portability. Having a language that is easily ported allows developers to become productive quickly. C is a good example of a language that required minimal modifications across platforms. Niklaus Wirth created Pascal to work reliably and quickly on 1970s machines and shipped it for free to all of the world’s universities. Basic was easy to run on small machines. These languages were straightforward to implement and required little computing power.
Support, tooling and programming environments. Scaling large projects depends on more than a language in isolation. Code-writing is aided by interpreters, compilers, linkers, assemblers, debuggers, preprocessors, editors, and several other tools. Powerful development environments contribute to a fluid programming experience. Common Lisp benefited from great compilers and supporting tools. Visual Studio is an excellent example of an editor that works with syntactic knowledge of C# to provide capabilities to cross-reference or locate definitions.
Open questions in the design of languages
Introspecting upon what is “better” opens up several questions. For example, how much time and energy should be devoted to writing efficient code? What proportion of the busywork should be abstracted away from the developer and passed on to the computer? By contrast, what should remain in control of the engineer? How much direct access to machine memory should be provided? How easily can a feature be implemented, and how will the choice of various features impact performance? We evaluate these trade-offs as engineers and computer scientists but often do so with sub-optimal processes and inadequate information.
A brief introduction to Programming Language Theory
Most developers focus on learning the “how” of programming rather than consuming materials that expound on underlying theoretical foundations. The following introduction to PLT does not include proofs or build deep mathematical intuition. The goal instead is to establish familiarity with core concepts and areas of study.
Turing completeness. This property is the usual criterion for what counts as a general-purpose programming language. A programming language is a formal language defined by a set of rules for translating instructions to a computer; these rules specify acceptable ways to communicate with it. The Turing machine is a mathematical model of an abstract computer capable of implementing any algorithm by simulating its logic. A system that can simulate a Turing machine (and can therefore implement any algorithm) is known as Turing complete. In practice this requires the language to possess state (i.e., variables), conditional logic, and some form of unbounded repetition. By this definition, HTML is not Turing complete, but the lambda calculus is.
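To make the lambda calculus claim concrete, here is a minimal sketch of Church encodings expressed as Python lambdas. The names (`TRUE`, `SUCC`, and so on) are conventions of this sketch, not a library; the point is that booleans, numbers, and conditionals can all be built from nothing but single-argument functions.

```python
# Church booleans: a boolean is a function that selects between two alternatives.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
IF = lambda p: lambda a: lambda b: p(a)(b)

# Church numerals: the numeral n means "apply f to x, n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral back into a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

ONE = SUCC(ZERO)
TWO = SUCC(ONE)
print(to_int(ADD(TWO)(TWO)))     # 4
print(IF(TRUE)("then")("else"))  # then
```

State and conditionals emerge purely from function application, which is one intuition for why the lambda calculus is Turing complete while a markup vocabulary like HTML is not.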
Decidability and the halting problem. In his seminal work, Alan Turing proved that there is no general algorithm that can determine, for every possible program-input pair, whether the program will terminate (though we can reason about the behavior of some specific pairs). This is known as the halting problem. A decidable problem is one for which it is possible to construct an algorithm that always leads to a correct yes-or-no answer; by this definition, the halting problem is undecidable.
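Turing's diagonal argument can be sketched as a short program. The `halts` oracle below is a hypothetical placeholder (an assumption of this sketch; the proof shows no correct implementation can exist):

```python
def halts(program, arg):
    """Hypothetical oracle: returns True iff program(arg) terminates.
    No correct total implementation can exist."""
    raise NotImplementedError("undecidable in general")

def paradox(program):
    # Do the opposite of what the oracle predicts about program run on itself.
    if halts(program, program):
        while True:  # loop forever exactly when the oracle says "halts"
            pass
    return "halted"

# Consider paradox(paradox):
# - If halts(paradox, paradox) returned True, paradox would loop forever.
# - If it returned False, paradox would halt.
# Either answer is wrong, so no such halts() can exist.
```

The contradiction lives entirely in the comments: any candidate `halts` is defeated by feeding `paradox` to itself.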
Type theory. This is the strand of mathematics underlying type systems in programming languages: it studies type systems, which define a language’s organizing rules, and forms the internal logic of the type-checking algorithms in compilers. A type-theoretic framework allows us to evaluate and computationally justify why certain types should or should not exist in a strongly typed language. Types are one way of defining interfaces between different parts of a program and of ensuring the program is connected consistently. In typed functional programming, types express what a program is to do, and the lambda calculus that materializes from those types is the language itself. Being acquainted with some type theory therefore provides the foundation to better understand statically typed languages with rich type systems (such as Haskell). A notable example is the Hindley-Milner type system, which uses unification to infer types for syntax that carries no type annotations.
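Unification, the engine at the heart of Hindley-Milner inference, fits in a few lines. The following is a toy sketch under some assumptions: type variables are plain strings like `"a"`, compound types are tuples like `("->", arg, result)`, and the occurs check is omitted for brevity.

```python
def is_var(t):
    return isinstance(t, str)

def resolve(t, subst):
    """Follow substitution chains for type variables."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a substitution making t1 and t2 equal, or raise TypeError."""
    subst = dict(subst or {})
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    if len(t1) == len(t2) and t1[0] == t2[0]:
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} with {t2}")

# Infer the unknowns: unify (a -> a) with (int -> b).
s = unify(("->", "a", "a"), ("->", ("int",), "b"))
print(resolve("a", s), resolve("b", s))  # ('int',) ('int',)
```

Solving these small equations over type terms is how an inference engine recovers types for untyped syntax: each use of a variable contributes a constraint, and unification merges them into a single consistent answer.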
Set theory. Sets are collections of objects, equipped with operations such as intersection, union, and complement. According to formal language theory, a language is a set of strings. Data structures themselves can also be seen as sets with various implementations of set operations. One clear way set theory is relevant to programming is through relational databases, where each table is a relation: a set of tuples drawn from the Cartesian product of its column domains.
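Both readings above can be made literal in a few lines. The languages, tables, and names here are hypothetical examples invented for this sketch:

```python
# A formal language is a set of strings, so language operations are set operations.
# Two languages over {0, 1}, truncated to length <= 2 so the sets stay finite:
even_length = {"", "00", "01", "10", "11"}   # strings of even length
contains_one = {"1", "01", "10", "11"}       # strings containing a 1

print(even_length & contains_one)  # their intersection: {'01', '10', '11'}

# A relational join is a set comprehension over two relations (sets of tuples).
employees = {("alice", "eng"), ("bob", "sales")}
depts = {("eng", "building 1"), ("sales", "building 2")}
joined = {(name, dept, bldg)
          for (name, dept) in employees
          for (d, bldg) in depts if d == dept}
print(sorted(joined))
```

The join is just set-builder notation executed by the machine, which is the sense in which a database query is set theory in disguise.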
Category theory. Category theory generalizes several branches of mathematics by providing a generic “meta” language for modeling concepts abstractly. This is profoundly useful because it allows us to reason about a network of relationships instead of being concerned with the details of a particular system. A category is a collection of objects together with arrows between them, known as morphisms, which compose associatively. Structure-preserving mappings between categories are known as functors. Several concepts described by category theory are used in computer science to assess the correctness and concision of programs. They are most visible in pure functional paradigms, where computation consists of mathematical functions that can be composed to build complexity.
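Two of these laws can be checked concretely. In the (informal) category whose objects are Python types and whose morphisms are functions, composition is ordinary function composition, and `map` over a list behaves like a functor, preserving composition. A minimal sketch:

```python
def compose(g, f):
    """Morphism composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

inc = lambda x: x + 1
dbl = lambda x: x * 2

# Associativity: either grouping of the composition is the same morphism.
left = compose(compose(dbl, inc), inc)
right = compose(dbl, compose(inc, inc))
assert left(3) == right(3) == 10

# Functor law for map: mapping a composite equals composing the maps.
xs = [1, 2, 3]
assert list(map(compose(dbl, inc), xs)) == list(map(dbl, map(inc, xs)))
print("laws hold")
```

These are the same equations a category theorist writes abstractly; the payoff in programming is that code obeying them can be refactored and fused mechanically, without re-checking behavior case by case.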
Automata theory. This corner of computer science provides the footing for formal language analysis, developing the theoretical basis for compiler design, parsers, grammars, and regular expressions. We model formal languages using automata: finite descriptions of languages that may themselves contain infinitely many strings. A finite automaton is an abstract machine that moves through states to perform a computation, determining whether a given input should be accepted or rejected according to the rules of the formal language. The Chomsky hierarchy provides a way to group formal languages into successively larger classes.
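A deterministic finite automaton is small enough to write out in full. This sketch recognizes binary strings with an even number of 1s, a regular language in the lowest tier of the Chomsky hierarchy; the state names and transition table are choices of this example:

```python
# Transition table: (current state, input symbol) -> next state.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(s, start="even", accepting=frozenset({"even"})):
    """Run the machine over the input; accept iff it ends in an accepting state."""
    state = start
    for ch in s:
        state = TRANSITIONS[(state, ch)]
    return state in accepting

print(accepts("1010"))  # True  (two 1s)
print(accepts("111"))   # False (three 1s)
```

A regular-expression engine is, at bottom, a compiler from patterns to machines of exactly this shape.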
Embracing the chaotic evil of programming languages
The success of programming languages is not governed by a consistent set of principles or standards of technical correctness. Social inertia, human subjectivity, and economics have influenced the way we build programs today. The history of computer science has produced abstraction upon abstraction, meaning most modern developers can remain blissfully unacquainted with the details conveniently hidden behind a language’s interface. However, poorly designed, opaque abstractions lead to a murky understanding of internal logic and structural properties, and many languages are built on flimsy theoretical foundations. Familiarity with these theoretical underpinnings can help developers become better diagnosticians and architects, and engage in deeper technical work.