The end of coding

During the past couple of years there’s been a strong push from the technology industry to teach everyone to code.

“Every student in every school should have the opportunity to learn computer science” — code.org

Everyone should have the opportunity to learn computer science. Understanding computation changes the way you think, and directing it gives you amazing power to realise your ideas. Understanding concepts like abstraction, coupling, generality, complexity and scale changes the way you understand and approach problems. Wielding general purpose programming tools changes the way you solve them.

Software is transforming the world more and faster than anything since agriculture. It is at the heart of business growth and innovation today, both in the technology industry and outside it, and is rapidly changing the way individuals live their lives. Software has taken over our ways of accessing knowledge, of storing and processing information, of publishing and receiving news and media, of executing commercial transactions, and of communicating with our friends, colleagues and communities. The world’s largest bookseller and video service are software companies; our dominant music companies are software companies; our fastest growing entertainment companies and telecom companies are software companies. Companies that aren’t software companies increasingly depend on software to optimise logistics, supply chains, manufacturing processes and advertising, or to provide tools for their employees to produce ever more value. Software is on the brink of disrupting the way we teach and learn, borrow and lend, learn about and care for our health, and find and consume services of all types.

But despite this unprecedented transformation, one day, coding will be rare. The current enthusiasm for and growth of coding is temporary, an artefact of our tools. Coding is, right now, our best technology for directing computation, but coding itself is not the essence of computer science. Computing is manipulating data and directing algorithmic computation in order to solve problems. Code is our current tool of choice, but we must make better tools. One day, it will be commonplace for people to manipulate data and direct computation without writing a line of code. I can’t wait.

 

Programming is a highly specialised skill. Solving complex problems is naturally difficult, and as a coder, I frequently write programs to solve problems of all sizes. I cringe at the techniques non-programmers bring to bear on easily automated tasks. I happen to be blessed with particular logical and linguistic facilities which mean I can crudely simulate a computer in my head and talk to it in an unnatural language with weird, unforgiving rules (I’m less good at simulating humans). Many people are less well adapted to be good at coding, but not much less likely to benefit from solving complex problems. The tools and methods of programming introduce much of the complexity associated with solving a problem with code, and take those solutions out of reach of the majority of us who aren’t trained in the craft. Programming is not easily learnable, and is an unnecessarily distant abstraction from many problems people may want to solve. People shouldn’t have to learn to code to apply software to these problems.

There are a few tools I can think of that today give non-programmers some programming-like general problem solving power.

Calculators

Calculators have come a long way since the introduction of pocket calculators in the ’70s. Programmable calculators allowed scientists and engineers to solve problems more complicated than simple arithmetic could handle (though they might have used some code to do so), and graphing calculators helped them understand the answers visually. Since the rise of personal and mobile computers, software calculator interfaces have evolved towards representing the problem the user is expressing, rather than the anachronistic accumulator-style implementation (e.g. typing a whole expression left-to-right rather than entering it one term and operator at a time, inside out). Innovative designs like Soulver and Calca embed the calculation in its context and show working on the surface, providing some ability to vary inputs and watch results change live.

Spreadsheets

Spreadsheets are some 30 years old but still fundamentally pretty similar to their first ledger book-inspired ancestors. They’re still the best lightweight accounting tool but also turned out to be a great general purpose calculation and modelling tool, and are good at representing tabular data, too. The tabular format is nonthreatening yet general enough to wrangle into so many roles[1], and the live recalculation encourages piecewise problem solving. Lots of people who work with data are great with spreadsheets. They can do craaaazy things. Up the complicated end, spreadsheets are capable at data storage and exploration (especially since the advent of pivot tables), help people develop and evaluate complicated multi-variable expressions, explore simulations and what-if scenarios, and visualise results. Spreadsheets are a somewhat generative tool, making possible things far beyond the tool creator’s imagination. They are as close to programming as many people get.

Spreadsheets have their shortcomings though, especially in light of today’s standards for interface and power. They’re poor at handling multi-dimensional data, and you usually need to decide dimensionality up-front, or start over. They can roughly simulate vector/parallel calculations by using a range of cells and repeating calculations crosswise, but they don’t understand the shape of your data enough to offer much help doing so. Spreadsheets conflate the interface of a flat two-dimensional tabular view of data with the data itself and with the formulae evaluated on it. Alphanumeric cell addresses are opaque and brittle; either moving data or altering layout is liable to break the other and affect computation. The formulae are hidden and it’s very difficult to verify the correctness, or even understand the functioning, of a spreadsheet you didn’t author.
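The two mechanics above, live recalculation and brittle cell addressing, can be sketched in a few lines of Python. This is a toy model, not how any real spreadsheet works internally: cells are referenced by name rather than by opaque grid position, and formulas recompute from their inputs on demand, which is the live recalculation that makes piecewise problem solving feel so natural.

```python
# A toy spreadsheet: named cells hold either plain values or formulas
# (functions of the sheet). Reading a cell evaluates its formula lazily,
# so changing an input "recalculates" everything that depends on it.

class Sheet:
    def __init__(self):
        self.cells = {}  # name -> value, or callable formula

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        # Formulas are evaluated on demand against the current sheet state.
        return v(self) if callable(v) else v

sheet = Sheet()
sheet.set("price", 100)
sheet.set("qty", 3)
sheet.set("total", lambda s: s.get("price") * s.get("qty"))

print(sheet.get("total"))  # 300
sheet.set("price", 120)    # change an input...
print(sheet.get("total"))  # ...and the dependent result updates: 360
```

Because the formula refers to cells by name rather than by position in a grid, rearranging the layout couldn’t silently break the computation, which is roughly the decoupling of data from display described next.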

A few mid-’80s spreadsheet programs (Javelin, Trapeze and Lotus Improv) attempted to address some of these shortcomings, primarily by decoupling the data from the tabular display, but they’re long gone and sadly we haven’t seen anything similar in consumer software.

Personal databases

Sometimes a spreadsheet just doesn’t cut it when you have complex or multidimensional data. Data manipulation, query and reporting are the essence of a large range of problems people want to solve. But unlike spreadsheets, it’s my impression that personal databases have sharply reduced in popularity over the past couple of decades. Have they gone out of fashion, or do I just move in different circles now? Perhaps the presence of programmers in any sizeable organisation has discouraged people from using them, on “expert” advice. I remember the distaste I had for MS Access back in university: point and click query building over my dead body! But I was naive, just high on the power of SQL. The capabilities embodied by personal databases should be taught to everyone; not instead of coding, but maybe before it.

I now discover that MS Access can pretty much build CRUD applications for you, and Filemaker much the same. I’m also pretty keen to try out Zoho Creator next time I need to prototype a data-heavy app. Still, while they have evolved a bit, these tools are still not flexible enough to build a real application, just easy forms and views.

 

There are a few more specific fields where non-programmers have tools by which they perform something very much like programming, but without much code. Game development provides a good example: a game is a computer program providing a particular interactive experience. Games are typically really complicated programs, dominated by “user interface”, but a game development team is typically dominated by artists and designers, not programmers (the mix does vary depending on game requirements). These artists and designers use tools built by programmers to realise much of the creative output a game embodies: art, textures, terrain, models, animation, cinematics, level design, puzzles, interaction, narrative. To propose a process whereby, say, a level designer provides drawings and written guidelines to a programmer who then manually translates the design into code, and then repeats that cycle until the designer gets what they want, would be just ridiculous (yet this is how most application interfaces are built today). No, the programmers build a game engine and level design tool and then the designers can work directly in an environment that closely matches the finished game and produce output to be directly loaded into the engine at runtime.

Sadly, today’s user interface design tools are not usable by non-programmers, nor used by many programmers. Point-and-click has been looked down upon by “real” programmers since the invention of the mouse, just as assembly programmers looked down on early Fortran pioneers, C programmers look down on Java, and Vi/Emacs users look down on those who harness an IDE. Those who have mastered one tool or process have great difficulty letting go to adopt something different enough to be significantly more powerful.

For a long time, GUI builders were crap. GUI builders are still crap: they often provide a poor representation of what the rendered interface will look like, are not powerful enough for developers to achieve exactly what they want, and are too complicated and laden with programming concepts for non-programmers to use them[2]. Programmers understandably decide to just fall back to coding, since they’re going to be doing some of that anyway to work around the tool’s deficiencies. This is a mistake, though an understandable one. Code provides a terrible representation of visual concepts with a huge mismatch in thinking modes, especially when that code is procedural rather than declarative or you’re designing the interface as you build it. Recompiling and launching your program to observe each UI tweak is an inexcusably slow development process. I get the motivations (e.g. here, here) but it’s a scandalous waste of effort that designers do all their work in Photoshop and a developer starts from scratch to replicate it. Our tools must improve so that designers can build the real UI, with programmers taking over later for the back-end (Spark Inspector and Reveal hint at the future).

Other tools providing programmer-like power to non-programmers include batch processors (e.g. in Photoshop), node- and layer-based compositing tools (e.g. Shake, Blender), Apple’s Quartz Composer for node-based image processing and rendering, Automator for scripting Mac OS and applications, Mathematica, Matlab, and LabVIEW for scientific and engineering design and analysis, Yahoo! Pipes and IFTTT for web and API mashups, and wikis for content management and presentation. And I must make a special call-out at this point to HyperCard (1987-2000), one of the most influential application design environments to date. I fondly remember building stacks and writing HyperTalk long before grasping any of the concepts I would now consider fundamental to programming. I made things I was proud of and saw people in both my own and my parents’ generation (i.e. educated pre-computers) do the same[3]. If you missed out, do read this reminiscence. HyperCard’s legacy lives on through its influence on hypertext, the web, wikis, and derivatives like LiveCode.

So we have some data analysis and calculation tools for maths, crappy UI builders for interface, and some application-specific tools for games, graphics and hacks. The next generations of these products should radically expand what non-programmers and programmers can achieve without coding. They won’t write code for you, but they will make coding unnecessary. I hope similar tools emerge to cover most of what is now achieved by writing code, enabling the creation of arbitrary useful and high-quality applications by anyone. In particular, we’ll reach a critical point when these tools become recursively self-improving, so that a non-programmer can create a tool which will in turn be used to create more applications, including better tools.

That six-figure-salary engineers don’t consider translating a Photoshop render and some instructions into a functioning user interface to be a tragic waste of their time shows how valuable this problem is to solve. If you’re a programmer and this offends you, consider how much more value you could create if you didn’t spend half your time as a glorified PSD->HTML translator. Yes, yes, I know, front-end is hard, it’s really complex[4]. But so much of its complexity is due to the tools we use, not essential to the problem. All that deep software engineering insight and hard-won domain knowledge is so valuable because building a UI requires thousands of lines of code. When it doesn’t, you can apply your towering intellect to something better.

Most previous attempts at programs that help non-coders make programs have sucked, especially the more general-purpose ones. But we’ve learned a lot about user interface recently thanks to the billions of people now using our interfaces and consequent value of improving them. The challenge of creative tools is presenting an interface that extends expressive power without crushing the user with complexity. While in every domain there will always be experts working at the boundary between impossible and easy, as tools improve things that once required sophisticated knowledge and technique become accessible to amateurs. Witness the explosion in quantity and quality of amateur music and video as the tools of production became good enough and cheap enough to pick up in a weekend. I’m optimistic that as our ability to design interfaces for complex domains improves we’ll create better and simpler non-programmer tools for designing and implementing a wider range of software. For some, these will be the stepping stone to expertise, but for most the tools need only help them get the job done.

 

Coders have a tendency to make tools for coders. It’s much easier to build a tool that works when you can assume a high level of technical sophistication for your users. But tools usable by non-programmers will help programmers too. Reducing the cognitive load of directing computation will enable coders to solve more complex problems faster. Like the mythical successful employee, we should be aiming to do work so great we put ourselves out of our job. We’ll still need programmers and engineers–experienced and creative practitioners of modelling problems, designing algorithms and data structures, taming complexity, and managing process–but they might become like farmers today: a highly leveraged sliver of the population.

A future where everyone can code would be wonderful, but code is only the means to directing computation for now. When our technology reaches the point where everyone has tools for thinking and creating but few need to code we’ll be far better poised to conquer our society’s challenges. Programmers could start building that technology now.

Teaching more people to code is a great step forward, but a future where few need to is even better.

 

Thanks Jessica, Natalia, Nik and Glen for your feedback on my first draft.

[1] Joel Spolsky (once PM for Excel) recounts learning how people actually used Excel.
[2] Apparently Microsoft’s tools have led the pack for a while, but I haven’t used them for a long time.
[3] James Davies: I still remember your dad proudly showing us his stacks and soliciting feedback. That and him ejecting a bone that was choking me with the Heimlich manoeuvre.
[4] I underestimated this complexity for a long time when I was more interested in back-end engineering. Front end is really hard, and the tools are weak.

Sunk costs

A few months ago I wrote about opportunity cost, the economic term for the value of the best alternative forgone when you make some decision. Opportunity costs aren’t always monetary: mutually exclusive alternatives are hiding in many decisions, and are especially difficult to spot and reason about in decisions involving time and effort. When you take on one project or job, you’re giving up anything else you could be spending that time on.

The concept of sunk cost is another piece of mental machinery I feel lucky to understand. “Sunk cost” is an economic term for a cost that you’ve already paid, irrecoverable regardless of what actions you take in the future. It’s a cost you can’t take back, unlike a prospective opportunity cost about which you have yet to decide. Sunk costs can be difficult to spot in casual decision-making, and much harder emotionally to avoid.

When you’ve already paid a cost, say putting down a deposit on an exotic trip, that money is gone whether you take the trip or not. But we fallible humans are often tempted to take sunk costs into account when making future decisions. If, after paying your deposit, you learn of an even better trip you could take instead, one that’s also much cheaper than the first, it can be difficult to ignore the money you would “lose” on the deposit were you to switch. But the money doesn’t care which trip you take. A better option is a better option, and the difficult but rational thing to do is to ignore the sunk deposit (except insofar as it makes the first trip a little cheaper) and choose the best option now available. To take the original, inferior trip anyway is throwing good money after bad.
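To make the trip dilemma concrete, here’s the arithmetic with entirely made-up numbers. The point is that the already-paid deposit appears in neither option’s sum, so it can’t change which option wins.

```python
# Hypothetical figures: a non-refundable deposit on trip A, then a
# better and cheaper trip B appears. The deposit is gone either way,
# so only the remaining costs and the benefits matter.

deposit = 500             # already paid, non-refundable (sunk)
trip_a_remaining = 2500   # what's left to pay for the original trip
trip_a_value = 3000       # how much you'd value taking trip A
trip_b_cost = 2000        # full cost of the better, cheaper trip
trip_b_value = 4000       # how much you'd value taking trip B

# Net position from this point forward; the sunk deposit is absent
# from both lines because it is equally lost in both futures.
net_a = trip_a_value - trip_a_remaining
net_b = trip_b_value - trip_b_cost

print(net_a)  # 500
print(net_b)  # 2000 -- trip B wins, regardless of the sunk deposit
```

Adding the deposit back into trip A’s cost (the emotional temptation) only makes the inferior option look even worse; subtracting it from both changes nothing about the comparison.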

A recent sunk cost I succumbed to: I bought a return train ticket to a social event. On my way home afterwards a friend offered me a lift, but I declined as I’d already paid for my return journey. What insanity! I’d faced a choice between a free train journey and a free, comfortable lift to my door, with bonus conversation, but had allowed a few dollars, already beyond my reach, to propel me to the train station. I realised my error as I plodded to the platform, too late to change my mind.

This hints at a good way of avoiding the emotional trap: re-frame the dilemma so that it doesn’t involve any costs. Treat whatever you bought for those irrecoverable dollars or hours, if anything, as something you can now have for free, should you want it. If this free start makes the associated option better than any alternative, great, but otherwise you shouldn’t feel so bad about giving up something free that you didn’t really want anyway.

Like opportunity costs, sunk costs are often non-monetary. As someone who designs and makes things (in software), it’s not uncommon for me to discover, a little way into a project, a different and much better approach. Perhaps the new technique will result in a superior end product, or the subsequent effort will be halved. I face a difficult decision of whether to continue with my initial, comparatively crude attempt, or throw it away and start fresh with a better way. The hours spent slaving away so far seem deeply relevant to this decision; in fact, the more painful the work has been the more valuable, too, surely! But they’re not. If I were to ask a dispassionate observer they could make a decision without regard to the hours spent so far, only the existence of a partially-complete attempt down a poor path. Reframing the dilemma to imagine that someone else had done the work that I have (perhaps I stumbled across it as an open-source project) helps me evaluate it only for the value it really carries, instead of the sunk cost to obtain it.

Just as with opportunity costs, I feel a little bit super-human every time I spot a sunk cost in my own or a colleague’s reasoning and can steer us towards a better decision. Take that, puny human brain!

Smaller local governments are more effective, robust, and innovative

The NSW Local Government Review Panel has proposed amalgamating many of NSW’s councils into larger mega-councils, including one Sydney metro behemoth comprising at least Sydney City, Randwick, Waverley, Woollahra and Botany Bay. This council would govern from six to eight hundred thousand residents. The proposal includes two other amalgamations into council areas exceeding half a million residents too (and Blacktown is projected to clear that bar alone in the next 20 years).

Amalgamating successful independent councils into mega-councils runs counter to the review’s stated goals of building a sustainable system of local government up to the challenge of strategic change and rapid innovation. Larger institutions necessarily have a less local focus. Concentrating more power in fewer hands increases the risks of, temptations to, and damage caused by pursuing human self-interest, necessitating more structure, checks and balances, red tape, and other security mechanisms. And larger councils are less able to experiment, a crucial component of innovation, as the cost of making the necessary mistakes increases beyond what is politically justifiable.

Politics is the price we pay for increasing the size of the groups and institutions we form. Mega-councils will mean more politics, more overhead, more corruption, and less change. Read my submission to the review opposing mega-council amalgamations. You, too, can make a submission until the extended deadline of July 19th.

Opportunity cost

Sometimes I think I have an unfair advantage. I have mental machinery that allows me to make better decisions than people without that machinery. But I’m not talking about intelligence or analytical ability or anything with a physiological basis. The machinery is just concepts, ideas, ways of thinking that I can bring to bear on a decision. And it’s not built in, but the product of things I’ve learned. So you can have this machinery too, if you don’t already, and in turn make better decisions.

The most basic decision-making tool that I’m regularly thankful for is the concept of opportunity cost. It’s a high school economics concept, so you’ve probably heard of it before, perhaps even understand it well. If you regularly look for opportunity costs when making decisions–large and small, financial and non-financial–then you can stop reading now; I’m not going to say anything new. But if you don’t wield the concept regularly, apply it to every big decision and many small ones, then you’re not quite gaining the advantage that you might. Opportunity costs are present with every decision; they accompany every action that excludes others. Consciously looking for and considering the opportunity costs of your (in)action can help you make reliably better decisions than if you hadn’t.

The opportunity cost of an action is the value of the best foregone alternative. In a simple binary decision, where you must choose one of two options, the opportunity cost of taking one option is the value of the other option that you can therefore not take. Most decisions are more complex than a forced two-way choice, though, so opportunity costs can be harder to see.

The basic high-school economics example is buying something. The opportunity cost of spending your money on some good or service is the best alternative thing you could have spent your money on instead. Money works really well here, as it directly represents deferred opportunity–the opportunity to buy something later. Everyone understands that when they spend their money, they can’t spend it again, and people regularly consider what else they could spend their money on when evaluating a purchase. For small, day to day purchases like groceries, transport and entertainment, many people have enough money that they don’t have to worry about opportunity costs: you can both buy the chocolate and go to see a movie and not have to worry about affording a taxi home. As purchases become more expensive people become more aware of the foregone opportunities they imply: you might not buy the most expensive wine, even if you believe you will enjoy it more, because you understand that that money might be better spent on something else of greater utility, even if that something else remains abstract at the time.

Investments also present opportunity costs, sometimes less easy to recognise. If you invest your savings in a stable term deposit, you forego the potentially higher returns of investing that money in an index fund or stocks or property or precious metals or art or anything else that might bring a higher return. Of course, those alternatives carry varying levels of risk so a higher potential return is not necessarily better, but you can only invest each dollar in one at a time. And just the same in reverse: in buying shares you forego the stability and guaranteed returns of holding cash.

Property is an investment people commonly make without fully considering the opportunity costs. Yes, property is an asset that’s fairly likely to appreciate over the long term, but the monetary costs are higher than just the mortgage repayments (which alone at least double the purchase price over a typical loan period): transaction costs, insurance, property taxes, repairs and maintenance all add up. The opportunity cost of home ownership is the best alternative investment you could have made with all that money, less rent to live somewhere else. Renovations carry the same opportunity cost, often overlooked. Yes, you might “earn it back” when you eventually sell, but you’ll give up the returns of investing the renovation’s cost somewhere it might otherwise earn compounding returns until that time. Most people still end up ahead, of course, but property is not the obvious investment slam dunk it’s often conceived to be. Especially not if you can’t guarantee a value doubling every 10-15 years.
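The claim that repayments alone roughly double the purchase price is easy to sanity-check with the standard amortised-loan payment formula, P·r / (1 − (1 + r)⁻ⁿ). The figures below (price, rate, term) are hypothetical, chosen only for illustration:

```python
# Back-of-envelope check: total repayments on a fully-borrowed
# purchase at an assumed interest rate over a typical loan term.

principal = 500_000    # hypothetical purchase price, borrowed in full
annual_rate = 0.06     # assumed interest rate
years = 30             # typical loan period

r = annual_rate / 12   # monthly interest rate
n = years * 12         # number of monthly payments

# Standard amortised loan payment formula.
payment = principal * r / (1 - (1 + r) ** -n)
total_repaid = payment * n

print(round(payment))       # roughly $3,000 per month at these assumptions
print(round(total_repaid))  # roughly $1.08m -- over twice the purchase price
```

At these assumed numbers the total repaid is a bit over 2.1 times the price borrowed; higher rates push the multiple higher still, before any of the other costs listed above.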

A car is a terrible investment. Not only will its value plummet and maintenance costs increase, but the true opportunity cost includes the return you could have earned investing that twenty thousand dollars plus upkeep somewhere else for the duration of ownership. The transportation benefits are real, but these can be obtained in other ways, too, including public transport, taxis, and car-share schemes, all of which leave your money free to invest elsewhere.

Opportunity costs arise in every decision, not just purchases and investments. In non-purchasing decisions, the opportunity costs can be even harder to see. This is where having the mental label “opportunity cost” and knowing to look for it starts to feel like an unfair advantage.

Many decisions have exclusive options: you can only choose one. People can generally only take one full-time job at a time, have one romantic partner, live in one city. It’s tempting to take the first good option that presents itself, but the opportunity cost of doing so includes the value of the best alternative, including those you haven’t discovered yet. Your time and energy are limited and valuable resources, and in devoting them to the first option that doesn’t suck you’re likely paying a huge opportunity cost. This is painfully obvious in employment where, anecdotally at least, so many people seem to end up dissatisfied with their career, which is often the first career they pursued, and upon commencing it they stopped exploring other options.

It’s a big and under-acknowledged problem for entrepreneurs as well. Most people can only start one company at a time (it’s hard!) and so in starting a company based on the first idea you get excited about, you incur the opportunity cost of not pursuing the best idea you might have had if you had kept exploring. Your first idea is unlikely to be your best. The cost of building a product is not just the development expenses; it’s the impact and value of the best product you didn’t build instead.

Full-time work has a big opportunity cost for me. I tend to pour all my energy into whatever project and team I’m working with, and the cost is that I have nothing left to give to my own ideas. Employment costs me time for high-level strategic thinking about my life, not just during the office hours each day, but in the hours at home recovering and recharging too. I don’t think it’s just me: many engineers complain about not wanting to code when they get home from work. Working on a major project is exclusive of any other projects, major or not. The opportunity cost of employment is not just the hours you’re paid for, it’s the value (monetary and intangible) of the best alternative project you could put any effort into at all.

So, opportunity costs arise in nearly every decision, but they can be hard to see. The opportunity cost can be hidden, or unknown, or implicit. Avoiding a decision carries costs too: the foregone benefit of whichever option you would otherwise have chosen (although postponing is often advantageous if you’ll gather more information). If you’re unsatisfied in your job, then doing nothing about it isn’t free, even though you can’t see the cost. The opportunity cost of turning up every day includes the foregone enjoyment of the best alternative job you might reasonably get (including those you don’t know about yet).

Having the concept of opportunity cost in my mental arsenal helps me seek out these costs when they’re not obvious, which in turn helps me make more informed decisions. Understanding foregone alternatives as costs can also help reframe dilemmas that evoke emotional responses. For example, in deciding whether I can discard some piece of old clothing I never wear, I find it helps to frame the decision as: would I buy it for a dollar if it was in a store today? This suppresses the hoarding instinct and helps me realise that no, it’s not worth the space it would fill. Similarly, if you’re unhappy in your job or working on a project you don’t believe in, try reframing the dilemma: would you take this job/project if it were offered to you anew today? This framing can help balance your current situation against the now-more-obvious alternatives you are foregoing.

More mental machinery coming soon!