Tag Archives: Complexity

The end of coding

Over the past couple of years there’s been a strong push from the technology industry to teach everyone to code.

“Every student in every school should have the opportunity to learn computer science” — code.org

Everyone should have the opportunity to learn computer science. Understanding computation changes the way you think, and directing it gives you amazing power to realise your ideas. Understanding concepts like abstraction, coupling, generality, complexity and scale changes the way you understand and approach problems. Wielding general purpose programming tools changes the way you solve them.

Software is transforming the world more and faster than anything since agriculture. It is at the heart of business growth and innovation today, both in the technology industry and outside it, and is rapidly changing the way individuals live their lives. Software has taken over our ways of accessing knowledge, of storing and processing information, of publishing and receiving news and media, of executing commercial transactions, and of communicating with our friends, colleagues and communities. The world’s largest bookseller and video service are software companies; our dominant music companies are software companies; our fastest growing entertainment companies and telecom companies are software companies. Companies that aren’t software companies increasingly depend on software to optimise logistics, supply chains, manufacturing processes and advertising, or to provide tools for their employees to produce ever more value. Software is on the brink of disrupting the way we teach and learn, borrow and lend, learn about and care for our health, and find and consume services of all types.

But despite this unprecedented transformation, one day, coding will be rare. The current enthusiasm for and growth of coding is temporary, an artefact of our tools. Coding is, right now, our best technology for directing computation, but coding itself is not the essence of computer science. Computing is manipulating data and directing algorithmic computation in order to solve problems. Code is our current tool of choice, but we must make better tools. One day, it will be commonplace for people to manipulate data and direct computation without writing a line of code. I can’t wait.

 

Programming is a highly specialised skill. Solving complex problems is naturally difficult, and as a coder, I frequently write programs to solve problems of all sizes. I cringe at the techniques non-programmers bring to bear on easily automated tasks. I happen to be blessed with particular logical and linguistic faculties which mean I can crudely simulate a computer in my head and talk to it in an unnatural language with weird, unforgiving rules (I’m less good at simulating humans). Many people are less well adapted to be good at coding, but not much less likely to benefit from solving complex problems. The tools and methods of programming introduce much of the complexity associated with solving a problem with code, and put those solutions out of reach of the majority of us who aren’t trained in the craft. Programming is not easy to learn, and is an unnecessarily distant abstraction from many of the problems people want to solve. People shouldn’t have to learn to code to apply software to those problems.

There are a few tools I can think of that today give non-programmers some programming-like general problem solving power.

Calculators

Calculators have come a long way since the introduction of pocket calculators in the ’70s. Programmable calculators allowed scientists and engineers to solve problems more complicated than simple arithmetic could handle (though they might have used some code to do so), and graphing calculators helped them understand the answers visually. Since personal and mobile computers became popular, software calculator interfaces have evolved towards representing the problem the user is expressing, rather than the anachronistic accumulator-style implementation (e.g. typing a whole expression left to right rather than entering it one term and operator at a time, inside out). Innovative designs like Soulver and Calca embed the calculation in its context and show the working on the surface, providing some ability to vary inputs and watch results change live.
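To make that concrete, here’s a rough sketch in Python of calculation-in-context with live recalculation: a few named lines are re-evaluated whenever an input changes, so the working stays visible and the results follow the inputs. This is an illustration only, not how Soulver or Calca actually work internally.

```python
# A sketch only: named lines re-evaluated top to bottom whenever the inputs
# change, so results update live as you vary them.

def recalc(lines, **inputs):
    """Evaluate 'name = expression' lines in order, given input values."""
    env = dict(inputs)
    for line in lines:
        name, expr = (part.strip() for part in line.split("=", 1))
        # eval is fine for a sketch; a real tool would parse expressions properly
        env[name] = eval(expr, {"__builtins__": {}}, env)
    return env

sheet = [
    "subtotal = price * quantity",
    "gst = subtotal * 0.10",
    "total = subtotal + gst",
]

print(recalc(sheet, price=4.50, quantity=3)["total"])   # ~14.85
print(recalc(sheet, price=4.50, quantity=10)["total"])  # ~49.50
```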

Spreadsheets

Spreadsheets are some 30 years old but still fundamentally pretty similar to their first ledger-book-inspired ancestors. They’re still the best lightweight accounting tool, but they also turned out to be a great general purpose calculation and modelling tool, and are good at representing tabular data, too. The tabular format is nonthreatening yet general enough to wrangle into so many roles[1], and the live recalculation encourages piecewise problem solving. Lots of people who work with data are great with spreadsheets. They can do craaaazy things. Up the complicated end, spreadsheets are capable of data storage and exploration (especially since the advent of pivot tables), help people develop and evaluate complicated multi-variable expressions, explore simulations and what-if scenarios, and visualise results. Spreadsheets are a somewhat generative tool, making possible things far beyond the tool creator’s imagination. They are as close to programming as many people get.

Spreadsheets have their shortcomings though, especially in light of today’s standards for interface and power. They’re poor at handling multi-dimensional data, and you usually need to decide dimensionality up-front or start over. They can roughly simulate vector/parallel calculations by using a range of cells and repeating calculations crosswise, but they don’t understand the shape of your data enough to offer much help doing so. Spreadsheets conflate the interface, a flat two-dimensional tabular view of data, with the data itself and with the formulae evaluated on it. Alphanumeric cell addresses are opaque and brittle; moving data or altering the layout is liable to break the other, and the computation with it. The formulae are hidden, and it’s very difficult to verify the correctness, or even understand the functioning, of a spreadsheet you didn’t author.
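As a sketch of what “understanding the shape of your data” could look like, here’s the same kind of calculation done over named columns rather than cell addresses. Python and pandas are used purely as a stand-in; the point is the representation, not the particular library.

```python
# A rough sketch of computation over named, shaped data instead of A1-style
# cell addresses.
import pandas as pd

sales = pd.DataFrame({
    "item":     ["apples", "pears", "plums"],
    "price":    [4.50, 5.20, 6.00],
    "quantity": [3, 2, 5],
})

# Spreadsheet equivalent: "=B2*C2" filled down column D, then "=SUM(D2:D4)".
# Here the calculation applies to whole named columns, so inserting rows or
# reordering columns can't silently break the references.
sales["subtotal"] = sales["price"] * sales["quantity"]
total = sales["subtotal"].sum()

print(sales)
print("total:", total)
```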

A few spreadsheet programs of the ’80s and early ’90s attempted to address some of these shortcomings, primarily by decoupling the data from the tabular display: Javelin, Trapeze and Lotus Improv. They’re long gone, though, and sadly we haven’t seen anything similar in consumer software since.

Personal databases

Sometimes a spreadsheet just doesn’t cut it when you have complex or multidimensional data. Data manipulation, query and reporting are the essence of a large range of problems people want to solve. But unlike spreadsheets, it’s my impression that personal databases have declined sharply in popularity over the past couple of decades. Have they gone out of fashion, or do I just move in different circles now? Perhaps the presence of programmers in any sizeable organisation has discouraged people from using them, on “expert” advice. I remember the distaste I had for MS Access back in university: point-and-click query building over my dead body! But I was naive, just high on the power of SQL. The capabilities embodied by personal databases should be taught to everyone; not instead of coding, but maybe before it.

I now discover that MS Access can pretty much build CRUD applications for you, and FileMaker can do much the same. I’m also pretty keen to try out Zoho Creator next time I need to prototype a data-heavy app. Still, while these tools have evolved a bit, they are still not flexible enough to build a real application, just easy forms and views.
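For a flavour of the query-and-report power these tools wrap in friendly forms and views, here’s a tiny sketch using Python’s built-in sqlite3. The table and figures are invented for illustration.

```python
# A small sketch of the grouped report a tool like Access or FileMaker builds
# by pointing and clicking, expressed as the SQL it generates behind the scenes.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, item TEXT, amount REAL)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("Alice", "widget", 12.0), ("Bob", "gadget", 7.5), ("Alice", "gadget", 7.5)],
)

report = con.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
)
for customer, total in report:
    print(customer, total)
```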

 

There are a few more specific fields where non-programmers have tools by which they perform something very much like programming, but without much code. Game development provides a good example: a game is a computer program providing a particular interactive experience. Games are typically really complicated programs, dominated by “user interface”, but a game development team is typically dominated by artists and designers, not programmers (the mix does vary depending on game requirements). These artists and designers use tools built by programmers to realise much of the creative output a game embodies: art, textures, terrain, models, animation, cinematics, level design, puzzles, interaction, narrative. To propose a process whereby, say, a level designer provides drawings and written guidelines to a programmer who then manually translates the design into code, and then repeats that cycle until the designer gets what they want, would be just ridiculous (yet this is how most application interfaces are built today). No, the programmers build a game engine and level design tool and then the designers can work directly in an environment that closely matches the finished game and produce output to be directly loaded into the engine at runtime.
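Here’s a minimal sketch of that data-driven arrangement, with JSON standing in for whatever format a real level editor would export (the format and field names are invented): the designer’s tool writes the description, and the engine loads it at runtime with no programmer in the loop.

```python
# A toy "engine" loading a designer-authored level description at runtime.
import json

level_json = """
{
  "name": "Cavern of Echoes",
  "spawn": [3, 7],
  "entities": [
    {"type": "door", "pos": [10, 2], "locked": true},
    {"type": "key",  "pos": [4, 9]}
  ]
}
"""

def load_level(text):
    """Load whatever the level editor exported; no code changes required."""
    level = json.loads(text)
    print(f"Loading {level['name']!r}, spawn at {level['spawn']}")
    for entity in level["entities"]:
        print("  placing", entity["type"], "at", entity["pos"])
    return level

load_level(level_json)
```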

Sadly, today’s user interface design tools are not usable by non-programmers, nor used by many programmers. Point-and-click has been looked down upon by “real” programmers since the invention of the mouse, just as assembly programmers looked down on the early Fortran pioneers, C programmers look down on Java programmers, and Vi/Emacs users look down on those who harness an IDE. Those who have mastered one tool or process have great difficulty letting go to adopt something different enough to be significantly more powerful.

For a long time, GUI builders were crap. GUI builders are still crap: they often provide a poor representation of what the rendered interface will look like, are not powerful enough for developers to achieve exactly what they want, and are too complicated and laden with programming concepts for non-programmers to use them[2]. Programmers understandably decide to just fall back to coding, since they’re going to be doing some of that anyway to work around the tool’s deficiencies. This is a mistake, though an understandable one. Code provides a terrible representation of visual concepts with a huge mismatch in thinking modes, especially when that code is procedural rather than declarative or you’re designing the interface as you build it. Recompiling and launching your program to observe each UI tweak is an inexcusably slow development process. I get the motivations (e.g. here, here) but it’s a scandalous waste of effort that designers do all their work in Photoshop and a developer starts from scratch to replicate it. Our tools must improve so that designers can build the real UI, with programmers taking over later for the back-end (Spark Inspector and Reveal hint at the future).
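To make the representation mismatch concrete, here’s a toy contrast in Python, using tkinter only as a convenient stand-in and a hypothetical declarative format of my own invention: the first version of the form exists only as a sequence of construction calls, while the second is data a visual tool could render live, let a designer edit, and write back without touching code.

```python
import tkinter as tk

root = tk.Tk()

# Procedural: the interface exists only as a sequence of calls, invisible to
# anyone who doesn't read the code.
tk.Label(root, text="Name:").grid(row=0, column=0)
tk.Entry(root).grid(row=0, column=1)
tk.Button(root, text="Save").grid(row=1, column=1)

# Declarative (hypothetical format): the same interface as plain data.
form = [
    {"widget": "Label",  "text": "Name:", "row": 2, "column": 0},
    {"widget": "Entry",                   "row": 2, "column": 1},
    {"widget": "Button", "text": "Save",  "row": 3, "column": 1},
]

def build(parent, spec):
    """Instantiate widgets from the declarative description."""
    for item in spec:
        options = {"text": item["text"]} if "text" in item else {}
        widget = getattr(tk, item["widget"])(parent, **options)
        widget.grid(row=item["row"], column=item["column"])

build(root, form)
# root.mainloop()  # would display both copies of the form; needs a display
```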

Other tools providing programmer-like power to non-programmers include batch processors (e.g. in Photoshop), node- and layer-based compositing tools (e.g. Shake, Blender), Apple’s Quartz Composer for node-based image processing and rendering, Automator for scripting Mac OS and applications, Mathematica, Matlab, and LabVIEW for scientific and engineering design and analysis, Yahoo! Pipes and IFTTT for web and API mashups, and wikis for content management and presentation. And I must make a special call-out at this point to HyperCard (1987-2000), one of the most influential application design environments to date. I fondly remember building stacks and writing HyperTalk long before grasping any of the concepts I would now consider fundamental to programming. I made things I was proud of and saw people in both my own and my parents’ generation (i.e. educated pre-computers) do the same[3]. If you missed out, do read this reminiscence. HyperCard’s legacy lives on through its influence on hypertext, the web, wikis, and derivatives like LiveCode.

So we have some data analysis and calculation tools for maths, crappy UI builders for interface, and some application-specific tools for games, graphics and hacks. The next generations of these products should radically expand what non-programmers and programmers can achieve without coding. They won’t write code for you, but they will make coding unnecessary. I hope similar tools emerge to cover most of what is now achieved by writing code, enabling the creation of arbitrary useful and high-quality applications by anyone. In particular, we’ll reach a critical point when these tools become recursively self-improving, so that a non-programmer can create a tool which will in turn be used to create more applications, including better tools.

That six-figure-salary engineers don’t consider translating a Photoshop render and some instructions into a functioning user interface to be a tragic waste of their time shows how valuable this problem is to solve. If you’re a programmer and this offends you, consider how much more value you could create if you didn’t spend half your time as a glorified PSD->HTML translator. Yes, yes, I know, front-end is hard, it’s really complex[4]. But so much of its complexity is due to the tools we use, not essential to the problem. All that deep software engineering insight and hard-won domain knowledge is so valuable because building a UI requires thousands of lines of code. When it doesn’t, you can apply your towering intellect to something better.

Most previous attempts at programs that help non-coders make programs have sucked, especially the more general-purpose ones. But we’ve learned a lot about user interface recently, thanks to the billions of people now using our interfaces and the consequent value of improving them. The challenge of creative tools is presenting an interface that extends expressive power without crushing the user with complexity. While in every domain there will always be experts working at the boundary between impossible and easy, as tools improve, things that once required sophisticated knowledge and technique become accessible to amateurs. Witness the explosion in quantity and quality of amateur music and video as the tools of production became good enough and cheap enough to pick up in a weekend. I’m optimistic that as our ability to design interfaces for complex domains improves, we’ll create better and simpler non-programmer tools for designing and implementing a wider range of software. For some, these will be the stepping stone to expertise, but for most the tools need only help them get the job done.

 

Coders have a tendency to make tools for coders. It’s much easier to build a tool that works when you can assume a high level of technical sophistication for your users. But tools usable by non-programmers will help programmers too. Reducing the cognitive load of directing computation will enable coders to solve more complex problems faster. Like the mythical successful employee, we should be aiming to do work so great we put ourselves out of our job. We’ll still need programmers and engineers–experienced and creative practitioners of modelling problems, designing algorithms and data structures, taming complexity, and managing process–but they might become like farmers today: a highly leveraged sliver of the population.

A future where everyone can code would be wonderful, but code is only the means to directing computation for now. When our technology reaches the point where everyone has tools for thinking and creating but few need to code we’ll be far better poised to conquer our society’s challenges. Programmers could start building that technology now.

Teaching more people to code is a great step forward, but a future where few need to is even better.

 

Thanks Jessica, Natalia, Nik and Glen for your feedback on my first draft.

[1] Joel Spolsky (once PM for Excel) recounts learning how people actually used Excel.
[2] Apparently Microsoft’s tools have led the pack for a while, but I haven’t used them for a long time.
[3] James Davies: I still remember your dad proudly showing us his stacks and soliciting feedback. That and him ejecting a bone that was choking me with the Heimlich manoeuvre.
[4] I underestimated this complexity for a long time when I was more interested in back-end engineering. Front end is really hard, and the tools are weak.

Complexity threshold

“Technology has a lot to answer for.”

My mother has a good point. For all the wonderful opportunities that improving technology opens up, humans still have fundamental problems applying it to their world, and things aren’t getting any better.

This particular charge was voiced on Boxing Day, as my family were on our way home from a Christmassy visit to the paternal aunts, uncles and cousins. We had enjoyed (to begin with) a viewing of photos from my cousin’s recent 21st birthday party (which we had attended). These were followed by shots from a day out with friends who had travelled down to Sydney for a couple of days around the party. Not printed photos, of course, but digital photos displayed on a good sized television.

It seems to be a general rule of the universe that a mandatory five or ten minutes is required at any family visual presentation to get the technology working. In our case, the males and twenty-somethings in the room attempted to get the images off my cousin’s camera and onto her laptop, then hook the laptop up to the television and play around with the laptop’s monitor resolution and the television’s aspect ratio before finally the photos were presentable. I’m sure no less time is required today than in days of yore, when slides would need to be loaded into a slide projector (upside-down and back-to-front, of course), or a film threaded tenaciously through the wheels and cogs of a film projector.

We were then treated to more than two hundred photos of the party and surrounding events. Two hundred. Many were repeats, “just one more”-style. Many more were of the same group of people in front of different nondescript backgrounds. Some were great, catching a moment or providing a glimpse of someone we hadn’t seen for a while. But most were not. As a digital camera user I am forever grateful for the flexibility that solid-state memory provides – to take as many photos as I want without thought for running out of film – but taking the photos is only part of the job. Even die-hard film fanatics don’t keep every photo they shoot, and they certainly don’t show them all to family. But the combined technologies of digital cameras and mass storage seem to have conspired against humanity to subject us to nearly endless streams of humdrum photos, the jewels buried deep within.

Perhaps because taking a shot is now free of cost, many people pay little attention to its composition or to the actual photography. Just because we no longer need to discard photos to fit the bulging albums on our shelves, most people seem to think we shouldn’t discard any at all. The constraints that physical storage and film imposed on us have disappeared, and without them we are drowning in a sea of easy, cheap, second-rate photos. We used to need to put a little labour and money into snapping and printing a picture; now we don’t, but that doesn’t mean we’re home free. That effort should instead be transferred to the editing and culling of images after the fact. Such an arrangement has the potential to improve everyone’s photography, but it’s a lot easier to forget about that all-important selection, or to put it off indefinitely. It’s easier instead to subject our friends and family to the raw footage, as it were. Which must be annoying in at least two ways for people like Mum. Photography was a lot more effort in her day, with better results, but now she has to endure the mediocrity that progress has begotten.

Digital photography is just one example of what seems to be a general trend across many areas of technology, at least where it has become commoditised and reached the eager hands of “everyday” people. Technology is providing increasingly complex tools to people who are increasingly incapable of using them properly. Desktop computers themselves are a prime example. Their commodity nature, and the simply unimaginable potential offered by an Internet-connected computer, make them practically a necessity for modern-day life. To be without access to this global network is to be truly disadvantaged. Cheap access to such an important resource should perhaps be classified with food, water and shelter as a basic necessity of life, and OLPC are pursuing a truly worthy goal. But that is not my point. As accessible and necessary as computers are, most people remain incapable of using them safely and correctly. Or perhaps, to be fair, computer designers remain incapable of creating a computer that normal people can use.

A PC can be used for good and evil. Like scissors, a car, or nuclear power, in the right hands a computer is a fantastic tool. But in the hands of someone who doesn’t understand how it works, a computer can be dangerous. To the user, a computer is a danger to their own work: their photos, their address book, and their sensitive personal and financial information. A poorly protected computer is also a danger to others. The PC users of the world collectively wield a weapon powerful enough to bring any organisation to its knees, yet very few computer users know how to keep their computers safe from viruses and trojans. Most probably don’t even consider the danger to others, and simply endure the inconvenience to themselves of a regular re-installation.

“A computer lets you make more mistakes faster than any invention in human history – with the possible exceptions of handguns and tequila.” – Mitch Ratcliffe

The ubiquity of digital technology has led to its use by many people who, in previous ages, would never have been allowed near it. Were computers not so useful, offering so much potential for productivity and enjoyment, they might be regulated like other technologies that are complex and easy to misuse, such as cars, guns and heavy machinery. Such technology requires a licence to use, an assurance to others that the user has training and knowledge appropriate to the task. Computers, on the other hand, have become cheap, common and (just) usable enough that they have been given to a huge group of people who would never before have been trusted with anything so complex and powerful. Technology has accelerated away from our ability to teach people how to use it.

“Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.” – Rich Cook

This is a popular sentiment, especially among software engineers and designers, with whom rests most of the blame that can’t be placed at the feet of the idiots. There will, of course, always be a bigger and better idiot, but that’s not the problem. The problem is that the vast majority of people can’t effectively use technology that has almost suddenly become so pervasive that we can no longer do without it. And it’s not because the software engineers can’t build good systems. The complexity of technology being developed today absolutely dwarfs the complexity of any previous technology. We have never had the ability to build such powerful and sophisticated tools, never realised that we would approach some complexity threshold whereupon suddenly the majority of the population is left behind. As technology continues to become more and more capable, and correspondingly more complex, it is a simple fact that we won’t be able to build interfaces to it simple enough for everyone to use.

If the trend continues then a very real digital divide will form, not between those who have the technology and those who don’t (it will be cheap), but between those who can use it and those who can’t. Knowing how to use information and technology will become a phenomenal advantage in life, equivalent to being one of the best hunters, strongest warriors or most adept craftsmen. As societies once relied on the hunters, then the farmers, then the industrialists, society will come to rely on the technologists for continued improvement in life. Given the tools available to the technologists, this will place them in a position of rather unique influence.

But it’s still going to take your dad five minutes to set up the holo-visual reality field.