Written for the Patricia Seybold Group (www.psgroup.com)
I started my career as an experimental psychologist, working as a research assistant in a laboratory where we studied how animals learnt in a very controlled environment (think Pavlov’s dogs and Skinner boxes), and then applied this data to theoretical learning models that could be run as computer simulations. I would develop the simulations and run them on the computer, then we would run the experiments with the animals and compare the data. In other parts of the lab we actually looked at how brain pathways changed between experimental groups and control groups.
Nearly 25 years later, I am still reading articles where the predominant metaphor for the brain is that it is like a computer, and the computer is like a brain. I am definitely out of touch with the detail of the research today, but even then we knew that a binary model of computing was too simplistic to explain the brain. The central nervous system certainly has aspects that can be described as being like a digital computer, but it also sits inside a body that functions with a very analogue endocrine system. And given what we are learning about quantum physics, and how cellular systems function like nano machines rather than as little test tubes for chemical reactions, I believe I can safely conclude that we will not see an algorithm running on a von Neumann architecture (i.e. today’s computers) that comes anywhere near imitating the human brain. Systems may demonstrate intelligent behaviours, but they are not any closer to being analogues of human intelligence. And that is because the metaphor does not work beyond the simple explanation.
I moved from research to finance in an almost stereotypical fashion – a headhunter working for a US bank convinced me that double pay (not hard to do when you work as a researcher on a government grant) and a new car was a good enough reason to retire the lab coat and work on “intelligent” trading systems. The project I worked on was a failure because it was too fussy and took too long, and trading businesses are not famous for their patience with either time or detail. Banks may want intelligent systems, but they don’t always have intelligent management. Thankfully I was recruited into the funds management business of the firm, where they had some very big and hard technical problems and a motivation to be patient. It was there I learnt all about enterprise computing, and I ultimately ended up in executive roles before deciding I wanted to do some detailed work again. The only person who would pay me to do that was me, so I exited corporate life to become an entrepreneur.
As a rule I do not appeal to the authority of my experience; however, I believe I can assert that I have seen many, many metaphors over my career. Science metaphors (it’s like a computer), system metaphors (it’s like a brain), business metaphors (it’s like a person), social metaphors (it’s like a family), political metaphors (it’s like a tribe) – to name a few. The human need to explain things results in many metaphors, because metaphors are useful in exactly this regard.
Of course the technology industry abounds in metaphors, and that is because computers are complicated. At its most fundamental level, a computer system is simple enough – it is just a general purpose digital machine that takes inputs, performs a calculation, and produces outputs. All inputs are machine generated (e.g. a button click or a network device), and all outputs are machine interpreted (e.g. a display monitor or a sound device). The only way a human can interact with a computer is when there is a conversion of a human output to a machine input, and a conversion of a machine output to a human input. So human-computer interaction at its most basic is a human manipulating a device to create inputs to a machine, and a human interpreting the outputs from a device attached to a machine.
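That fundamental loop – human output converted to machine input, a calculation, machine output converted back to human input – can be sketched in a few lines. This is a toy illustration only; the function names are my own, not any real API.

```python
# A minimal sketch of the loop described above: a human action becomes
# machine input (bits), a calculation runs, and the machine output is
# converted back into something a human can read. Names are illustrative.

def to_machine_input(human_action):
    """Convert a human output (a key press) into machine input (a number)."""
    return ord(human_action)            # e.g. 'a' becomes 97

def calculate(machine_input):
    """The general purpose calculation in the middle."""
    return machine_input + 1

def to_human_input(machine_output):
    """Convert machine output (a number) back into something human-readable."""
    return chr(machine_output)          # e.g. 98 becomes 'b'

print(to_human_input(calculate(to_machine_input('a'))))  # → b
```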
But while computers at the fundamental level are easy enough to describe in simple terms such as machines and physical interactions, when they are large, fast, and part of an integrated network of machines, they rapidly become so complex that knowing how the fundamentals work is no longer a meaningful way to describe the layers of complexity.
So to make it easier to explain computer systems to ourselves and others, we create and evolve metaphors that allow us to share concepts by describing a new idea in terms of another. The classic example is the “Window”, which explains the new idea of a region on a screen with some content in it. It’s not a window – it’s a region on a plate of glass that has colored dots turned on or off based on the bits in the machine’s memory allocated to describe the location and color of the dots on the glass. But to call it a window works as a metaphor, and so we use it. Likewise a window that contains a drawing program is not a canvas – technically it’s the same as a window (which is not really a window). And the tools you use are not paint brushes; they are just bits, created from a device called a mouse (which is not a mouse but looks like one, as it has a tail), that transform the memory that is represented on the window. It looks like we are painting, we call it painting, but it is a metaphor of painting that we use to describe some processes occurring in a computer.
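Stripped of the metaphor, a “window” really is just values in memory. A minimal sketch, with hypothetical names of my own choosing, makes the point that “painting” is nothing more than writing into that memory:

```python
# A sketch of the idea above: the screen is a grid of dots held in
# memory, a "window" is just a sub-region of that grid, and "painting"
# is just setting values. Framebuffer and paint are illustrative names.

class Framebuffer:
    """A screen modeled as a grid of pixels held in memory."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [[0] * width for _ in range(height)]  # 0 = dot off

    def paint(self, x, y, color):
        """'Painting' is nothing more than writing a value into memory."""
        self.pixels[y][x] = color

fb = Framebuffer(80, 25)
for x in range(10, 20):          # draw the top edge of a 'window'
    fb.paint(x, 5, 1)

print(fb.pixels[5][10])  # → 1: the 'window' is only bits in memory
```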
The use of the metaphor also continues into computer devices that don’t have a screen attached; we call these devices “servers” because they serve the “client” computer that does have a screen and a person in front of it. Today servers are often not even physical devices; they are virtual devices running as a “tenant inside a container” – software running on the real machine while pretending to be a machine in its own right. And in this connected world, these virtual devices are often housed in a place metaphorically called “the cloud”.
Metaphors are also used to describe how humans interact with machines. We talk about our online “identity”, and talk about security and verification as if we are passing through passport control at a border. We can have an online “personality” and play games represented by our “avatars”.
And metaphors are also at work in the programming world. We have business objects, which are also interchangeably called components. Objects and components are metaphors created by programmers because we had to divide and conquer the complexity of analyzing, programming and managing large systems that implement business processes. We also talk about “sockets” and “ports”, and we “call remote procedures”, which technically are the same as “messages” and “publications” – they are all metaphors to describe how we send and/or receive bits between two machines.
And humans who create systems use the metaphor of the “architect” – someone who designs and builds computer systems like someone who designs and builds real houses. Architects become the creators and defenders of metaphors, as if their metaphors are real things bound by the laws of nature and physics. Computer “architects” expect their models of the world to be adhered to by other software builders, just as real architects and builders are actually bound and constrained by the real laws of what is physically possible when you design a real building. In the real world, foundations are at the bottom of the house and the roof is at the top. In the computer world a server can be physical or virtual, a server can run another server, an object can be a component, a component can be made of objects, a service can be a function, and a function can be a service. There are no laws of computer science architectures; only agreements to agree amongst a community of like-minded individuals, or compliance requirements within an organisation.
And so while we have developed standard patterns and “architectures” in technology, many of our technology metaphors are running out of usefulness. The windows and client/server narrative breaks down in the smartphone and tablet world. Even though they have a screen with windows, and can also run services such as web servers that other client devices can connect to, we are suddenly literal and call these “mobile devices” that have “touch screens”.
Metaphors are also getting overused to the point where they become recursive and confusing. Today you can have a real machine running a virtual machine, which is hosting several Java virtual machines, which are running application servers, which have separate containers, which run components, which are providing services – which, if you wanted to, you could program directly to run on a single standalone machine. And increasingly the standalone machine is also a device performing server functions, rather than being a traditional server.
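The recursion above can be made concrete with a toy sketch – a “machine” that hosts another “machine”, and so on down the stack, even though the work at the bottom could have run directly on the real machine. The Machine class here is purely hypothetical, for illustration:

```python
# A toy model of the layering described above: each 'machine' just
# delegates the program to the machine it hosts, until the bottom layer
# finally does the real work. Machine is an illustrative name only.

class Machine:
    def __init__(self, name, hosted=None):
        self.name = name
        self.hosted = hosted  # another Machine, or None at the bottom

    def run(self, program):
        """Pass the program down through each layer until it actually runs."""
        if self.hosted is not None:
            return self.hosted.run(program)   # delegate to the inner machine
        return program()                      # the real work happens here

# real machine -> virtual machine -> JVM -> container: four layers deep
stack = Machine("real", Machine("virtual", Machine("jvm", Machine("container"))))
print(stack.run(lambda: 2 + 2))  # → 4, the same answer as running it directly
```

Every layer adds a name and a level of indirection, but no layer changes the answer – which is the point of the paragraph above.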
Our definitions also end up highlighting minuscule differences that obscure the function of a device. We describe central processing units (CPUs) as being different to graphics processing units (GPUs), when they are both processing units that are specialized by function. So we can have clusters of CPUs communicating over networks to do faster math, when you can also use GPUs, which are clusters of small specialized CPUs that communicate over an internal bus. These days we also have clusters of CPUs connected to GPUs, where you have network and bus communications working in very large clusters – over the cloud. These functional definitions make us forget that GPUs are math engines, CPUs are math engines, and clusters are math engines, and when we want a math engine, we have more than one solution available to us.
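The “more than one math engine” point can be sketched with the same calculation done two ways – one element at a time, and as many partial sums combined at the end. Both function names are my own, illustrative only:

```python
# A sketch of the point above: strip away the CPU/GPU labels and what
# remains is a "math engine" with more than one implementation of the
# same calculation. dot_serial and dot_chunked are illustrative names.

def dot_serial(a, b):
    """One 'CPU': multiply-accumulate one element at a time."""
    total = 0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_chunked(a, b, lanes=4):
    """A 'GPU'-flavoured engine: many small partial sums ('lanes')
    combined at the end, mimicking parallel units on an internal bus."""
    partial = [0] * lanes
    for i, (x, y) in enumerate(zip(a, b)):
        partial[i % lanes] += x * y
    return sum(partial)

a, b = [1, 2, 3, 4, 5, 6, 7, 8], [8, 7, 6, 5, 4, 3, 2, 1]
print(dot_serial(a, b), dot_chunked(a, b))  # → 120 120: one math engine, two forms
```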
If you take the view that we talk about our systems using metaphors, and that our metaphors and our relationship to them (our roles) have been defined incrementally over time, then how we create, manage and use systems is the sum of history and happenstance rather than being designed in the moment for the problems at hand. So while there is benefit in us all having a shared view of systems that coalesces our thinking about how systems are created and deployed, there is an open question as to whether our metaphors are optimal and can adapt quickly enough to new problems requiring new solutions.
Take for example virtual machines. Virtual machines are metaphors – they are not real machines. They are just software applications. But by continuing to use the machine metaphor, IT departments continue to manage the virtual machine as if it is a real machine. And so while the marginal cost of “manufacturing” a new machine becomes near zero, the deployment and administration costs stay the same as if the software machine is a real machine. When stories like this happen, it’s time for a new metaphor!
But the need for new metaphors goes even deeper than existing problems. As our devices become more capable and they become more integrated into our lives we are starting to talk about our digital devices in more anthropomorphic terms such as “agents”, “helpers”, or “indispensable partners”. We are creating emotional attachments to our devices and the concept of “device” is becoming too narrow to explain what our devices mean to us.
So my question is – can we do more with our technology with new technology metaphors? The mirror image of this question is: are we inefficient in our use of technology because we are trapped in our metaphors? This is an open question, and I think there is a lot of potential innovation to be found by questioning metaphors and pursuing new ones. Here are some ideas. If you are in banking – what if you actually take the “Teller” metaphor to the extreme for your online service? It’s not “like” a teller; it is a teller. Where does that take you? Or if you are designing control systems – what if you take the “pilot” metaphor to its extreme? It’s not like an autopilot – it is a pilot. Where does that take you? Or if you are creating an analytics system, what if you say that it’s not for a quant, it is a quant? Where can that take you?
One of the most interesting metaphors I have been following lately has been the genetic metaphor for systems creation and deployment. My challenge has been to stop thinking about my systems as being “like” an organism; I have been thinking about my systems as being real organisms. Unlike the brain-as-computer metaphor that has reached its limits, I have found that the DNA and genetics metaphor provides very rich ground for my work as I try to “evolve” and “grow” large scale, distributed systems that are resilient and self-regulating.
The irony is that I was also talking to a compatriot in the nano biology sphere, and he was saying how the greatest innovations in his field have come not from thinking more about the inner workings of cells in terms of physics and chemistry, but rather from thinking about them as nano machines. There have been significant innovations from this line of thought, as proteins are “folded” and ribosomes are created that manufacture proteins using nano scale, atomic engines. As we talked, it dawned upon both of us that by explaining machines as biology, and biology as machines, we were both moving closer to the goal of having machines integrated into biology and biology integrated with machines. Neither of us sees us reaching the “singularity”, but by changing our metaphors we are moving ourselves forward.
If you are using metaphors, and you are, think about changing them. It will feel uncomfortable and it will take some effort to take others with you, but the evidence is emerging that it will be worth the effort to break free and create new metaphors for a new world.