Nicholas Carr [Outspoken Critic of IT Megatrends]

I started writing about technology about 15 years ago, more or less the same time that Google appeared on the scene. And I think it was good timing for Google. It was also good timing for me, because there’s been plenty, obviously, to write about. And like, I think, most technology writers, I started off writing about the technology itself– features, design, stuff like that– and also about the economic and financial side of the business, the competition between technology companies and so forth.

But over the years, I became kind of frustrated by what I saw as the narrowness of that view, which just looks at technology as technology or as an economic factor.

Because what was becoming clearer was that computers, as they became smaller and smaller and more powerful and more connected, and as programmers became more adept at their work– computing, computation, digital connectivity, all of it– were infusing more and more aspects of everybody’s life, at work and during their leisure time. And so it struck me, as is always true, and as Peter said, that technology frames, in many ways, the context in which we live. And it seemed to me important to look at this phenomenon, the rise of the computer as a central component of our lives, from many different angles.

See what sociology could tell us, what philosophy could tell us– all the different ways we can approach an important phenomenon that’s influencing our lives.

Four or five years ago I wrote a book called “The Shallows” that examined how the use of the internet as an informational medium is influencing the way we think– how we’re adapting not only to the availability of vast amounts of information but, more and more, to an active barrage of it, and what that means for our ability to tune out the flow when we need to and really engage attentively in one task, or one train of thought.

As I was writing “The Shallows,” I also started becoming aware of this other realm of research into computers that struck me as dealing with an even broader question, which is: what happens to people and their talents and their engagement with the world when they become reliant on computers, in their various forms, to do more and more things? What happens when we automate not just factory work, but lots of white-collar, professional thinking, and what happens when we begin to automate a lot of the day-to-day activities that we do?

We’ve become more and more reliant on computers, not necessarily to take over all of the work, but to act as our aid, helping shepherd us through our days. And that was the spark that led to “The Glass Cage,” my new book, which tries to look broadly at the repercussions of our dependence on computers and automation in general, but also looks at the question of: are we designing this stuff in an optimal fashion?

If we want a world in which we get the benefits of computers but we also want people to live full, meaningful lives; develop rich talents; interact with the world in diverse ways, are we designing all of these tools– everything from robots to simple smartphone apps– in a way that accomplishes both those things?

What I’d like to do is just read a short section from the book that, to me, provides both an example of a lot of the things I’m talking about, a lot of the tensions I’m talking about, and also provides a sort of metaphor for, I think, the circumstances we’re in and the challenges we face.

This section, which comes in the middle of the book, is about the use of computers and automation, not in a city or even in a Western country, where there’s tons of it, but in a place that looks like this– up in the Arctic Circle, far, far away, a place you might think is shielded from computers and automation but in fact is not. So let me just read this to you.

“The small island of Igloolik, lying off the coast of the Melville Peninsula in the Nunavut territory of the Canadian north, is a bewildering place in the winter.

The average temperature hovers around 20 degrees below zero. Thick sheets of sea ice cover the surrounding waters. The sun is absent.

Despite the brutal conditions, Inuit hunters have for some 4,000 years ventured out from their homes on the island and traversed miles of ice and tundra in search of caribou and other game. The hunters’ ability to navigate vast stretches of barren, Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed voyagers and scientists for centuries.

The Inuit’s extraordinary wayfinding skills are born not of technological prowess– they’ve eschewed maps, compasses, and other instruments– but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, tides, and currents.

The Inuit are masters of perception. Or at least they used to be.

Something changed in Inuit culture at the turn of the millennium. In the year 2000, the US government lifted many of the restrictions on the civilian use of the global positioning system. The Igloolik hunters, who had already swapped their dog sleds for snowmobiles, began to rely on computer-generated maps and directions to get around.

Younger Inuit were particularly eager to use the new technology. In the past, a young hunter had to endure a long apprenticeship with their elders, developing their wayfinding talents over many years. By purchasing a cheap GPS receiver, they could skip the training and offload responsibility for navigation to the device. The ease, convenience, and precision of automated navigation made the Inuits’ traditional techniques seem antiquated and cumbersome by comparison.

But as GPS devices proliferated on the island, reports began to spread of serious accidents during hunts, some resulting in injuries and even deaths. The cause was often traced to an over-reliance on satellites.

When a receiver breaks or its batteries freeze, a hunter who hasn’t developed strong wayfinding skills can easily become lost in the featureless waste and fall victim to exposure. Even when the devices operate properly, they present hazards. The route, so meticulously plotted on satellite maps, can give hunters a form of tunnel vision.

Trusting the GPS instructions, they’ll speed onto dangerously thin ice, or into other environmental perils that a skilled navigator would have had the sense and foresight to avoid.

Some of these problems may eventually be mitigated by improvements in navigational devices, or by better instruction in their use. What won’t be mitigated is the loss of what one tribal elder describes as “the wisdom and knowledge of the Inuit.”

The anthropologist Claudio Aporta, of Carleton University in Ottawa, has been studying Inuit hunters for years. He reports that while satellite navigation offers attractive advantages, its adoption has already brought a deterioration in wayfinding abilities, and more generally, a weakened feel for the land. As a hunter on a GPS-equipped snowmobile devotes their attention to the instructions coming from the computer, they lose sight of their surroundings.

They travel blindfolded, as Aporta puts it.

A singular talent that has defined and distinguished a people for thousands of years may well evaporate over the course of a generation or two.”

When I relate that story to people, they tend to have one of two reactions. And my guess is both of those reactions are probably represented in this room. One of the reactions is a feeling that this is a poignant story. It’s a troubling story, a story about loss, about losing something essential to the human condition. And that tends to be the reaction I have to it.

But then there’s a very different reaction, which is, well, welcome to the modern world. Progress goes on, we adapt, and in the end, things get better.

If you think about it, most of us, probably all human beings, once had a much more sophisticated navigational sense, inner navigational sense, much more sophisticated perception of the world, the landscape. And for most of us, we’ve lost almost all of that. And yet, we didn’t go extinct. We’re still here. By most measures, we’re thriving. And I think that is also a completely valid point of view. It’s true that we lose lots of skills over time, and we gain new ones and things go on.

So in some ways, your reaction to this is a value judgment about what’s meaningful in human life.

But beyond those value judgments, I think one thing– or a couple of things– that this story, this experience, tells us is how powerful a new tool can be when it’s introduced into a culture. It can change the way people work, the way people operate, the way they think about what’s important, the way they go about their lives in many different ways.

And it can do this very, very quickly, overturning some skill or some talent or some way of life that’s been around for thousands of years, just in the course of a year or two.

Introducing computer tools, introducing automation, any kind of technology that redefines what human beings do, and redefines what we do versus what we hand off to machines or computers can have very, very deep and very, very powerful effects.

A lot of these effects are very difficult to anticipate.

So the Inuit hunters, the young hunters, didn’t go out and buy GPS systems because they wanted to increase the odds that they’d get lost and die. And they probably weren’t thinking about eroding some fundamental aspect of culture. They wanted to get the convenience, the ease of the system, which is what many of us are motivated by when we decide to adopt some kind of new form of automation in our lives.

When you look at all these unanticipated effects, you can see a very common theme that comes out in research about automation, and particularly about computer automation.

It’s something that’s been documented over and over again by human factors scientists and researchers, the people who study how people interact with computers and other machines. And the concept is referred to as “the substitution myth.”

It’s very simple.

It says that whenever you automate any part of an activity, you fundamentally change the activity. And that’s very different from what we anticipate.

Most people, either users of software or other automated systems or the designers, the makers, they assume that actually you can take bits and pieces of what people do. You can automate them. You can turn them over to software or something else. And you’ll make those parts of the process more efficient or more convenient or faster or cheaper. But you won’t fundamentally change the way people go about doing their work. You won’t change their behavior.

In fact, over and over again we see that even small changes, small shifts of responsibility from people to technology, can have very big effects on the way people behave, the way they learn, the way they approach their jobs.

We’ve seen this recently with the increasing automation of medical record keeping.

As you probably know, we’ve moved fairly quickly over the last 10 years from doctors taking patient notes on paper, either writing them by hand or dictating them, to digital records. So doctors, as they’re going through an exam, will take notes, usually working through a template on a computer or on a tablet.

For most of us, our initial reaction is, thank goodness for that. Because having records on paper was a pain in the neck. You’d have to enter the same information over and over, depending on when you went to different doctors. And God forbid you got sick somewhere else in the country, or something– doctors couldn’t exchange records, had no way to share your old records. So it makes all sorts of sense to automate this and to have digital records.

And indeed, 10 years ago when we started down this path– when the US started down this path– there were all sorts of studies that said, oh, we’re going to save enormous amounts of money. We’re going to increase the quality of patient care, the quality of health care, as well as make it easier to share information.

There was a big study by the RAND Corporation that documented all this.

They had modeled the entire health care system in a computer and projected various outcomes. This was all going to be to the good. The government went on to subsidize the adoption of electronic medical records to the tune of something like $30 billion since then. And now we have a lot of information about what’s really happened.

Nothing that was expected has actually played out. All sorts of things that weren’t expected, have.

For instance, the cost savings have not materialized. Costs have continued to go up. And there are even some indications that, beyond the expense required for the systems themselves, this shift may increase health care costs rather than decrease them.

The evidence on quality of care is very, very mixed.

There seems to be no doubt that for some patients, those with chronic diseases that require a lot of different doctors, quality goes up. But for a lot of patients, there hasn’t been a change. And there may even have been an erosion of quality, in some instances.

Finally, we’re not even getting the benefits of broad sharing of the records, because a lot of the systems are proprietary. So you can’t transfer the records quickly or easily from one hospital to the next or one practice to the next. Now, some of these problems just come from the fact that a lot of the software is crappy.

We’ve rushed to spend huge amounts of money on it. Lots of big software companies that supply this have gotten very wealthy. And doctors are struggling with it. Patients are struggling with it. Some of those things will be fixed at more expense over time.

But if you look down lower, you see changes in behavior that are much more subtle and much more interesting and go beyond the quality of the software itself.

For instance, one of the reasons everybody expected that health care costs would go down– the assumption was that as soon as doctors could call up images and other test results on their computers when they’re in with a patient, they wouldn’t order more tests. So we’d see fewer diagnostic tests and lower costs from those diagnostic tests– a big part of the health care system’s costs.

Actually, exactly the opposite seems to be happening.

When you give doctors the ability to quickly order tests and quickly pull up the results, they actually order more of them, because they know it’s going to be easier for them. And so the quality of the outcomes doesn’t go up. We’re just seeing more diagnostic tests and more costs– exactly the opposite of what we expected.

You see changes in the doctor/patient relationship.

If you’ve been around for a while and had the experience of going from a world in which you went into a doctor’s office for a physical or whatever and the doctor paid their whole attention to you, to the world of electronic medical records, where the doctor has a computer, you know that it intrudes on the doctor/patient relationship.

Studies show that doctors now spend, if they have a computer with them, about 25% to 50% of the time during an exam looking at the computer, rather than the patient.

And doctors aren’t happy about that. Patients don’t tend to be happy about it. But it’s kind of a necessary consequence of this transfer, at least given how we’ve designed these systems.

The most interesting– I’m just going to give you three examples of unexpected results– but the most interesting to me is the fact that the quality of the records themselves has gone down. And the reason is that, first of all, doctors now often use templates and checkboxes. And then when they have to put in text describing the patient’s condition, rather than dictating it from what they’ve just experienced or handwriting it, they cut and paste.

They cut and paste paragraphs and other material from other visits that the patient has had, or from visits by other patients with similar conditions.

This is referred to as “the cloning of text.”

More and more of a personal medical record consists of cloned text these days, which makes the records less useful for doctors, because they contain less rich and subtle information. And it also undermines an important role that records used to play in the exchange of information and knowledge.

A primary care physician used to get a lot of information, a lot of knowledge by reading rich descriptions from specialists. And now, more and more, as doctors say, this is just boilerplate, just cloned text.

So we’ve created this system that eventually will probably have the very important benefit of allowing us to exchange information more and more quickly, more and more easily. But at the same time, we’re reducing the quality of the information itself and making what’s exchanged less valuable.

Now those are three examples of how the substitution myth has played out in this particular area of automation.

They’re very specialized, and you see all sorts of these things anywhere you look. But there are a couple of bigger themes that tend to cross all aspects of automation when you introduce software to make jobs easier or to take over jobs.

What you tend to get, in addition to the benefits, are a couple of big negative developments.

Human factors experts, researchers in this field, refer to these as “automation complacency” and “automation bias.”

Automation complacency means exactly what you would expect. When people turn over big aspects of their job to computers, to software, to robots, they tune out. We’re very good at trusting a machine, and certainly a computerized machine, to handle our job, to handle any challenge that might arise. And so we become complacent. We tune out. We space out. And that might be fine until something bad happens, and we suddenly have to re-engage with what we’re doing, and then you see people make mistakes.

Everybody experiences automation complacency in using computers.

A very simple example is autocorrect for spelling. When people have autocorrect going, when they’re texting or using a word processor or whatever, they become much more complacent about their spelling. They don’t check things. They let it go. And then most people have had the experience of sending out a text or an email or a report that has some really stupid typo in it, because the computer misunderstood your intent.

That causes maybe a moment of embarrassment.

But you take that same phenomenon of complacency and put it into an industrial control room, into a cockpit, into a battlefield, and you sometimes get very, very dangerous situations.

One of the classic examples of automation complacency comes in the cruise line business. A few years ago, a cruise ship called the Royal Majesty was on the last leg of a cruise off New England. It was going from Bermuda, I think, to Boston. It had a GPS antenna that was connected to an automated navigation system. The crew turned on the automated navigation system, and kind of became totally complacent– just assumed, OK, everything’s going fine. Hey, the computer’s plotting our course, don’t have to worry about it.

At some point the line to the GPS antenna broke. It was way up somewhere, and nobody saw it. Nobody noticed. There were increasing environmental clues that the ship was drifting off course. Nobody saw them. At one point, a mate whose job it was to watch for a locational buoy and report back to the bridge that, yes, we passed it as we should have, was out there watching for it and didn’t see it. And they figured, well, it must be there, because the computer’s in charge here. I just must have missed it.

So they didn’t bother to tell the bridge.

Hours go by, and ultimately the ship crashes into a sandbar off Nantucket Island, many miles off course. Fortunately, no one was killed or badly injured, but there were millions of dollars of damage.

It shows how easily, if you give too much responsibility to the computer, people will tune out. They won’t notice that things are going wrong, or if they do notice, they might make mistakes in responding. Automation bias is closely related to automation complacency. It just means that you place too much trust in the information coming from your computer, to the point where you begin to assume that the computer is infallible– and so you don’t have to pay attention to other sources of information, including your own eyes and ears. And this too is something we see over and over again when you automate any kind of activity.

A good example is the use of GPS by truck drivers.

A truck driver starts to listen to the automated voice of the GPS woman telling them where to go and whatever. And they begin to ignore other sources of information like road signs. So we’ve seen an increase in the incidence of trucks crashing into low overpasses as we’ve increased the use of GPS.

In Seattle a few years ago, a bus driver carrying a load of high school athletes to a game, in a 12-foot-high bus, approached a nine-foot-high overpass. And there were all these signs along the way– “Danger– Low Overpass”– some even with blinking lights around them. They smashed right into it. Luckily, no one died.

A bunch of students had to go to the hospital.

The police said, what were you thinking? And the bus driver said, well, I had my GPS on and I just didn’t see the signs. We ignore, or don’t even see, other sources of information.

In another very different area, back to health care, if you look at how radiologists read diagnostic images today, most of them read them as digital images, of course. But also there’s now software that is designed as a decision support aid and analytical aid. And what it does is it gives the radiologist prompts. It highlights particular regions of the image that the data analysis, past data, suggests are suspicious. In many cases, this has very good results. The doctor focuses attention on those particular highlighted areas, finds a cancer or other abnormality that the doctor may have missed. And that’s fine.

But research shows that it also has the exact opposite effect.

Doctors become so focused on the highlighted areas that they pay only cursory attention to other areas, and often miss abnormalities or cancers that aren’t highlighted. And the latest research suggests that these prompt systems– which, as you know, are very, very common in software in general– seem to improve the performance of less expert image readers on simpler challenges, but decrease the performance of expert readers on very, very hard challenges.

The phenomena of automation complacency and automation bias point, I think, to an even deeper and more insidious problem that poorly designed software or poorly designed automated systems often trigger. And that is that in both of those cases, with complacency and bias, you see a person disengaging from the world, disengaging from their circumstances, disengaging from the task at hand, simply assuming that the computer will handle it.

Indeed, the computer– whatever system we’re talking about– has been designed to handle as much of the chore as possible.

What happens then is we see an erosion of talent on the part of the person.

Either the person isn’t developing strong, rich talents, or their existing talents are beginning to get rusty. And the reason is pretty obvious. We all know, either intuitively or from reading anything about this, how we develop rich talents, sophisticated talents. It’s by practice. It’s by doing things over and over again, facing lots of different challenges in lots of different circumstances, figuring out how to overcome them.

That’s how we build the most sophisticated skills and how we continue to refine them.

And this element, this crucial element in learning, in all sorts of forms, is often referred to as “the generation effect.” What that means is, if you’re actively engaged in some task, in some form of work, you’re going to not only perform better, but learn more and become more expert than if you’re simply an observer, simply passively watching as things progress.

The generation effect was first observed in this very simple experiment involving people’s ability to expand vocabulary, learn vocabulary, remember vocabulary. And what the researchers did back in the ’70s is they got two groups of people to try to memorize lots of pairs of antonyms, lots of pairs of opposites. And the only difference between the two groups was that one group used flash cards that had both words spelled out entirely– hot, cold– the other had flashcards that just had the first word, “hot,” but then provided only the first letter of the second word, so “c.”

What they found was that the people who used the full words remembered far fewer of the antonyms than the people who had to fill in the second word. The reason? There’s a little bit more brain activity involved. You actually have to call to mind what the word is. You have to generate it.

Just that small difference gives you better learning, better retention.

A few years later, some other researchers, some other professors in this area, realized that this is actually a form of automation. Giving the full word, in essence, automates the work of filling in the word. And they explained this as, in fact, a phenomenon related to automation complacency. You might be completely unconscious of it, but your brain is a little more complacent. It doesn’t have to work as hard in this mode.

And that makes a big difference.

It turns out that the generation effect explains a whole lot about how we learn and develop skill in all sorts of places. It’s definitely not restricted to studies of vocabulary. You see it everywhere. If you’re actively involved, you learn more. You become more expert. If you’re not, you don’t.

Unfortunately, with software, more and more, the programmer, the designer, actually gets in the way of the generation effect. And not by accident, but on purpose. Because, of course, the things we tend to automate, the things we tend to simplify for people, are the things that are challenging. You look at a process. You look at where people are struggling. And that is often both the most interesting thing to automate and the thing that whoever’s paying you to write the software is encouraging you to automate, because it seems to create efficiency.

It seems to create productivity.

But what we’re doing is designing lots of systems, lots of software that actually deliberately– if you look at it in that sense– gets in the way of people’s ability to learn and create expertise.

There was a series of experiments done beginning about 10 years ago by this young cognitive psychologist in Holland named Christof van Nimwegen. And he did something very interesting. He got a series of different tasks. One of them was solving a difficult logic problem. One of them was organizing a conference where you had a large number of conference rooms, large number of speakers, large number of time slots, and you had to optimize how you put all those things together.

A number of tasks that had lots of components, required a certain amount of smarts, and required you to work through a hard problem over time.

In each case, he got groups of people, divided them into two, created two different applications for doing each of these. One application was very bare bones. It just provided you with the scenario and then you had to work through it. The other was very helpful. It had prompts. It had highlights. It had advice, on-screen advice. When you got to a point where you could do some moves but you couldn’t do others, it would highlight the ones you could do and gray out the ones you couldn’t.

And then he let them go and watched what happened.

As you might expect, the people with the more helpful software got off to a great start. The software was guiding them, helping them make their initial decisions and moves. They jumped out to a lead in terms of solving the challenges. But over time, the people using the bare-bones software, the unhelpful software, not only caught up but actually, in all the cases, ended up completing the assignment much more efficiently, making far fewer incorrect moves, far fewer mistakes.

They also seemed to have a much clearer strategy, whereas the people using the helpful software kind of just clicked around.

Finally, van Nimwegen gave them tests afterwards to measure their conceptual understanding of what they had done. The people with the unhelpful software had a much clearer conceptual understanding. Then, eight months later, he invited just the logic-puzzle group– all the people who had done that task– back, and had them solve the problem again.

The people who had, eight months earlier, used the unhelpful software solved the puzzle twice as fast as the people who had used the helpful software.

The more helpful the software, the less the learning, the weaker the performance, and the less strategic the thinking of the people who used it. Again, this underscores a fundamental paradox that people face– people who develop these programs and people who use them– where our instinct to make things easier, to find the places of friction and remove the friction, can actually lead to counterproductive results, eroding performance and eroding learning.

If you look at all the psychological studies and the human factors studies of how people interact with machines and technology and computers, and you combine them with the psychological understanding of how we learn, what you see is that there’s a very complex cycle involved. If you have a high degree of engagement– if people are really pushed to engage with challenges, work hard, maintain their awareness of their circumstances– you provoke a state of flow.

If you’ve read Mihaly Csikszentmihalyi’s book “Flow” or are familiar with it, we perform optimally when we’re really immersed in a hard challenge, when we’re stretching our talents, learning new talents. That’s the optimal state to be in. It gives us more skills, pushes us to new talents, and it also happens to be the state in which we’re most fulfilled and most satisfied.

Often, people have this feeling that if they were relieved of work, relieved of effort, they’d be happier– it turns out they’re not. They’re more miserable. They’re actually happier when they’re working hard, facing a challenge. And so this sense of fulfillment prolongs your sense of engagement, intensifies it. And you get this very nice cycle.

People are performing at a high level. They’re learning talents. And they’re fulfilled. They’re happy, they’re satisfied, they like their experience.

All too often, you stick automation into this cycle– particularly if you haven’t thought through all of the implications– and you break it. Suddenly you decrease engagement, and all the other things go down as well. You see this today in all sorts of places. You see it with pilots, whose jobs have been highly, highly automated. Automation has been a very good, very positive development in aviation for 100 years.

But recently, as pilots’ role in control of the aircraft, manual control, has gone down to the point where they may be in control for three minutes during a flight, you see problems with the erosion of engagement, the erosion of situational awareness, and the erosion of talent.

Unfortunately, on those rare occasions when the autopilot fails for whatever reason, or there are very weird circumstances, you increase the odds that the pilots will make mistakes, sometimes with dangerous implications. So why do we go down this path so often? Why do we create computer programs, robotic systems, other automated systems that, instead of raising people up to their highest level of talent, their highest level of awareness and satisfaction, have the opposite effect?

I think much of the blame can be placed on what I would argue is the dominant design philosophy or ethic that governs the people who are making these programs and making these machines. And it’s what’s often referred to as “technology-centered design.”

Basically what that means is, the engineer or the programmer starts by asking, what can the computer do? What can the technology do? And then anything that the computer or the technology can do, they give that responsibility to the computer. And you can see why this is what engineers and programmers would want to do, because that’s their job– to simulate or automate interesting work with software or with robots.

It’s a very natural thing to do.

But what happens then is that what the human being gets is just what the computer can’t do, or what we haven’t yet figured out how to get the computer to do. And that tends to be things like monitoring screens for anomalies, entering data, and, oh, by the way, you’re also the last line of defense. So if everything goes to hell, you’ve got to take over and get us out of the fix.

Those are things that people are actually pretty bad at.

We’re terrible at monitoring things, waiting for an anomaly. You can’t focus on it for more than about half an hour. Entering data, becoming a kind of sensor for the computer, is a pretty dull job in most cases. And if you set up a system that ensures the operator is going to have a low level of situational awareness, then that is not the person you want as your last line of defense.

The alternative is something called– surprise– “human-centered design,” where you start by saying, what are human beings good at? And you look at the fact that there are lots of important things we’re actually still much better at than computers.

We’re creative. We have imagination. We can think conceptually. We have an understanding of the world. We can think critically. We can think skeptically.

Then you bring in the software, you bring in the automation, first to aid the person in exploiting those capabilities, but also to fill in the gaps and the flaws that we all have as human beings.

We’re not great at processing huge amounts of information quickly.

We’re subject to biases in our thinking.

You can use software to counteract these, or to provide an additional set of capabilities. And if you go that path, you get both the best of the human and the best of the machine, or the best of the technology. Some of the ideas here are very, very simple.

For instance, with pilots, instead of allowing them to turn on total flight automation once they’re off the ground and then not bother to turn it off until they’re about ready to land, you can design the software to give control back to the pilot every once in a while, at random moments. And just knowing that you’re going to be called upon at some random time to take back control improves people’s awareness and concentration immeasurably.

It makes it less likely that they’re going to completely space out.
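To make that design idea concrete, here is a minimal sketch, in Python, of a “random handback” scheduler. It is purely illustrative– the class, the method names, and the intervals are hypothetical, not taken from any real avionics or autopilot software.

```python
import random
import time

class HandbackScheduler:
    """Toy illustration of 'random handback': automation periodically
    returns control to the human at unpredictable times, giving the
    operator a reason to stay engaged. Hypothetical, not a real system."""

    def __init__(self, min_interval_s=600, max_interval_s=1800):
        self.min_interval_s = min_interval_s
        self.max_interval_s = max_interval_s
        self.automation_engaged = True
        self.next_handback = time.monotonic() + self._draw_interval()

    def _draw_interval(self):
        # A random gap, so the operator can't predict when they'll be needed.
        return random.uniform(self.min_interval_s, self.max_interval_s)

    def tick(self):
        """Call this from the control loop. Returns True when control
        should be handed back to the human."""
        if self.automation_engaged and time.monotonic() >= self.next_handback:
            self.automation_engaged = False  # the human takes over for a while
            return True
        return False

    def resume_automation(self):
        """Call when the manual phase ends; schedules the next handback."""
        self.automation_engaged = True
        self.next_handback = time.monotonic() + self._draw_interval()
```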

Or in the example of the radiologist– and this goes for decision-support, expert-system, or analytical programs in general– one thing you can do, instead of bringing in the software prompts and the software advice right at the outset, is first encourage the human being to deal with the problem, to look at the image on their own, or to do whatever analytical chore is there, and then bring in the software afterwards, as a further aid, bringing new information to bear. And that too means you get the best of both the person and the machine.
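As a sketch of that sequencing– human read first, software prompts second– here is a small, hypothetical Python example. The Finding and TwoPhaseReview names are invented for illustration; they are not part of any real radiology or decision-support product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Finding:
    region: str   # e.g. "upper left quadrant" -- placeholder label
    note: str

@dataclass
class TwoPhaseReview:
    """Toy model of 'human first, software second' decision support:
    computer-generated prompts stay hidden until the unaided read is done."""
    image_id: str
    human_findings: List[Finding] = field(default_factory=list)
    software_prompts: List[Finding] = field(default_factory=list)
    human_read_closed: bool = False

    def record_human_finding(self, finding: Finding) -> None:
        if self.human_read_closed:
            raise RuntimeError("The unaided read is already closed.")
        self.human_findings.append(finding)

    def close_human_read(self) -> None:
        self.human_read_closed = True

    def reveal_prompts(self, prompts: List[Finding]) -> List[Finding]:
        # Withhold the prompts until the unaided read is finished, so the
        # software augments the human's judgment instead of anchoring it.
        if not self.human_read_closed:
            raise RuntimeError("Finish the unaided read before viewing prompts.")
        self.software_prompts = prompts
        return self.software_prompts
```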

Unfortunately, we don’t do that, or at least not very often.

We don’t pursue human-centered design. And I think it’s for a couple of reasons. One is that we human beings, as I said before, are very eager to hand off any kind of work to machines, to software, to other people, because we are afflicted by what psychologists term “miswanting.”

We think we want to be freed of labor, freed of hard work, freed of challenge. And when we are freed of it, we feel miserable, we feel anxious, we get self-absorbed.

Actually, our optimal experience comes when we are working hard at things. So there’s something inside of us that is very eager to get rid of stuff, to get rid of effort, even when it’s not to our own benefit.

The other reason, which I think is even harder to deal with, is the pursuit of efficiency and productivity above all other goals. And you can certainly see why. Hospitals that want the highest productivity possible from radiologists would be averse to saying, well, we’ll let the radiologist look at the image first and then we’ll bring in the software, because that extends the time the radiologist spends looking at each image.

And that’s true of any of these kind of analytical chores.

There’s this tension between the pursuit of efficiency and productivity above all other things, and the development of skill, the development of talent, the development of high levels of human performance, and ultimately the sense of satisfaction that people get.

In the long run, you see signs that that begins to backfire.

Toyota earlier this year announced that it was replacing some of its robots in its Japanese factories with human beings, because even though the robots are more efficient, the company has struggled with quality problems. It’s had to recall 20 million cars in recent years. And not only is that bad for business, but Toyota’s entire culture is built around quality manufacturing, so it erodes its culture.

By bringing back human beings, it wants to bring back both the spirit and the reality of human craftsmanship, of people who can actually think critically about what they’re doing.

One of the benefits it believes it will get is that it will be smarter about how it programs its robots. It will be able to continually take new human thinking, new human talent and insight, and incorporate that into the processes that even the robots are doing.

That’s a good-news example. But I’m not going to oversimplify this or lie to you. I think this instinct to place efficiency, immediate efficiency, above all other things is a very hard instinct, a very hard economic imperative, to overcome. Nevertheless, it’s absolutely imperative that everyone who designs software and robotics, and all of us who use them, be conscious of the fact that there is this trade-off– that technology isn’t just a means of production, as we often tend to think of it.

It really is a means of experience.

And it always has been, since the first technologies were developed by our distant ancestors. Technology at its best, tools at their best bring us out into the world, expand our skills and our talents, make the world a more interesting place. We shouldn’t forget that about ourselves as we continue at high speed into a future where more and more aspects of human experience are going to be offloaded to computers and to machines.
