
Monday 30 June 2014

Why computers of the next digital age will be invisible



The author Douglas Adams once made a witty point about technology: the inventions we label “technologies” are simply those which haven’t yet become an invisible, effortless part of our lives.

“We no longer think of chairs as technology,” he argued. “But there was a time when we hadn’t worked out how many legs chairs should have, how tall they should be, and they would often ‘crash’ when we tried to use them. Before long, computers will be as trivial and plentiful as chairs…and we will cease to be aware of the things.”

Adams’s prediction was prescient. Computers have been such a prominent, dazzling force in our lives for the past few decades that it’s easy to forget that subsequent generations might not even consider them to be technology. Today, screens draw constant attention to themselves and these high-visibility machines are a demanding, delightful pit into which we pour our waking hours. Yet we are on the cusp of the moment when computing finally slips beneath our awareness – and this development will bring both dangers and benefits.

Computer scientists have been predicting such a moment for decades. The phrase “ubiquitous computing” was coined at the Xerox Palo Alto Research Center in the late 1980s by the scientist Mark Weiser, and described a world in which computers would become what Weiser later termed “calm technologies”: unseen, silent servants, available everywhere and anywhere.

Although we may not think about it as such, computing capability of this kind has been a fact of life for several years. What we are only beginning to see, however, is a movement away from screens towards self-effacing rather than attention-hungry machines.

Take Google Glass. Recent news stories have focused more on intrusion than invisibility. (There’s even a recent coinage, “Glassholes”, for the kind of users who get kicked out of cafes.) Beyond the hand-wringing, though, Glass represents the tip of a rapidly emerging iceberg of devices that are “invisible” in the most literal sense: a user’s primary interface with them is not looking at or typing onto a screen, but speech, location and movement.

This category also includes everything from discreet smartwatches and fitness devices to voice-activated in-car services. Equally surreptitious are the rising number of “smart” buildings – from shops and museums to cars and offices – that interface with smartphones and apps almost without us noticing, offering enhancements that range from streamlined payments to “knowing” our light, temperature and room preferences.

Intelligent cloud

The consequences of all this will be profound. Consider what it means to have a primarily spoken rather than screen-based relationship with a computer. When you’re speaking and listening rather than reading off a screen, you’re not researching and comparing results, or selecting from a list – you’re being given answers. Or, more precisely, you’re being given one answer, customised to match not only your profile and preferences, but where you are, what you’re doing, and who with.

Google researchers, for example, have spoken about the idea of an “intelligent cloud” that answers your questions directly, adapted to match its increasingly intimate knowledge about you and everybody else. Where is the best restaurant nearby? How do I get here? Why should I buy that?
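As a toy illustration of what being given “one answer” means in practice, the sketch below ranks candidate results by how well their tags match the user’s current context and returns only the top match. Every name, tag and scoring rule here is invented for illustration; it is not how Google’s systems work.

```python
# Toy "answer engine": return a single best answer, ranked by how well
# each candidate's tags overlap with the user's current context.
# All data and the scoring scheme are invented for illustration.

def best_answer(candidates, user_context):
    """Pick the one answer whose tags best match the user's context."""
    def score(candidate):
        return sum(1 for tag in user_context if tag in candidate["tags"])
    return max(candidates, key=score)["answer"]

restaurants = [
    {"answer": "Quiet sushi bar, 200m away", "tags": {"nearby", "quiet", "sushi"}},
    {"answer": "Lively pizzeria, 2km away", "tags": {"pizza", "lively"}},
]
context = {"nearby", "quiet"}  # where you are, what you seem to want
print(best_answer(restaurants, context))
```

The point of the sketch is the shape of the interaction: the user never sees the candidate list, only the single result the context selects.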

Our relationships with computers, in this context, may come to feel more like companionship than sitting down to “use” a device: a lifelong conversation with systems that know many things about us more intimately than most mere people.

Such invisibility raises several questions. If our computers provide such firm answers, but keep their workings and presence below our awareness, will we be too quick to trust the information that they provide – or too willing to take their models of the world for the real thing? As motorists already know to their cost, even a sat-nav’s suggestions can be hopelessly wrong.

That’s not to mention the potential for surveillance. More than a decade ago, critics of ubiquitous computing suggested it was “the feverish dream of spooks and spies – a bug in every object”. Given this year’s revelations about the NSA monitoring our communications, it was a prescient fear, and one that has had recent commentators reaching for that familiar adjective “Orwellian.”

There are, of course, causes for celebration about this technology too: hopes for a world in which computers, like chairs, simply support us without draining a particle more of our time, attention or effort than required. And in any case, subsequent generations may not share the same concerns as us. As Douglas Adams put it, everything that already exists when you’re born is just normal – but “anything that gets invented after you’re 30 is against the natural order of things and the beginning of the end of civilisation as we know it.”

Yet as computers slip ever further beneath our awareness, it is important that we continue to ask certain questions. What should an unseen machine be permitted to hear and see of our own, and others’, lives? Can we trust what they tell us? And how do we switch them off?

Invisible computers are here. But we must remember to keep at least some of their facets within sight.

Future Soldiers May Wear Bulletproof Spider Silk

Ultra-strong spider silk, one of the toughest known natural fibers, could one day protect soldiers on the battlefield from bullets and other threats, one company says.
Spider silk is light and flexible, and is stronger by weight than high-grade steel. Its potential applications span a wide range of industries, from surgical sutures for doctors to protective wear for the military. But producing and harvesting enough spider silk to make these types of products commercially available has posed a challenge.
Kraig Biocraft Laboratories, based in Lansing, Michigan, genetically engineered silkworms to produce spider silk, and has used the material to create gloves that will soon undergo strength testing.

Monday 21 April 2014

VIDEO INNOVATIONS IN THE YEAR 2020

What will video production look like in the year 2020? Seeing what’s to come tomorrow requires nothing more than watching today’s movies. Featured technology presents instant streaming video information at the wave of a hand and touch of a finger. It’s a future where our eyes will control camera focus and video uploads will be so 2013. We no longer have to look decades ahead to see what’s coming in video technology. The future is now.
Much of what will be is also on display on today’s trade show floors, as innovative companies, from the most popular brand manufacturers to new companies you’ve never heard of, show tomorrow’s video technology.

Things to Watch for in the Year 2020

There’ll be no mouse in your future, and no keyboard either. You won’t need them, because your touch device does it all, camera included. You’ll stream and share home video, storing it in the cloud for instant access and distribution. You’ll have to fight to create shaky camera movement; the technology won’t let you. And there will be no more consumer camcorder market, except for durable action-sports units like GoPro’s HERO cameras and the people who need them.
In case you’ve not attended a trade show before, here are a few things to watch for in 2020, and to save up for if some of them aren’t already on your birthday wish list – no crystal ball necessary.

Consumer Camcorders in 2020

Remember the dance company you once produced video for? See that mom sitting next to you at a recital for the same company today, the one recording video on a nice touch device with a big, bright screen? There’ll be even more like her next year, and every parent will be using one in 2020. Yes, they were told the event was being professionally recorded and edited, no cameras needed. But there are no dedicated cameras any more; everyone uses a touch device or super smartphone, editing and instantly posting their child’s performance to YouTube before leaving their seats for the drive home. There will be no more uploading.
Consumer and hobbyist video production is all touch-oriented now, and streaming video as a method of sharing and communication is an everyday thing. Even grandma is doing it, or viewing it. The new home video features instant touch screen editing and Internet sharing. DVDs and their players sit stacked next to the vinyl records, cassette tapes and CDs in the storage unit. Video production will get easier; creating quality content might still be a challenge.

Professional Camcorders in 2020

You forgot to bring extra cards and the wedding went way longer than anticipated. Not a problem. Professional camcorders take a chapter from the consumer arena, providing seamless connection to editing computers, tablet devices, even other camcorders for instant backup. You’ve also expanded your business model to include streaming since this seamless connection allows you to set up in minutes now, so running wire and making hard connections isn’t an issue.
That new super lens makes getting a variety of shots and performing depth of field tricks a piece of cake, shifting at a touch of the camera’s screen to instantly change aperture or zoom setting without twisting a ring or searching for that tiny button you forgot to preset.
Top-of-the-line but within budget, professional camcorders have sensors that are four times the size of today’s full-frame sensors yielding megapixel counts in the 100s. In-camera stabilization in the professional units also will make it next to impossible to intentionally introduce shaky cam footage into your productions.
And while consumer cameras are no longer available or necessary, the advent of quality performance in mobile phones and tablets, along with incredibly affordable pricing, makes having a few on hand for those extra POV shots a no-brainer, notwithstanding audio quality issues.

Recording Audio in 2020

High-quality audio recording gear, from microphones to stand-alone digital recorders, remains at the front of professional work standards, at prices unheard of today, but most consumers still won’t bother with it. Audio from smartphones and portable pads might be the only ongoing issue in 2020.
For professionals, more affordable technology brings stick-on mic placement virtually anywhere you want, at any venue. The auto-sensing devices lock onto your system settings, sending clear, crisp audio not only to the camera but to a cloud editing base as well. The various channels of audio automatically adjust for surround sound, adding speaker sources for beyond-surround sound. This will all be automatic, with level controls that no longer build the gain to the point of overwhelming noise. Quiet spots are smooth and clean, and there’s virtually no delay in the limiter or boost for unexpected low or high levels.
Audio from cameras and TVs will be focused to a single user, no wires, like an invisible phone booth. The reception, clarity and signal quality will move beyond hardwired to a pristine level. No more riding the sliders for accurate level control.

Internet Video in 2020

YouTube and its rivals will totally overtake traditional TV as the world’s preferred source for consuming, using and sharing video content. The increased quality of production, range of special-interest programming and overall entertainment value will relegate conventional production and programming to the list of things of the past – with the possible exception of some traditional broadcasting affiliates that successfully make the transition to Internet programming.
You see it happening now: video at consumer hotspots among the fresh produce aisles and at the checkouts. There’s video at the gas pumps and at your favorite bank’s ATMs. Excuse me, officer, but is that a camera you’re wearing on your chest? Ubiquitous, right? As common as it has become, everything will have a camera in 2020. You’ll be watching from one or looking into one wherever you go, whatever you do. And those new vehicles with backup safety cameras? Look for cameras everywhere else on the 2021 model you buy in 2020.
Novelties today, cameras on wristwatches, glasses and the top button of your shirt, blouse or coat will be commonplace by 2020. Excuse me, citizen, is that a camera you’re wearing on your chest? The patrol officer who pulled you over may ask you to turn it off while writing your ticket, so it doesn’t conflict with the signal from his own chest cam.
Camera-equipped drones will become as prevalent as bicycles. Everybody into any facet of video production, or surveillance, will have one. Bicycles will universally feature built-in camera units, as will skateboards, snowboards and vehicle dashboards. If it’s a device that moves, even a conveyor belt, it will have a camera.

Editing in 2020

All your digital devices will work together on a single task. You think that smartphone or tablet gives you control choices now? In 2020 you’ll use them to control the camera and your lights, wirelessly connect to your audio sources, and adjust tilt, pan or zoom while monitoring your shot, framing and quality – multitasking multiplied. You’ll be able to use several affordable tablets to edit video on a giant screen.
Following Adobe’s lead, all professional-level editing software makers will move to the subscription model. This will become the norm for videographers seeking affordable professional programs that also feature instant Internet-based collaboration, and it gives the makers a renewable revenue stream. That consistent, predictable monthly cash flow from the consumer means even more frequent updates to the software.

HD in 2020

Another of those “that’s so 2013” moments, with resolution now in the 4K and 8K range. All prosumer DSLRs and camcorders will shoot, at minimum, RAW 4K. Your smartphone will feature 4K video, and that level will become the common workflow resolution in 2020. It will be easily edited within Final Cut, Premiere Pro and Vegas, as well as a host of others. The frustration and clamor of dealing with such levels of resolution will quieten to nothing even as the noise and complaints over 8K production issues begin to rise.
Like its predecessors, 8K will earn the love and ire of professional producers. The upgrade path for many will be daunting at first. Discussion groups will wonder whether that level of resolution even matters, being something the human eye cannot hope to discern. But what about the video devices that are now replacing missing eyes or bringing vision to the blind? Will the 2020s’ bionic viewing devices see what 8K has to offer?

Independent Video Production in 2020

Due to the technological advances and affordability of production equipment, and the ubiquitous role online video entertainment plays in 2020, the most successful emerging video producers and filmmakers will gain their fame without the help of big studios. The last of the major studios will have merged by then, creating a colossal production operation with no control over the effect independent production is having on its bottom line.
Top content producers achieve success by starting their own projects and posting them online. Hollywood will return to valuing good writers above all, maybe even more than the A-list stars who haven’t been replaced by computer generated characters. Because the technology is so advanced and so affordable, more creative individuals will be able to shoot and edit, distribute and share quality video. Truly great ideas, powerful storylines, however, will be as rare as ever.

Video in 2020

It’s not too hard to see what a few more years will bring us in video technology. The step from where video is now to where the predictions above begin is a tiny one. What other options will be envisioned, invented and implemented by 2020? Or are they already here? As early as 2015 we could be saying of DVDs, hard-wired equipment and uploading video, “That is so yesterday.”

 

SIDEBAR

Sports, Surveillance, Law Enforcement and the Military

Innovations in video technology often get their start in military applications, but professional sports, surveillance and law enforcement are never far behind. Google Glass has a potential game changer in play, one that could pull the Year 2020 video innovations forward into next season.
According to a CNET article, Oakland Raiders punter Chris Kluwe is a Google Glass Explorer, wearing the device during practice sessions. Could games be next? Writer Chris Matyszczyk thinks even linebackers will be giving armchair players a front-line view of all the intense action. It’s not award-winning video production yet. But soon?
In law enforcement, NBC News Technology’s Devin Coldewey reports on police officers around the country who wear video cameras along with their badges and guns, “digital eyes” that shield them from citizen complaints.
“Cop watch: Who benefits when law enforcement gets body cams?” is an interesting read on yet another video innovation. The popular technology news channel also features an article by Bill Briggs, who notes that summer blockbuster movies are featuring “gadget-draped” future fighters, and so is the military. Matt Damon in Elysium is the featured flicker of the future, according to Briggs, but a military technology developer in Florida is working on a “... gizmo-rich, body-shielding uniform.” Is video going to come with that? Of course.
And somewhere in all this is an ultra-expensive curved screen television that is going to be the viewing implement of the future, according to CBS News in an interview with CNET editor-at-large Brian Cooley. It’s $9,000 right now, but prices are sure to drop, someday.
 

Sunday 20 April 2014

The future techs will change us

Buckle up for the ride.
Just when you thought speech-enabled tablets were cool. In the future, your car won't just find a parking spot, it will know where you like to park. The dollar will be replaced by not just an encrypted currency like bitcoin, but by a currency of knowledge and social connection. And your home will become a digital, customizable expression of your tastes.
1. The currency of you
Tech pundits have predicted the age of encrypted currency for years. And it makes sense: you’ll purchase a piece of encrypted data like a bitcoin and store it in a protected digital vault. Bitcoins could eventually replace the digital (or paper) currency we all use.
'Robots will protect us, cultivate our raw food, and take care of our health -- and look after our parents.'
- Dmitry Grishin from Grishin Robotics
Yet, according to security expert Tal Klein at Bromium, the far-future trend will shift even further. Someday, your currency might be tied to your own identity.
“You will be worth what you know and can contribute,” he told FoxNews.com. “That will be measured on an open exchange that will remind you of your real-time worth. It will be like a mash-up of NASDAQ, bitcoin and LinkedIn.”
2. Robots everywhere!
Robots are already popping up everywhere -- iRobot’s Roomba vacuum cleaner, a swimming pool bot that doesn’t need any oxygen to go underwater, or that Audi A8 that brakes for you.
Dmitry Grishin from Grishin Robotics says robots in the future will be even more common than phones and tablets today: there will be small home bots for cooking, laundry, and decorating. But, they’ll be like the vacuum bots, not androids you see in movies. (Think: small and mobile enough to move around the whole house.)
“There will be robots in agriculture, defense and medicine,” Grishin said. “These robots will protect us, cultivate our raw food, and take care of our health -- and look after our parents.”
3. Cars with an “intent engine”
The term “intent engine” is a little hard to understand. After all, we have nothing quite like it today. Yet, the car of the future will know your intentions and predict what you like.
Nick Pudar, a vice president at OnStar, says the future car will keep track of your day, recording where you go and bookmarking interesting sights. For example, you might pass a restaurant and log (probably by voice) that you’d like to dine there someday. A few months later, the car might remind you of your intent. It might even direct you to the parking spot you usually like, say, by a shady oak tree.
“[The future car could offer] geolocation bookmarking for later serendipitous retrieval,” he told FoxNews.com. “It could track not just where I’ve been but also where I want to go.”
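Pudar’s “geolocation bookmarking for later serendipitous retrieval” can be sketched in a few lines: log a note against a position, then surface any notes near wherever the car happens to be. The class, method names and distance shortcut below are invented for illustration, not OnStar’s design.

```python
import math

class IntentLog:
    """Hypothetical in-car log of location-tagged intentions."""

    def __init__(self):
        self.bookmarks = []  # list of (lat, lon, note)

    def bookmark(self, lat, lon, note):
        self.bookmarks.append((lat, lon, note))

    def nearby(self, lat, lon, radius_km=1.0):
        """Return notes bookmarked within radius_km of the current position."""
        def dist_km(b_lat, b_lon):
            # Equirectangular approximation: accurate enough at city scale.
            x = math.radians(b_lon - lon) * math.cos(math.radians((b_lat + lat) / 2))
            y = math.radians(b_lat - lat)
            return 6371 * math.hypot(x, y)  # Earth's radius in km
        return [note for (b_lat, b_lon, note) in self.bookmarks
                if dist_km(b_lat, b_lon) <= radius_km]

log = IntentLog()
log.bookmark(42.33, -83.05, "Try that corner bistro someday")
log.bookmark(42.40, -83.10, "Shady oak parking spot")
print(log.nearby(42.331, -83.051))  # driving past the bistro again
```

The “serendipity” is just a proximity query: the car checks the log against its current position and resurfaces the matching intent.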
4. Direct brain interfaces
We might not all have bald heads and power cords stuck to our ears, but we could be wirelessly connected to computers at some point in the future -- much like the Borg on "Star Trek." (Hopefully, we won’t be as scary or bent on world conquest.)
Tom Furness, a University of Washington engineering professor and co-inventor of the Visualant ChromaID, a chemical scanner, told FoxNews.com that a direct brain interface will mean “typing” a document with our minds, thinking of a command and making it happen (“turn on sprinkler system”), and even imagining something and then printing it on a 3D printer.
Robots will be everywhere, of course -- and we’ll have them do our bidding without saying a word. “Computers will communicate with humans in the form of interactive robots that can serve as counselors, playmates and teachers,” he said.
5. The customizable home
The connected home of today already senses when you get home from work and can turn on the lights or raise the temperature to a desired level. In the future, much like how your car can predict what you want, your home will be more automated to meet your needs.
Jeremy Warren, the vice president of innovation at Vivint, a home automation and security company, says home customization will change in subtle but important ways. One example of this: a new form of paint might emit a soft glow and change during the day to match your mood or the weather conditions outside. He says new research will show how lighting affects us, and the home will respond in kind.
Display technology and security features will also evolve. We might not have a fixed camera on a wall or on a desk; the entire home might be able to show information. “There will be a paradigm shift to a display in the home that’s more flexible and does what you want -- say, a kitchen countertop that makes a recipe appear as soon as you look at it.”
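At its core, the preference-driven automation described above is a rule lookup: given who arrived and when, apply that resident’s stored settings. The sketch below shows the idea; the device names, preference fields and thresholds are invented, not Vivint’s system.

```python
# Minimal sketch of a preference-driven home controller.
# Resident profiles, field names and values are invented for illustration.

def on_arrival(resident, hour, prefs):
    """Return the settings the home would apply when a resident arrives."""
    p = prefs[resident]
    actions = {"thermostat_c": p["temp_c"]}       # always restore temperature
    if hour >= p["lights_after_hour"]:            # only light up in the evening
        actions["lights"] = p["evening_scene"]
    return actions

prefs = {"alice": {"temp_c": 21, "lights_after_hour": 18, "evening_scene": "warm-dim"}}
print(on_arrival("alice", 19, prefs))
```

Everything beyond this toy (mood-matching paint, gaze-triggered countertops) is the same pattern with richer sensors and richer rules.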

Saturday 12 April 2014

15 Hot New Technologies That Will Change Everything

The Next Big Thing? The memristor, a microscopic component that can "remember" electrical states even when powered off. It is expected to be far cheaper and faster than flash storage. A theoretical concept since 1971, it has now been built in labs and is already starting to revolutionize everything we know about computing, possibly making flash memory, RAM, and even hard drives obsolete within a decade.
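The "remembering" is not magic. In the linear-drift model HP Labs published for its device, resistance depends on the total charge that has ever flowed through the memristor, and that internal state simply persists when the current stops. The toy simulation below uses that model with arbitrary illustrative parameter values, not real device figures.

```python
# Toy linear-drift memristor model: resistance is a mix of a low-resistance
# (doped) and high-resistance (undoped) region, and applied current moves
# the boundary between them. Parameter values are arbitrary illustrations.
R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped vs fully undoped
w = 0.1                        # state variable: doped fraction, in [0, 1]
K = 100.0                      # drift coefficient (arbitrary units)

def step(current_a, dt_s):
    """Advance the internal state by one time step of applied current."""
    global w
    w = min(1.0, max(0.0, w + K * current_a * dt_s))  # charge moves the boundary
    return R_ON * w + R_OFF * (1.0 - w)               # effective resistance

r_before = step(1e-3, 1.0)  # drive some current: resistance drops
r_idle = step(0.0, 60.0)    # "power off" for a minute: state is unchanged
print(r_before, r_idle)
```

The second call applies zero current for a long interval and the resistance does not change: that persistence-without-power is exactly the property that makes the memristor a candidate to replace flash, RAM and hard drives.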
The memristor is just one of the incredible technological advances sending shock waves through the world of computing. Other innovations in the works are more down-to-earth, but they also carry watershed significance. From the technologies that finally make paperless offices a reality to those that deliver wireless power, these advances should make your humble PC a far different beast come the turn of the decade.
In the following sections, we outline the basics of 15 upcoming technologies, with predictions on what may come of them. Some are breathing down our necks; some advances are still just out of reach. And all have to be reckoned with.

Friday 14 March 2014

Myth of the ‘real-life Robocop’



Reports that the ultimate crime enforcer may be on our streets soon are largely news hype, says Quentin Cooper. We’re more likely to see Robosnoop, not Robocop.
In the new reboot he’s called the “future of American justice”. In the far superior 1987 original he’s the “future of law enforcement”*. But is Robocop the future of anything?
Both versions of the movie explore how the war against crime might be turned by a man-machine cyborg, programmed to “serve the public trust, protect the innocent, uphold the law”. Even in 1987 this idea of robotically enhanced policing wasn’t new, at least in fiction – I’m particularly fond of the late-1970s US sitcom Holmes & Yoyo, in which a cop with a habit of leaving his partners in hospital pairs up with an android specially programmed for police work. Since then other TV shows have embraced the premise, including Future Cop, Mann & Machine, and most recently the ongoing Fox series Almost Human, in which every cop in the year 2048 is paired with an android.
Given our fondness both for police dramas and for stories where humans work alongside humanoid machines (Data in Star Trek, David in AI, David in Prometheus, plus many others not brought to you by the letter D) it’s easy to see why television and movie executives keep going back to the same premise. And they’re not the only ones.
Will we ever see Robocops roaming our streets? (MGM/Columbia)
Go a-Googling and you’ll find many, many references to “real-life Robocops” and articles about how police forces and defence agencies are already following in his clanking metal footsteps. This is largely journalistic hyperbole. To the best of my knowledge there is no current research on melding man and circuitry to create cyborg cops. And no-one even has plans to put armed robots on the beat, primed to laser anyone caught littering. What is advancing at a breathtaking pace, though, is the increasing use of automation and autonomy in policing and surveillance. Less Robocop, more Robosnoop.
Several robotics companies already offer a range of “law enforcement machines” – non-humanoid devices often deployed for surveillance in dangerous situations such as getting up close with suspected bombs. That’s the robot as merely a tool, but there are plans to give machines a greater role in policing.
In December, California startup Knightscope unveiled the prototype of their K5 Autonomous Data Machine. An R2-D2 lookalike, it’s designed to combine sensory readings – not just sound and vision but touch and smell – with known social and financial data on its surroundings in order to “predict and prevent crime in your community”. Which puts it almost in the “pre-crime” territory of Spielberg’s Minority Report. If nothing else it’s five feet tall, so that should deter some potential wrong-doers.
Getting even closer to Robocop is the work going on at Florida International University, assessing the viability of hooking up disabled police officers (and soldiers) to “telebots”, so they can control them as they go on patrol.
Again, there’s a long way to go before this kind of technology is close to being deployed. But other advances are already on the street. Or – at least – looking down on the street from above. Although unmanned aircraft have been around for almost a century, it’s only since the original Robocop came out that we’ve become very familiar with the use of drones around the world. Some are purely for remote monitoring using cameras and sensors, others are heavily armed hunter-killers. The unsubtly named Reaper (more formally the MQ-9 Reaper from General Atomics) is already a veteran of numerous combat missions in Afghanistan, Iraq and beyond.
Drones being deployed in warzones and other hotspots are still a long way from the policing-by-machine depicted in Robocop. But wait. Following considerable pressure from the multi-billion-dollar Unmanned Aerial Systems industry, the Federal Aviation Administration (FAA) now has a Congressionally approved mandate to integrate civilian drones into American airspace, with the FAA itself estimating there could be “30,000 drones operating by 2020”.
While proponents have flagged up many positive uses – from being a cheaper, quieter alternative to police helicopters right down to them helping get packages and pizza delivered – there are numerous concerns about drone proliferation. Not just the obvious ones about privacy and civil rights, but also safety and security – Reapers and other drones already have a reputation for being accident prone, and there’s also the risk of them more deliberately going out of control through hacking.
If plans go ahead, US authorities estimate there could be 30,000 drones like the MQ-9 Reaper patrolling the skies by 2020 (Getty Images)
If there’s one thing science-fiction warns us about, it’s the potential for anything more sophisticated than a calculator to malfunction with homicidal consequences. So be very wary of computers and robots that are meant to protect us, especially if you’ve given them weaponry. From Skynet in Terminator to the Agents in The Matrix to the Cylons of the reimagined Battlestar Galactica, it’s always the same: smart becomes sentient, sentient becomes belligerent and the machines’ logical conclusion is to wipe out humanity. Or at least enslave us.
That doesn’t mean having ever more drones in our skies, or even other more advanced autonomous systems, will inevitably lead to the Robocalypse. It means that before it’s too late and our skies are full of flying eyes, we need to make decisions about what we stand to lose as well as gain from all this electronic eternal vigilance.
As the original Robocop says: “Your move, creep”. 
*Yes, in the original movie it is the ED-209 robot that is originally described as the “future of law enforcement”. But it was also the film’s tagline, and the trailer ended with “Robocop: the future of law enforcement”.

Better Than Human: Why Robots Will — And Must — Take Our Jobs




It’s hard to believe you’d have an economy at all if you gave pink slips to more than half the labor force. But that—in slow motion—is what the industrial revolution did to the workforce of the early 19th century. Two hundred years ago, 70 percent of American workers lived on the farm. Today automation has eliminated all but 1 percent of their jobs, replacing them (and their work animals) with machines. But the displaced workers did not sit idle. Instead, automation created hundreds of millions of jobs in entirely new fields. Those who once farmed were now manning the legions of factories that churned out farm equipment, cars, and other industrial products. Since then, wave upon wave of new occupations have arrived—appliance repairman, offset printer, food chemist, photographer, web designer—each building on previous automation. Today, the vast majority of us are doing jobs that no farmer from the 1800s could have imagined.
It may be hard to believe, but before the end of this century, 70 percent of today’s occupations will likewise be replaced by automation. Yes, dear reader, even you will have your job taken away by machines. In other words, robot replacement is just a matter of time. This upheaval is being led by a second wave of automation, one that is centered on artificial cognition, cheap sensors, machine learning, and distributed smarts. This deep automation will touch all jobs, from manual labor to knowledge work.
First, machines will consolidate their gains in already-automated industries. After robots finish replacing assembly line workers, they will replace the workers in warehouses. Speedy bots able to lift 150 pounds all day long will retrieve boxes, sort them, and load them onto trucks. Fruit and vegetable picking will continue to be robotized until no humans pick outside of specialty farms. Pharmacies will feature a single pill-dispensing robot in the back while the pharmacists focus on patient consulting. Next, the more dexterous chores of cleaning in offices and schools will be taken over by late-night robots, starting with easy-to-do floors and windows and eventually getting to toilets. The highway legs of long-haul trucking routes will be driven by robots embedded in truck cabs.
All the while, robots will continue their migration into white-collar work. We already have artificial intelligence in many of our machines; we just don’t call it that. Witness one piece of software by Narrative Science (profiled in issue 20.05) that can write newspaper stories about sports games directly from the games’ stats or generate a synopsis of a company’s stock performance each day from bits of text around the web. Any job dealing with reams of paperwork will be taken over by bots, including much of medicine. Even those areas of medicine not defined by paperwork, such as surgery, are becoming increasingly robotic. The rote tasks of any information-intensive job can be automated. It doesn’t matter if you are a doctor, lawyer, architect, reporter, or even programmer: The robot takeover will be epic.
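The stats-to-story idea can be illustrated with a toy template-filler. To be clear, this is only a sketch of the general technique, not Narrative Science's actual software; every name and threshold below is made up:

```python
# Toy sketch of stats-to-story generation: turn a box score into prose.
# This illustrates the template-filling idea only; it is NOT how
# Narrative Science's real system works.

def game_recap(stats):
    """Turn a dict of box-score stats into a one-sentence recap."""
    margin = stats["home_score"] - stats["away_score"]
    winner, loser = (
        (stats["home"], stats["away"]) if margin > 0
        else (stats["away"], stats["home"])
    )
    # Pick a verb whose intensity matches the margin of victory.
    verb = "edged" if abs(margin) <= 3 else "beat" if abs(margin) <= 10 else "routed"
    sentence = (
        f"{winner} {verb} {loser} "
        f"{max(stats['home_score'], stats['away_score'])}-"
        f"{min(stats['home_score'], stats['away_score'])}."
    )
    if "star" in stats:
        sentence += f" {stats['star']} led all scorers with {stats['star_points']} points."
    return sentence

print(game_recap({
    "home": "Bulls", "away": "Knicks",
    "home_score": 104, "away_score": 89,
    "star": "Smith", "star_points": 31,
}))
# → Bulls routed Knicks 104-89. Smith led all scorers with 31 points.
```

The real systems are far more sophisticated, but the core move is the same: structured data in, narrative text out, with no human writer in the loop.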
And it has already begun.
Here’s why we’re at the inflection point: Machines are acquiring smarts.
We have preconceptions about how an intelligent robot should look and act, and these can blind us to what is already happening around us. To demand that artificial intelligence be humanlike is the same flawed logic as demanding that artificial flying be birdlike, with flapping wings. Robots will think different. To see how far artificial intelligence has penetrated our lives, we need to shed the idea that they will be humanlike.
Consider Baxter, a revolutionary new workbot from Rethink Robotics. Designed by Rodney Brooks, the former MIT professor who invented the best-selling Roomba vacuum cleaner and its descendants, Baxter is an early example of a new class of industrial robots created to work alongside humans. Baxter does not look impressive. It’s got big strong arms and a flatscreen display like many industrial bots. And Baxter’s hands perform repetitive manual tasks, just as factory robots do. But it’s different in three significant ways.
First, it can look around and indicate where it is looking by shifting the cartoon eyes on its head. It can perceive humans working near it and avoid injuring them. And workers can see whether it sees them. Previous industrial robots couldn't do this, which means that working robots have had to be physically segregated from humans. The typical factory robot is imprisoned within a chain-link fence or caged in a glass case. They are simply too dangerous to be around, because they are oblivious to others. That isolation rules such robots out of a small shop, where fencing them off is not practical. Optimally, workers should be able to get materials to and from the robot or to tweak its controls by hand throughout the workday; isolation makes that difficult. Baxter, however, is aware. Using force-feedback technology to feel if it is colliding with a person or another bot, it is courteous. You can plug it into a wall socket in your garage and easily work right next to it.
Second, anyone can train Baxter. It is not as fast, strong, or precise as other industrial robots, but it is smarter. To train the bot you simply grab its arms and guide them in the correct motions and sequence. It’s a kind of “watch me do this” routine. Baxter learns the procedure and then repeats it. Any worker is capable of this show-and-tell; you don’t even have to be literate. Previous workbots required highly educated engineers and crack programmers to write thousands of lines of code (and then debug them) in order to instruct the robot in the simplest change of task. The code has to be loaded in batch mode, i.e., in large, infrequent batches, because the robot cannot be reprogrammed while it is being used. Turns out the real cost of the typical industrial robot is not its hardware but its operation. Industrial robots cost $100,000-plus to purchase but can require four times that amount over a lifespan to program, train, and maintain. The costs pile up until the average lifetime bill for an industrial robot is half a million dollars or more.
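The "grab its arms and guide them" routine is a form of programming by demonstration: record the arm's poses while a worker guides it, then replay them. Here is a minimal sketch of that record-and-replay loop; the class and method names are hypothetical, and a real robot SDK would look different:

```python
# Minimal sketch of "show me, then repeat" training, the programming-by-
# demonstration idea behind guiding Baxter's arms by hand. All names here
# are hypothetical; real robot SDKs differ.

class TeachableArm:
    def __init__(self):
        self.waypoints = []  # recorded joint-angle snapshots

    def record(self, joint_angles):
        """Called while a worker physically guides the arm: just store poses."""
        self.waypoints.append(list(joint_angles))

    def replay(self, move_to):
        """Repeat the demonstrated motion by driving through stored poses."""
        for pose in self.waypoints:
            move_to(pose)  # hand each pose to a motion controller

# A worker guides the arm through three poses...
arm = TeachableArm()
for pose in ([0, 30, 90], [10, 45, 80], [20, 60, 70]):
    arm.record(pose)

# ...and the arm repeats them, no engineer or batch-mode code required.
trace = []
arm.replay(trace.append)
print(trace)  # → the same three poses, in order
```

The contrast with the old workflow is the point: instead of thousands of lines of code loaded in batch mode, the "program" is just the recorded demonstration itself.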
The third difference, then, is that Baxter is cheap. Priced at $22,000, it’s in a different league compared with the $500,000 total bill of its predecessors. It is as if those established robots, with their batch-mode programming, are the mainframe computers of the robot world, and Baxter is the first PC robot. It is likely to be dismissed as a hobbyist toy, missing key features like sub-millimeter precision, and not serious enough. But as with the PC, and unlike the mainframe, the user can interact with it directly, immediately, without waiting for experts to mediate—and use it for nonserious, even frivolous things. It’s cheap enough that small-time manufacturers can afford one to package up their wares or custom paint their product or run their 3-D printing machine. Or you could staff up a factory that makes iPhones.
Photo: Peter Yang
Baxter was invented in a century-old brick building near the Charles River in Boston. In 1895 the building was a manufacturing marvel in the very center of the new manufacturing world. It even generated its own electricity. For a hundred years the factories inside its walls changed the world around us. Now the capabilities of Baxter and the approaching cascade of superior robot workers spur Brooks to speculate on how these robots will shift manufacturing in a disruption greater than the last revolution. Looking out his office window at the former industrial neighborhood, he says, “Right now we think of manufacturing as happening in China. But as manufacturing costs sink because of robots, the costs of transportation become a far greater factor than the cost of production. Nearby will be cheap. So we’ll get this network of locally franchised factories, where most things will be made within 5 miles of where they are needed.”
That may be true of making stuff, but a lot of jobs left in the world for humans are service jobs. I ask Brooks to walk with me through a local McDonald’s and point out the jobs that his kind of robots can replace. He demurs and suggests it might be 30 years before robots will cook for us. “In a fast food place you’re not doing the same task very long. You’re always changing things on the fly, so you need special solutions. We are not trying to sell a specific solution. We are building a general-purpose machine that other workers can set up themselves and work alongside.” And once we can cowork with robots right next to us, it’s inevitable that our tasks will bleed together, and soon our old work will become theirs—and our new work will become something we can hardly imagine.
To understand how robot replacement will happen, it's useful to break down our relationship with robots into four categories. The rows of the chart indicate whether robots will take over existing jobs or make new ones, and the columns indicate whether these jobs seem (at first) like jobs for humans or for machines:

  • A. Existing jobs that humans can do, but robots can do even better.
  • B. Existing jobs that humans can't do, but robots can.
  • C. New jobs created by automation, including jobs we did not know we wanted done.
  • D. New jobs that, at first, only humans can do.
Let’s begin with quadrant A: jobs humans can do but robots can do even better. Humans can weave cotton cloth with great effort, but automated looms make perfect cloth, by the mile, for a few cents. The only reason to buy handmade cloth today is because you want the imperfections humans introduce. We no longer value irregularities while traveling 70 miles per hour, though—so the fewer humans who touch our car as it is being made, the better.
And yet for more complicated chores, we still tend to believe computers and robots can’t be trusted. That’s why we’ve been slow to acknowledge how they’ve mastered some conceptual routines, in some cases even surpassing their mastery of physical routines. A computerized brain known as the autopilot can fly a 787 jet unaided, but irrationally we place human pilots in the cockpit to babysit the autopilot “just in case.” In the 1990s, computerized mortgage appraisals replaced human appraisers wholesale. Much tax preparation has gone to computers, as well as routine x-ray analysis and pretrial evidence-gathering—all once done by highly paid smart people. We’ve accepted utter reliability in robot manufacturing; soon we’ll accept it in robotic intelligence and service.
Next is quadrant B: jobs that humans can’t do but robots can. A trivial example: Humans have trouble making a single brass screw unassisted, but automation can produce a thousand exact ones per hour. Without automation, we could not make a single computer chip—a job that requires degrees of precision, control, and unwavering attention that our animal bodies don’t possess. Likewise no human, indeed no group of humans, no matter their education, can quickly search through all the web pages in the world to uncover the one page revealing the price of eggs in Katmandu yesterday. Every time you click on the search button you are employing a robot to do something we as a species are unable to do alone.
While the displacement of formerly human jobs gets all the headlines, the greatest benefits bestowed by robots and automation come from their occupation of jobs we are unable to do. We don’t have the attention span to inspect every square millimeter of every CAT scan looking for cancer cells. We don’t have the millisecond reflexes needed to inflate molten glass into the shape of a bottle. We don’t have an infallible memory to keep track of every pitch in Major League Baseball and calculate the probability of the next pitch in real time.
We aren’t giving “good jobs” to robots. Most of the time we are giving them jobs we could never do. Without them, these jobs would remain undone.
Now let’s consider quadrant C, the new jobs created by automation—including the jobs that we did not know we wanted done. This is the greatest genius of the robot takeover: With the assistance of robots and computerized intelligence, we already can do things we never imagined doing 150 years ago. We can remove a tumor in our gut through our navel, make a talking-picture video of our wedding, drive a cart on Mars, print a pattern on fabric that a friend mailed to us through the air. We are doing, and are sometimes paid for doing, a million new activities that would have dazzled and shocked the farmers of 1850. These new accomplishments are not merely chores that were difficult before. Rather they are dreams that are created chiefly by the capabilities of the machines that can do them. They are jobs the machines make up.
Before we invented automobiles, air-conditioning, flatscreen video displays, and animated cartoons, no one living in ancient Rome wished they could watch cartoons while riding to Athens in climate-controlled comfort. Two hundred years ago not a single citizen of Shanghai would have told you that they would buy a tiny slab that allowed them to talk to faraway friends before they would buy indoor plumbing. Crafty AIs embedded in first-person-shooter games have given millions of teenage boys the urge, the need, to become professional game designers—a dream that no boy in Victorian times ever had. In a very real way our inventions assign us our jobs. Each successful bit of automation generates new occupations—occupations we would not have fantasized about without the prompting of the automation.
To reiterate, the bulk of new tasks created by automation are tasks only other automation can handle. Now that we have search engines like Google, we set the servant upon a thousand new errands. Google, can you tell me where my phone is? Google, can you match the people suffering depression with the doctors selling pills? Google, can you predict when the next viral epidemic will erupt? Technology is indiscriminate this way, piling up possibilities and options for both humans and machines.
It is a safe bet that the highest-earning professions in the year 2050 will depend on automations and machines that have not been invented yet. That is, we can’t see these jobs from here, because we can’t yet see the machines and technologies that will make them possible. Robots create jobs that we did not even know we wanted done.
Finally, that leaves us with quadrant D, the jobs that only humans can do—at first. The one thing humans can do that robots can’t (at least for a long while) is to decide what it is that humans want to do. This is not a trivial trick; our desires are inspired by our previous inventions, making this a circular question.
When robots and automation do our most basic work, making it relatively easy for us to be fed, clothed, and sheltered, then we are free to ask, "What are humans for?" Industrialization did more than just extend the average human lifespan. It led a greater percentage of the population to decide that humans were meant to be ballerinas, full-time musicians, mathematicians, athletes, fashion designers, yoga masters, fan-fiction authors, and folks with one-of-a-kind titles on their business cards. With the help of our machines, we could take up these roles; but of course, over time, the machines will do these as well. We'll then be empowered to dream up yet more answers to the question "What should we do?" It will be many generations before a robot can answer that.
This postindustrial economy will keep expanding, even though most of the work is done by bots, because part of your task tomorrow will be to find, make, and complete new things to do, new things that will later become repetitive jobs for the robots. In the coming years robot-driven cars and trucks will become ubiquitous; this automation will spawn the new human occupation of trip optimizer, a person who tweaks the traffic system for optimal energy and time usage. Routine robo-surgery will necessitate the new skills of keeping machines sterile. When automatic self-tracking of all your activities becomes the normal thing to do, a new breed of professional analysts will arise to help you make sense of the data. And of course we will need a whole army of robot nannies, dedicated to keeping your personal bots up and running. Each of these new vocations will in turn be taken over by robots later.
The real revolution erupts when everyone has personal workbots, the descendants of Baxter, at their beck and call. Imagine you run a small organic farm. Your fleet of worker bots do all the weeding, pest control, and harvesting of produce, as directed by an overseer bot, embodied by a mesh of probes in the soil. One day your task might be to research which variety of heirloom tomato to plant; the next day it might be to update your custom labels. The bots perform everything else that can be measured.
Right now it seems unthinkable: We can’t imagine a bot that can assemble a stack of ingredients into a gift or manufacture spare parts for our lawn mower or fabricate materials for our new kitchen. We can’t imagine our nephews and nieces running a dozen workbots in their garage, churning out inverters for their friend’s electric-vehicle startup. We can’t imagine our children becoming appliance designers, making custom batches of liquid-nitrogen dessert machines to sell to the millionaires in China. But that’s what personal robot automation will enable.
Everyone will have access to a personal robot, but simply owning one will not guarantee success. Rather, success will go to those who innovate in the organization, optimization, and customization of the process of getting work done with bots and machines. Geographical clusters of production will matter, not for any differential in labor costs but because of the differential in human expertise. It’s human-robot symbiosis. Our human assignment will be to keep making jobs for robots—and that is a task that will never be finished. So we will always have at least that one “job.”
In the coming years our relationships with robots will become ever more complex. But already a recurring pattern is emerging. No matter what your current job or your salary, you will progress through these Seven Stages of Robot Replacement, again and again:
  • 1. A robot/computer cannot possibly do the tasks I do.
    [Later.]
  • 2. OK, it can do a lot of them, but it can’t do everything I do.
    [Later.]
  • 3. OK, it can do everything I do, except it needs me when it breaks down, which is often.
    [Later.]
  • 4. OK, it operates flawlessly on routine stuff, but I need to train it for new tasks.
    [Later.]
  • 5. OK, it can have my old boring job, because it’s obvious that was not a job that humans were meant to do.
    [Later.]
  • 6. Wow, now that robots are doing my old job, my new job is much more fun and pays more!
    [Later.]
  • 7. I am so glad a robot/computer cannot possibly do what I do now.
This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You'll be paid in the future based on how well you work with robots. Ninety percent of your coworkers will be unseen machines. Most of what you do will not be possible without them. And there will be a blurry line between what you do and what they do. You might no longer think of it as a job, at least at first, because anything that seems like drudgery will be done by robots.

We need to let robots take over. They will do jobs we have been doing, and do them much better than we can. They will do jobs we can't do at all. They will do jobs we never imagined even needed to be done. And they will help us discover new jobs for ourselves, new tasks that expand who we are. They will let us focus on becoming more human than we were.

Let the robots take the jobs, and let them help us dream up new work that matters.
