A prize-nominated portrait of an android, discussed in a recent New York Times article here.
It isn’t necessarily the job of science fiction writers to predict the future, any more than it’s necessarily the job of other fiction writers to describe the present or reconstruct the past. I don’t know about you, but I can’t tell you what I’m going to wear to work tomorrow, much less what anyone else will have on, and if I were a science fiction writer it’d be no different. As a second-look review of Isaac Asimov’s SF novel Foundation recently pointed out, that book, premised on the possibility of predicting the future, failed to anticipate changes in the social roles of women that were just around the corner. But then Asimov wasn’t trying to see the future; he was just playing the old “what if” game.

That baseball-playing font of wisdom Yogi Berra hit something home when he remarked, “It’s tough to make predictions, especially about the future.” What’s more, we can’t always say after the fact whether somebody got it right. Novelist William Gibson is sometimes credited with foreseeing the Internet in Neuromancer, published in 1984, but networked computers already existed when he wrote that book, and no one yet experiences the Internet as the enveloping virtual space he termed a “consensual hallucination.” In some cases we can’t even be sure about the past: about 10 years ago, no one knew whether that line about predictions should be traced to Berra or to Niels Bohr. (Maybe that’s been settled by now; to adapt another supposed Berra-ism, the past ain’t what it used to be, any more than the future is.)
Still, it can be fun to try predicting the future. Continue reading
In The Comic Book Story of Video Games, due out in a few weeks, the author and the artist present a history of video games that’s knowledgeable and wide-ranging but somewhat eccentric. Initially, Jonathan Hennessey focuses equally on “electronic games and electronic screen displays,” but much of the book covers the highways and byways of computer history, in which he finds that computers, which were “intended only for military, scientific, government, and industry use,” were soon used for games as well: a tennis game, a mouse-in-a-maze game, a billiards game, even a clever text-based game, Colossal Cave Adventure, which put nothing but words on the screen. Much of this will be familiar to anyone who already knows the story of computers, but it’s presented in a rather colorful way. Continue reading
On January 28, 2011, just days after the first protest gathering, a man on Cairo’s Talaat Harb Street, near Tahrir Square, tries to pick up and throw back a tear-gas canister. (Photo by Alisdare Hickson. Original image here. Licensed under CC BY-SA 2.0.)
For the better part of a decade, we’ve been watching protest movements arise around the world and wondering what role was played by Twitter, Facebook, and the like. Did Facebook bring down the Egyptian government in 2011? How did the Tea Party movement in the United States elect sympathetic legislators while the Occupy Wall Street movement did not? Did Chinese government censorship of online platforms thwart the democracy activists in Hong Kong in 2014? Was it their methods or the activists themselves that succeeded in some cases and not in others? In Twitter and Tear Gas: The Power and Fragility of Networked Protest, Zeynep Tufekci, a sociologist who has been studying and often participating in digitally networked movements since the late 1990s, discusses the new technologies, how protest movements use them, and how governments and opposing groups both use and counter them.
A screenshot from a Slack demo.
Do you Slack? I didn’t use to, but I do now. And I’m pretty sure I’m getting more done and having more fun because of it. I like Slack. (So does the Church of the SubGenius, but that’s different.) Slack is spreading. If you don’t know about Slack but you use computers and work with more than a handful of people, you probably should know about it. Continue reading
How the news was made: Copies of the Times emerge from a cutting and folding machine, September 1942. (Photo by Marjory Collins)
When the Internet was young(er), publications, like other businesses, began establishing outposts there. This now seems like something of a recap of the original frontier experience: the Internet was fresh ground, unexplored territory, ripe for shaping, settling, colonizing, conquering. It may be going too far to say the whole thing exemplifies Frederick Jackson Turner’s Frontier Thesis (which in any case is still contested), but the expansion into the online realm has certainly been critical for periodicals.
Curiously, while most other businesses went to the Internet to sell, periodicals didn’t. Continue reading
The Amazon Echo, which came out in 2015, is a smart speaker that responds to voice input. Amazon has just released a new model, the Echo Look, which not only includes the Alexa voice-response system but also has a camera, so it can both listen to you and look at you. It’s designed to sit in your bedroom and serve as some kind of fashion aide. Here’s how Jessi Hempel of Backchannel described it at the start of a short discussion: “Speak to the white oblong assistant, and it will take selfies of your outfits and let you consult style experts to improve them.”
A good candidate for the person I’m most tired of hearing about lately: Elon Musk, who was described yesterday by technology writer Steven Levy, in a remark that may be half tongue-in-cheek or may be entirely serious, as “our current Visionary In Chief.” (That phrase appeared here.)
In what sense is Musk a visionary? Continue reading