From “Seeing the Spectrum,” an article on autism, by Steven Shapin, in the 1/25/16 New Yorker:
There are obvious ways in which the history of autism can be seen as progressive: the quality of life for many people receiving a spectrum diagnosis has undoubtedly improved. Yet this same history has come under attack from proponents of so-called medicalization theory. This set of views, loosely linked to the work of Michel Foucault, criticizes the modern tendency to recategorize human behaviors as medical pathologies demanding expert diagnosis and care. For some writers and activists, medicalization is just a power grab, and its arch-villains are a greedy pharmaceutical industry and an arrogant psychiatric profession, which together have pushed pills for states of mind about which nothing can be done, or should be done, and which rightly belong to the realm of individual moral responsibility. The disease categories developed by modern psychiatry and psychology—such things as social anxiety disorder and mixed anxiety-depressive disorder—have been among the most popular targets for the critics of medicalization, as is autism.
This is roughly the view that Derek Summerfield and others in England have called “therapy culture.” I don’t intend to go into it here, but one account of how it pertains to post-traumatic stress disorder can be found in this Independent report from 2011.
Something I found mentioned in passing in “Air Head,” an article on the influence of aviation, by Nathan Heller, in the 2/1/16 New Yorker:
Today, the New Journalism is often misremembered as a formal innovation, a convergence of novelistic reporting and voice-driven subjectivity. But these narrative techniques had been in use for decades, in magazines such as this one; what made the New Journalism new was its vigor as a literary life-style movement, based largely on the idea that professional process—the getting there, the rips between the coasts—was part of the essential story, too.
Okay, I can buy that, but now I want to know what the proponents of the New Journalism thought they were doing at the time. My recollection is that all along they presented it pretty much as Heller says we “misremember” it. Come to think of it, I can also see some sense in a kind of converse proposition: that the 19th-century and early modernist practitioners of Realism in fiction had incorporated aspects of journalism into their work. Didn’t Balzac write some stories that we would nowadays call “ripped from the headlines”? I seem to recall one about a woman who had thrown herself off the Pont Neuf. A study of the interplay between journalism and fiction would be fun.
George Musser recently posted a fascinating article in Aeon called “Consciousness creep,” the gist of which is given by the dek (as we journalists call the story description beneath the headline): “Our machines could become self-aware without our knowing it. We need a better way to define and test for consciousness.” Here’s Musser’s conclusion:
Tackling those big problems is important.… Building a consciousness detector is not just an intellectually fascinating idea. It is morally urgent—not so much because of what these systems could do to us, but what we could do to them. Dumb robots are plenty dangerous already, so conscious ones needn’t pose a special threat. To the contrary, they are just as likely to put us to shame by displaying higher forms of morality. And for want of recognising what we have brought into the world, we could be guilty of what [University of Oxford philosopher Nick] Bostrom calls ‘mind crime’—the creation of sentient beings for virtual enslavement. In fact, [philosopher Eric Schwitzgebel, of the University of California at Riverside,] argues that we have greater responsibility to intelligent machines than to our fellow human beings, in the way that the parent bears a special responsibility to the child.
We are already encountering systems that act as if they were conscious. Our reaction to them depends on whether we think they really are, so tools such as Integrated Information Theory will be our ethical lamplights. [University of Wisconsin neuroscientist Giulio] Tononi says: ‘The majority of people these days would still say, “Oh, no, no, it’s just a machine”, but they have just the wrong notion of a machine. They are still stuck with cold things sitting on the table or doing clunky things. They are not yet prepared for a machine that can really fool you. When that happens—and it shows emotion in a way that makes you cry and quotes poetry and this and that—I think there will be a gigantic switch. Everybody is going to say, “For God’s sake, how can we turn that thing off?”’
As I wrote in a comment on that piece, we’ve already begun to encounter this situation, in a manner of speaking, through our arts and entertainment. When I asked a friend with whom I often discuss movies what he thought of A.I. Artificial Intelligence, the film about a synthetic boy who, as far as we can see, has desires and feelings just as we do, I found that my friend hadn’t cared about the fate of the character, because, he said, “It’s just a machine.”
Science fiction allows us to test ourselves ahead of time against scenarios that may arise in the future, and my friend’s reaction points to the difficulties ahead. I’m afraid we may have to do more than simply encounter a machine that “shows emotion…and quotes poetry and this and that” before people accept machine consciousness. We have enough trouble viewing other humans as like us.