Friday, March 29, 2013

Artificial Intelligence

 
Some artificial intelligence links. First up, this post by Kevin Drum seems to agree with the Singularitarians that AI is possible, it’s a game changer, and it will solve all our problems. He doesn’t seem to offer any evidence for this conclusion, though: “If AI is ubiquitous by 2040 or so, nearly every long-term problem we face right now—medical inflation, declining employment, Social Security financing, returns to education, global warming, etc. etc.—either goes away or is radically transformed in ways we can't even imagine.” Huh?

Stuart Staniford seems to agree that AI is possible, but not by 2040, and does not think it’s a miracle cure for our biggest problems. He concludes: “It follows pretty immediately that most of our environmental problems, for example, won't be going anywhere as a result of AI.” He concludes:
On the other hand, we are going to have to go through some massive wrenching cultural adjustments in our ideas of work and dependency and how we derive meaning from our lives.  Jamais Cascio recently coined the term the Burning Man Future, which I like.  In particular, it captures the idea that the entire culture is going to increasingly have to become like what is currently a hippy artist fringe.  Either that, or we need to decide that there are some things we really don't want to invent and stop working on this stuff.
Burning Man Future, I love it! Finally we have a shorthand term for what we need. And Ran has a couple of links arguing that corporations are a form of artificial intelligence. But the singularity may have happened longer ago than that – you could say that the  same could is true for civilization itself – new emergent properties occur because of large agglomerations of smaller units. One neuron in your brain isn’t much use, but billions are. The same with people. It could be said that all of this stuff we’ve created and are living in, from stone tools to cell phones are a creation not of people but of this artificial intelligence called “civilization.” Anyway – here is a quote:
It is pretty clear to anyone who’s paying attention that 1. a marketplace regime of firms dedicated to maximizing profit has—broadly speaking—added a lot of value to the world 2. there are a lot of important cases where corporate profit maximization causes harm to humans 3. corporations are—broadly speaking—really good at ensuring that their needs are met.

I don’t think that it’s all that far fetched to suggest that maybe they’re getting better and better at ensuring their needs are met. Pretty much the only thing that the left and right in America can agree on is that moneyed influence has corrupted American politics and yet neither side seems able to do much of anything about it.

What if the private pursuit of profit was—for a long time—proximate to improving the lot of humans but not identical to it? What if capitalism has gone feral, and started making moves that are obviously insane, but also inevitable?
http://mini.quietbabylon.com/post/44276219648/the-singularity-already-happened-we-got-corporations

And this post from Charlie Stross:
We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden. Individual atomized humans are thus either co-opted by these entities (you can live very nicely as a CEO or a politician, as long as you don't bite the feeding hand) or steamrollered if they try to resist.
In short, we are living in the aftermath of an alien invasion.


http://www.antipope.org/charlie/blog-static/2010/12/invaders-from-mars

Civilization itself seems to be much the same way – overall human happiness and well-being is sacrificed to the needs of top-down pyramidal civilization. And individual resistance is quickly stamped out. It's interesting to note that even in this advanced stage of collapse, corporate profits are at an all-time high. We're all just grist for the mill, or food shovelled into the maw of the beast.

And this article on Aeon magazine by Ross Anderson is too awesome to summarize well, but concerns cutting-edge philosophers pondering either human extinction or spreading out among the stars, and not much in between. These philosophers are thinking in cosmic time scales of billions of years, long enough for an asteroid to hit the earth or the sun to go supernova. Nuclear winter, bioweapons, it’s all there. And as for Peak Oil: 'There is a concern that civilisations might need a certain amount of easily accessible energy to ramp up,’ Bostrom told me. ‘By racing through Earth’s hydrocarbons, we might be depleting our planet’s civilization startup-kit. But, even if it took us 100,000 years to bounce back, that would be a brief pause on cosmic time scales.’”  But AI figures in their hypothesis:
An artificial intelligence wouldn’t need to better the brain by much to be risky. After all, small leaps in intelligence sometimes have extraordinary effects. Stuart Armstrong, a research fellow at the Future of Humanity Institute, once illustrated this phenomenon to me with a pithy take on recent primate evolution. ‘The difference in intelligence between humans and chimpanzees is tiny,’ he said. ‘But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it’s possible for a relatively small intelligence advantage to quickly compound and become decisive.’

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.

‘The basic problem is that the strong realisation of most motivations is incompatible with human existence,’ Dewey told me. ‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’

It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans — in prisons of undreamt of efficiency.
Omens (Aeon Magazine)

Indeed, if the civilization/corporate hypothesis is correct, then this “AI” is already tearing the world apart and hurling towards extinction even as we speak.

2 comments:

  1. What gets me about AI discussions is that while we still don't even have a succinct definition of "intelligence" we think we are going to somehow recreate it in a computer. Not only that but that the thing we create is somehow based on the relative power of it's hardware going to be orders of magnitude more "intelligent" than we are. Why do we think this? AI is just the latest incarnation of the religious impulse, the desperate yearning for some "higher power" to come save us because we damn sure aren't as a group "intelligent" enough to save ourselves.

    ReplyDelete
  2. I don't agree that much with bob. I think that you can reacreate human head with software. I've been playing for a while with this and i've done a couple of experiments. The latest one has been my night project for this week, its an artificial intelligence made for writting stuff. You can take a look to its outputs in http://loquedicemipc.blogspot.com

    ReplyDelete

Note: Only a member of this blog may post a comment.