Michael Shermer - The Believing Brain

June 6, 2011

Host: Chris Mooney

Our guest this week is Michael Shermer, the publisher of Skeptic magazine and head of the Skeptics Society, and a longtime commentator on issues relating to science, critical thinking, and the paranormal.

Chris asked Michael on to discuss his new book, which is entitled The Believing Brain: From Ghosts and Gods to Politics and Conspiracies, How We Construct Beliefs and Reinforce Them As Truths.

Clearly, much of what Shermer has to say here will be of great relevance to skeptics and freethinkers—and along the way, Shermer also discusses his views on global warming (real, but not such a big deal) and how to promote evolution in a religious America.

In addition to publishing Skeptic, Michael Shermer is a monthly columnist for Scientific American, the host of the Skeptics Distinguished Science Lecture Series at Caltech, and Adjunct Professor at Claremont Graduate University. His other books include Why People Believe Weird Things and Why Darwin Matters.


Comments from the CFI Forums

If you would like to leave a comment about this episode of Point of Inquiry please visit the related thread on the CFI discussion forums

Interesting bit about global warming that I wanted to respond to, even though I’m not done listening yet. Even though he believes in it, Shermer doesn’t seem worried about it. I’ve kind of been feeling the same way, which may explain why I don’t really bother to debate AGW skeptics. Since the consequences of global warming are on the timescale of decades the effects of technological advance must be taken into account, and if Ray Kurzweil is even half-right in his predictions we should expect the pace of technological evolution to continue to accelerate, and considering the new focus on green technology, I feel we as a species are likely to come up with technological solutions before the consequences of global warming become too great. Maybe Shermer was not thinking along these lines, but I think many people, scientists and politicians included, don’t recognize the acceleration of the rate of technological advance and may not be taking it into account when they raise their alarm bells.

Posted on Jun 06, 2011 at 4:31pm by domokato Comment #1

Placing your bets on Ray Kurzweil being right is a sure-fire way to lose.

Posted on Jun 06, 2011 at 5:29pm by DarronS Comment #2

If we reach human-level artificial intelligence within the century (Kurzweil predicts 2045), it could be in time to have an impact on the consequences of global warming. But of course, lesser technologies will also have an effect.

History has shown Kurzweil’s predictions to be on the optimistic side (in terms of time horizon), but not by a large amount. The singularity-as-rapture stuff I don’t really buy, but I don’t see how one can rationally deny that machines are getting smarter at an accelerating pace.

Technological advance as a whole does seem to be accelerating for whatever reason, be it new technologies creating more innovation or population growth creating more inventors, or a mixture of the two. I agree we should take steps towards addressing global warming now, but I also think the future will provide new solutions. Look at the Human Genome Project: they thought it would take much longer than it did, but thanks to technological progress it was completed way ahead of schedule.

Posted on Jun 06, 2011 at 6:00pm by domokato Comment #3

Couple of tweets of mine (@toxicpath):

.@ChrisMooney_ @michaelshermer you can fix medicare or medicaid with US legislation. There is no current mechanism to deal with A Global W.

.@ChrisMooney_ @michaelshermer and what’s with discounting risk in AGW but “becoming a Darwin award” after a rustle in the grass!

Posted on Jun 06, 2011 at 6:07pm by Somite Comment #4

History has shown Kurzweil’s predictions to be on the optimistic side (in terms of time horizon), but not by a large amount. The singularity-as-rapture stuff I don’t really buy, but I don’t see how one can rationally deny that machines are getting smarter at an accelerating pace.

How are machines getting smarter? Faster, yes, but smarter is more nebulous. Sure, computers can beat humans at chess and Jeopardy ®, but those tasks require processing power, not intelligence. I’ll believe computers are smarter than humans when one spits out a Unified Field Theory.

As for Kurzweil, his correct predictions were stuff many others also predicted. I’m not impressed that he claims to have foreseen portable computing devices. I foresaw those too, as did the people in the computer industry who were designing the devices.

Posted on Jun 07, 2011 at 6:28am by DarronS Comment #5

I don’t know that there’s a principled distinction to be made between processing power and intelligence, at least not when processing power is being put to tasks that involve learning, memory and reason. What I would say is that it’s clear increasing processing power is a relatively easy task to accomplish, when compared with increasing machine intelligence.

I’d agree with domokato that machines are getting smarter (they are able to perform more complex reasoning tasks), and that the pace of increase may be accelerating. But that’s not to say Kurzweil’s predictions are at all sensible.

Posted on Jun 07, 2011 at 6:51am by dougsmith Comment #6

Since the consequences of global warming are on the timescale of decades the effects of technological advance must be taken into account, and if Ray Kurzweil is even half-right in his predictions we should expect the pace of technological evolution to continue to accelerate, and considering the new focus on green technology, I feel we as a species are likely to come up with technological solutions before the consequences of global warming become too great.

AGW is accelerating too. It would go on even if we stopped producing CO2 now. But instead we are only reducing the acceleration of CO2 production. You want to bet, with our species at stake, on which is faster? Technological evolution or AGW?

Posted on Jun 07, 2011 at 6:59am by GdB Comment #7

I don’t know that there’s a principled distinction to be made between processing power and intelligence, at least not when processing power is being put to tasks that involve learning, memory and reason. What I would say is that it’s clear increasing processing power is a relatively easy task to accomplish, when compared with increasing machine intelligence.

I don’t understand this. First you say that processing power and intelligence are not that different, with which I agree, but then you go on to say that one is easier to accomplish than the other. So are they different or not?

Posted on Jun 07, 2011 at 8:10am by George Comment #8

Placing your bets on Ray Kurzweil being right is a sure-fire way to lose.

I would say at least a 95% probability of losing.

People are adjusting to the climate change issue and since catastrophe is not right in their faces they can convince themselves that it will be OK.

Any sufficiently advanced technology is indistinguishable from magic.

It will solve all problems.  :lol:

psik

Posted on Jun 07, 2011 at 9:03am by psikeyhackr Comment #9

How are machines getting smarter? Faster, yes, but smarter is more nebulous.

I believe computers are getting smarter. Photoshop from ten years ago had only one undo, now that number is much higher. The illusion that one may “decide” to come up with a Unified Field Theory doesn’t really differ from Photoshop “being told” by a programmer to allow for multiple undos. I know it smells like another Free Will topic, but that’s the way things usually turn out when it comes to these kinds of topics.

And if you don’t like calling today’s Photoshop smarter than Photoshop from ten years ago it may also be a mistake to call a person who may figure out a Unified Field Theory smarter than a burger flipper at McDonald’s. A burger flipper might not have the necessary brain connections to figure out a Unified Field Theory just like Photoshop from ten years ago didn’t have the necessary connections to perform more than one undo.

Posted on Jun 07, 2011 at 9:16am by George Comment #10

I don’t know that there’s a principled distinction to be made between processing power and intelligence, at least not when processing power is being put to tasks that involve learning, memory and reason. What I would say is that it’s clear increasing processing power is a relatively easy task to accomplish, when compared with increasing machine intelligence.

I don’t understand this. First you say that processing power and intelligence are not that different, with which I agree, but then you go on to say that one is easier to accomplish than the other. So are they different or not?

Well, it’s like stones and a house. A house is made of stones, but a house is more than just a pile of stones. Intelligence is made of processing power but intelligence is more than just raw processing power. It’s processing power organized in a certain way.

Posted on Jun 07, 2011 at 9:34am by dougsmith Comment #11

So I take it that in your opinion there is a principled distinction to be made between processing power and intelligence (?).

Posted on Jun 07, 2011 at 9:39am by George Comment #12

So I take it that in your opinion there is a principled distinction to be made between processing power and intelligence (?).

There isn’t “when processing power is being put to tasks that involve learning, memory and reason”, as I said before. Raw processing power isn’t the same as intelligence. Processing power used to learn, remember and reason is intelligence.

Posted on Jun 07, 2011 at 9:58am by dougsmith Comment #13

How are machines getting smarter? Faster, yes, but smarter is more nebulous.

I believe computers are getting smarter. Photoshop from ten years ago had only one undo, now that number is much higher.

And if you don’t like calling today’s Photoshop smarter than Photoshop from ten years ago it may also be a mistake to call a person who may figure out a Unified Field Theory smarter than a burger flipper at McDonald’s. A burger flipper might not have the necessary brain connections to figure out a Unified Field Theory just like Photoshop from ten years ago didn’t have the necessary connections to perform more than one undo.

Multiple undos could be 90% the same code as single undos, just with recursion.  That could be a matter of hard disk drive size.  A 500 gigabyte drive today is the same price as a 20 gig drive back then, and 500 gig drives did not exist then as far as I know.
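To make that concrete, here is a minimal sketch (a hypothetical Editor class, nothing like Photoshop’s actual code) showing that multi-level undo is essentially single undo plus a history stack: the code barely gets “smarter”, it just uses more memory.

```python
# Minimal sketch: multi-level undo is single undo plus a history stack.
# The logic barely changes between 1 undo and 100; only memory use grows.

class Editor:
    def __init__(self, max_undos=1):
        self.state = ""
        self.history = []           # stack of previous states
        self.max_undos = max_undos  # 1 ~ old Photoshop, higher ~ newer

    def edit(self, new_state):
        self.history.append(self.state)
        if len(self.history) > self.max_undos:
            self.history.pop(0)     # discard the oldest state when full
        self.state = new_state

    def undo(self):
        if self.history:
            self.state = self.history.pop()

e = Editor(max_undos=100)
for i in range(3):
    e.edit(f"version {i}")
e.undo()
print(e.state)  # "version 1"
```

The only difference between a one-undo and a hundred-undo editor here is the `max_undos` capacity, i.e. storage, not cleverness.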

I would just say faster and with more capacity, not smarter, and most comparisons of computers to the human brain/mind are nonsense.  von Neumann machines manipulate symbols in the form of bit combinations.  They do not comprehend what the bit combinations mean.

Can you load a picture of a herd of cows into Photoshop and have the computer explain what the picture is?  Do you expect a 5-year-old kid to be able to do that?

psik

Posted on Jun 07, 2011 at 10:09am by psikeyhackr Comment #14

So I take it that in your opinion there is a principled distinction to be made between processing power and intelligence (?).

There isn’t “when processing power is being put to tasks that involve learning, memory and reason”, as I said before. Raw processing power isn’t the same as intelligence. Processing power used to learn, remember and reason is intelligence.

Are you saying that computers are not intelligent (or not easily made to be intelligent) because they don’t (easily) learn, remember and reason?

Posted on Jun 07, 2011 at 10:22am by George Comment #15

They do not comprehend what the bit combinations mean.

Can you load a picture of a herd of cows into Photoshop and have the computer explain what the picture is?

Perhaps not Photoshop CS5, but who knows, maybe Photoshop CS50 will be able to accomplish this. The fact that it might not comprehend what it is doing is irrelevant. But perhaps when the programmers design Photoshop CS50 (with the ability to recognize a picture of a herd of cows) and turn it on for the first time, we may find (to our surprise and to the computer’s) that the computer now acquires consciousness—or maybe it will happen with CS19 or CS2000. I still believe that consciousness (that is, being aware) is only a byproduct of a very complex calculating machine.

Posted on Jun 07, 2011 at 10:29am by George Comment #16

Processing power is like the size of an office building; the more you have, the more you can do with it.

Intelligence/smartness is like a business model; the better it is, the more effective the company will be.

Both are advancing. More processing power is allowing smarter applications to run on smaller devices. I know many applications are running up against the processing power limits of the devices they’re running on (i.e. the intelligence is ahead of the processing power needed to run it). As processing power increases (per unit of size of the device), more intelligent applications can be made immediately: Photoshop can handle more undos; speech-to-text and text-to-speech gets closer to real-time (conversation-time); websites load faster. Artificial intelligence is an emergent phenomenon. The pieces will come together slowly.

Another contributor to AI is science (theory): cognitive science, neuroscience, and even evolutionary biology. As these fields progress, theories of intelligence and learning will progress, and thus so will their application in AI.

Our brain is a big parallel processing machine. It has evolved very efficient “algorithms” to do what it does (vision processing, sound processing, etc.). Kurzweil predicts that by 2019 a $1000 desktop computer will have as much processing power as a human brain. Whether this is true or not, our algorithms (machine vision, natural language processing, etc.) in 2019 will probably not be as efficient as the human brain’s, so it may take some extra theory to get to human-level intelligence. Although, evolutionary algorithms may be able to take up some of the slack.

/speculation

Posted on Jun 07, 2011 at 10:30am by domokato Comment #17

von Neumann machines manipulate symbols in the form of bit combinations.  They do not comprehend what the bit combinations mean.

Neurons in our brains manipulate ones and zeroes too (firing vs not firing), and they also do not individually comprehend anything, yet as a whole they form a mind.

Can you load a picture of a herd of cows into Photoshop and have the computer explain what the picture is?  Do you expect a 5-year-old kid to be able to do that?

Can a 5 year old kid keep track of 100 undos in excruciating detail? Can he draw a photo-realistic lens flare?

Computers are great at some things, humans are great at others. It’s a mistake to think that human intelligence is the only kind.

Posted on Jun 07, 2011 at 10:32am by domokato Comment #18

So I take it that in your opinion there is a principled distinction to be made between processing power and intelligence (?).

There isn’t “when processing power is being put to tasks that involve learning, memory and reason”, as I said before. Raw processing power isn’t the same as intelligence. Processing power used to learn, remember and reason is intelligence.

Are you saying that computers are not intelligent (or not easily made to be intelligent) because they don’t (easily) learn, remember and reason?

Well, what I’m saying is that it’s difficult to make computers intelligent (as opposed to giving them raw processing power) because it’s difficult to design systems that learn, remember and reason. Raw processing power is a comparatively easy problem to solve.

Posted on Jun 07, 2011 at 10:41am by dougsmith Comment #19

Neurons in our brains manipulate ones and zeroes too (firing vs not firing), ...

It’s true that neurons can be modeled as either firing or not firing, but seen on the micro-level neurons behave more like small factories than small switches. They can behave in quite complex ways and so are poorly modeled by semiconductors.

Posted on Jun 07, 2011 at 10:44am by dougsmith Comment #20

Yes, I know, but that doesn’t invalidate my overall point that they do not individually comprehend anything yet they form a mind together. Plus neurons can be simulated in computers (at least as far as our understanding of them goes), so even if our theory of intelligence/learning does not progress while our processing power does, eventually we will be able to simulate a whole brain and reach human-level AI that way.

Posted on Jun 07, 2011 at 10:49am by domokato Comment #21

Well, what I’m saying is that it’s difficult to make computers intelligent (as opposed to giving them raw processing power) because it’s difficult to design systems that learn, remember and reason. Raw processing power is a comparatively easy problem to solve.

Is Photoshop’s undo a raw processing operation or a matter of remembering? I know a computer doesn’t remember the same way we do (how could it?), but in my opinion it remembers nevertheless.

Posted on Jun 07, 2011 at 11:01am by George Comment #22

Yes, I know, but that doesn’t invalidate my overall point that they do not individually comprehend anything yet they form a mind together. Plus neurons can be simulated in computers (at least as far as our understanding of them goes), so even if our theory of intelligence/learning does not progress while our processing power does, eventually we will be able to simulate a whole brain and reach human-level AI that way.

Right, although I’m not sure we’ll do it by simulating a brain. It could be that we’ll just end up designing something that acts the way a person does while what goes on inside is quite different.

Posted on Jun 07, 2011 at 11:05am by dougsmith Comment #23

Well, what I’m saying is that it’s difficult to make computers intelligent (as opposed to giving them raw processing power) because it’s difficult to design systems that learn, remember and reason. Raw processing power is a comparatively easy problem to solve.

Is Photoshop’s undo a raw processing operation or a matter of remembering? I know a computer doesn’t remember the same way we do (how could it?), but in my opinion it remembers nevertheless.

It’s trivial for a computer to remember. The problem is to remember selectively those things that are important to the task, and make use of them only, while discarding the useless stuff. In other words, the problem is using memory to learn.

Posted on Jun 07, 2011 at 11:07am by dougsmith Comment #24

domokato, I probably shouldn’t have started my last post by saying “right”. I think it’s still very much an empirical question as to whether any feasibly constructable computing device could really simulate a human brain. It might be, for instance, that the chemical complexity and chaotic interactions between sub-neural elements are so vast that it is impossible to simulate one adequately. (We might have to know the initial conditions too precisely, or there might be too many interacting chemical elements to compute in a feasible amount of time).

But that doesn’t imply that we couldn’t construct something just as intelligent, or more intelligent, by doing something more like classic AI.

These are all empirical questions and I don’t think we can know the outcome beforehand.

Posted on Jun 07, 2011 at 11:13am by dougsmith Comment #25

I’m not sure our simulation needs to be that accurate to get the desired effect (learning - current artificial neural networks are capable of some degree of it), but you’re right, we can’t know until we know.
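For what it’s worth, the degree of learning I mean can be demonstrated at toy scale. Here is a single simulated neuron (a classic perceptron) learning the AND function from examples; the data and setup are purely illustrative, not a model of any real brain:

```python
# A single simulated "neuron" (perceptron) learning logical AND from
# examples: a toy demonstration that networks of simple fire/don't-fire
# units can learn, even though no unit "comprehends" anything.

def step(x):
    return 1 if x > 0 else 0

# training data: (inputs, expected output) for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # synapse-like weights
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # repeat over the examples until the weights settle
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        w[0] += lr * err * x1   # nudge weights toward correct answers
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
# [0, 0, 0, 1]
```

Nothing in the loop was told what AND means; the behavior emerges from error-driven weight adjustment, which is the sense of “learning” at issue here.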

Posted on Jun 07, 2011 at 11:19am by domokato Comment #26

Well, what I’m saying is that it’s difficult to make computers intelligent (as opposed to giving them raw processing power) because it’s difficult to design systems that learn, remember and reason. Raw processing power is a comparatively easy problem to solve.

Is Photoshop’s undo a raw processing operation or a matter of remembering? I know a computer doesn’t remember the same way we do (how could it?), but in my opinion it remembers nevertheless.

It’s trivial for a computer to remember. The problem is to remember selectively those things that are important to the task, and make use of them only, while discarding the useless stuff. In other words, the problem is using memory to learn.

Again, the difference here is that Photoshop is not designed to use memory to learn the way we are, but that is only a minor difference between us and not enough of a reason, IMO, to refer to Photoshop’s task of remembering as not intelligent. Chess computers use memory to learn.

Posted on Jun 07, 2011 at 12:08pm by George Comment #27

von Neumann machines manipulate symbols in the form of bit combinations.  They do not comprehend what the bit combinations mean.

Neurons in our brains manipulate ones and zeroes too (firing vs not firing), and they also do not individually comprehend anything, yet as a whole they form a mind.

Our brains do not funnel bits into a central processor creating what is called a von Neumann bottleneck.

You can try comparing a neuron to a flip-flop all you want and assume you can extrapolate from there.  But so far we are just making smaller, faster, and more multi-connected von Neumann machines and approaching the point where the software required to coordinate the processors is choking off the power of the processors.

Have you ever written an assembly language program?

But a believing brain is one accepting something as true or false without understanding it, so it is behaving like a stupid computer.

psik

Posted on Jun 07, 2011 at 1:25pm by psikeyhackr Comment #28

von Neumann machines manipulate symbols in the form of bit combinations.  They do not comprehend what the bit combinations mean.

Neurons in our brains manipulate ones and zeroes too (firing vs not firing), and they also do not individually comprehend anything, yet as a whole they form a mind.

Our brains do not funnel bits into a central processor creating what is called a von Neumann bottleneck.

You can try comparing a neuron to a flip-flop all you want and assume you can extrapolate from there.  But so far we are just making smaller, faster, and more multi-connected von Neumann machines and approaching the point where the software required to coordinate the processors is choking off the power of the processors.

But what matters is functional equivalence. If you can accurately simulate a brain in a computer then it’s functionally the same thing as a brain. A modern CPU can perform more calculations per second than a neuron, which is what makes this kind of simulation possible, or at least possible in the near future.

Have you ever written an assembly language program?

Yes, I have a BS in computer science.

Posted on Jun 07, 2011 at 1:55pm by domokato Comment #29

If you can accurately simulate a brain in a computer then it’s functionally the same thing as a brain.

Tests have been done on people where electrical stimulation was applied to a person’s brain, causing the subject to have some specific recollection.  But how can stimulating the same place in another person’s brain bring up the same memory if they did not have the same experience?

Everybody’s brain would be wired somewhat differently, so what kind of simulation can be done, and how can the simulation be tested?

Where is the CPU in anybody’s brain?  :lol:

The metaphor that people have been insisting on making between computers and human brains for the last 60 years has been nonsense.  The problem is that computers manipulate symbols according to whatever program but understand nothing about what the symbols mean.  We do not know how our understanding corresponds to the signals moving in the brain.  Where is the memory of anybody’s high school graduation stored in their brain?

psik

Posted on Jun 07, 2011 at 5:14pm by psikeyhackr Comment #30

I felt that there was a bit of naivete, disinterest or cognitive dissonance regarding Mr. Shermer’s “who, me worried?” stance on GW.  If he thinks we need to defer to the experts, then I don’t understand why we shouldn’t be worried about the latest likely predictions of 3-4 m of sea level rise by 2100…and the already apparent destabilization of climate and wacky weather. 

I completely understand not wanting to be alarmist, but for someone who doesn’t deny AGW, I found the wait-and-see attitude completely in line with the worst of the conservatives.  We’ve waited long enough.  The magical market is unlikely to correct this issue by itself - and where was that magical market in the collapse of 2008?  And my last vent: the longer we wait, the more this is going to cost and the more the government will need to impose controls.  In other words, waiting longer means less of a libertarian system.

Anybody else feel that, instead of this dismissive attitude, a more solid “yes, we need to be concerned, but let’s not overreact” attitude would have been more welcome, expected and truly skeptical?

Posted on Jun 07, 2011 at 7:36pm by mtnmann Comment #31

Just because the market is free does not mean it is not STUPID.

But one aspect that the free market advocates don’t advertise is that the SMART people in whatever market are hiding information from the DUMB people in the market.  Never give a sucker an even break.

So with information hiding that means the majority of people make mistakes the majority of the time.  So what do you expect from a free market with 6 billion people making mistakes most of the time?

How is it that double-entry accounting can be 700 years old and our so called educators TALK about preparing children to compete with kids in other countries in the future but never suggest that all of these kids know accounting?  They are supposed to be used by the free market not know enough to make the free market work for them.

So what effect does planned obsolescence have on CO2 production?  Who cares, it’s economic GROWTH.

psik

Posted on Jun 07, 2011 at 9:05pm by psikeyhackr Comment #32

I’ve blogged about my climate exchange with Shermer and why I am not satisfied with his responses, and would push him farther.

http://www.desmogblog.com/debating-michael-shermer-and-bjorn-lomborg-climate-risks

Posted on Jun 08, 2011 at 8:54am by CMooney Comment #33

You raise excellent points in your blog post, Chris. The “wait and see” attitude infuriates me almost as much as the deniers who call themselves skeptics. We don’t have to wait to see the effects of global warming, for global warming is not something that will happen in the future, it is happening now. Actually, global warming began in the 1980s, and we can see its effects all around us, starting with Arctic sea ice decline. Sitting around to see what happens means staying on the present course: unchecked growth and pumping massive amounts of known greenhouse gases into our atmosphere. This is a very unwise course of inaction.

Posted on Jun 08, 2011 at 9:08am by DarronS Comment #34

Yep, ‘wait and see’ is the sort of lazy response that in this context rises to negligence. With the prospect of several billion people in India and China finally becoming able to afford a western lifestyle, there is literally no time to waste.

Posted on Jun 08, 2011 at 9:26am by dougsmith Comment #35

Yep, ‘wait and see’ is the sort of lazy response that in this context rises to negligence. With the prospect of several billion people in India and China finally becoming able to afford a western lifestyle, there is literally no time to waste.

Well, judging by this documentary, China’s Ghost Cities and Malls, it seems that the Chinese are far from being able to afford a western lifestyle. And the illusion that India is catching up to the west is probably even greater.

Posted on Jun 08, 2011 at 9:33am by George Comment #36

The metaphor that people have been insisting on making between computers and human brains for the last 60 years has been nonsense.

The brain has processing power, like a computer. So I wouldn’t call it nonsense. There’s some sense in it. The comparison is useful in the field of AI, for example when trying to estimate how much processing power a computer would have to have to match a human brain.

The problem is that computers manipulate symbols according to whatever program but understand nothing about what the symbols mean.  We do not know how our understanding corresponds to the signals moving in the brain.

Is understanding of the self necessary to compare brains’ and computers’ processing power? Or are you saying computers are stupid because they don’t understand themselves while we do (at least to some degree)? I don’t see why computers can’t eventually be made to understand themselves. A computer program is analogous to a person’s brain’s wiring, in this case. I guess my argument is that an intelligent computer and an intelligent human may not work the same way, and may not have the same kind of intelligence, but both should be considered intelligent anyway. In regards to comparing computers to brains, I would argue that although they differ in architecture, computers’ functionalities and capabilities are getting closer and closer to a brain’s, and so comparing the two makes more and more sense, at least on a high level. Of course, the metaphor breaks down at the low level.

So what effect does planned obsolescence have on CO2 production?

Planned obsolescence seems to be a very limited phenomenon. Having competition ensures it’s a losing strategy.

Posted on Jun 08, 2011 at 9:49am by domokato Comment #37

Yep, ‘wait and see’ is the sort of lazy response that in this context rises to negligence. With the prospect of several billion people in India and China finally becoming able to afford a western lifestyle, there is literally no time to waste.

Well, judging by this documentary, China’s Ghost Cities and Malls, it seems that the Chinese are far from being able to afford a western lifestyle. And the illusion that India is catching up to the west is probably even greater.

The point isn’t that tomorrow we’ll wake up and all of China and India will be on a par with the west. The point is that both countries’ standards of living are increasing at an accelerated pace, which means more cars per capita, larger power requirements, etc. If that documentary suggests otherwise, it’s BS.

Posted on Jun 08, 2011 at 10:05am by dougsmith Comment #38

The documentary doesn’t suggest anything. It simply shows (or maybe it’s a hoax and it was filmed in a secret NASA studio) that the great majority of people in China are nowhere close to becoming able to afford a western lifestyle. That’s all.

Posted on Jun 08, 2011 at 10:58am by George Comment #39

I don’t know that there’s a principled distinction to be made between processing power and intelligence, at least not when processing power is being put to tasks that involve learning, memory and reason. What I would say is that it’s clear increasing processing power is a relatively easy task to accomplish, when compared with increasing machine intelligence.

I’d agree with domokato that machines are getting smarter (they are able to perform more complex reasoning tasks), and that the pace of increase may be accelerating. But that’s not to say Kurzweil’s predictions are at all sensible.


Suppose we were to take the problem of computing the area of a rectangle.

Say we have an old 8-bit 8080 processor, a 16-bit 286 processor, and a 2.5 GHz Pentium.

The 8080 did not have hardware multiply and divide so the calculation would be done in software via multiple steps.

The 286 would be a lot faster and the Pentium would be even faster.

But they all just multiply numbers.  Now do these numbers represent feet or miles or yards or kilometers?

The stupid computers can multiply the numbers faster than any human, even the 8080 doing it in software.  But none of those machines UNDERSTAND what the units mean or what AREA is.

Processing power is not intelligence even if processing power is required for intelligence.
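For concreteness, here is a rough Python rendering (mine, not psik’s) of the shift-and-add routine a chip like the 8080, with no hardware multiplier, would run in software:

```python
def shift_add_multiply(a, b):
    """Multiply two non-negative integers the way an 8-bit CPU
    without a hardware multiplier would: repeated shift and add."""
    product = 0
    while b:
        if b & 1:          # low bit of multiplier set: add partial value
            product += a
        a <<= 1            # shift multiplicand left
        b >>= 1            # shift multiplier right
    return product

# "Area" of a 12 x 7 rectangle -- but the routine has no notion of
# feet, miles, or what area means; it only shuffles bits.
print(shift_add_multiply(12, 7))  # 84
```

The faster processors just do the same bit-shuffling in fewer steps, which is psik’s point: speed scales, understanding doesn’t.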

psik

Posted on Jun 08, 2011 at 11:19am by psikeyhackr Comment #40

The documentary doesn’t suggest anything. It simply shows (or maybe it’s a hoax and it was filmed in a secret NASA studio) that the great majority of people in China are nowhere close to being able to afford a western lifestyle. That’s all.

Well that’s certainly correct; no hoaxes needed. But irrelevant to the main point re. AGW.

Posted on Jun 08, 2011 at 11:43am by dougsmith Comment #41

This show, IMO, had a gaping inconsistency: in one breath we are told that there is an evolutionary advantage to reacting to patterns because they might represent danger, and in the next breath we are told to do nothing about the acknowledged problem of global warming.  By this logic the human race is doomed to wait too long, which is analogous to seeing a tiger, acknowledging the tiger, but doing nothing about the tiger, so that the tiger eats not only us as individuals but also a large percentage of our population.  Ostrichism at its worst.

Posted on Jun 09, 2011 at 4:38am by illrationalist Comment #42

There’s also nature itself, which has self-correcting mechanisms. I’m not sure if CO2 in the atmosphere represents an entropic bubble that can be exploited by some mutated organism, but it might be worth looking into. (How is the CO2 distributed, anyway? Is it in the upper atmosphere or spread out evenly? I read somewhere that some redwoods are growing taller because of the CO2, which should reduce CO2 somewhat). Examples of human messes nature is cleaning up:

http://www.cracked.com/article_19133_6-ways-nature-cleans-up-our-messes-better-than-we-do.html

Are climate scientists taking evolution into account? What do biologists have to say about climate change?

Posted on Jun 09, 2011 at 9:56am by domokato Comment #43

There’s also nature itself, which has self-correcting mechanisms.

Which are those?

Posted on Jun 09, 2011 at 10:00am by dougsmith Comment #44

There’s also nature itself, which has self-correcting mechanisms. I’m not sure if CO2 in the atmosphere represents an entropic bubble that can be exploited by some mutated organism, but it might be worth looking into. (How is the CO2 distributed, anyway? Is it in the upper atmosphere or spread out evenly? I read somewhere that some redwoods are growing taller because of the CO2, which should reduce CO2 somewhat)

“entropic bubble”?

Carbon dioxide is somewhat heavier than diatomic oxygen, which is heavier than diatomic nitrogen.  I suppose that in a sealed column of air they might very slowly sort themselves out.  But there is so much wind energy in the atmosphere nearly all of the time that the gases are very thoroughly mixed.  Plants on the ground and plankton in the oceans convert that CO2 into O2, so plenty of CO2 must get to the bottom of the atmosphere.  Animals on the ground, including us, must breathe the O2.

James Burke talked about growing a lot more plants back in 1989.

http://video.google.com/videoplay?docid=6514270139930450081#

psik

Posted on Jun 09, 2011 at 11:01am by psikeyhackr Comment #45

I’m not sure if it’s a real term, but an entropic bubble is an abundance of a potential resource that an organism may evolve to exploit (and in doing so would increase the entropy production rate of the ecosystem). If there is a larger amount of CO2 in the atmosphere, and plants can reach it, doesn’t that mean plants have more CO2 to consume and grow? If this is a real phenomenon, is it taken into account in climate models?

There’s also nature itself, which has self-correcting mechanisms.

Which are those?

You’re right, I suppose those are still hypothetical. Here is at least some evidence in support: http://en.wikipedia.org/wiki/Daisyworld#Modifications_to_the_original_simulation

EDIT: some of my ideas come from this: http://www.centerforinquiry.net/forums/viewthread/10754/
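Daisyworld itself is easy to play with directly. Below is a minimal Python sketch (mine, for illustration) of the classic Watson and Lovelock model, using the usual textbook constants: black and white daisies shift the planet’s albedo, and that feedback partially regulates the surface temperature.

```python
# Minimal Daisyworld sketch (after Watson & Lovelock, 1983) -- a toy
# model, not evidence about Earth. Constants are the standard
# textbook values, used purely illustratively.
S = 917.0          # solar flux, W/m^2
SIGMA = 5.67e-8    # Stefan-Boltzmann constant
Q = 20.0           # heat-redistribution factor for local temperature
GAMMA = 0.3        # daisy death rate
T_OPT = 295.5      # optimal growth temperature, K
A_BARE, A_WHITE, A_BLACK = 0.5, 0.75, 0.25   # albedos

def growth(t_local):
    """Parabolic growth rate, peaking at T_OPT, zero outside ~278-313 K."""
    return max(0.0, 1.0 - 0.003265 * (T_OPT - t_local) ** 2)

def run(luminosity, steps=40000, dt=0.02):
    aw = ab = 0.01                     # initial fractional daisy cover
    for _ in range(steps):
        bare = 1.0 - aw - ab
        albedo = bare * A_BARE + aw * A_WHITE + ab * A_BLACK
        t_planet = (S * luminosity * (1 - albedo) / SIGMA) ** 0.25
        t_white = Q * (albedo - A_WHITE) + t_planet   # white patches run cool
        t_black = Q * (albedo - A_BLACK) + t_planet   # black patches run warm
        aw += dt * aw * (bare * growth(t_white) - GAMMA)
        ab += dt * ab * (bare * growth(t_black) - GAMMA)
    return t_planet, aw, ab

t, aw, ab = run(1.0)
# Temperature settles near the daisies' optimum; both types coexist.
print(round(t, 1), round(aw, 2), round(ab, 2))
```

As doug notes below the model, this is one variable under absurdly simple conditions; the sketch shows what the thought experiment does and doesn’t demonstrate.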

Posted on Jun 09, 2011 at 11:56am by domokato Comment #46

You’re right, I suppose those are still hypothetical. Here is at least some evidence in support: http://en.wikipedia.org/wiki/Daisyworld#Modifications_to_the_original_simulation

EDIT: some of my ideas come from this: http://www.centerforinquiry.net/forums/viewthread/10754/

Yeah, I mean, that isn’t evidence in support. It’s a thought experiment showing planetary ‘self correction’ is conceivable for one variable under one absurdly simplistic condition. The background problem here is that the Gaia hypothesis is basically woo. A good discussion of it was published in 2005 in Skeptical Inquirer by Massimo Pigliucci. I found a copy of what he wrote available HERE.

Posted on Jun 09, 2011 at 12:52pm by dougsmith Comment #47

You’re right, I suppose those are still hypothetical. Here is at least some evidence in support: http://en.wikipedia.org/wiki/Daisyworld#Modifications_to_the_original_simulation

EDIT: some of my ideas come from this: http://www.centerforinquiry.net/forums/viewthread/10754/

Yeah, I mean, that isn’t evidence in support. It’s a thought experiment showing planetary ‘self correction’ is conceivable for one variable under one absurdly simplistic condition. The background problem here is that the Gaia hypothesis is basically woo. A good discussion of it was published in 2005 in Skeptical Inquirer by Massimo Pigliucci. I found a copy of what he wrote available HERE.

Thanks, but it occurs to me that I had something simpler in mind. As I went over in a previous post, CO2 increase means more resources for plants. So shouldn’t plant growth create a negative feedback loop for CO2 in the atmosphere? (More CO2 -> more plants -> less CO2)?

*does a google search*

http://www.sciencedaily.com/releases/2010/05/100503161435.htm

Interesting. Looks like they have taken plants into account in this model, and it seems to actually increase warming, although it does not say what it does to CO2 levels, nor does it take into account possible evolution of plants.

Edit: Google returns very limited results for this search. Essentially one article

A Google scholar search returned this: http://www.annualreviews.org/doi/abs/10.1146/annurev.arplant.48.1.609 , which looks relevant, but not free to read :(

Posted on Jun 09, 2011 at 2:29pm by domokato Comment #48

Well right, more CO2 will mean more resources for plants. But I don’t know anyone who has suggested the plants will be able to absorb all the additional CO2 we put out.

Posted on Jun 09, 2011 at 3:11pm by dougsmith Comment #49

Well right, more CO2 will mean more resources for plants. But I don’t know anyone who has suggested the plants will be able to absorb all the additional CO2 we put out.

http://en.wikipedia.org/wiki/Keeling_Curve

If you look at the Keeling curve you see a sine wave superimposed on the rising line.  That sine wave is caused by the life cycle of plants.  They absorb carbon during the spring and release most of it back in the fall.  We would need plants that absorb carbon and do not release it back.
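The shape psik describes is easy to reproduce. Here is a toy Python version of a Keeling-style series (illustrative numbers, roughly 2 ppm/yr growth and a ~3 ppm seasonal swing, not real station data) showing why the plant cycle cancels out of the long-term trend:

```python
import math

# Synthetic Keeling-style series: a rising baseline from emissions
# plus a seasonal sine from the Northern Hemisphere growing season.
# Numbers are illustrative, not measured values.
def co2(month):
    year = month / 12.0
    trend = 315.0 + 2.0 * year                     # ppm, rising baseline
    seasonal = 3.0 * math.sin(2 * math.pi * year)  # spring drawdown, fall release
    return trend + seasonal

# A 12-month mean cancels the seasonal cycle almost exactly, leaving
# only the underlying rise: plants absorb CO2 in spring but give most
# of it back in fall, so they do not remove the trend.
def annual_mean(start_month):
    return sum(co2(start_month + m) for m in range(12)) / 12.0

rise = annual_mean(12) - annual_mean(0)
print(round(rise, 2))   # 2.0 ppm/yr: the seasonal wave nets to zero
```

The sine wave is the breathing of the biosphere; the slope underneath it is what accumulates.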

psik

Posted on Jun 10, 2011 at 4:41am by psikeyhackr Comment #50

What’s so great about self-correction?  First of all, degenerative feedback need not null out a perturbation, but only reduce it, quite possibly not enough.  Meanwhile, through the course of that self-correction, none of us might be preserved.
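jx’s first point can be made concrete with a toy calculation (mine, not from the thread): a system pushed by a sustained perturbation F and pulled back by negative feedback with gain g settles at F/(1+g), reduced but nowhere near zero.

```python
# Toy illustration: negative (degenerative) feedback shrinks a
# sustained perturbation but need not null it out.
F, g = 1.0, 0.5          # unit perturbation, feedback gain < 1
x = 0.0
for _ in range(100):
    x = F - g * x        # feedback subtracts a fraction of the response
print(round(x, 3))       # 0.667: settles at F/(1+g), far from zero
```

Only an unrealistically large gain would cancel the perturbation outright; a weak feedback just dampens it.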

“The Earth has a skin and that skin has diseases, one of its diseases is called man.” —Nietzsche

Posted on Jun 10, 2011 at 1:17pm by jx Comment #51

Early on in the interview during the discussion of global warming, Shermer said that he prefers bottom-up approaches to top-down approaches—but it was interesting that he listed tax incentives as one of those bottom-up approaches.  I’d consider tax structure, even if the taxes are Pigovian taxes, to be a top-down approach, though they do offer more freedom in how to respond than direct regulation.

Does Shermer support a “cap-and-tax” or a “cap-and-trade” regime, or perhaps even a carbon tax?

Posted on Jun 11, 2011 at 12:07pm by Jim Lippard Comment #52

This show, IMO, had a gaping inconsistency: in one breath we are told that there is an evolutionary advantage to reacting to patterns because they might represent danger, and in the next breath we are told to do nothing about the acknowledged problem of global warming.  By this logic the human race is doomed to wait too long, which is analogous to seeing a tiger, acknowledging the tiger, but doing nothing about the tiger, so that the tiger eats not only us as individuals but also a large percentage of our population.  Ostrichism at its worst.

Precisely the thought that I had while listening to this interview.  I’m surprised no one else pointed that out.

Posted on Jun 13, 2011 at 8:13am by scinquiry Comment #53

Thanks, but it occurs to me that I had something simpler in mind. As I went over in a previous post, CO2 increase means more resources for plants. So shouldn’t plant growth create a negative feedback loop for CO2 in the atmosphere? (More CO2 -> more plants -> less CO2)?

The way that plants and microscopic animals take CO2 out of the atmosphere in the long term is by plants using CO2 to create carbon-based structures that get deposited at the bottom of swamps, lakes, or oceans and then get covered up by either more carbon-based material or sediment.  Usually plants decay and the carbon is released back into the environment; it takes special conditions, like a peat bog or a shallow ocean where plants cannot decay, for the carbon to get buried permanently.  Another way is for ocean-based life forms to use carbon and calcium to make shells of calcium carbonate.  The old shells get deposited at the bottom of the ocean and build up over time to eventually form limestone.  Both of these processes are slow, nowhere near as fast as man is putting the carbon back into the environment.

Posted on Jun 13, 2011 at 1:55pm by brightfut Comment #54

@brightfut. Thanks, that makes a lot of sense

Posted on Jun 13, 2011 at 1:56pm by domokato Comment #55

Why do AGW activists focus solely on the hard scientific problems raised by AGW and ignore recent social science (psychology, anthropology, etc.) research showing the futility of an activist approach to gaining the necessary public support for radical AGW solutions?  It should be obvious from this research that if the worst case or even near worst case scenarios for AGW are accurate then the case is simply hopeless since the impossibility of turning around global society to the necessary degree in the given time period is clear.  However, if the AGW problems are less severe and the AGW activists persist in pushing for solutions that cannot be made socially acceptable then they run the risk of antagonizing the powers-that-be to the extent that they provoke social upheavals (war, totalitarianism, etc.) that will themselves tip the AGW balance to worst case scenario levels.

So how do you apply the “precautionary principle”? Overactivism has its own risks, even before we try to account for all the other problems (ones needing “precautionary principle” levels of attention themselves) that could benefit if provided some of the resources lost to a mistaken overemphasis on AGW activism.

Isn’t it time that “hard” and “soft” scientific approaches to the AGW problems were coordinated?

lff

Posted on Jul 15, 2011 at 11:08am by lff Comment #56

The podcast was well done and timely as it relates to our current political problems, but I also really enjoyed the Shermer and Colbert dialogue on the same topic.
http://www.colbertnation.com/video/tags/Michael+Shermer

Artificial intelligence is an oxymoron.

Posted on Jul 15, 2011 at 8:38pm by gray1 Comment #57

Artificial intelligence is an oxymoron.

An oxymoron is a hyperventilated idiot.

psik

Posted on Jul 16, 2011 at 9:08am by psikeyhackr Comment #58